

D1.1 - Elastic Optical Network Architecture: reference scenario, cost and planning

| Status and Version: | Elastic Optical Network Architecture: reference scenario, cost and planning. Draft 2 |

| Date of issue: | 24.6.2013 |

| Distribution: | Project Internal |

| Author(s): | Name | Partner |

| |Andrew Lord (Editor) |British Telecom |

| |Juan Fernandez-Palacios |Telefonica I+D |

| |Oscar González |Telefonica I+D |

| |Victor Lopez |Telefonica I+D |

| |Luis Velasco |UPC |

| |Jaume Comellas |UPC |

| |Gabriel Junyent |UPC |

| |Marco Quagliotti |TI |

| |Paul Wright |BT |

| |Aristotelis Kretsis |University of Patras |

| |Polyzois Soumplis |University of Patras |

| |Emmanouel (Manos) Varvarigos |University of Patras |

| |Kostas Christodoulopoulos |University of Patras |

| |Annalisa Morea |Alcatel Lucent |

| |Alexandros Stavdas |University of Peloponnese |

| |Daniel Fonseca |Coriant |

| |Ori Gerstel |Cisco |

| |Matthias Gunkel |Deutsche Telekom |

| |Michael Parker |Lexden Technologies |

| |Norberto Amaya-Gonzalez |University of Bristol |

| |Michela Svaluto |CTTC |

|Checked by: |Juan Pedro Fernandez-Palacios |TID |

Abstract

D1.1 covers the initial ground on all of the relevant sub-topics within WP1 of Idealist. This includes operator reference networks and static traffic matrices for analysis. A summary of the state of the art for relevant flexgrid algorithms and a definition of future work are provided, as well as a discussion of plans for the prototype planning tool. The deliverable defines the CAPEX and OPEX models (including operational and power-consumption considerations) to be used in the techno-economic analysis of the different data and control plane alternatives identified in WP2 and WP3. It also covers the key Use Cases identified so far for extensive modelling.

Contents

1 Executive summary

1.1 Purpose of Idealist

1.2 Methodology

1.3 Benefits of EON: Use Cases

1.4 Reference Networks

1.5 Techno-economic modelling

1.6 Network Planning, modelling and optimisation

1.7 Conclusions and further work

2 Introduction

2.1 Purpose and Scope

2.2 Reference Material

2.2.1 Reference Documents

2.2.2 Acronyms

2.2.3 Definitions

2.3 Document History

3 Network architecture and Use Cases

3.1 Rationale behind Elastic Optical Networks

3.2 Use cases for Elastic Optical Networks

3.2.1 ST-1: Multi-Layer Restoration

3.2.2 ST-2: IP over Elastic/FixRate optical networks

3.2.3 ST-3: Flexgrid Optical Networks for DataCentre Federations

3.2.4 ST-4: The disaster recovery (DR)

3.2.5 ST-5: Flexgrid in metro-regional networks - serving traffic to BRAS servers

3.2.5.1 Current Metro architecture

3.2.5.2 Scenario A: evolutionary approach

3.2.5.3 Scenario B: Fixed-based approach with adaptation layer

3.2.5.4 Scenario C: Flexgrid-based approach with Sliceable BVTs at the remote BRAS

3.2.5.5 Evaluation of the Scenario C use-case

3.2.6 ST-6: Scalable core networks with Architecture on Demand nodes

4 Reference networks

4.1 Introduction

4.2 Telecom Italia National Reference Core Network

4.3 Deutsche Telekom national reference IP core network

4.3.1 Topology and Network Architecture

4.3.2 A and B network split

4.3.3 Traffic matrix and forecast

4.4 Telefónica National Transport Network

4.5 British Telecom Reference Networks

4.6 Telecom Italia Sparkle European Network

4.7 Summary

5 Techno-Economic Analysis

5.1 Introduction

5.2 CAPEX Model

5.2.1 Summary of STRONGEST model

5.2.2 Idealist CAPEX model

5.3 Target cost for Sliceable Bandwidth Variable Transponders

5.3.1 Case study definition

5.3.2 Case study results

5.4 OPEX model

5.4.1 Cost of floor space

5.4.2 Field operations and repair model

5.5 Energy model for high data rate transponders

5.5.1 Energy model for fixed data rate devices

5.5.2 Energy model for elastic transponders

6 Network planning

6.1 State of the Art

6.1.1 Clustering of nodes for hierarchical traffic grooming

6.1.2 Off-line RSA models

6.1.3 Dynamic RSA

6.1.4 Spectrum Reallocation

6.1.5 Elastic Spectrum Allocation for Variable Traffic

6.1.6 Available Planning tools

6.2 Algorithms for network planning

6.2.1 Algorithms for off-line network planning

6.2.2 Algorithms for network clustering

6.2.3 Transmission configurations selection and RSA under physical layer impairments

6.2.4 Specifically-designed recovery for Flexgrid

6.2.5 Large-scale optimization techniques

6.2.6 Algorithms for in-operation network planning

6.3 Architecture of network planning tool

6.3.1 Off-line network planning tool (MANTIS)

6.3.2 In-Operation network planning tool (PLATON)

6.4 Problems to be implemented in PLATON

6.4.1 Single Layer Flexgrid Network Design Problem

6.4.2 After Failure Repair Optimization (AFRO)

6.4.3 Spectrum defragmentation (SPRESSO)

7 Conclusions

8 Annex 1 - Detailed reference network information

Executive summary

1 Purpose of Idealist

Elastic optical networks are more flexible than existing, fixed alternatives: they offer the scope to use different signal modulation formats and different spectrum allocations, even in a dynamic way. The flexible options being discussed are varied and as such EON covers a broad range of solutions ranging from mixed line rates (MLR) over fixed grid to sliceable bit rate variable transponders (SBVTs) over a fully flexible optical spectrum.

The Idealist EU project intends to find out if EONs can be beneficial to carriers, and if so, under which network scenarios and applications or Use Cases. Additionally, Idealist will pinpoint the optimum EON for each case, quantifying how much benefit will be gained, in terms of CAPEX and a range of OPEX measures. The final outcome of Idealist will be a clear recommendation of the value of EON, the most fruitful situations to consider using it and the actual benefits in doing so.

D1.1 is the first deliverable from Work Package 1 of Idealist and as such, sets out the roadmap to reach this goal. In this Executive Summary, a complete overview of the deliverable is presented, including methodology, work accomplished so far, and outlook.

2 Methodology

To carry out its plan, Idealist is adopting the following methodology:

i) Seek the most likely opportunities in carriers’ networks, known as Use Cases

ii) Define a range of national and international reference networks including traffic profiles (both static and dynamic).

iii) Develop techno-economic modelling facilities to enable both CAPEX and OPEX-based assessment of each of the solutions in a comparative way.

iv) Develop a wide range of modelling tools to plan the network off-line and provision resources in real time, so that elastic and non-elastic approaches can be compared fairly.

v) Bring (i) – (iv) together in comprehensive modelling to do a broad technology comparison between EON, flexgrid and more conventional alternatives.

3 Benefits of EON: Use Cases

Although there is perhaps an acceptance that EON has benefits, there is a debate about whether a full flexgrid implementation is required to achieve those benefits. A key current industry debate therefore concerns the most appropriate time for carriers to: (a) install flexgrid-ready components, and (b) enable them and start using the technology. The key flexgrid components are the liquid-crystal-on-silicon (LCoS) based wavelength selective switches (WSSs), which allow arbitrary spectrum demarcation; they can be readily used in fixed-grid mode, with the flexgrid capability software-enabled (and paid for!) at a later date, when required. This is a useful alternative for carriers, who might only refresh their main transmission infrastructure every few years.

An alternative strategy is to continue with a fixed-grid 50 GHz solution but use mixed line rates and higher-order modulation formats to carry demands. For high rates of 400 Gb/s and above, the fixed-grid option requires the use of inverse multiplexing, in which the total rate is split into smaller chunks. For example, 400 Gb/s could be transmitted as four 100 Gb/s DP-QPSK signals in four, not necessarily adjacent, fixed-grid spectrum slots.

It is commonly accepted, though, that eliminating the arbitrary 50 GHz boundaries allows more efficient use of spectrum, which results in an increase in network capacity. One early Idealist study, based on the Spanish network, shows the impact of this capacity increase in terms of delaying further network build. The key result, in the figure below, shows that fixed-grid upgrades are required in 2019, whereas flexgrid can support traffic growth until 2024. If current fixed-grid networks have sufficient capacity until 2019, this suggests that although flexgrid is seen to have significant advantages, there isn't an immediate need for it, and so flexgrid-ready components could be installed when carriers next refresh their DWDM capability.

[pic]

Figure: Number of new Fiber Links WSON vs. flexgrid evolution models in Telefonica Spain reference network

However, one issue hidden by this result is the number of transponders required, and this becomes significant in fixed grid, where large demands have to be inverse multiplexed as described earlier. This implies that there is a cost impact in delaying the move to flexgrid, especially if we start to see the development of cost effective 400 Gb/s and 1 Tb/s transponders.

This example shows the complexity of the question, the answer to which depends on the specific scenarios and solutions being compared. Consequently it is impossible to reach a definitive answer in the short papers usually published in journals or at conferences.

The only sound way to tackle a question as complex as this is to do it within a large, multi-partner project such as Idealist. In this context, it is possible to construct a full range of scenarios, reference networks, EON variants, planning and operational algorithms, and techno-economic models – and then combine them to thoroughly compare the options and draw clear conclusions. One key element is the definition of Use Cases that span the full range of applications of potential interest. This section summarises these Use Cases.

A range of Use Cases have been compiled, with extensions being added as the project progresses. The existing Use Cases fall broadly into two categories – medium and longer term. The following table summarises them and indicates the status of each one.

|Use-Case |Summary |Timescale |Status |Contributor |
|Multi-Layer Restoration |Resilience mechanisms relying exclusively on the IP layer are not efficient. Restoration is a multi-layer problem to be triggered from IP routers and their TE functionality. The introduction of CoS in an EON allows the line rate to be adapted to a given restoration path and enables fast recovery of high-priority traffic. |ST |To be studied in WP1 |DT |
|IP-o-EON |In an IP-over-EON, multi-layer planning and operation is an essential feature. Joint optimization of the packet flow on both the working and the backup paths over the IP/MPLS layer, in association with RSA and CoS differentiation, is an important design asset. |ST |To be studied in WP1 |ALU |
|Federated Data-Centres based on EONs |Federated data centres are emerging as an important part of the IT infrastructure. Static bandwidth provisioning uses a high-CAPEX infrastructure inefficiently. EONs make it possible to adjust bandwidth dynamically, based on actual traffic flow needs in real time. |ST |Ongoing study |UPC |
|EONs in disaster recovery |Disaster recovery mandates a flexible and reconfigurable network to provide maximum (but typically not full) service recovery. EONs play a critical role in DR plans since BVTs allow the line rate to be optimized for a given distance, and they provide cost benefits compared to fixed-grid solutions with a given regenerator placement. |ST |Ongoing study |CISCO |
|EON in Metro Networks |IP functionality is implemented in the Broadband Remote Access Servers (BRAS), which are usually located at the second level of aggregation. Introducing the flexgrid approach in Metro Area Networks (MANs) would support BRAS centralization and hence a reduction in IP router machinery. The work will compare a flexgrid solution in a realistic metro-regional network scenario to the traditional approaches based on the fixed-grid WDM systems used today. |ST |Ongoing study |TID |
|Scalable core networks with Architecture on Demand nodes |Current WSS solutions have scalability problems. The AoD concept is proposed, in which an optical backplane yields a node architecture whose specific functionality comes from optical sub-systems selected at will. |LT |Ongoing study |UoBristol |

4 Reference Networks

The general characteristics of the Reference Networks collected in WP1 are summarized in the first table below. At the present stage the IDEALIST project relies on six networks of different types and geographic scopes, from nationwide to European continental. All the networks are available with topological details and often with the characteristics of the fibres and other valuable features (for instance optical amplifier positions and span lengths) that allow us to perform an accurate network design. Topological features such as node degree and link length (average and maximum values) are reported in the second table below. For most of the reference networks the traffic demand is also available, but limited to the static version, as this reflects the information available from transport networks today. Dynamic traffic can be generated by traffic-engineering modelling while waiting for real data, which is expected to be collected as the project progresses.

Main Features of Reference Networks

|Operator |Location |Segment covered |Main features |
|TI |Italy |Core (National) |Flat national 44-node network, mainly but not exclusively carrying IP backbone traffic; mainly G.655 and G.652 fibre, with a little G.653. |
|DT |Germany |Core (National) |Flat 12-PoP national core, physically installed twice (12+12 nodes) to serve the IP core network exclusively; fibre is wholly G.652. |
|TID |Spain |Core (National part) |Two-level national optical network; 30-node national core; G.652 fibre. |
|TID |Spain |Core (Regional part) |5 regional networks (30 nodes each); G.652 fibre everywhere. |
|BT |Great Britain |Core, Metro and Aggregation |1113-node network connected by a G.652 fibre infrastructure. No inherent hierarchy, but sites classified as Core, Metro or Aggregation. |
|BT |Great Britain |Core (National) |22-node flat core network with G.652 fibre links. |
|TI |Europe |Core (Continental) |Flat 49-node network. Fibres on the links are G.652 or G.655. |

Topological characteristics of Reference Networks

|Operator |Location |Nodes |Nodal degree (avg) |Nodal degree (max) |Links |Link length avg [km] |Link length max [km] |
|TI |Italy |44 |3.2 |5 |70 |174 |482 |
|DT |Germany |12 |3.3 |5 |20 |243 |485 |
|TID |Spain (National) |30 |3.7 |5 |56 |148 |313 |
|TID |Spain (5 Regions) |30 (each Reg.) |3.5 |5 |53 |73 |185 |
|BT |Great Britain |1113 |3.5 |17 |1956 |24 |295 |
|BT |Great Britain |22 |3.2 |4 |35 |147 |686 |
|TI |Europe |49 |2.9 |5 |69 |393 |1212 |

5 Techno-economic modelling

Idealist has taken as its starting point for techno-economic analysis the CAPEX model constructed in the STRONGEST project. This contains cost figures for a wide range of transport functions and allows a good costing exercise for an existing network. In D1.1 it is updated with cost information for Layer 3 components, and the cost baseline has been changed from a 10 Gb/s non-coherent transponder to a 100 Gb/s coherent transponder.

The main techno-economic interest for Idealist is to achieve a detailed cost comparison of flexgrid / elastic networks as compared to fixed grid. This suggests that most of the activity will be focused on the optical layer, but it is felt strongly that the IP client layer will have a significant role to play. This will particularly be the case when we consider IP-over-flexgrid type architectures, potentially using (Sliceable) Bit Rate Variable Transponders, which give the opportunity to share IP bandwidth very flexibly.

The modelling here also assumes that technology will move forwards and also reduce in cost as it sells in volume. To this end, Idealist is focusing on 3 timeframes – 2013, 2015 and 2018. Cost estimates beyond this time are impossible. Transponders with data rates from 100 Gb/s to 1 Tb/s are assumed to appear as this timeline unfolds, initially with fixed bit rate, but eventually with bit rate variability and ultimately sliceability emerging.

The table below gives the first glimpse of the cost figures for flexgrid transponders, although, as seen from the table, there is clearly a lot more work to do in conjunction with WP2, who are actually developing the flexible transponders alluded to here.

|Bandwidth Variable Transponders in flexgrid |
|Interface type |Specification |Available |Cost (ICU) |Required slots |
|Transponder 1 |100G, 50 GHz, 2000 km; AUA* 40G, 50 GHz, 2500 km |2013 |1.44 |2 |
|Transponder 2 |400G, 75 GHz, 500 km; AUA 200G, 75 GHz, 2000 km; AUA 100G, 75 GHz, 2500 km |2015 |1.76 |4 |
|Transponder 3 |1000G, 175 GHz, 500 km; AUA 500G, 175 GHz, 2000 km |2018 |2.00 |6 |
|Transponder 400G |400G, 100 GHz, 1000 km |2018 |1.20 |2 |

|Muxponders in flexgrid |
|Interface type |Specification |Available |Cost (ICU) |Required slots |
|100G Muxponder, 2 x 40G + Transponder 1 |T.B.D. |T.B.D. |T.B.D. |T.B.D. |
|100G Muxponder, 10 x 10G + Transponder 1 |T.B.D. |T.B.D. |T.B.D. |T.B.D. |
|400G Muxponder, 10 x 40G + Transponder 2 |T.B.D. |2015 |T.B.D. |T.B.D. |
|400G Muxponder, 4 x 100G + Transponder 2 |T.B.D. |2015 |T.B.D. |T.B.D. |
|400G Muxponder, 10 x 40G + Transponder 400G |T.B.D. |2018 |T.B.D. |T.B.D. |
|400G Muxponder, 4 x 100G + Transponder 400G |T.B.D. |2018 |T.B.D. |T.B.D. |

* AUA = Also Usable As

The deliverable also provides the basis for a range of OPEX-related models that will be further developed in IDEALIST. These include the cost of accommodation / floor space, the cost of operations and repair in the field, the cost of maintenance including spares, and the cost of energy. With respect to energy usage, D1.1 presents an in-depth analysis of the energy consumption of various kinds of fixed and flexgrid transponders – information that will be essential to allow fair comparison between them. An example of the kind of modelling and analysis attempted is given in the table below, which shows energy consumption values for 400 Gb/s and 1 Tb/s transponders based on coherent technologies, including details of the components required to make them.

| |Component |400 Gb/s transponder | |1 Tb/s transponder | |
| | |Units |Power per unit (W) |Units |Power per unit (W) |
|Client side |Client card (@10 Gb/s) |4 |24 |10 |24 |
| |Framer/Deframer |1 |100 |1 |200 |
|E/O modulation |Drivers |2x4 |2 |4x4 |2 |
| |Laser |2x1 |6.6 |4x1 |6.6 |
|O/E receiver |Local oscillator |2x1 |6.6 |4x1 |6.6 |
| |Photodiode + TIA |2x4 |0.4 |4x4 |0.4 |
| |ADD |2 |80 |4 |90 |
| |Management power |20% of total power | |20% of total power | |
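To make the table concrete, here is a minimal sketch that totals the 400 Gb/s column. It assumes the power figures are per unit and that the 20% management share is taken on the final total (i.e. the component subtotal is the remaining 80%); both readings of the table are assumptions.

```python
# Sketch: total power of the 400 Gb/s transponder from the component table.
# Assumptions: power figures are per unit; "management power = 20% of total"
# means the component subtotal is 80% of the final figure.

components_400g = {          # name: (units, watts per unit)
    "client cards (10G)": (4, 24),
    "framer/deframer":    (1, 100),
    "drivers":            (2 * 4, 2),
    "lasers":             (2 * 1, 6.6),
    "local oscillators":  (2 * 1, 6.6),
    "photodiode + TIA":   (2 * 4, 0.4),
    "ADD":                (2, 80),
}

subtotal = sum(n * w for n, w in components_400g.values())
total = subtotal / 0.8  # add the 20% management share
print(f"subtotal = {subtotal:.1f} W, total = {total:.1f} W")  # 401.6 W, 502.0 W
```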

6 Network Planning, modelling and optimisation

WP1 has a strong algorithm and simulation focus, with a wide range of tools available and under development to solve a wide range of problems. The key problems to be solved here relate to the assignment of optical spectrum across specific network routes to meet traffic demands. This Routing and Spectrum Assignment (RSA) is an NP-complete problem, and therefore highly computationally intensive. It requires carefully crafted heuristic approaches to give solutions in realistic timescales, especially as the number of network nodes increases.
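To illustrate the constraints that make RSA harder than classical RWA, the following is a minimal first-fit sketch (a generic textbook heuristic, not one of the project algorithms; the function name and occupancy structure are our own): a demand needs a run of contiguous spectrum slices that is free on every link of its path.

```python
def first_fit_spectrum(path_links, occupancy, n_slots, total_slots=320):
    """First slot index where `n_slots` contiguous 12.5 GHz slices are free
    on every link of the path (spectrum contiguity + continuity), else None.
    A 4 THz C-band in 12.5 GHz slices gives total_slots = 320.
    occupancy: dict mapping link -> set of occupied slot indices."""
    for start in range(total_slots - n_slots + 1):
        window = range(start, start + n_slots)
        if all(s not in occupancy[link] for link in path_links for s in window):
            return start
    return None

# Example: a demand needing 10 slices (125 GHz) over links A-B and B-C
occupancy = {"A-B": {0, 1, 2}, "B-C": {5, 6}}
print(first_fit_spectrum(["A-B", "B-C"], occupancy, 10))  # -> 7
```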

RSA divides into two distinct categories of problem:

• Off-line planning. Here there is no requirement for decisions in real time. We have a network with some traffic requirements and we wish to optimise the location of equipment and the routing and spectrum allocations for the various traffic demands. Time can be taken to explore different scenarios to yield the best solution for the given requirements.

• On-line modelling. Here, the network is up and running, and new real-time RSA decisions need to be made. Time is limited, and only one or a small number of new demands are to be scheduled.

One critical issue that determines which category of problem is required relates to the dynamicity of the traffic. Static traffic works well with slower algorithms, although even static traffic grows, and provisioning the new circuits brings in an element of dynamics, albeit small. Dynamics can of course arise from quite natural causes, such as time-of-day traffic variations and restoration following link or node failures. Often these variations cause the traffic to either grow or reduce everywhere simultaneously, and so don't necessarily require RSA operations.

It is fair to say that carriers are not currently experiencing traffic that is dynamic enough on the timescales that would impact on the optical layer, but a future dominated by cloud, data centre and ultra-high definition TV services could change that situation, or at least modify it. Nevertheless, Idealist is assessing the benefits that an elastic optical network can bring to dynamic Use Cases, and this requires development of suitable algorithms.

Looking beyond the nearer term Use Case opportunities, an important flexgrid research question involves looking at End-to-End architectures and re-optimising them for flexgrid. This is important because flexgrid might perform best in flatter architectures, because it is better able to handle a large range of optical demands. An early part of this architectural study involves segregation of the whole network into metro clusters interconnected via core nodes. Other early work is looking at incorporating physical layer limitations (both noise / reach and nonlinearity related) into the RSA algorithms. Also, a study of large-scale optimisation techniques has begun, to deal with the extremely high numbers of variables in some of the large problems involving many degrees of freedom and many nodes. Finally, a range of approaches to solving the online problems has been initiated – ranging from real time RSA through to defragmentation problems.

Simulation tools are being developed to handle this huge computational challenge in a robust way. There are two distinct tools to handle the offline and online processing paradigms:

• MANTIS. Predominantly offline processing tool. Provides a repository for networks and algorithms – thus allowing easy comparison of different approaches and a ready-made benchmarking tool. Potential availability as a Cloud service via a web interface.

• PLATON. Online processing tool. Addressing problems such as flexgrid design, post-repair optimisation and spectrum defragmentation.

Whilst a version of MANTIS existed pre-Idealist, PLATON is a new tool, being developed within the Idealist project to specifically address problems requiring real time decisions. The following table shows the intended development plan for PLATON.

|Task |%Done |

|AoD |Architecture on Demand |

|AWG |Arrayed Waveguide Grating |

|BRAS |Broadband Remote Access Server |

|BVT |Bitrate Variable Transponder / Transceiver |

|CANON |Clustered Architecture for Nodes in an Optical Network |

|CoS |Class of Service |

|DAC |Digital to Analog converter |

|DSP | Digital Signal Processing |

|DP |Dual Polarisation |

|EON |Elastic Optical Network |

|FEC |Forward Error Correction |

|ICU |Idealist Cost Unit (based on a 100G coherent transponder) |

|LCoS |Liquid Crystal on Silicon |

|LSP |Label Switched Path |

|MAN |Metropolitan Area Network |

|MGOXC |Multi Granular Optical Cross Connect |

|MMTTR |Minimum Mean Time To Repair |

|MTBF |Mean Time Between Failures |

|MTTR | Mean Time To Repair |

|OFDM |Orthogonal Frequency Division Multiplexing |

|OLA |Optical Line Amplifier |

|OTN |Optical Transport Network |

|OXC |Optical Cross Connect |

|PLI |Physical Layer Impairments |

|QAM |Quadrature Amplitude Modulation |

|QoS |Quality of Service |

|QoT |Quality of Transmission |

|QPSK |Quadrature Phase Shift Keying |

|ROADM |Reconfigurable Optical Add Drop Multiplexer |

|RSA |Routing and Spectrum Assignment |

|RWA |Routing and Wavelength Assignment |

|SBVT |Sliceable Bitrate Variable Transponder |

|SCU |STRONGEST Cost Unit based on 10G Transponder |

|SSON |Spectrum Switched Optical Network |

|WSON |Wavelength Switched Optical Network |

|WSS |Wavelength Selective Switch |

1 Definitions

7 Document History

|Version |Date |Authors |Comment |
|Draft 1 |20.5.13 |Andrew Lord |1st draft after Barcelona Plenary. Contains placeholders for all contributions |
|Draft 2 |17.6.13 |Andrew and others |Draft after first round of inputs |
|Draft 3 |24.6.13 |Andrew and others |Draft after main review |
|Final version |26.6.13 |Juan |Final review and quality check |

Network architecture and Use Cases

1 Rationale behind Elastic Optical Networks

Optical spectrum optimization, although an important benefit, might not be enough to justify the EON approach. Further benefits from elastic optical networking may come from: joint IP and optical transport optimization, multi-layer restoration, regeneration minimization, power consumption reduction, transponder optimization (SBVTs), operational simplification (programmable transponders) and others.

Optical Spectrum Optimization

It is well known that one of the benefits of flexgrid is the optical spectrum optimization due to its ability to adjust the spectrum reserved for a channel to the actual spectrum needs of the optical signal. However, a key question is: when will optical spectrum become a problem for network operators?

This section studies the evolution of a national optical transport network, comparing a strategy that follows the current fixed-grid architecture (WSON strategy) with an alternative strategy where a flexgrid architecture is implemented (SSON strategy). Specifically, it is intended to determine the year in which the installed capacity will be exhausted and new optical links will need to be deployed. This information will be useful in order to check whether the capacity gain given by the introduction of flexgrid is a necessity in the coming years and whether it is worth implementing flexgrid in a real network.

In this study we will use the term WSON to refer to the fixed grid-based architecture. This WSON evolution model represents the continuity of the already deployed infrastructure. On the other hand, the SSON model represents a scenario where flexgrid capability is present. There are two possible cases in the SSON model: 1) it is possible to activate the flexgrid functionality in the WSSs by a software upgrade; and 2) the deployed WSSs cannot implement flexgrid capability and would need to be replaced, which can be considered a greenfield scenario. This section considers the case where the WSSs are upgradable.

The reference network is the Spanish reference transport network, publicly available in [10] and described in Section 4. The study assumes an initial ROADM deployment and an initial set of 10 Gb/s and 40 Gb/s channels given by the reference scenario. After the initial deployment, the network grows following, on the one hand, the WSON model and, on the other hand, the SSON model. The study compares the number of new long-haul links that need to be activated under each strategy.

A heuristic RWA/RSA algorithm for WSON and SSON architectures respectively has been used for the path calculation and resource (wavelength/spectrum) assignment of the optical channels. The algorithm for the WSON model is the Adaptive Unconstrained Routing Exhaustive (AUR-E) [1] RWA algorithm. For the SSON case such an algorithm has been adapted to solve the RSA problem.

Figure 1 shows the number of activations of new links needed up to a given year in both evolution models. It can be seen that the capacity of the deployed network will be exhausted by 2019 if the WSON evolution model is followed. However, activating the flexgrid functionality extends the lifetime of the network by five years. This life extension in the SSON evolution model is due both to the use of high-efficiency modulation formats beyond 100 Gb/s and to adaptive spectrum assignment.

[pic]

Figure 1: Number of new Fiber Links WSON vs. flexgrid evolution models

Electronic regeneration minimization beyond 100Gb/s

A short-term driver for flexgrid might be the appearance of 400 Gb/s client signals and cost-effective 400 Gb/s transponders. This case study estimates, for the Telefónica core transport network, the number of 400 Gb/s demands that could be needed in the following years, assuming a 30% yearly traffic increase and an unchanged traffic distribution. Figure 2 shows the number of forecast demands for 400 Gb/s channels per year.

[pic]

Figure 2: Forecast number of 400 Gb/s channel demands per year
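The growth assumption translates directly into a per-demand forecast; a minimal sketch (the 100 Gb/s example demand is illustrative, not a value from the study):

```python
import math

def years_until_400g(volume_gbps, growth=1.3):
    """First year n in which volume * 1.3**n reaches 400 Gb/s,
    under the 30% yearly growth and unchanged traffic distribution."""
    return math.ceil(math.log(400 / volume_gbps, growth))

print(years_until_400g(100))  # an illustrative 100 Gb/s demand -> 6 years
```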

There are two main choices for 400 Gb/s transmission:

• 400 Gb/s transmission based on OFDM-DP-QPSK over 125 GHz. The reach of each subcarrier modulation format (DP-QPSK) would enable long haul deployments. However, as 125 GHz is needed, it is only feasible using flexgrid technology.

• The other option is to split the demand into two wavelengths with DP-16QAM at 200 Gb/s over the standard fixed 50 GHz grid. However, in this case, the reach is lower. Taking into account the length of, and the number of ROADMs on, the long-haul routes in the Telefónica of Spain network, at least 30% of the 400 Gb/s transmissions using DP-16QAM would need regeneration. Such regeneration can be avoided by using OFDM-DP-QPSK over flexgrid.

A similar analysis was also done on the Telecom Italia network, where different technological scenarios were investigated, characterized by transponders that use different modulation formats and numbers of carriers per channel (e.g. single or multiple carriers organized as superchannels). The scenarios considered for this analysis, classified as homogeneous (all the traffic is carried by only one type of transponder), are reported in Table 1, where the main characteristic of each scenario is summarized.

Table 1: Transponders used in homogeneous scenarios.

|Transponder type |Rate [Gb/s] |Baud rate [Gbaud] |Modulation format |
|RT 100G | | | |
|NY 200G | | | |
|GAL 200G |200 |32 |PM-16QAM |

Table 2: Main features of Reference Networks

|Operator |Location |Segment covered |Main features |
|TI |Italy |Core (National) |Flat national 44-node network, mainly but not exclusively carrying IP backbone traffic; mainly G.655 and G.652 fibre, with a little G.653. |
|DT |Germany |Core (National) |Flat 12-PoP national core, physically installed twice (12+12 nodes) to serve the IP core network exclusively; fibre is wholly G.652. |
|TID |Spain |Core (National part) |Two-level national optical network; 30-node national core; G.652 fibre. |
|TID |Spain |Core (Regional part) |5 regional networks (30 nodes each); G.652 fibre everywhere. |
|BT |Great Britain |Core, Metro and Aggregation |1113-node network connected by a G.652 fibre infrastructure. No inherent hierarchy, but sites classified as Core, Metro or Aggregation. |
|BT |Great Britain |Core (National) |22-node flat core network with G.652 fibre links. |
|TI |Europe |Core (Continental) |Flat 49-node network. Fibres on the links are G.652 or G.655. |

Table 3: Topological characteristics of Reference Networks

|Operator |Location |Nodes |Nodal degree (avg) |Nodal degree (max) |Links |Link length avg [km] |Link length max [km] |
|TI |Italy |44 |3.2 |5 |70 |174 |482 |
|DT |Germany |12 |3.3 |5 |20 |243 |485 |
|TID |Spain (National) |30 |3.7 |5 |56 |148 |313 |
|TID |Spain (5 Regions) |30 (each Reg.) |3.5 |5 |53 |73 |185 |
|BT |Great Britain |1113 |3.5 |17 |1956 |24 |295 |
|BT |Great Britain |22 |3.2 |4 |35 |147 |686 |
|TI |Europe |49 |2.9 |5 |69 |393 |1212 |

Techno-Economic Analysis

1 Introduction

This section reviews and updates the CAPEX model first developed in the STRONGEST project, including updated information for Layer 3 components and a change of cost baseline from a 10 Gb/s transponder to a 100 Gb/s coherent transponder. There is a first study of the potential cost of a Sliceable Bandwidth Variable Transponder (SBVT). The section also provides the basis for a range of OPEX-related models that will be further developed in Idealist. Finally, there is an in-depth analysis of the energy consumption of various kinds of fixed and flexgrid transponders – information that will be essential to allow fair comparison between them.

2 CAPEX Model

1 Summary of STRONGEST model

The cost model to be applied in Idealist takes as its reference the cost model developed in the STRONGEST project, whose final version was published in [17].

As the STRONGEST project was focused on the whole core network, including all the network layers and technologies from layer 1 to layer 3, the cost model was developed for equipment in four technology areas: IP/MPLS, MPLS-TP, OTN and WDM, the last one in two versions: fixed and flexgrid. The multi-layer model, including possible interconnection schemes, is shown in Figure 20.

The STRONGEST model adopted as its cost unit the price of one non-coherent 10 Gb/s transponder with a reach of 750 km on compensated SSMF fibre; this unit was named the STRONGEST Cost Unit (SCU). All other devices or subparts of equipment are expressed with reference to that unit. This decouples the cost values from any actual currency and facilitates data collection, because price data is considered highly confidential by most of the players involved. The model deals with prices rather than costs. A practical reason for this is that price data is easier to collect than industrial cost (i.e. the pure cost without profit margins); in any case, prices determine how much an operator actually has to pay to deploy a network, which is exactly what is required for techno-economic evaluations. Nevertheless, the terms 'price' and 'cost' were, and continue to be, used interchangeably in IDEALIST.

The main characteristics of the STRONGEST CAPEX model are presented below:

IP/MPLS, MPLS-TP and OTN are modelled with blocks organized at three levels: chassis, cards, and interfaces (or transceivers). Chassis are characterized by their capacity in terms of slots, cards in terms of capacity (throughput) and the type and number of ports, and interfaces in terms of capacity (client rate), framing (i.e. Ethernet or POS) and transmission characteristics (grey or coloured and, for coloured cards, reach). Cards of any type are assumed always to require one slot in the chassis.

The WDM layer is modelled in two versions: fixed grid and flexgrid.

The WDM fixed grid model includes transponders and equipment that work according to the standard 50 GHz grid. The model covers only current and short-term devices; the highest-rate transponder is 400 Gb/s in 50 GHz with 150 km reach, suitable only in the metro context. Concerning ROADMs, four models are considered: (i) a basic one with add/drop on the lines, using AWGs and interleavers; (ii) a colourless one; (iii) a colourless and directionless one; and (iv) a fully flexible one (colourless, directionless and contentionless). The basic blocks are identical for all the models: amplifiers (pre, boost), AWG, interleaver, and 1x9 WSS.

Flexgrid includes transponders and equipment that work according to the flexgrid defined by the ITU-T. In particular, a set of reconfigurable transponders is modelled: transponders that can accept a change in the rate of the client signal, and as a consequence in the transmission rate, without changing the allocated optical bandwidth. This can be considered a first step in introducing flexibility into the optical network; it allows a bandwidth different from the one allowed by the fixed grid to be allocated, but it doesn't permit a change in the bandwidth allocation of the connection.

For all the layers, components with different capacities and functionalities are considered to be available over different time periods. With some exceptions these periods are: the current one at the time of model finalization (year 2012), the short term (2015) and the mid term (2018).

[pic]

Figure 20: Summary of basic building blocks of STRONGEST CAPEX model

2 Idealist CAPEX model

IDEALIST is focused on the optical layer, and in particular on flexgrid devices and networks, so the area of interest is restricted to optical transmission and switching at layer 1. Nevertheless, since the main client of optical networks is currently the IP network, and this is expected to remain true over the duration of the Idealist project, the IP/MPLS part of the CAPEX model is retained and updated in order to allow meaningful techno-economic evaluations at the network level. According to this assumption, the Idealist CAPEX model covers three types of equipment: IP/MPLS, WDM fixed grid, and flexgrid for elastic optical networking (flexgrid and flexible rate).

Reference Cost Unit

The Cost Unit, i.e. the cost used as a reference for all the elements included in the model, has been updated for the Idealist model. As the state of the art in transponder technology is now (year 2013) coherent 100 Gb/s (this being the highest-rate device commercially supplied by many vendors, with DP-QPSK modulation, a baud rate of 32 Gbaud, soft-decision FEC, and a reach of about 2000 km on SSMF fibre), the Idealist Cost Unit (ICU) is defined as the cost of such a 100 Gb/s device (rather than a non-coherent 10 Gb/s one). All other devices and subparts of equipment are priced with reference to the ICU.

In the case where an Idealist case study requires OTN or MPLS-TP, the CAPEX model of STRONGEST is applied with the use of an appropriate cost conversion factor. Such a factor is calculated as follows: 12.5 SCU = 1 ICU.
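A one-line helper makes the conversion explicit (the 5.0 SCU example value is illustrative):

```python
SCU_PER_ICU = 12.5  # 1 ICU (100G coherent) = 12.5 SCU (10G non-coherent)

def scu_to_icu(cost_scu: float) -> float:
    """Convert a STRONGEST cost figure to Idealist Cost Units."""
    return cost_scu / SCU_PER_ICU

print(scu_to_icu(5.0))  # a legacy item priced at 5.0 SCU -> 0.4 ICU
```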

Time References

Two time horizons in addition to the present starting period (year 2013) are chosen to define the commercial availability of future equipment and devices: a short-term period (year 2015) and a medium-term period (year 2018). It is not important exactly which year the equipment will actually be placed on the market; the important point is that two successive technology generations are assumed from today. These two steps correspond to the availability of slots and interfaces at 400 Gb/s and 1 Tb/s respectively.

The current period is characterized by 200 Gb/s line cards and up to 100 Gb/s electrical interfaces on electrical switching equipment, and by coherent 100 Gb/s in 50 GHz for transponders in the optical domain. In the short term (year 2015), line cards and interfaces at 400 Gb/s are expected to be available in electrical switching equipment, and fixed and flexgrid transponders at 400 Gb/s are estimated to be ready in the optical domain. SBVTs at a 400 Gb/s line rate are also assumed to be available, although the details of such devices are still under definition in the project. In the medium term (year 2018) the speed of line cards and interfaces on electrical switching equipment and in the optical layer is expected to be 1 Tb/s. At the optical layer, the 1 Tb/s line speed is assumed to be available for BCT, BVT and SBVT devices.

Idealist CAPEX model Excel file

The appendix of this document includes an Excel file composed of four sheets: one for each of the three technologies (IP/MPLS, WDM fixed grid, EON flexgrid) plus one dedicated to transceivers.

CAPEX model for IP/MPLS

The IP/MPLS model is organized (as in the STRONGEST model) into three levels: the basic node, the line cards and the transceivers. The basic node includes the chassis (single-chassis or, for core routers only, multi-chassis), the physical and mechanical assembly, the switch, power supplies, cooling, and control and management plane hardware and software. It also provides a specified number of bidirectional slots with a nominal (maximum) transmission speed (also named “slot capacity” in the model). Into each slot, a line card of the corresponding (or lower) speed can be installed. Each line card provides a specified number of ports at a specified speed and occupies one slot of the basic node. Into each port, a transceiver can be plugged. Depending on how it is configured, a router port can forward packets based on IP addresses, on MPLS labels, or (if it is an Ethernet card) on Ethernet media access control (MAC) addresses. Transceivers can be grey or coloured; in the case of coloured transceivers, if optical compatibility is satisfied, the output signal from the transceiver does not require an OEO conversion to be transmitted over an optical infrastructure. Innovative devices, such as BVT / flexgrid transceivers or those implementing SBVT functionality (with client interfaces embedded in the line card), are also modelled.

Two categories of routers are distinguished: a single-chassis router for metro nodes (two sizes are considered: 10 and 20 slots) and a scalable multi-chassis router for large core nodes, with up to 72 chassis, each carrying a single 16-slot shelf.

This means that core routers can provide from a minimum of 16 (one shelf) to a maximum of 1152 (72 shelves) slots for hosting line cards. Table 4 reports the cost of the metro routers and of some core routers. The cost P of multi-chassis core routers (routers that require two or more shelves) can be computed according to formula (1), taken from [17] and revised to take into account that the reference cost unit is different in the Idealist model. Equation (1) is derived from the modular structure of equipment supplied by a specific vendor.

[pic] (1)

where

[pic]

C is the total switching capacity in Tb/s required of the router, and K is the capacity of a fully equipped shelf, which depends on the reference year: K = 3.2 Tb/s in 2013, K = 6.4 Tb/s in 2015, and K = 16 Tb/s in 2018.

It is important to note that the cost of a core router increases significantly from one chassis (P = 4.3) to two chassis (P = 22.9), but beyond two shelves the cost grows almost linearly, as Figure 21 shows for up to 20 shelves.

[pic]

Figure 21: Basic router cost as a function of the number of shelves.
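The images of equation (1) did not survive extraction, so only the shelf count can be restated with confidence from the surrounding text: the number of shelves is the total required capacity divided by the per-shelf capacity, rounded up. A sketch under that reading (the price itself must still be looked up via equation (1) or Table 4):

```python
import math

K_TBPS = {2013: 3.2, 2015: 6.4, 2018: 16.0}  # fully equipped shelf capacity K

def shelves_required(capacity_tbps: float, year: int) -> int:
    """n = ceil(C / K); equation (1) then maps n to a price P in ICU."""
    return math.ceil(capacity_tbps / K_TBPS[year])

print(shelves_required(10.0, 2013))  # a 10 Tb/s router in 2013 -> 4 shelves
```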

Table 4: IP/MPLS Basic Node cost

|Basic Node for Metro Router |
|Capacity (slot @ 200 Gb/s, year 2013) |Capacity (slot @ 400 Gb/s, year 2015) |Capacity (slot @ 1 Tb/s, year 2018) |Provided slots |Cost (ICU) |
|2 Tb/s |4 Tb/s |10 Tb/s |10 |0.42 |
|4 Tb/s |8 Tb/s |20 Tb/s |20 |0.83 |

|Basic Node for Core Router |
|Capacity (slot @ 200 Gb/s, year 2013) |Capacity (slot @ 400 Gb/s, year 2015) |Capacity (slot @ 1 Tb/s, year 2018) |Provided slots |Cost (ICU) |
|3.2 Tb/s |6.4 Tb/s |16 Tb/s |16 |4.30 |
|6.4 Tb/s |12.8 Tb/s |32 Tb/s |32 |22.87 |
|9.6 Tb/s |19.2 Tb/s |48 Tb/s |48 |28.92 |
|12.8 Tb/s |25.6 Tb/s |64 Tb/s |64 |44.05 |
|16 Tb/s |32 Tb/s |80 Tb/s |80 |50.07 |
|… |… |… |… |… |
|230.4 Tb/s |460.8 Tb/s |1152 Tb/s |1152 |8329.02 |

Table 5: Line cards cost

|Line cards for 200 Gb/s slot |

|Interface type |Available |Cost Core (ICU) |Cost metro (ICU) |

|20 x 10 GE / MPLS-TP |2013 |2.56 |1.99 |

|5 x 40 GE / MPLS-TP |2013 |2.88 |2.14 |

|2 x 100 GE / MPLS-TP |2013 |2.74 |2.35 |

|Line cards for 400 Gb/s slot |

|Interface type |Available |Cost Core (ICU) |Cost metro (ICU) |

|10 x 40 GE / MPLS-TP |2015 |2.56 |1.99 |

|4 x 100 GE / MPLS-TP |2015 |2.88 |2.14 |

|1 x 400 GE / MPLS-TP |2015 |2.74 |2.35 |

|Line cards for 1 Tb/s slot |

|Interface type |Available |Cost Core (ICU) |Cost metro (ICU) |

|10 x 100 GE / MPLS-TP |2018 |2.88 |2.14 |

|2 x 400 GE / MPLS-TP |2018 |2.74 |2.35 |

|1 x 1000 GE / MPLS-TP |2018 |2.91 |2.56 |

Table 6: Transceivers cost

|Transceivers grey, short reach |

|Interface type |Available |Cost (ICU) |

|10G |2013 |0.008 |

|40G |2013 |0.032 |

|100G |2013 |0.080 |

|400G |2015 |0.256 |

|1T |2018 |0.512 |

|Transceivers, coloured in 50 GHz fix grid |

|Interface type |Available |Cost (ICU) |

|40G, 2500 km |2013 |0.48 |

|100G, 2000 km |2013 |1.00 |

|400G, 150 km |2015 |1.36 |

|Bandwidth Variable Transceivers, coloured in flexgrid |

|Interface type |Available |Cost (ICU) |

|Transponder 1 (100G + AUA) |2013 |1.44 |

|Transponder 2 (400G + AUA) |2015 |1.76 |

|Transponder 3 (1000G + AUA) |2018 |2.00 |

|Transponder 400G |2015 |1.20 |

|Sliceable Bandwidth Variable Transceiver in flexgrid |

|Interface type |Available |Cost (ICU) |

|SBVT 400G type 1 |2015 |T.B.D. |

|SBVT 400G type 2 |2015 |T.B.D. |

|SBVT 1T type 1 |2018 |T.B.D. |

|SBVT 1T type 2 |2018 |T.B.D. |

Concerning the line card capacity, the values chosen for the three time horizons are:

• Current with 200 Gb/s of line card capacity and up to 100 Gb/s of port capacity (two ports at 100 Gb/s or 5 ports at 40 Gb/s are configurations available for exploiting the whole slot capacity).

• Short term with 400 Gb/s for both line card and port capacity

• Medium term with 1 Tb/s for both line card and port capacity.

Assortments of transceivers per each period and their cost are as in Table 6, with three main categories: bandwidth constant in fixed grid, bandwidth variable in flexgrid, and sliceable bandwidth variable in flexgrid.

The cost of coloured transceivers in flexgrid is assumed to be identical to the cost of WDM transponders with the same optical characteristics. Transceivers (or integrated functionality) implementing SBVT are also introduced but, since at this stage of the project the device is still under development, the price is not yet assessed (marked To Be Defined (T.B.D.) in the table).
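As a worked example of how the model composes a node price (the configuration is illustrative; all figures are taken from Tables 4-6):

```python
# Pricing a small 2013 metro router, in ICU.
basic_node_10_slot = 0.42   # Table 4: metro basic node, 10 slots
line_card_2x100ge  = 2.35   # Table 5: 2 x 100 GE card, metro cost
grey_100g          = 0.080  # Table 6: grey short-reach 100G transceiver

total = basic_node_10_slot + line_card_2x100ge + 2 * grey_100g
print(f"{total:.2f} ICU")  # -> 2.93 ICU
```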

CAPEX model for WDM fixed grid

The basic standard 50 GHz WDM layer [18] is assumed to use coherent transmission exclusively, with SSMF (G.652) fibre as the reference physical medium.

The WDM fixed grid model is composed of the following building blocks:

• Transponders with line rates from 40 Gb/s to 400 Gb/s (with different commercial availability dates), each with its maximum transparent reach. 10 Gb/s transponders are not included because they use non-coherent on-off-keying modulation formats and need dispersion-compensated links - a scenario that is not compatible with the main WDM model assumptions. In the 50 GHz grid a transponder with a data rate of 1000 Gb/s has a very limited reach, making it unfeasible for long-haul transmission systems; therefore the model does not include any 1000 Gb/s transponders.

• Muxponders with different configurations covering all the line rates.

• Regenerators: for each transponder, a corresponding regenerator is available.

• Optical amplifiers: in an optical transmission line the amplification span is assumed to be 80 km.

• WDM nodes:

1. WDM terminal, which has a nodal degree of 1.

2. Fixed OADM, which is not remotely configurable.

3. OXC which is remotely configurable, commonly denoted as a ROADM.

For all the WDM nodes, solutions with 40 and 80 channels at 50 GHz in the C-band are modelled, as well as 160 channels in an extended C+L-band. Some vendors offer WDM systems that exploit the C-band beyond the basic 4 THz, allowing 88- or even 96-channel systems at 50 GHz; the cost of all devices operating in the C-band is assumed to be the same regardless of the number of 50 GHz channels the WDM system can handle. Basic components for WDM terminal and fixed OADM nodes are arrayed waveguide gratings (AWGs), interleavers, and optical amplifiers. The different types of OXC are characterized by the following three basic features: directionless (any client port can be connected to any node/line port); colourless (any client port can be tuned to any wavelength); and contentionless (the node architecture does not imply any wavelength switching contention).

Basic components for the OXC are: 1x9/9x1 WSS, 9x9 WSS, 1x20/20x1 WSS, optical amplifier, and passive optical splitter and combiner. The cost impact of splitters and combiners is considered negligible in such a WSS-based architecture. Details on how the basic components can be combined to build up an OXC, and the associated formulas to calculate the cost of the equipment, are given in the Idealist CAPEX Excel file, where eight different OXC models are proposed: four for smaller nodes (node degree less than 10) and four for larger nodes (node degree less than 21). The four models for the smaller node implement the basic (i.e. add and drop on the line), the colourless, the colourless and directionless, and the fully flexible (colourless, directionless and contentionless) ROADM, respectively. The same holds for the larger node. All the OXC architectures are realised with WSSs both at the input and the output, implementing the so-called route-and-select architecture (see [19]). The cost of OXCs with architectures different from the ones included in the Idealist CAPEX Excel file, such as broadcast-and-select ROADM architectures, can also be evaluated, as they can be built up from the building blocks defined by the model.

Figure 22 gives an example of a fully flexible colourless, directionless and contentionless route-and-select ROADM made with the components listed above; the shown architecture allows a node of up to degree 20 to be built up, if 1x20 (and 20x1) WSS modules are used. This fully flexible ROADM requires one add/drop chain for each network degree and this implies the use of 6 WSS modules for each node degree (2 modules for the input line, plus 2 modules for output lines, plus 2 modules on the add/drop chain - one on the add section and the other one on the drop section) plus a number of WSS modules on each add/drop chain, depending on the whole number of channels to be added and dropped in the node.

This leads to the following formula for the cost computation for the fully flexible route-and-select OXC:

[pic]

[pic] (2)

where N is the node degree (up to 20 in the case of 1x20 WSSs); Amp_boost, Amp_pre, Add_amp and Drop_amp are the costs of the amplifiers (assumed equal in the model, see Table 8); WSS(1x20) is the cost of a WSS module (both the 1x20 and 20x1 types); and AD(%) is the add/drop percentage (evaluated over the whole traffic crossing the node) with a granularity of 20%. If the add/drop percentage is 100% (all the traffic is terminated in the node, i.e. no pass-through traffic), then 100(%)/20(%) = 5, so the number of additional modules is 5x2xN; if the add/drop percentage is equal to or less than 20%, the number of modules is 2xN. This approach is always valid for systems in the C-band with up to 100 channels, as assumed in the model. For systems with 80 channels or fewer it leads to an over-dimensioning issue when adding the add/drop WSS modules: in particular, for add/drop percentages greater than 80%, the formula above requires five modules on each add and each drop chain, but five modules can handle 100 channels and so exceed the capacity of the 80-channel system.

[pic]

Figure 22: OXC c/d/c
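The images of equation (2) were also lost in extraction; the sketch below reconstructs it from the textual description above and should be read as a plausible interpretation, not the exact project formula. Per degree it counts pre and boost amplifiers plus add and drop amplifiers, four line-side WSS modules, and 2 x ceil(AD/20%) WSS modules on the add/drop chains, which reduces to the quoted six WSS modules per degree when AD <= 20%.

```python
import math

AMP = 0.06        # Table 8: unidirectional node amplifier (any type), ICU
WSS_1X20 = 0.48   # Table 8: 1x20 / 20x1 WSS module, ICU

def oxc_cdc_cost(degree: int, add_drop_pct: float) -> float:
    """Plausible reading of equation (2) for the fully flexible
    route-and-select OXC (an assumption, not the published formula)."""
    amps = degree * 4 * AMP                # boost, pre, add, drop per degree
    line_wss = degree * 4 * WSS_1X20       # 2 input + 2 output per degree
    ad_wss = 2 * degree * math.ceil(add_drop_pct / 20) * WSS_1X20
    return amps + line_wss + ad_wss

print(oxc_cdc_cost(4, 40))  # a degree-4 node with 40% add/drop -> 16.32 ICU
```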

An extract of the cost parameters in the CAPEX WDM fixed grid model is given in the following tables. Table 7 includes the basic fixed-grid transponders and muxponders. Muxponders provide lower-rate client interfaces (for instance 10 Gb/s), which allows WDM equipment to be interconnected with other equipment, for instance IP/MPLS routers, that needs to be interconnected via 10G grey interfaces.

Table 8 includes all the basic components needed to build up optical line amplifiers (OLAs), WDM terminals, OADMs and ROADMs of different types. The cost value for the OLA is given for devices designed for 80 km spans, in both the C-band-only and the C+L-band versions. DGE (dynamic gain equalizer) functionality has to be inserted at least after every 4 spans and is assumed to be always present in terminals and nodes. Amplifiers located at nodes are all considered to have the same price, even if they have different characteristics (e.g. preamplifier for line in, booster for line out, amplifiers for add or drop lines). Concerning the OXC basic components, WSS modules for both filtering directions (i.e. WSS 1x20 and WSS 20x1) are assumed to have the same price. The passive components that can be used as splitters or combiners are considered to have a negligible impact on the whole cost of an optical node (which includes many costly WSSs and amplifiers), such that their cost is set to 0.00.

In Table 7 and Table 8 the last column gives the number of chassis slots required by the corresponding component.

The STRONGEST model provided a framework to organize the node by the use of chassis only for IP/MPLS, MPLS-TP and OTN equipment, while for WDM the concept of a chassis was not modelled and the costs of the common parts of the basic node were spread over and attributed to the subparts. The Idealist CAPEX model now also introduces, for WDM equipment, the chassis as an element to host the node subparts. In particular, three types of chassis are introduced.

A small chassis, called the Basic chassis, comprises one shelf of 10 slots: up to 6 slots for optical components and 4 slots for common services (power supply, communication and control). The Basic chassis can be used for OLAs and terminals. A dual-shelf chassis has to be used where a system requires more than 6 slots.

Two types of dual-shelf chassis are defined, Master and Slave, which can be combined to build up multi-chassis nodes. A piece of equipment that needs more than 6 slots always requires one Master and then a number of Slaves, depending on the number of slots required by the complete system. A ROADM can require tens of chassis in the case of high-degree nodes or a high percentage of terminated traffic.

The Master chassis has a lower slot capacity than the Slave chassis, because the service cards occupy more space on the Master: the capacities are 14 slots for the Master and 16 for the Slave.

The cost of the chassis is given in Table 9. To take the chassis cost into account, the total slot requirement of the complete system is evaluated, the required chassis configuration is determined, and its cost follows. The resulting total cost is the cost of the components plus the cost of the chassis.
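A sketch of this dimensioning rule, using the slot capacities and costs of Table 9 below (it assumes the Master shelf is filled before Slaves are added, which the text implies but does not state):

```python
import math

BASIC_SLOTS, BASIC_COST   = 6, 0.2    # Table 9: single-shelf Basic chassis
MASTER_SLOTS, MASTER_COST = 14, 1.0   # Table 9: dual-shelf Master chassis
SLAVE_SLOTS, SLAVE_COST   = 16, 0.8   # Table 9: dual-shelf Slave chassis

def chassis_cost(slots_needed: int) -> float:
    """Chassis cost in ICU for a system needing `slots_needed` slots."""
    if slots_needed <= BASIC_SLOTS:
        return BASIC_COST                          # OLA / terminal case
    extra = max(0, slots_needed - MASTER_SLOTS)
    return MASTER_COST + math.ceil(extra / SLAVE_SLOTS) * SLAVE_COST

print(chassis_cost(5))   # -> 0.2  (one Basic chassis)
print(chassis_cost(40))  # -> 2.6  (Master + 2 Slaves)
```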

Table 7: WDM fixed grid transponders and muxponders

|Transponder for fixed grid 50 GHz |

|Component Type |Available |Cost (ICU) |Required slot |

|40G, 2500 km |2013 |0.48 |1 |

|100G, 2000 km |2013 |1.00 |2 |

|400G, 150 km |2015 |1.36 |3 |

|Muxponder for fixed grid 50 GHz |

|Component Type |Available |Cost (ICU) |Required slot |

|40G Muxponder, 4 x 10G, 2500 km |2013 |0.40 |1 |

|100G Muxponder, 2 x 40G, 2000 km |2013 |1.07 |2 |

|100G Muxponder, 10 x 10G, 2000 km |2013 |0.87 |2 |

|400G Muxponder, 10 x 40G, 150 km |2015 |1.60 |4 |

|400G Muxponder, 4 x 100G, 150 km |2015 |1.44 |4 |

Table 8: Basic WDM line and node components

|Basic line system, terminal and node components for fixed grid 50 GHz |

|Component Type |Available |Cost (ICU) |Required slot |

|OLA (bidirectional, C band, 80km span) |2013 |0.15 |2 |

|DGE functionality (one every 4 spans) |2013 |0.16 |2 |

|OLA (bid., 80km span for C + L Band) |2013 |0.30 |2 |

|Opt. Amplifier (unidirectional, any type, for nodes) |2013 |0.06 |1 |

|AWG (40 channel) |2013 |0.07 |1 |

|Interleaver (80 channel) |2013 |0.04 |1 |

|WSS 1x9/9x1 (unidirectional) |2013 |0.32 |1 |

|WSS 1x20/20x1 (unidirectional) |2013 |0.48 |2 |

|WSS 9x9 (unidirectional) |2013 |3.84 |2 |

|Splitter/combiner (any type) |2013 |0.00 |1 |

Table 9: WDM chassis

|Chassis for hosting equipment parts |

|Component Type |Available |Cost (ICU) |Slot capacity |

|Single shelf Basic Chassis (for OLA and Terminal) |2013 |0.2 |6 |

|Dual shelves Master Chassis (for ROADM) |2013 |1 |14 |

|Dual shelves Slave Chassis (for ROADM) |2013 |0.8 |16 |

2 CAPEX model for flexgrid elastic optical networks

The CAPEX model for flexgrid systems is still under definition, as the work of developing the interfaces and node components is ongoing, mainly within WP2.

This first and incomplete version of the CAPEX model for flexgrid systems is based on the following assumptions.

The transponders that can be used in a flexgrid EON belong to three types:

1. Fixed grid transponders (the same as in the fixed grid model and included in Table 7)

2. Bandwidth Variable flexgrid transponders (in line with the ones in the STRONGEST model - they can change the client rate but not the allocated optical bandwidth)

3. Sliceable Bandwidth Variable Transponders (SBVT)

Bandwidth Variable transponders and SBVTs are reported in Table 10 with their characteristics as known up to now. The acronym AUA means “Also Usable As” and refers to the ability of the transponders classified as 1, 2 and 3 in the table to work with different client rates (by means of a change in the modulation format and/or in the baud rate) and correspondingly different reach. SBVTs are devices that map N client signals into M optical signals with N≥M, where the optical signals can have transmission characteristics (carried rate, modulation format, bandwidth allocation) that differ from one another and, in general, a non-contiguous allocated spectrum resulting from the M different optical modulated signals. On the line side, all the signals (slices) of a SBVT are put on the same physical output (a pair of fibres, one for each transmission direction) and filtering is delegated to the interconnected devices, for instance LCoS WSSs for filtering and routing to different destinations.

SBVTs, like the other components suitable for flexgrid, are under specification in IDEALIST and their characteristics are not known at the time this deliverable is due. The project decided that at least two devices should be developed in each timeframe scenario; this is the reason why Table 10 includes two devices for both Year 2015 (400G) and Year 2018 (1T). The two alternatives for each timeframe will allow comparative techno-economic evaluations within flexgrid technologies, in addition to the standard comparison with fixed grid.

In the absence of SBVT specifications, some assumptions on technical characteristics were made for hypothetical basic SBVTs expected to be available in 2015 and 2018. These assumptions do not reflect any ongoing development in WP2 but constitute a first reference for cost parameters to be used in preliminary studies.

Concerning the nodes for flexgrid, the basic assumption is that they are of the same type as the OXCs considered for fixed grid, but with flexgrid capability; according to [18] this implies the capability to set the channel central frequency with a resolution of 6.25 GHz and to allocate bandwidth in slices of 12.5 GHz. WDM terminals can be built with WSS modules and are considered as OXCs of degree 1. The fixed OADM is not useful in a flexgrid context, as without flexibility in reconfiguration there is no benefit. Flexgrid capability implies an increased cost with respect to the corresponding components or equipment for fixed grid. In particular, the cost increase is due to the complexity in the management of the bandwidth portions, and less to the switching technology (LCoS, for instance, is already deployed for fixed grid in some networks which are flexgrid-ready). In the STRONGEST model this cost increase was simply evaluated as a percentage uplift, estimated at 30%. Within IDEALIST the same approach is retained, but that percentage is considered too high and the suggested uplift for CAPEX parameters is 20% with respect to the corresponding fixed grid parameters. IDEALIST is developing in WP2 node subparts that cannot be modelled with the previously described framework. As for the SBVT case, the CAPEX model for flexgrid nodes will be updated during the project when novel node architectures and related components are specified.

Table 10: Transponder for flexgrid

|Bandwidth Variable Transponders in flexgrid |

|Interface type |Specification |Available |Cost (ICU) |Required slot |

|Transponder 1 |100G, 50GHz, 2000km AUA 40G, 50GHz, 2500km |2013 |1.00 |2 |

|Transponder 2 |400G, 75GHz, 500km AUA 200G, 75GHz, 2000km AUA 100G, 75GHz, 2500km |2015 |1.76 |4 |

|Transponder 3 |1000G, 175GHz, 500km AUA 500G, 175GHz, 2000km |2018 |2.00 |6 |

|Transponder 400G |400G, 100GHz, 1000km |2018 |1.20 |2 |

|100G Muxponder, 2 x 40G + Transponder1 |2013 |1.52 |2 |

|100G Muxponder, 10 x 10G + Transponder1 |2013 |1.28 |2 |

|400G Muxponder, 10 x 40G + Transponder2 |2015 |2.00 |3 |

|400G Muxponder, 4 x 100G + Transponder2 |2015 |1.84 |3 |

|400G Muxponder, 10 x 40G + Transponder 400G |2018 |1.44 |3 |

|400G Muxponder, 4 x 100G + Transponder 400G |2018 |1.28 |3 |

|1T Muxponder, 10 x 100G + Transponder3 |2018 |2.24 |6 |

|1T Muxponder, 2 x 400G + Transponder3 |2018 |2.08 |6 |

|Sliceable Bandwidth Variable Transponders in flexgrid |

|Interface type |Specification |Available |Cost (ICU) |Required slot |

|SBVT 400G basic |Input: up to 10 client interfaces (10-400 Gb/s) |2015 |3.00 | |

| |Output: up to 4 slices on C band (100-400 Gb/s) | | | |

| |Max overall rate on output slices: 400 Gb/s | | | |

| |Perfor. on 400G slice: 400G, 150 GHz, 2000km | | | |

|SBVT 400G type 1 |T.B.D. |2015 |T.B.D. |T.B.D. |

|SBVT 400G type 2 |T.B.D. |2015 |T.B.D. |T.B.D. |

|SBVT 1T basic |Input: up to 10 client interfaces (100 - 400G) |2018 |4.00 | |

| |Output: up to 10 slices on C band (100G-1T) | | | |

| |Max overall rate on output slices: 1T | | | |

| |Perfor. on 1 T slice: 1T, 375 GHz, 2000km | | | |

|SBVT 1T type 1 |T.B.D. |2018 |T.B.D. |T.B.D. |

|SBVT 1T type 2 |T.B.D. |2018 |T.B.D. |T.B.D. |

IDEALIST CAPEX model details are included in this Excel file:

[pic]

3 Target cost for Sliceable Bandwidth Variable Transponders

The aim of this study is to identify the target cost at which 400 Gb/s and 1 Tb/s SBVTs reduce transponder costs by at least 30% in a core network scenario. This target cost is calculated relative to cost estimations for non-sliceable transponders of 400 Gb/s and 1 Tb/s. In light of our results, cost savings of 30% are feasible for 1 Tb/s SBVTs over the next nine years, even with a unit cost higher than that of non-sliceable transponders. Savings of 30% for the 400 Gb/s case are possible in the short term, before 1 Tb/s SBVTs can appear on the market. The feasibility of such savings with a target cost higher than that of current non-sliceable transponders shows that the SBVT can become a reality.

1 Case study definition

The objective of this study is to calculate the cost that a SBVT should have in order to achieve a given percentage cost reduction in transponders for a backbone network. To this end, two node models are compared: (a) the current model without sliceable transponders and (b) a node with SBVTs (Figure 23). Alternative architectures to interconnect client equipment and SBVTs are possible implementation choices, but they are out of the scope of this deliverable. The main difference between the two models is that the non-sliceable transponder model requires at least one interface for each destination, while the SBVT reuses hardware and optical spectrum to transmit to multiple destinations.

We consider coherent modulation formats in the model without sliceable transponders (40 Gb/s, 100 Gb/s, 400 Gb/s and 1 Tb/s), while 400 Gb/s and 1 Tb/s SBVTs are used for the second model. Let us remark that the maximum traffic rate is the same in both models; therefore, when studying the 400 Gb/s SBVT model, there are no 1 Tb/s transponders in the non-sliceable transponder model.

|[pic] |[pic] |

|(a) Non-sliceable transponders model |(b) SBVT model |

Figure 23: Models for the study with and without SBVT

A network model based on the Spanish backbone is used for this study (Figure 24) [20]. The network is made up of 20 edge nodes that aggregate traffic from transit and access routers and forward it over wavelength channels requested from the photonic mesh. In this analysis OXCs are assumed to switch the optical channels in an elastic manner, but they have no influence on transponder requirements, as they do not implement transponders themselves. We assume that there are enough optical resources in both situations, with and without SBVTs; the objective of this work is to reduce the investment in transponders.

The initial traffic matrix is created based on information for the Telefonica backbone network in 2012. The link dimensioning is done using an over-dimensioning factor of 30%. For instance, given a traffic demand of 35 Gb/s, links are dimensioned for 45.5 Gb/s (35 Gb/s x 1.3). Traffic is increased yearly by 50% in order to compare the cost performance of the different architectures proposed over the next 10 years. To assess the cost of the non-sliceable model, the STRONGEST cost model is used in this study [17]. Table 11 contains the relative unit costs used for the non-sliceable transponders of different bit rates.
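A two-line restatement of the dimensioning and growth rules just described (function names are illustrative):

```python
def dimensioned_capacity_gbps(demand_gbps, overdim=0.30):
    """Apply the 30% over-dimensioning factor: 35 Gb/s -> 45.5 Gb/s."""
    return demand_gbps * (1 + overdim)

def traffic_in_year_gbps(base_gbps, year, growth=0.50):
    """Compound 50% yearly growth over the 10-year study horizon."""
    return base_gbps * (1 + growth) ** year
```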

[pic]

Figure 24: Reference network based on Spanish national backbone [11]

|TxP parameters |Cost |

|40Gb/s, 2500km, 50 GHz |6 |

|100Gb/s, 2000km, 50 GHz |15 |

|400Gb/s, 75GHz, 500km |22 |

|1000Gb/s, 175GHz, 500km |25 |

Table 11: Non-sliceable transponders cost [12]

2 Case study results

Figure 25 shows the target cost of the SBVT (bars) needed to achieve 30% overall savings in transponder costs for 400 Gb/s and 1 Tb/s SBVTs across the whole network. The cost of non-sliceable transponders is also shown in Figure 25, to illustrate that in some instances these 30% overall savings are possible even when the unit cost of the SBVT is higher than that of non-sliceable transponders.

[pic]

Figure 25: Target cost for SBVT (400Gb/s and 1Tb/s) and reference costs for non-sliceable transponders

Figure 26 shows the possible cost increment of the SBVT in comparison with the non-sliceable transponder cost. The target cost of the 1 Tb/s SBVT steadily increases over the next three years to reach a peak of 140% of the cost of a non-sliceable transceiver in 2015 (a total traffic demand of 12.5 Tb/s). For 400 Gb/s, the peak target cost is reached in 2013 (a total traffic demand of 5.5 Tb/s) and then steadily drops, so that in 5-6 years the cost of the 400 Gb/s SBVT should be similar to the cost of a fixed grid transponder to achieve overall 30% savings in the network. Therefore, when the price of 400 Gb/s SBVTs approaches the price of non-sliceable transponders, it would make sense to migrate some nodes to 1 Tb/s SBVTs. Finally, we should note that we have not applied any discount or price erosion in our model, so costs remain constant over time. It is reasonable to expect decreasing non-sliceable 400 Gb/s and 1 Tb/s transponder costs.

[pic]

Figure 26: Cost increment of the SBVT (400Gb/s and 1Tb/s) in comparison with non-sliceable transponders

In addition to the target costs for 30% overall savings, we have also studied the maximum overall savings that can be achieved when SBVT costs are the same as those of non-sliceable transponders. To this end, we have computed transponder costs across the whole network using non-sliceable transponders in three scenarios: (1) no savings, (2) 30% savings and (3) 50% savings. On the other hand, we have computed the number of SBVTs required to handle the same capacity and multiplied this number by the cost of non-sliceable transponders. Figure 27(a) and (b) show the results for the 400 Gb/s and 1 Tb/s cases, respectively.
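A minimal sketch of the two bookkeeping steps just described; the function names and the numbers in the usage example are hypothetical, not results of the study:

```python
def sbvt_target_unit_cost(nonsliceable_total, n_sbvt, savings=0.30):
    """Unit cost a SBVT must not exceed so that the network spends
    (1 - savings) times the non-sliceable transponder budget."""
    return (1 - savings) * nonsliceable_total / n_sbvt

def achievable_savings(nonsliceable_total, n_sbvt, sbvt_unit_cost):
    """Overall savings obtained when SBVTs are bought at a given unit cost."""
    return 1 - (n_sbvt * sbvt_unit_cost) / nonsliceable_total

# hypothetical example: a 1000-cost-unit non-sliceable budget replaced by 26 SBVTs
print(sbvt_target_unit_cost(1000, 26))      # ~26.9 -> max SBVT unit cost for 30% savings
print(achievable_savings(1000, 26, 22))     # ~0.43 -> savings at unit cost 22
```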

In light of these results, we can say that 50% savings are not possible in the case of 400 Gb/s, as shown in Figure 27(a). In the case of 1 Tb/s, more than 50% savings could be achieved if the technology were available before 2018 (i.e. before the total traffic reaches 42 Tb/s).

[pic]

(a) SBVT capacity 400Gb/s

[pic]

(b) SBVT capacity 1Tb/s

Figure 27: Comparison of transponder costs for the non-sliceable model (without savings, with 30% and with 50% savings) and minimum cost of the SBVT

4 OPEX model

The operational expenditure (OPEX) is the amount of money that network operators spend on an ongoing, day-to-day basis in order to run their business. According to [21] the OPEX of an operator can be divided into seven general categories: network operation, interconnection and roaming, marketing and sales, customer service, charging and billing, IT and general support, and service development. The OPEX category where Idealist technologies can play a role is network operation, which includes OSS operation, maintenance and repair of the network elements, equipment and software licenses, rental of network resources, and costs for site rental and electricity.

First, to give an order-of-magnitude estimate of network-operation-related OPEX, [Yankee09] estimates that for a fixed line operator network operation takes a significant part of the expenses, accounting for 39% of total OPEX. Thus, being able to drive down network-related OPEX is key.

In this section of deliverable D1.1, the main components of network-operation-related OPEX in which Idealist solutions can play a role are modelled. The Idealist OPEX model will take into account:

• Cost of floor space

• Cost of field operations and repair

• Spare parts (stock) maintenance

• Energy (including cooling)

1 Cost of floor space

Every network component occupies a certain amount of space and needs to be placed in a central office, or even in a field location (e.g. cabinet). The extent to which this impacts different operators varies enormously, depending on whether they are an incumbent (with existing buildings) or a new entrant. However, in order to be able to compare different network configurations and equipment, a simplified model needs to be applied. The Idealist OPEX model will consider a fixed price per square metre. The suggested value is the mean value of rental per square metre in the country of the network. The Idealist CAPEX model includes the size of the components.

2 Field operations and repair model

Field operations and repair are a big part of a network operator’s expenses. Nowadays there is a trend of carriers outsourcing operations and maintenance to third-party vendors that specialize in specific network technologies; in this way, the third-party operation and repair provider is able to leverage its field force across multiple carriers. In this subsection, an operation and repair model is explained, together with a methodology to estimate, on the one hand, the workforce needed to achieve a given service guarantee and, on the other hand, the amount of spare parts (stock) that needs to be maintained.

The operations and repair model can be used to measure the OPEX savings that can be achieved by using the Idealist control plane technologies developed in WP3, which aim at enhancing the automation of the network and improving network resilience.

Operators and service providers typically consider two main models:

Flat Fee Model: There is a fixed monthly cost throughout the year, which covers any number of events that may occur in the month.

Event Based Model: The cost is computed as the number of events multiplied by the price per event. An event can be the failure of a network component (e.g. transponder, router card) or an installation.

Idealist will consider a flat fee model based on the number of operation and repair teams that need to be maintained to achieve a desired repair time.

Repair Time (Service Level) Availability

There are two main availabilities considered:

• 8x5 - the operation and repair teams are available 8 hours a day, 5 days a week.

• 24x7 - the operation and repair teams are available 24 hours, 7 days a week.

Within the availability period, the Idealist operations model will consider a repair time of 5 hours.

Number of repair teams Model

The goal of this model is to obtain the minimum number of repair teams that needs to be maintained in order to achieve a given Minimum Mean Time To Repair (MMTTR). Each repair team has an associated monthly cost (salaries), which gives an estimate of the labour cost of repairing the communication network. The model assumes that each failure detected in the network should be repaired within the desired MMTTR. Note that the MMTTR will depend on the resiliency mechanisms of the network; in this sense, Idealist control plane solutions aim to maximize this value in order to reduce the number of repair teams.

The model takes as input the number of components subject to failure, the mean time between failures (MTBF), as well as the mentioned MMTTR.

Centralized repair model

This model is based on a centralized location of the repair teams, meaning that the failure location is not taken into account. In this model, independent of the location, a fixed Mean Time To Repair (MTTR) is assumed. The MTTR is measured from the moment the repair team starts working on the failure (not from the moment of the failure).

Optionally, location and travel time can be considered. In this case, the MTTR can be calculated as a travelling time (depending on the location) plus the actual repair time. The travelling time is the time the repair team needs to go from its base location to the failure location; the actual repair time is the time the team spends working on the failure.

Related to the MTTR is another important parameter: the Minimum MTTR (MMTTR). This is the time by which the failure must actually be solved. Based on this value, the repair team can delay the start of the repair process. This additional delay can help minimize the number of repair teams, as the same team can work on several nearby faults or failures (or operations) that are close in time.

To better understand the relationship between MTTR, MMTTR and the number of repair teams, Figure 28 below shows the different cases that can be defined:

[pic]

Figure 28: Relationship between MTTR, MMTTR and number of repair teams

In this case it is observed that the second failure is still completed within the minimum time to repair, even though its repair starts only when the repair of the first failure finishes. Therefore just one repair team is necessary to repair the two failures, thus saving one repair team.

[pic]

Figure 29: Impact of second failure

Also, each repair team has a certain availability. Previously we mentioned the two availability models, 8x5 and 24x7. In the 8x5 model the repair teams are assumed to be available only from Monday to Friday, eight hours a day; thus, in the best case, one repair team is needed. In the 24x7 case, in the best possible scenario, 4 repair teams are needed, each of them working around 40 hours a week.

The Mean Time Between Failures (MTBF) is another essential variable of the model. This variable provides information about the frequency at which failures occur.

Modelling of the number of repair teams

Based on the number of components subject to failure, the MTBF, MTTR, MMTTR and the availability model (8x5 or 24x7), failures are generated randomly. The random generation of failures is based on an exponential distribution with mean MTBF; this distribution describes the time between events of a Poisson process, i.e. a process in which events occur continuously and independently at a constant average rate. A minimal simulation sketch is given below.
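The following sketch illustrates, under simplifying assumptions of ours (24x7 availability, a greedy earliest-free-team assignment, and illustrative function and parameter names), how the minimum number of teams could be estimated by Monte-Carlo simulation:

```python
import random

def min_repair_teams(n_components, mtbf_h, mttr_h, mmttr_h, horizon_h=24 * 365):
    """Estimate the minimum number of 24x7 repair teams so that every failure,
    drawn from a Poisson process (exponential inter-failure times, mean MTBF),
    finishes its MTTR-long repair within MMTTR of the failure instant."""
    if mttr_h > mmttr_h:
        raise ValueError("no number of teams can meet MMTTR < MTTR")
    failures = []
    for _ in range(n_components):
        t = random.expovariate(1.0 / mtbf_h)
        while t < horizon_h:
            failures.append(t)
            t += random.expovariate(1.0 / mtbf_h)
    failures.sort()
    teams = 1
    while True:
        free_at = [0.0] * teams              # instant at which each team becomes idle
        feasible = True
        for t in failures:
            team = min(range(teams), key=free_at.__getitem__)
            start = max(t, free_at[team])    # a team may delay the start of a repair
            if start + mttr_h > t + mmttr_h:
                feasible = False
                break
            free_at[team] = start + mttr_h
        if feasible:
            return teams
        teams += 1
```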

5 OPEX reduction by stock improvement with Sliceable Bandwidth Variable Transponders

1 Rationale of the study

Part of the cost of maintenance and repair of the network is related to keeping a stock of spare parts. Whenever there is a failure in a network element, there may be damaged parts that need to be replaced. In order to be able to repair the network elements, a stock of spare parts for replacement needs to be maintained. Such stock can be maintained either by the network operator or a third party supplier. In any case, maintaining such stock is mandatory and translates into a yearly cost.

In the case of optical transport networks, spare transponders need to be stocked. Following the current model, transponders of different rates are needed, and thus a number of spares of each rate needs to be stored. However, with the newly proposed Sliceable Bandwidth Variable Transponders, it is possible to reduce both the number and the variety of transponders. This study aims to analyze whether equipping a network with Sliceable Bandwidth Variable Transponders instead of fixed rate transponders of multiple rates reduces the stock maintenance needs.

2 Methodology

This study is based on a centralized stock model. It is assumed that a central warehouse stores all the spare transponders and that, in the event of a damaged part, the replacement transponder is shipped from this central location.

The study has been performed for the Spain Reference Network shown in section 4.4. The number of transponders needed has first been obtained for two cases: one in which different kinds of transponders are used (e.g. 40 Gbps and 100 Gbps), and a second case with 400 Gbps sliceable BVTs. Then, an MTBF (Mean Time Between Failures) has been selected; this is the mean time between two failures of a transponder, and it has been assumed that all transponders have the same MTBF. A set of simulations has then been run in which the transponders fail according to an exponential distribution with mean MTBF. Every time an element fails, it is replaced from the stock and a new element is requested from the factory. The MDT (Mean Delivery Time) is the time to receive a new transponder from the vendor; 3 months has been assumed in this study. Finally, the stock level that needs to be maintained to guarantee a certain availability has been obtained.

The steps of the study are summarized below:

1. Number of transponders needed is obtained

2. Failures in every transponder are distributed randomly in time based on an exponential distribution with mean MTBF.

3. At the same moment that failure happens, a spare transponder is taken from the stock and a replacement request to factory is made.

4. Stock accounting:

a. When a failure happens, one is added to the counter of spares in use for the given transponder type.

b. One is subtracted from the counter when the replacement happens (a new transponder arrives from the factory to the stock).

5. From the previous steps, the peak value of the counter, i.e. the minimum stock that needs to be maintained at the warehouse, is obtained.

6. Once the maximum stock is known, steps 2-4 are repeated, reducing the maximum stock by one each time, with one remark:

a. When a failure happens, if the stock is zero, one is added to the number of failed cases; otherwise, one is added to the number of successful cases.

7. The percentage of successful cases is obtained for each stock value, down to a stock of one.

Finally, the study has been repeated with several sets of random variables in order to achieve higher accuracy.
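A compact rendering of steps 1-7 is sketched below; the function names, the single availability target and the simplified event bookkeeping are assumptions of ours:

```python
import random

def stock_for_availability(n_tsp, mtbf_h, mdt_h, target, horizon_h, runs=50):
    """Find the smallest stock for which the share of failures served directly
    from stock reaches `target` (e.g. 0.99), averaged over several runs."""
    def availability(stock):
        events = []                      # (+1: failure consumes a spare, -1: replacement arrives)
        for _ in range(n_tsp):
            t = random.expovariate(1.0 / mtbf_h)
            while t < horizon_h:
                events.append((t, +1))
                events.append((t + mdt_h, -1))
                t += random.expovariate(1.0 / mtbf_h)
        events.sort()
        out = served = failed = 0        # `out` = spares currently awaiting replacement
        for _, kind in events:
            if kind == +1:
                if out < stock:
                    served += 1
                else:
                    failed += 1          # stock exhausted: failed case
                out += 1
            else:
                out -= 1
        return served / max(1, served + failed)

    stock = 1
    while sum(availability(stock) for _ in range(runs)) / runs < target:
        stock += 1
    return stock
```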

3 Results

The starting point of the study is the set of results obtained in 5.3.2 for 40G and 100G fixed rate transponders and 400G SBVTs in the Spain Reference Network. In this first study, the traffic of the first year has been used to obtain the total number of transponders needed, accounting for 144 40G transponders and 16 100G transponders in the fixed case, and 26 400G SBVTs in the other case.

When there is a mix of fixed rate transponders, in this case study 40G and 100G, two types of transponders need to be kept in stock. In particular, the results, presented in Figure 30, show that, to achieve 99% availability, 10 40G transponders (7% of the total 40G transponders) and 2 100G transponders (25% of the 100G transponders) are needed. In contrast, a total stock of just 3 SBVTs is needed.

If the requirement is more restrictive, for example 99.99% availability, the number of 40G transponders to be stocked rises to 17, and the number of 100G transponders in the warehouse to 5. In the sliceable case, a total stock of 5 SBVTs is needed.

[pic]

Figure 30: Stock of 40G transponders

[pic]

Figure 31: Stock of 100G transponders

[pic]

Figure 32: Stock of 400G SBVT transponders

The results are summarized in Table 12.

| |99% Avail |99.99% Avail |

|Fixed case |10 40G, 2 100G |17 40G, 5 100G |

|SBVT case |3 400G SBVT |5 400G SBVT |

Table 12: Stock for 99% and 99.99% availability

In section 5.3.2, the cost model estimates a relative price of 6 for the 40G transponders, 15 for the 100G transponders and between 22 and 35 for the 400G SBVT. Based on those numbers, in the best case, for 99.99% availability, the relative cost of the stock in the fixed case is 177 (17 x 6 + 5 x 15), while in the SBVT case it is 110 (5 x 22), that is, a 37% cost reduction. In the worst case, the prices would be similar. The results shown in this preliminary study are obtained with the traffic for year 1 of the SBVT study. Next steps will consider different traffic mixes, as well as more transponder types.

6 Energy model for high data rate transponders

Energy efficiency and carbon management are becoming increasingly important for ICT companies as they focus on driving down their environmental footprint. It is expected that the next generation of optical networks will bring about greater energy efficiency than legacy networks. With this intent in mind, within the Idealist project we want to investigate the energy impact of the next generation of high data-rate networks and study the possible improvement of the energy consumption due to the introduction of both elastic and sliceable transponders.

The present deliverable only presents the energy model associated with rate-adaptive transponders (working on a fixed 50 GHz grid), because we assume that the capability of reducing the spectral occupancy of the optical signal does not have an impact on its power consumption. The power models for sliceable transponders will be the subject of further investigation during the project.

The transponder architectures taken into account here are based only on coherent transmission, because it has been demonstrated to be the technology of choice, satisfying both the long reach and the high data rate requirements [23]. All considered transponders exploit the two polarization modes for transporting the signal; for this reason the signal is said to be dual polarisation (DP), a prefix used before the declaration of the modulation format employed for encoding information.

1 Energy model for fixed data rate devices

In this section we present the energy model for four high-rate transponders. Two of them are already commercially available in current optical networks and transport up to 40 and 100 Gb/s; they are called TSP40 and TSP100 in the following. The other two transponders are possible solutions proposed by vendors for the next generation of optical networks to cope with the increase in traffic, and will transport up to 400 and 1000 Gb/s; they are called TSP400 and TSP1000 hereafter. In this project, transmission is supposed to be bidirectional, such that all transponders are composed of an emitter and a receiver supporting exactly the same capacity.

Figure 33 and Figure 34 depict two types of possible transponder realization that are considered in the Idealist project and are used for building up the power model in the following sections. Figure 33 presents the muxponder, where various client cards are connected to a transponder by black-and-white connections. The capacity of such client cards is lower than that of the transponder, and client cards with different capacities can be connected to it. For simplicity, in the description presented we consider that all the line cards have the same capacity: 10 Gb/s when 40 and 100 Gb/s transponders are considered, and 100 Gb/s for 400 Gb/s and 1 Tb/s transponders. The architecture of these transponders has to include a framer/deframer so as to distribute the incoming data to the modulators to be coded and sent.

In Figure 34 the transponder is assumed to be directly plugged into the router. This architecture is called uplink, and a backplane (which can be an OTN or IP router) guarantees the framer/deframer operations. In this structure the transponder does not include the client side part, comprising the client cards and the framer/deframer (cf. Table 13).

In the rest of this document we only describe the muxponder-based transponder, remembering that the uplink figures are easily deduced by keeping the emitter/receiver parts unchanged and removing the client side section from the consumption assessment.

|[pic] |[pic] |

|Figure 33 Architecture of a transponder realized with a muxponder, |Figure 34 Architecture of a transponder directly connected to an |

|where N client cards are plugged on the transponder. |electric backplane. This configuration is called uplink. |

40Gb/s transponders

The 40 Gb/s transponder is connected to N line cards (XFP) working at any capacity from 2.5 to 40 Gb/s. For simplicity we have assumed all the line cards to be equal and working at 10 Gb/s, hence N = 4. On the emitter side, a framer aggregates the data from the line cards and adds the appropriate forward error correction (FEC) overhead, providing two electrical signals at 20 Gb/s, which drive the modulators to create a BPSK optical signal per polarization. The baud rate is the speed at which symbols are transmitted; a DP-BPSK signal transmits one bit per symbol per polarization, giving for a 40 Gb/s signal a baud rate (R) of 20 Gbaud.
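The 20 Gbaud figure follows directly from the number of bits carried per symbol across the two polarizations (FEC overhead set aside for simplicity):

$$R = \frac{B}{b \cdot p} = \frac{40\,\mathrm{Gb/s}}{1\,\mathrm{bit/symbol/pol.} \times 2\,\mathrm{pol.}} = 20\,\mathrm{GBd}$$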

Figure 35 shows the scheme of a possible DP-BPSK transponder, with the emitter (upper part of the figure) and the receiver (bottom part). During propagation the signal is affected by phase noise and the symbols shift along the unit circle. For this reason, four photodiodes are required at the receiver so as to detect the BPSK points in the two-dimensional space (I and Q coordinates) on each polarization. A local oscillator is required to enable coherent reception at the desired wavelength. After the photodiodes, the electrically converted signals are sampled and digitized by an Analog to Digital Converter (ADC). Digital signal processing (DSP) then allows the reconstruction of the transported information (by eliminating the effects of chromatic dispersion, CD, polarization mode dispersion, PMD, and non-linear effects). A FEC decoder is used for correcting the incurred errors. Finally, the signal is deframed and sent to the respective client cards.

In Idealist we have assumed a reach of 2500 km for a 40 Gb/s signal. This is possible by considering a simple hard-decision FEC, whose power consumption values are given in Table 13 (left side).

|[pic] |[pic] |

|Figure 35: Architecture of a 40 Gb/s DP-BPSK transponder. |Figure 36: Architecture of a 100 Gb/s DP-QPSK transponder. |

100 Gb/s transponders

The architecture of a 100 Gb/s transponder is very similar to that of a 40 Gb/s one. In this case the number of client cards is N = 10, since we assume 10 Gb/s line cards. On the emitter side, the framer now has to provide four electrical signals at 25 Gb/s (R = 25 GBd, each symbol carrying two bits per polarization), which drive the modulators creating a QPSK optical signal per polarization. For detecting the two-dimensional constellation on the two polarizations, four photodiodes are required on the receiver side (I and Q coordinates). As for the 40 Gb/s transponder, a local oscillator, ADC, DSP and FEC are present in the receiver. The deframer sends the data onto the 10 respective line cards. Figure 36 shows a schematic of the 100 Gb/s DP-QPSK transponder.

In Idealist we have assumed a 2000 km reach for a 100 Gb/s signal. If the same FEC were used for the 40 and 100 Gb/s transponders, the reach ratio between 40 and 100 Gb/s would be 2.5 (linear factor) [24]. The improvement of the 100 Gb/s reach is made possible by introducing a soft-decision FEC, allowing distances up to 2000 km to be covered [25]. The soft-decision FEC consumes more energy than the hard-decision FEC, as shown in Table 13, where the power consumption of the 100 Gb/s transponder is estimated.

The power difference of the framer/deframer for the 40 and 100 Gb/s transponders is due to their different size: the more capacity is supported, the more operations the device has to guarantee, resulting in a higher power consumption. The power consumption of the various devices is obtained by considering the scheme presented in [26].

Table 13: Consumption values of 40 Gb/s and 100 Gb/s transponders based on coherent technologies and able to reach distances aimed by the IDEALIST project

| |Component |40 Gb/s transponder |100 Gb/s transponder |

| | |Unit |Power consumption (W) |Unit |Power consumption (W) |

|Client side |Client card |4 |3.5 |10 |3.5 |

| |(@10 Gb/s) | | | | |

| |Framer/Deframer |1 |40 |1 |50 |

|FEC |FEC |1 |4 |1 |27 |

|E/O modulation |Drivers |2 |2 |4 |2 |

| |Laser |1 |6.6 |1 |6.6 |

|O/E receiver |Local oscillator |1 |6.6 |1 |6.6 |

| |Photodiode +TIA |4 |0.4 |4 |0.4 |

| |ADC |4 |2 |4 |2 |

| |DSP |1 |60 |1 |60 |

| |Management power |20% total power |20% total power |

| |Total power (W) |173.8 |243.4 |

Less power-hungry transponders can be provided if:

a) Only one laser is used instead of two different lasers for the emitter and the local oscillator. By doing this, we are obliged to have the same wavelength at the emission and reception sides. This means that if the transponder is used for regeneration purposes, signal re-colouring will not be possible. This reduction of functionality makes the 40 Gb/s and 100 Gb/s transponders 3.7% and 2.7%, respectively, less power hungry.

b) No soft-decision FEC is considered. In this case the FEC part of the 100 Gb/s transponder will only consume 7 W [26], but the optical reach will decrease from 2000 km to 1000 km.

c) The amount of operations performed by the ADC, DSP and FEC is reduced. With the model presented in Table 13 the ADC+DSP+FEC blocks together consume 95 W; simpler blocks at 30 W can be provided (reducing the chromatic dispersion and polarization mode dispersion compensation blocks and using just hard-decision FEC), but the maximum reach ensured will then only be up to 500 km.

Taking points b) and c) into account, we have deduced intermediate power consumptions associated with intermediate reach values, as shown in Figure 37.

[pic]

Figure 37: Power consumption values for short reach 100 Gb/s transponders.

Higher modulation formats

To cope with the increase of data-rates, transponders using more complex modulation formats are required. For such formats, symbols are phase and amplitude modulated; this is possible thanks to the use of digital-to-analogue converters (DACs) placed before the modulators. In this manner it is possible to switch from QPSK (also named 4QAM) to 16, 32 and 64QAM. The choice of the symbol rate and of the modulation format determines the physical performance of the optical signal and its total capacity. In the Idealist project we have supposed that 400 Gb/s optical signals have the following features: 75 GHz of spectral occupancy and a reach of around 500 km without regeneration. A possible choice satisfying such requirements uses two subcarriers, each transporting 200 Gb/s with a 16QAM modulation format. The introduction of soft-decision FEC is mandatory for high-rate transponders, requiring a signal overhead of around 27%. Hence the signal baud-rate is 32 GBd and 37.5 GHz is the minimum channel slot associated with each subcarrier. With such hypotheses, and considering the 100 Gb/s DP-QPSK signal as reference, the transmission reach reduces by a linear factor of 5; hence 400 Gb/s will cover 400 km without regeneration [24]. This reach is slightly shorter than the one aimed at in the project, and it is also optimistic because it does not take into account the increased filtering penalties due to the reduction of the channel bandwidth (from 50 to 37.5 GHz) while using the same baud-rate.
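As a consistency check on the 32 GBd figure, taking one 200 Gb/s DP-16QAM subcarrier (4 bits per symbol per polarization) and the ~27% FEC overhead quoted above:

$$R = \frac{200\,\mathrm{Gb/s} \times 1.27}{4\,\mathrm{bit/symbol/pol.} \times 2\,\mathrm{pol.}} \approx 31.8 \approx 32\,\mathrm{GBd}$$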

With respect to the 1 Tb/s transponders, in Idealist the target channel occupancy is 175 GHz and the optical reach has to be of the same order as for 400 Gb/s. To increase the channel data-rate, two options are possible: the first relies on the use of more complex modulation formats whilst keeping a constant symbol-rate; the second consists in increasing the symbol-rate whilst keeping the same modulation format. With the first option, since the optical reach is highly dependent on the modulation format, the reach of 1 Tb/s transmissions would be heavily impacted and the 400 km reach would not be achieved. Conversely, by increasing the symbol-rate the reach is not impacted in the same manner, provided the filtering functions are larger than the signal width (a demonstration of the reach sensitivity to the modulation format and symbol-rate has been shown in [28] for signals up to 100 Gb/s).

Keeping this in mind, the transponder architecture realizing 1 Tb/s transmission is based on four subcarriers working at 250 Gb/s, obtained with a 16QAM modulated signal whose baud-rate is increased to 40 GBd and whose channel spacing is increased to 43 GHz; the larger spectral slot ensures the same filtering penalties as those observed for 400 Gb/s channels.

Due to space constraints in high capacity transponders (caused by the integration of multiple carriers on one card), some devices have been integrated together, as is the case for the FEC, ADC, DSP and DAC; the device obtained by their integration is called ‘ADD’ in Table 14. To reduce the number of line cards connected to the transponder, 100 Gb/s client cards have been considered, whose power consumption is estimated to be 24 W [27]. To estimate the power consumption of the framer/deframer, we have supposed that it scales linearly with the amount of processed data; the values for the 400 Gb/s and 1 Tb/s transponders are deduced from those already available for the 40 Gb/s and 100 Gb/s transponders. We assume that the ADD power consumption depends on the amount of data transported by each subcarrier, scaling with the amount of data in the same manner as the framer.

In Table 14, the number of devices required for each transponder is indicated as a product: the first factor denotes the number of subcarriers used to carry the required data-rate, while the second factor is the number of devices necessary per subcarrier. With all these assumptions in mind, we have estimated the power consumption of the high rate transponders, reported in Table 14.

Table 14: Consumption values of 400 Gb/s and 1000 Gb/s transponders based on coherent technologies and able to reach distances aimed by the Idealist project

| |Component |400 Gb/s transponder |1 Tb/s transponder |

| | |Unit |Power consumption (W) |Unit |Power consumption (W) |

|Client side |Client card |4 |24 |10 |24 |

| |(@100 Gb/s) | | | | |

| |Framer/Deframer |1 |100 |1 |200 |

|E/O modulation |Drivers |2x4 |2 |4x4 |2 |

| |Laser |2x1 |6.6 |4x1 |6.6 |

|O/E receiver |Local oscillator |2x1 |6.6 |4x1 |6.6 |

| |Photodiode +TIA |2x4 |0.4 |4x4 |0.4 |

| |ADD |2 |80 |4 |90 |

| |Management power |20% total power |20% total power |

| |Total power (W) |481.9 |1061.4 |

In Figure 38 we have depicted the power consumption and efficiency associated with the different data-rates (the values for 200 Gb/s transponders have been extracted from Table 14 by considering only one subcarrier). From Figure 38 we notice that the energy efficiency (energy per bit) improves only weakly with the increase of the rate for bitrates higher than 200 Gb/s. This is because higher capacity transponders are obtained by increasing the number of subcarriers, and the energy improvement per subcarrier is only due to the integration of some components and limited functionality.

[pic]

Figure 38: Power consumption and power efficiency relative to the different datarate transponders.

Table 15 summarizes the power consumption of the muxponder and uplink (no client side consumption is considered for the uplink) for the different data-rates.

Table 15: Power consumption relative to transponders realized by a muxponder or an uplink architecture.

| |40Gb/s |100Gb/s |400Gb/s |1Tb/s |

|Muxponder |173.8 |243.4 |481.9 |1061.4 |

|Uplink |119.8 |149.4 |285.9 |621.4 |

2 Energy model for elastic transponders

The increase of the traffic transported in the optical network will produce an increase in energy consumption that will not be sustainable, either in terms of costs or in terms of energy availability. In contrast to traffic, which often exhibits fluctuations, optical networks have tended to be quasi-static: all network elements are fully powered for the peak traffic (including over-provisioning), without considering the actual transported capacity. The first step for improving the energy efficiency of such a network is evidently to adapt the number, or the power consumption, of the optoelectronic devices as a function of the traffic actually carried. Rate adaptive devices allow a dynamic reconfiguration of the modulation format and/or symbol rate of the optical signal, and this property can be used to enhance the energy efficiency of future optical networks.

It has been demonstrated in [28] that format-flexible transponders have the same power consumption independent of the chosen modulation format. The main advantage of format-flexible transponders lies in the wide range of transmission distances, hence in the possibility of skipping intermediate regenerators. In contrast, because in Idealist we consider only uncompensated transmission links, symbol-rate adaptation does not offer many opportunities to trade data-rate for reach [28]. However, symbol-rate adaptation offers much better energy saving opportunities than format-adaptive transponders, because it has been demonstrated that the power consumption of optoelectronic devices strongly depends on the symbol-rate [29].

In the next sections we present the energy model for symbol-rate adaptive transponders. The energy models presented are aligned with the modulation format schemes of the 40, 100, 400 and 1000 Gb/s transponders presented in Table 13 and Table 14. Lower symbol rates are obtained by scaling down the clock reference of the electronic devices (which are required to have tunable clock references). The proposed model can be applied to any data-rate. In order to obtain any payload once the modulation format is fixed, we assume that the symbol-rate of each subcarrier is scaled down so as to obtain the desired transponder capacity.

Power scaling as a function of the symbol-rate

In a transponder, not all devices exhibit a power consumption that depends on the rate of the transported data; only the ones providing electronic processing do so. Moreover, their power consumption is composed of two parts: one static (Pstatic) and one dynamic (Pdynamic).

In many systems, the static power has been shown to be between 30% and 50% of the total power of the device. The static power maintains the logic levels in the device and also covers the various leakage currents (increasingly important with the decrease of feature sizes and the lowering of threshold levels). The dynamic power consumption depends on the frequency at which the device operates; it is directly proportional to the square of the voltage and to the clock frequency.
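In symbols, this is the standard CMOS switching-power relation (the effective switched capacitance C_eff is our notation, and is presumably what equation (3) below encodes):

$$P = P_{\mathrm{static}} + P_{\mathrm{dynamic}}, \qquad P_{\mathrm{dynamic}} = C_{\mathrm{eff}} \cdot V^{2} \cdot f_{\mathrm{clock}}$$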

If only the frequency of the device is tuned, from equation (3) we observe that the dynamic power (and hence the total power) scales linearly [29]. If the voltage of the device can be adapted together with the frequency, the relationship between power and frequency becomes cubic [30].

In coherent-based transponders, a large part of the power consumption is due to data processing by ASICs and/or FPGAs, notably the framer, FEC codec and DSP blocks. In 40 Gb/s transceivers such devices consume 73% of the whole power, while for 100 Gb/s and higher capacity transponders this ratio is about 90%.

Linear power scaling model

As said before, the dynamic power scales linearly with the clock frequency, as demonstrated in [29], where the authors presented a prototype of a real-time bandwidth-variable coherent muxponder aggregating multiple 10GigE clients onto a symbol-rate-variable DP-QPSK optical signal. Thanks to this technology, a linear dependence of the device power consumption on the actually carried traffic has been demonstrated. In particular, it has been demonstrated that the power consumption of the DSP unit versus the total bit-rate has the following relationship:

[pic]

with R being the baud-rate of the device. We assume, in a first approximation, that this linear dependency is the same whatever the electronic device. Hence we rewrite Table 13 and Table 14 and deduce the power dependency of the rate-adaptive transponders when linear power scaling is assumed and the static power is taken to be around 50% of the whole transponder power.

Table 16: Linear power dependency for the different high rate transponders (muxponder and uplink) considered by the Idealist project

| |Linear power consumption as a function of the data-rate (W) |

| |40 Gb/s |100 Gb/s |400 Gb/s |1 Tb/s |

|Uplink |70.56+1.37*R |89.16+1.35*R |150+3.00*R |325.44+2.51*R |
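For convenience, the Table 16 uplink entries can be evaluated programmatically; the dictionary layout and function name are ours, and R follows the table's fit (defined in the text as the device baud-rate):

```python
# (P_static_W, slope) pairs taken from the uplink row of Table 16.
UPLINK_MODEL = {40: (70.56, 1.37), 100: (89.16, 1.35),
                400: (150.00, 3.00), 1000: (325.44, 2.51)}

def uplink_power_w(nominal_gbps, r):
    """Evaluate the linear rate-adaptive model P(R) = P_static + slope * R."""
    p_static, slope = UPLINK_MODEL[nominal_gbps]
    return p_static + slope * r
```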

Cubic power scaling model

From equation (3) we notice that a better power reduction can be obtained if the voltage is scaled down jointly with the frequency. At low frequencies the driving voltage can be lowered; such adaptive schemes, known as Dynamic Voltage and Frequency Scaling (DVFS), yield a steeper dependence on the clock rate, typically proportional to f³ [30].

We assume that the power savings computed on a single component are proportional to the savings observed on the whole device. Following this hypothesis, we compute the savings for the component and then translate them to the entire block.

Unlike the frequency, for which no limitation on the minimal value is assumed, the driving voltage is restricted to a range of values ensuring correct device operation: V ∈ [Vmin, Vmax], with Vmin ranging from Vmax/3 to 2·Vmax/3. Too low a voltage means that correct biasing of the electronic components is no longer ensured.

In the device data-sheets it is possible to find the power consumption of a given component. Such power, called Pdevice, is obtained for a specific clock rate (FclockMax) and driving voltage (VdrMax). The power savings are computed with respect to Pdyn = Xdyn Pdevice.

[pic] (3)

With

[pic]

We assume that the minimal (maximal) possible frequency corresponds to the minimal (resp. maximal) voltage; hence we compute the linear relationship between fclock and V(fclock). Figure 39 represents the linear scaling between the clock frequency and the driving voltage, while equation (4) expresses this relationship and equation (5) provides the scaling factor.

[pic] (4)

[pic] (5)

[pic]

Figure 39: Linear relationship between the driving voltage and clock frequency.

The total dynamic power consumed by a device as a function of the clock frequency used is obtained from equation (6):

[pic] (6)

During the Idealist project we will select the technologies to be used for the elastic transponders, define the range of voltages that allows correct operation of the devices, and derive the power consumption as a function of the data-rate.
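A small sketch contrasting the two scaling regimes described above; the linear V(f) mapping follows Figure 39, while the fraction-based interface and the default Vmin = Vmax/2 (which lies in the stated [Vmax/3, 2·Vmax/3] range) are our assumptions:

```python
def dynamic_power(p_dyn_max, f_frac, dvfs=False, v_min_frac=0.5):
    """Scale the dynamic power with the clock frequency.
    f_frac = f_clock / f_clock_max, in (0, 1].
    Without DVFS, P_dyn scales linearly with f (the equation-(3) regime);
    with DVFS the voltage follows a linear V(f) mapping as in Figure 39,
    adding a quadratic factor in V and yielding the ~f^3 behaviour."""
    if not dvfs:
        return p_dyn_max * f_frac
    v_frac = v_min_frac + (1.0 - v_min_frac) * f_frac  # V(f), normalized to Vmax
    return p_dyn_max * f_frac * v_frac ** 2
```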

Network planning

1 State of the Art

This section gives a brief overview of several algorithms applicable to network planning in Flexgrid networks. Since those algorithms need to be implemented in planning tools, we also review some commercially available tools.

1 Clustering of nodes for hierarchical traffic grooming

Traffic grooming helps to minimize the overall cost and to justify the (long term) reservation of a lightpath by transporting traffic aggregates of smaller granularity packet flows through a wavelength switched core network. Among other techniques, node clustering has been proposed as a means for efficient traffic grooming (e.g. [31]-[33]), since it can provide an intuitive node grouping exploiting topological conditions to reduce long paths and share resources, improving utilization and overall efficiency. Recently, a clustering methodology was implemented in [34] to maximize the efficiency of flexgrid technology over core networks aggregating traffic from a large number of aggregation nodes. According to [34], the network is partitioned into metro areas, i.e. locations are grouped into a number of metro areas aiming at the maximization of the aggregation traffic conveyed by the optical core network utilizing flexgrid technology.

In other works (e.g. [35], [36]) the partitioning of network nodes into different groups or clusters has been assumed as an inherent feature of multi-domain networks. Thus, in both [35] and [36] the network topology is fixed, determined by administrative domain criteria, and optimization is achieved through appropriate grooming, wavelength allocation and routing of traffic over the existing network infrastructure. In [32] and [33], however, the selection of network partitions/clusters is part of the problem, in order to improve overall network efficiency. In [33], as in [36], the authors exploit multi-granular optical cross-connects (MGOXC) capable of grooming wavelengths at the waveband level. Clustering in this case addresses the problem of heterogeneous networks, where node clustering is based on node functionality (waveband grooming capability or not) and on the number of hops required to reach an MGOXC node; no optimization in cluster formation is performed in their work. Finally, the work in [32] proposes a method to partition the network into clusters with a single gateway node (named hub), where connection termination and traffic grooming are performed. However, the work in [32] assumes a star topology of clusters, a connection-oriented traffic model and traffic aggregation performed at Layer 2.

In [31], CANON has been proposed, an architecture that decomposes a core/metro network into a number of clusters which can be viewed as a superposition of ring topology networks. Two node classes are introduced in CANON: the Metro-Core Edge Nodes (MENs) and the Core Transit Node (CTN), the latter serving as a gateway between clusters. All MENs with traffic towards the same destination cluster share the same wavelength(s) in a TDMA fashion. The collision-free launching of slots in the CANON ring topology network is scheduled under the centralized arbitration of the CTN, by means of a MAC protocol. This operation results in statistical multiplexing directly at the optical layer, leading to a smoothing of the traffic profile [37]. In [31], unlike [32], no electronic processing and frame (de)aggregation is performed at CTNs.

2 Off-line RSA models

To properly analyse, design, plan and operate Flexgrid networks, efficient methods are required for the Routing and Spectrum Allocation (RSA) problem. Specifically, in the absence of spectrum converters, the allocated spectral resources must be the same along all the links in the route (the continuity constraint) and contiguous in the spectrum (the contiguity constraint).

Due to the spectrum contiguity constraint, the RWA problem formulations developed for Wavelength Division Multiplexing (WDM) networks are not applicable to RSA in Flexgrid optical networks and need to be adapted to include that constraint. Several works in the literature present Integer Linear Programming (ILP) formulations of RSA [38]-[40]. In [38] the authors address the planning problem of a flexgrid optical network, where a traffic matrix with requested bandwidth demands is given. To solve the problem, an ILP formulation that minimizes the spectrum used to serve the traffic matrix is proposed. The RSA formulation cannot be solved in practical times and, therefore, the authors present a decomposition method that breaks the formulation into two sub-problems: a demand routing sub-problem and a spectrum allocation sub-problem. Since the two sub-problems are solved sequentially, global optimality cannot be guaranteed. In [39] the authors study the RSA problem with a different ILP formulation but the same objective as in [38]. Finally, in [40] the authors formulate the RSA problem and propose an effective heuristic algorithm to obtain near-optimal solutions.

Given that the contiguity constraint adds huge complexity to the RSA problem, the concept of channels was introduced in [41] for the representation of contiguous spectral resources. The use of channels removes the spectrum contiguity constraint from the mathematical formulations. Channels can be grouped as a function of the number of slots, e.g. the set of channels C(2) = {{1,1,0,0,0,0,0,0}, {0,1,1,0,0,0,0,0}, {0,0,1,1,0,0,0,0}, … {0,0,0,0,0,0,1,1}} includes every channel using 2 contiguous slots out of 8, where each position is 1 if the channel uses that slot. The size of the complete set of channels C that needs to be defined is ∑|C(·)|≤|S|·n, where S represents the set of frequency slots and n is the number of different amounts of contiguous slots that connections can request; e.g. if connections can request either 1, 2, 4 or 16 contiguous slots, n=4. A sketch of the channel-set construction is given below.
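A minimal sketch of the channel-set construction (function name ours):

```python
def channels(n_slots, widths):
    """Enumerate the channel set C: for each requested width, every run of
    contiguous slots, encoded as 0/1 masks as in the C(2) example above."""
    c = []
    for w in widths:
        for start in range(n_slots - w + 1):
            c.append(tuple(int(start <= i < start + w) for i in range(n_slots)))
    return c

# channels(8, [2]) reproduces the 7 masks of C(2); len(channels(8, [1, 2, 4]))
# is 20, within the |S|*n bound (here 8*3 = 24).
```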

Using the pre-computed set of channels, the authors in [41] addressed the off-line RSA problem, in which enough spectrum needs to be allocated for each demand of a given traffic matrix. To this end, they presented novel ILP formulations of RSA based on the assignment of channels. The evaluation results revealed that the proposed approach solves the RSA problem much more efficiently than previously proposed ILP-based methods and can be applied even to realistic problem instances, in contrast to previous ILP formulations.

3 Dynamic RSA

RSA algorithms are also used to dynamically provision connections upon the arrival of requests (dynamic RSA algorithms). A novel approach to the RSA problem was proposed in [42], addressing the routing and the spectrum allocation problems separately. The authors included the spectrum availability in an adapted Dijkstra shortest path routing algorithm. Regarding the spectrum allocation, any heuristic similar to those used for wavelength assignment (first fit, best fit, etc.) can be applied.

The benefits of the above RSA algorithm are: 1) only one graph G(N, E) containing the availability of every frequency slot needs to be stored and maintained; 2) the routing algorithm runs once for each connection request; 3) the complexity of the proposed spectrum-availability extension to the routing algorithm is negligible; and 4) the spectrum assignment is performed after the shortest route is found, adding the flexibility to use any heuristic. A sketch of this scheme follows.
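The sketch below is a simplified reading of the separated routing-then-allocation approach of [42], not its exact algorithm; the graph encoding, the distance-only pruning rule and all names are our assumptions:

```python
import heapq

def first_fit(avail, width):
    """Return the starting index of the first run of `width` free slots, or None."""
    run = 0
    for i, free in enumerate(avail):
        run = run + 1 if free else 0
        if run == width:
            return i - width + 1
    return None

def route_and_allocate(graph, n_total_slots, src, dst, width):
    """Dijkstra over a single graph whose edges carry per-slot availability;
    edges that leave no end-to-end run of `width` contiguous free slots are
    pruned, and first-fit spectrum assignment is performed once the shortest
    feasible route is found.
    graph: {node: {neighbour: (length_km, availability_tuple)}}"""
    start_avail = (True,) * n_total_slots
    heap = [(0.0, src, [src], start_avail)]
    best = {}
    while heap:
        dist, node, path, avail = heapq.heappop(heap)
        if node == dst:
            return path, first_fit(avail, width)
        if best.get(node, float("inf")) <= dist:   # simplification: prune on distance only
            continue
        best[node] = dist
        for nxt, (length, edge_avail) in graph.get(node, {}).items():
            if nxt in path:
                continue
            new_avail = tuple(a and b for a, b in zip(avail, edge_avail))
            if first_fit(new_avail, width) is not None:
                heapq.heappush(heap, (dist + length, nxt, path + [nxt], new_avail))
    return None   # blocked: no feasible route
```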

4 Spectrum Reallocation

The problem of fragmentation has been studied in the literature in the context of WSON, and two main strategies have been proposed to reallocate (including rerouting and wavelength reassignment) already established paths: periodic defragmentation and path-triggered defragmentation. The former strategy focuses on minimizing fragmentation itself all over the network at given periods of time, whereas the latter focuses on making enough room for a given connection request when it cannot be established with the current resource allocation. Periodic defragmentation, requiring long computation times as a result of the amount of data to be processed, is essentially performed during low activity periods, e.g. at night. Conversely, path-triggered defragmentation, involving only a limited set of already established connections, can provide solutions in shorter times and can be run in real time. It is worth noting that incoming connection requests or tear-downs arriving while network re-optimization or defragmentation operations are running must be held back until those operations terminate.

A novel approach, called SPRESSO (SPectrum REallocation) [42], was developed in the context of STRONGEST; it performs path-triggered spectrum defragmentation in Flexgrid networks whenever not enough resources can be found for a connection request. Every link in the routes of a set of k-shortest routes connecting the source and destination nodes is checked to determine whether the number of available frequency slots is equal to or higher than that required by the incoming connection request. If enough frequency slots are available on one of the shortest routes, the SPRESSO mechanism is triggered to find a set of reallocations of already established paths so as to make enough room for the connection request on the selected route (newP). Otherwise, the connection request is blocked.

It is worth highlighting that a path can be hitlessly reallocated by shifting its Central Frequency (CF) using the push-pull technique described in [43].

5 Elastic Spectrum Allocation for Variable Traffic

The introduction of flexgrid opens new functionalities to be developed at the optical layer, such as the adaptation of lightpaths through appropriate Spectrum Allocation (SA) schemes in response to bandwidth variations; in particular, expansion/reduction of the spectrum when the required bit rate of a demand increases/decreases, respectively.

The authors in [44] defined two schemes for variable SA in flexgrid, namely Semi-Elastic SA and Elastic SA, which place different restrictions on the assigned CF and allocated spectrum. In the semi-elastic SA the assigned CF is fixed but the spectrum width may vary, whereas in the elastic SA both the assigned CF and the allocated spectrum are flexibly selected at each time interval. Note that the elastic SA can be implemented by first performing CF shifting (e.g. using the push-pull technique [43]) and then the semi-elastic SA. They formulated an off-line Multi-Hour Routing and Spectrum Allocation (MH-RSA) optimization problem and showed that the elastic SA scheme with expansion/reduction minimizes the amount of un-served bit-rate, providing roughly double the gain of the semi-elastic SA scheme.
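
The two schemes can be contrasted with the following simplified sketch, where slot indices stand in for frequencies and the helper functions are assumed:

    def semi_elastic_sa(cf_slot, width, is_free):
        # Semi-elastic SA: the central frequency is fixed, only the
        # width varies; grow/shrink symmetrically around `cf_slot`.
        # is_free(lo, hi): True if slots lo..hi-1 are available.
        half = (width + 1) // 2
        lo, hi = cf_slot - half, cf_slot + half
        return (lo, hi) if lo >= 0 and is_free(lo, hi) else None

    def elastic_sa(width, free_ranges):
        # Elastic SA: both CF and spectrum are re-selected at each
        # interval; first fit over the currently free ranges.  The CF
        # shift itself can be made hitless with push-pull [43].
        for lo, hi in free_ranges:
            if hi - lo >= width:
                return (lo, lo + width)
        return None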

6 Available Planning tools

Today a number of companies offer network planning and operation tools for fixed-grid WDM and IP networks. These tools are being used by network operators and equipment vendors to meet service level agreements, achieve capital expenditure savings, maximize network lifetime, and gain insight into the capabilities of their network.

Some of the most important functionalities these products provide are the following:

• Optimize routing and equipment placement to efficiently meet traffic demands

• Produce equipment configuration and equipment requirements

• Perform capacity planning, determining how to expand the network (e.g., purchase links) to handle traffic growth

• Analyse the impact of failures and plan protection strategies to maximize resiliency

• Analyse and minimize equipment costs

• Evaluate various what-if networking scenarios, validating network changes or situations before deploying the production network

• Perform traffic analysis and engineering

• Visualize the above information

Some such commercial tools are presented in Figure 40.

• The Cariden MATE portfolio consists of a tightly integrated set of products (Design, Live, Collector) that support planning, engineering, and operational tasks for IP/MPLS networks.

• OPNET's SP Guru Network Planner enables planning and design of multi-technology, multi-vendor IP/MPLS networks.

• OPNET's SP Guru Transport Planner is an advanced network planning solution that enables service providers and network equipment manufacturers to design resilient, cost-effective DWDM, OTN, and SONET/SDH optical networks. It contains advanced network design algorithms that minimize investment costs and optimize operational efficiency.

• IP/MPLSView is WANDL's multi-vendor, multi-protocol, and multi-layer solution for IP and/or MPLS networks, for design & planning, management & monitoring, and service creation & provisioning. NPAT (Network Planning & Analysis Tools) is WANDL's solution for ATM, Frame Relay, TDM, Voice, and Optical Transport networks providing cross-vendor support for all stages of network planning, design, and analysis.

• The Aria Networks IP/MPLS-TE Operational Planning solution provides planning operations that can analyse even the largest IP/MPLS networks.

• The VPIsystems Multi-layer Transport Optimization solution allows users to visualize, understand and optimize transport networks, providing capacity analysis and planning, re-optimization, survivability analysis and greenfield planning.

• The Infinera Network Planning System (NPS) provides users with offline graphical modelling, planning, and configuration capabilities for designing optical network solutions. Other companies (e.g., Huawei, BTI, Alcatel-Lucent Bell, Nokia Siemens, Cisco) also offer similar solutions, which are however more integrated with their own products.

• Nokia Siemens Networks provides network operators with SURPASS TransNet and SURPASS TransConnect, in order to build efficient transport systems.

• The Cisco Transport Planner is a fully comprehensive DWDM network design and design management tool. Cisco Transport Planner uses the latest in optical transport technologies from the Cisco Optical portfolio.

These tools offer functionality for the IP and optical WDM layers, usually separately. This is because changes in the two network layers occur at different time scales, as the WDM layer is very static and does not really need to be operated in real time.

The introduction of flexgrid technologies will force tools to consider the specifics of this new technology. However, the adaptability of this new architectural paradigm brings the optical layer closer to the IP layer. The IP layer can request and control the bandwidth that the BV-transponders use, meaning that operators no longer need to massively over-provision to accommodate possible fluctuations in the IP layer. This implies that future tools will have to integrate these two layers more closely in order to achieve more efficient resource utilization, including resources used for protection/restoration purposes, and save capital and operational expenditures.

Note that the tools in Figure 40 are standalone applications and there is no provision for running them in the cloud.

[pic]

Figure 40: Network planning tools overview.

2 Algorithms for network planning

1 Algorithms for off-line network planning

National IP/MPLS networks have been designed using a multilayer approach to take advantage of the longer reach of the optical layer. In that approach, the IP/MPLS layer performs routing and flow aggregation, whereas the optical layer, based on wavelength division multiplexing technology, transports those aggregated flows as optical connections. However, the Flexgrid technology, featuring a finer granularity, allows grooming to be performed also at the optical layer, and hence the aggregation level of the incoming flows can be reduced.

Taking advantage of the above fact, in [45] we propose a new network architecture consisting of a number of IP/MPLS areas performing routing and aggregating flows to the desired level, and a flexgrid-based core network interconnecting the areas.

The results presented next were obtained from solving a close-to-real problem instance consisting of 1113 locations, based on the BT network. Those locations (323) with a connectivity degree of 4 or above were selected as potential core locations. A 3.22 Pb/s traffic matrix was obtained by considering the number of residential and business premises in the proximity of each location. Locations could only be parented to a potential area if they were within a 100 km radius. Finally, four slot widths (50, 25, 12.5 and 6.25 GHz) were considered.

We used the cost model produced in the STRONGEST project [17]: Ten IP/MPLS router types were considered, their capacity ranging from 4 to 57.6 Tb/s and the number of slots where cards are plugged in ranging from 10 to 144. Different cards for access (48x1, 14x10, 3x40 and 1x100 Gb/s), internal (14x10, 3x40 and 1x100 Gb/s), and 400 Gb/s MF-TP ports were considered. Two types of WSSs (1x9 and 1x20) were used to build BV-OXCs. Finally, four types of 3Rs (up to 10, 40, 100, 400 Gb/s) were taken into consideration.

Figure 41 presents CAPEX results as a function of the number of opened IP/MPLS areas. Figure 41a reports the aggregated costs of the IP/MPLS areas, including the costs of IP/MPLS routers, cards, and ports (excluding MF-TPs, which have been considered part of the core network). Briefly, costs decrease by as much as 23.5% as the number of opened IP/MPLS areas increases, as a consequence of decreasing card and port costs.

Figure 41b shows disaggregated costs for the flexgrid core network as a function of the slot width considered. We found that MF-TP and 3R costs are almost the same irrespective of the slot width. Notwithstanding, MF-TP costs remain constant regardless of the number of opened IP/MPLS areas, whereas 3R costs first increase sharply with the number of areas up to a point where they decline significantly. The sharp increment results from the growth in the number of aggregated flows in the core network needed to connect an increasing number of areas while their average capacity is still high enough to require the highest-capacity (and shortest-reach) 3Rs. Once the average capacity of the aggregated flows decreases, the reach of the optical signals increases significantly and 3Rs are seldom needed. In contrast, BV-OXC costs depend strongly on the slot width used; starting from the same values, all costs show an upward trend, with that for the 50 GHz slot being the steepest.

When all costs are aggregated (Figure 41c) we observe 31.3% savings when all areas are opened and the finest slot width is used, compared with the case where only 50 areas are opened and 50 GHz slots are used. Note that the latter represents the case where just super-channels are added to fixed-grid DWDM-based networks. Savings climb to 43.8% compared with the case where all areas are opened and 50 GHz slots are used.

Figure 42 gives insight into the solutions for the IP/MPLS areas. Figure 42a illustrates the switching capacity and the number of IP/MPLS routers installed over all the areas of each network as a function of the number of IP/MPLS areas. A significant decrement is shown, as high as 19.2% in switching capacity and 12.0% in the number of routers. Figure 42b focuses on the number and average capacity of internal ports, while Figure 42c provides disaggregated values for 10, 40, and 100 Gb/s internal ports. A noticeable reduction in the average port capacity with the number of opened areas is shown, reaching 52.3% when all areas are opened. In terms of the number of internal ports, the reduction is as high as 23.6%, as a result of a reduction in the number of 100 Gb/s ports, which are gradually substituted by 10 and 40 Gb/s ports as the number of opened areas increases (Figure 42c).

[pic]

Figure 41: CAPEX vs. amount of opened IP/MPLS areas. a) Aggregated and breakdown IP/MPLS CAPEX. b) Flexgrid core network CAPEX for each considered slot width. c) Core network costs breakdown.

[pic]

Figure 42: Details of the solutions against the amount of opened IP/MPLS areas. a) Installed capacity and number of IP/MPLS Routers. b) On-average capacity and aggregated number of area internal ports. c) Number of area internal ports.

However, having a larger number of relatively small metro areas implies a reduction in the traffic aggregation at the IP/MPLS layer, thus resulting in higher variability of the data rate of the traffic flows offered to the optical layer throughout the day. We study the impact of the aggregation level and traffic variability on the efficiency of adaptive spectrum allocation in supporting time-varying traffic demands in [46]. In particular, we investigate the relation between aggregation and the performance of the elastic SA policies proposed in [44]. To this end, we evaluate the performance of elastic SA assuming different dimensions of the Flexgrid-based core network, which in turn translate into different levels of aggregation of the traffic to be carried by lightpaths in the core network.

2 Algorithms for network clustering

Taking into account the impact of node clustering on traffic grooming and its potential performance gains, in the framework of Idealist we will develop methodologies for the optimal segmentation of a mesh connectivity network into a set of interconnected "closed-loop" sub-networks. In contrast to other approaches, we address a different problem, where network clusters exploit a ring interconnection, simplifying routing and resiliency, without assuming any restrictions on the installed fibre capacity. Additionally, multi-layer traffic grooming at sub-wavelength granularities is targeted, exploiting traffic multiplexing directly at the optical layer. It is worth noting that this optimization problem differs significantly from other clustering techniques [32], [47] since i) clusters are decomposed into closed-loop topology networks that are length-limited due to physical layer constraints, ii) clusters are interconnected through gateway nodes with no further grooming at sub-wavelength level, and iii) a set of optimization parameters is taken into account.

The global optimization problem of partitioning a mesh network into clusters is NP-hard, as shown for the relaxed problem in [32]. Thus, for networks with more than a few nodes, it is important to develop heuristics that return solutions in polynomial time. Such heuristics should compute a number of different solutions (clustered topologies respecting the restrictions discussed above), which should be evaluated against a fitness function in order to obtain the optimal result. It is worth noting that the selection of an optimal solution should ultimately be subject to the minimization of the actual resulting network cost. However, an exact actual cost value cannot be computed and used at this stage, since this cost depends on the final planning and dimensioning of the overall network, including the inter-cluster network, which will be determined at a second stage. Additionally, the computation of a single value expressing network cost that addresses all network cases in a general way is hard to achieve, since this cost may take different parameters into account under different scenarios. For example, link distances as well as node dimensioning may affect the overall fibre deployment cost as well as construction works and operation expenses, which may differ for different operators facing different requirements. Since general analytical models for determining an exact actual cost value do not exist, the actual cost can be related to the required overall number of transceivers, which is an objective cost factor reflecting the resulting network complexity and the amount of resources required to serve the specific traffic demand ([32], [36]).

By adopting the ring topology, the RWA and resiliency problems are inherently resolved, simplifying the clustering problem. Thus, the methodology could be exploited for problems that address i) the transformation of random mesh graphs into closed-loop clusters and ii) the implementation of node grouping heuristics that optimize cluster selection not based on topological conditions alone (total distance, mean distance from a central point, etc.) but on a set of conditions that optimize traffic aggregation and overall network cost and resource utilization. As a case study we will investigate the migration of mesh IP/WDM networks to CANON. The improved efficiency of CANON, due to its inherent grooming features, compared to other network architectures that employ different switching modes over general mesh network topologies has been shown in past publications ([31], [37] and references therein). However, an open issue that remains is how to exploit CANON in an evolutionary manner, migrating from a legacy network infrastructure that has been designed implementing a partial-mesh topology. In this context, the problem to be addressed is to take as input a given network topology and a known distribution of average traffic demand between pairs of source-destination nodes (traffic matrix) and design a clustered network, where the existing nodes operate either as MENs or CTNs implementing the CANON architecture. Intra-cluster interconnection of MENs should re-use existing links with appropriate re-organization when required; e.g. new point-to-point links between two consecutive MENs of a cluster may be constructed by concatenating/splicing existing fibres or additional (e.g. dark) fibre links re-using parts of the existing infrastructure. Similarly, CTN interconnection could be achieved by implementing a mesh inter-cluster network over the existing infrastructure. Finally, the new links should be dimensioned so that they can transport the aggregated traffic between clusters. The main objective of this process should be to maximize the traffic aggregation gains, i.e. to minimize the amount of resources required to serve the traffic demand and hence the overall network cost.

3 Transmission configurations selection and RSA under physical layer impairments

Optical connections in flexgrid networks span many, often long, links over which physical layer impairments (PLIs) accumulate and affect the quality of transmission (QoT). Accounting for PLIs is a challenge for algorithm designers, especially with respect to their exact modelling and the interdependencies introduced. Flexgrid networks are expected to use coherent detection and DSP, implying that impairments, particularly those related to dispersion, will be substantially reduced or fully compensated. However, the additional degrees of flexibility available in flexgrid networks make the minimization of these effects complicated, since they also depend on the tuneable transmission parameters.

To formulate the physical layer effects and the transponders' tuneability in a flexgrid network, we assume that each connection has a specific optical reach, defined as the length over which it can transmit with acceptable QoT (e.g., BER). The optical reach depends not only on the connection's transmission configuration, but also on the presence of adjacent interfering connections, their transmission configurations and the guard bands used. The number of combinations of possible configurations can be huge; also, PLI analytical models may not capture all effects, or experimental measurements may be available only for some of the options. So, the only viable solution seems to be some sort of simplification that captures PLIs in a coarser but safe manner, reducing the parameters and the solution space without eliminating good solutions.

We propose such a simplification that fits well with the above-described requirements. We calculate the transmission reach assuming that a connection suffers worst-case interference (four-wave mixing, cross-phase modulation, crosstalk) from the adjacent connections for a given transmission configuration and guard band distance. To be more specific, assume that a flexgrid transponder of cost c can be tuned to transmit r Gb/s using a bandwidth of b spectrum slots and a guard band of g spectrum slots from its adjacent spectrum connections, to reach a distance of l km with acceptable QoT. This defines a physical feasibility function l = fc(r, b, g) that captures PLIs and can be obtained experimentally or using analytical models [48]. Note that defining the rate r and spectrum b incorporates the choice of the modulation format used. Using the functions fc of the available transponders (BVTs) we define (reach, rate, spectrum, guard band, cost) transmission tuples, corresponding to feasible configurations of the BVT. The term "feasible" is used to signify that the tuple definition incorporates PLI limitations, while the cost parameter is used when there are BVTs of different capabilities and costs. The above definition is very general and can be used to describe any type of flexible or even fixed-grid optical network. Using the above methodology, the feasible transmission options of the BVTs can be enumerated so as to incorporate physical layer effects. The planning algorithm takes these as input when examining the options for serving the demands.
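
The enumeration of feasible transmission tuples can be sketched as follows; the fc entries below are placeholders, not measured values:

    # Hypothetical feasibility data l = fc(r, b, g): reach in km for
    # (rate r in Gb/s, spectrum b in slots, guard band g in slots),
    # obtained experimentally or from analytical models [48].
    fc = {
        (100, 4, 1): 3000,
        (200, 4, 1): 1500,
        (400, 8, 2): 1000,
    }

    def transmission_tuples(fc, cost):
        # Enumerate the feasible (reach, rate, spectrum, guard band,
        # cost) tuples of one BVT type, used as algorithm input.
        return [(reach, r, b, g, cost) for (r, b, g), reach in fc.items()]

    tuples = transmission_tuples(fc, cost=1.0)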

In the context of Idealist, we have developed ILP and heuristic algorithms [15][16] that can be used for planning both transparent (without regenerators) and translucent (with regenerators) flexgrid networks. The developed IA-RSA (IA stands for Impairment Aware) algorithms take as input the traffic matrix and the feasible transmission options of the BVT transponders, and serve each demand at its requested rate by choosing the route, breaking the transmission into multiple connections, placing regenerators if needed, and allocating spectrum to them. The objective is to minimize both the spectrum and the cost of the transponders used, in a multi-objective optimization formulation; a simplified sketch of the configuration selection step is given after the problem statement below.

Given:

• the network topology represented by a graph G(N, E), and the number S of spectrum slots that the network supports

• the traffic matrix Λ

• the specification of the BVT transponders: transmission reach as a function of the tuneable parameters (rate, spectrum) and the guard band left from adjacent connections.

Output: Routes, spectrum allocation, placement of BVT transponders and regenerators, selection of transmission parameters

Objective: Minimize a weighted combination of the spectrum that is used and the cost of the transponders
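
A much-simplified greedy illustration of the configuration selection step (not the actual ILP or heuristic of [15][16]) might look as follows:

    def serve_demand(rate_gbps, path_km, tuples, w_spectrum=1.0, w_cost=1.0):
        # tuples: (reach, rate, spectrum, guard band, cost) entries.
        # Among feasible configurations whose reach covers the path
        # and whose rate serves the demand, pick the one minimizing
        # the weighted (spectrum, cost) objective.  A translucent
        # variant would instead split the path with regenerators
        # when no configuration reaches end to end.
        feasible = [t for t in tuples
                    if t[0] >= path_km and t[1] >= rate_gbps]
        if not feasible:
            return None  # regeneration (or blocking) needed
        return min(feasible,
                   key=lambda t: w_spectrum * (t[2] + t[3]) + w_cost * t[4])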

The developed heuristic algorithm of [49][16] has been incorporated in the off-line network planning tool described next.

4 Specifically-designed recovery for Flexgrid

Flexgrid technology allows allocating the spectral bandwidth needed to convey heterogeneous client demand bitrates in a flexible manner, so that the optical spectrum can be managed much more efficiently. In addition to provisioning, new strategies for recovery can be devised that take full advantage of the spectrum allocation flexibility. In [51] we propose a new recovery scheme, called single-path provisioning multi-path recovery (SPP-MPR), which provisions single paths to serve the bitrate requested by client demands and combines protection and restoration schemes to jointly recover, in part or in full, that bitrate in case of failure. We defined the bitrate squeezed recovery optimization (BRASERO) problem to maximize the bitrate which is recovered in case of failure of any single fibre link. Exhaustive numerical experiments carried out over two network topologies and realistic traffic scenarios show the efficiency of the proposed SPP-MPR scheme, while providing recovery times as short as those of protection schemes.

5 Large-scale optimization techniques

Finding optimal routes and spectrum allocations in Flexgrid optical networks, known as the RSA problem, is an important design problem in transport communication networks. The problem is NP-hard and its intractability becomes profound when network instances with several tens of nodes and several hundreds of demands are to be solved to optimality. In order to deal with such instances, large-scale optimization methods need to be considered. We have developed a column (more precisely, path) generation-based method for the RSA problem. The method is capable of finding good sets of lightpaths while avoiding large sets of pre-computed paths, leading to high-quality solutions. Numerical results illustrate the effectiveness of the proposed method in obtaining solutions for large RSA problem instances.

6 Algorithms for in-operation network planning

WDM transport networks are typically initiated with an offline/planning algorithm assuming an oversubscribed traffic matrix, to absorb short-term fluctuations (e.g. daily cycles) and avoid frequent network upgrades (long-term traffic increases eventually require upgrades, of course). Thus, the network is typically operated in an incremental manner, with new connections added sporadically when utilization exceeds a certain percentage, and existing connections rarely (if ever) terminated. Flexgrid networks using adjustable transponders (BVTs) require a different approach, as their operation will be more dynamic, with optical connection rate changes occurring on time scales probably 1-2 orders of magnitude smaller than in fixed-grid networks. Flexgrid can bring the optical layer closer to the IP layer, making the IP layer able to "dial"/control the bandwidth that it uses.

Dynamic traffic variation in flexgrid networks can be accommodated at two different levels. We consider the first level to be the establishment of new connections, as in fixed-grid networks. Given the high capacity that bandwidth variable transponders (BVTs) are expected to transmit (designs of 400 Gb/s or higher are considered in the Idealist project), relatively long periods of time will pass until a new connection is established, probably longer than in WDM networks. The second level is to absorb short- or medium-term changes in the requested rate by adapting the BVTs, e.g., tuning the modulation format and/or the number of spectrum slots they use, a feature not available in WDM systems.

In the framework of the IDEALIST project we are developing algorithms to establish and adapt connections following the above-described framework. In particular, we are developing algorithms both to establish new connections and to adapt existing connections, serving dynamic traffic variations in a holistic manner.

In the control plane of flexgrid-based networks, the Path Computation Element (PCE) is commonly used to perform such path computations each time a connection request arrives at the network. The PCE is evolving from a purely stateless condition to an active stateful architecture [52], [53]. In the latter, Label Switched Path (LSP) state information is stored at the PCE and used to directly trigger network planning operations, e.g., virtual topology reconfiguration or defragmentation. In this section, we briefly present some works describing algorithms developed in the IDEALIST project that allow network planning to be performed while the network is in operation.

• Flexgrid networks and BVTs enable transmission at different channel capacities and with different signal formats. This characteristic allows going a step further by adapting the modulation format of the signal to the distance of the path to be traversed. In a network scenario where routes can vary between longer and shorter reach paths, the latter can be transmitted using a more efficient modulation format such as DP-16-QAM.

For this purpose, a novel heuristic algorithm called KSP Distance Adaptive Multifiber Spectrum Assignment (KSP-DA-MSA) has been implemented to compare the network evolution results when the spectral resources are used more efficiently by adapting the modulation format to the path. Table 17 shows the different modulation formats considered and the related reach; a selection sketch is given after the table.

Table 17 Signal formats available to the distance-adaptive algorithm

|Modulation Format |Capacity (Gb/s) |SE (bit/symbol) |FEC |Guard Band |Reach (km) |

|OOK |10 |1 |0.12 |7 |2200 |

|DP-QPSK |40 |4 |0.12 |7 |2800 |

|DP-16-QAM |40 |8 |0.12 |7 |800 |

|DP-QPSK |100 |4 |0.12 |7 |2800 |

|DP-16-QAM |100 |8 |0.12 |7 |800 |

|OFDM-DP-QPSK |400 |4 |0.12 |10 |3560 |

|OFDM-DP-16-QAM |400 |8 |0.12 |10 |800 |

|OFDM-DP-QPSK |1000 |4 |0.12 |10 |3560 |

|OFDM-DP-16-QAM |1000 |8 |0.12 |10 |800 |
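
A sketch of the distance-adaptive selection rule implied by Table 17 follows (our own simplified reading; the actual KSP-DA-MSA heuristic also handles routing and spectrum assignment):

    # Table 17 rows: capacity (Gb/s) -> (format, SE, reach km),
    # ordered by decreasing spectral efficiency.
    FORMATS = {
        10:   [("OOK", 1, 2200)],
        40:   [("DP-16-QAM", 8, 800), ("DP-QPSK", 4, 2800)],
        100:  [("DP-16-QAM", 8, 800), ("DP-QPSK", 4, 2800)],
        400:  [("OFDM-DP-16-QAM", 8, 800), ("OFDM-DP-QPSK", 4, 3560)],
        1000: [("OFDM-DP-16-QAM", 8, 800), ("OFDM-DP-QPSK", 4, 3560)],
    }

    def select_format(capacity_gbps, path_km):
        # Distance-adaptive rule: use the most spectrally efficient
        # format whose reach still covers the path length.
        for fmt, se, reach in FORMATS[capacity_gbps]:
            if reach >= path_km:
                return fmt
        return None  # path too long for every format: regenerate or block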

• Dynamic restoration in multi-layer IP/MPLS-over-Flexgrid (DYNAMO) [54].

Although the flexgrid technology favours more efficient spectrum utilization, multilayer IP/MPLS-over-flexgrid networks will still be needed; to operate them, a centralized PCE could be used. In the event of a failure, tens or hundreds of client flows could become disconnected, and restoration routes need to be found by the PCE for these flows. In standard restoration, path computation is performed for each client flow separately, which leads to resource contention as several connections try to use common resources. We thus propose to group the client flows' restoration requests into a single bulk in the PCE. Next, a Global Concurrent Optimization (GCO) module focuses on reconfiguring the virtual topology and finds routes for all the flows in the bulk. Exhaustive simulation results on two national core topologies show that a PCE with a GCO module solving DYNAMO greatly improves restorability and remarkably reduces the number and capacity of transponders, at the expense of some increase in restoration times.

• Algorithms for dynamic lightpath adaptation under time-varying traffic.

The work in [46] focuses on lightpath adaptation under time-varying traffic in a dynamic Flexgrid optical network; it explores the elastic spectrum allocation (SA) capability of Flexgrid and, in this context, studies the effectiveness of three alternative SA policies, namely Fixed, Semi-Elastic and Elastic. For each elastic SA policy, we develop a dedicated algorithm responsible for adapting the spectrum allocated to lightpath connections in response to traffic changes. The evaluation is performed for a set of network scenarios with different traffic variability. In our experiments, up to 21% more traffic is served with the proposed elastic SA than with the fixed SA.

In [55] we proposed a framework to serve dynamic traffic in a flexgrid network. The offline algorithm used to initialize the network, or the dynamic online algorithm that subsequently adds connections, assigns each connection a path and a reference frequency. A connection occupies a certain number of spectrum slots around that reference frequency, and traffic variations can be absorbed by the BVTs by tuning the modulation format or expanding/contracting the spectrum they use. Slots that are freed by a connection can be assigned to different connections at different time instants, yielding statistical multiplexing gains. To enable this dynamic sharing of spectrum, Spectrum Expansion/Contraction (SEC) policies are needed to regulate how it is performed. After establishing and tearing down multiple connections in a flexgrid network, the spectrum slowly becomes fragmented, reducing its ability to accommodate new connections.

• An algorithm for elastic operations and hitless defragmentation [56].

The objective of this algorithm is to perform elastic operations, i.e., to increase/decrease the bitrate of already established lightpaths. An optimization strategy can be triggered, if required, for hitless defragmentation of the optical spectrum, aimed at making enough room to perform the elastic operation. Defragmentation is achieved by shifting already established lightpaths in the spectrum.

3 Architecture of network planning tool

In this section we describe the architecture of the planning tools currently being developed in IDEALIST.

1 Off-line network planning tool (MANTIS)

Mantis is the IDEALIST network planning and operation tool for designing next-generation flexgrid optical networks. It includes novel flexgrid optical network algorithms for planning and operation functions. The Mantis architecture permits fast execution of the included mechanisms and efficient usage of the computational resources utilized, and enables the deployment of the tool both as a desktop application and as a cloud service (SaaS).

In Mantis, users can define various parameters (e.g., network topology, traffic demands, equipment, device monetary and energy costs) and select among a set of algorithms for routing and wavelength assignment in WDM networks and routing and spectrum allocation in flexgrid networks. The algorithms evaluate future network plans and demands and report detailed solutions, including the required bandwidth to serve the demands, the number and configurations of transponders and regenerators, and the total monetary cost and total required energy; they also report on connections that could not be established, either due to physical layer impairments or due to bandwidth unavailability.

Architecture

Mantis components are organized in three layers: the access layer, the application layer and the execution layer. Furthermore, there are two common interfaces whose primary purpose is to provide loose coupling between the application layer and the other two layers. In this way, the same access and execution layers can be used whether Mantis is deployed as a desktop application or as a cloud service. Figure 43 shows the main components of the Mantis architecture.

[pic]

Figure 43: Mantis architecture main components.

The access layer handles the interaction with the users through a web-based interface and exposes a RESTful API. The execution layer consists of the execution engine and the library of available network planning and operation algorithms. The execution engine receives requests, for starting or terminating algorithm executions, through the common interface from the application layer and is responsible for performing all the required actions, including the preparation of the execution environment, the monitoring of the execution progress and the handling of the final results or possible failures. In the current version, the algorithms are written either in Cython or in C++ and are accessed from the execution engine through a custom plug-in mechanism. This mechanism enables new algorithms to be added to the tool without any modification of the application layer or the execution engine.
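
As a hypothetical illustration of such a plug-in mechanism (names and structure are ours, not the actual Mantis code):

    # Hypothetical plug-in registry in the spirit of the Mantis
    # execution engine: algorithms register themselves by name, so
    # new ones can be added without touching the engine itself.
    ALGORITHMS = {}

    def planning_algorithm(name):
        # Class decorator that records the algorithm under `name`.
        def register(cls):
            ALGORITHMS[name] = cls
            return cls
        return register

    @planning_algorithm("ia-rsa")
    class IaRsa:
        def run(self, topology, demands, params):
            raise NotImplementedError  # Cython/C++ code invoked here

    def execute(name, topology, demands, params):
        return ALGORITHMS[name]().run(topology, demands, params)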

The application layer implements the application logic and orchestrates the execution of user requests. It is the only layer that differs between the desktop application and cloud service deployments, as there are different requirements and operations to be performed. In the first case, there is a server that contains the desktop application engine and the execution layer implementations. The desktop application engine receives requests from the access layer and stores them in a local queue and a disk file, which provides a simple fault tolerance mechanism, eliminating the possibility of requests getting lost or not served due to server problems. The desktop application engine also limits the number of concurrent executions based on the capabilities of the hosting machine, in order to avoid resource saturation, since the algorithms are executed only on the machine where the server is deployed.

Software as a Service (SaaS) Operation

When Mantis is deployed as a cloud service, the application layer implements the cloud application engine that handles the interaction with the cloud infrastructure. In this deployment, there is one execution engine in every virtual computing node, each running a particular algorithmic instance. The cloud engine has been designed to be modular in order to support multiple cloud service providers with minimum effort and changes. In the current version, Mantis supports Amazon Web Services and ~okeanos, GRNET's cloud service for the Greek academic community.

The cloud engine consists of the following components: the request and response queues, the information provider and the dispatcher. The cloud engine receives requests from the access layer and stores them in the request queue and a disk file. The dispatcher reads the request queue and checks whether a user's request relates to a new execution or to the termination of an old one still in progress. In both cases, the dispatcher uses the available information from the information provider in order to handle the requested operation. The information provider keeps useful details about the available cloud resources, their capabilities, their current load and the tasks assigned to each of them for execution. Using this information, the dispatcher can decide the execution node each request should be forwarded to, while it can automatically adjust the available cloud resources so that they are better aligned with the total demand. Finally, the response queue is used by the execution nodes to inform the cloud engine about the usage of their resources and the status of the executed jobs.
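
The dispatcher behaviour described above can be sketched as follows; all helper names (`provider`, `node_of`, `submit`) are illustrative assumptions, not the actual Mantis cloud engine API:

    import queue

    def dispatcher_loop(requests, responses, provider):
        # `requests`/`responses` are the two cloud engine queues;
        # `provider` stands for the information provider (node list,
        # loads, running tasks).
        while True:
            # Drain status reports so load information stays current.
            try:
                while True:
                    provider.update(responses.get_nowait())
            except queue.Empty:
                pass
            req = requests.get()  # blocks until a user request arrives
            if req["type"] == "terminate":
                provider.node_of(req["job_id"]).cancel(req["job_id"])
            else:
                # Forward new executions to the least loaded node.
                node = min(provider.nodes(), key=lambda n: n.load)
                node.submit(req)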

User Interface

Mantis comes with a simple web-based user interface through which users access all its functionalities: network topology and traffic demand creation, algorithm selection and configuration, execution and results presentation. The Mantis user interface enables users to easily and graphically design network topologies and store, edit, and reuse them later. Similarly, users can define their own traffic matrices, either graphically or by importing them from comma-separated values (CSV) files. Figure 44a and Figure 44b show the available interface for network topologies and traffic demands, respectively. New configurations can be created for each algorithm by selecting a network topology and traffic demands and specifying all the other required parameters (Figure 44c).

|[pic] |[pic] |

|(a) |(b) |

|[pic] |[pic] |

|(c) |(d) |

Figure 44: Creation of (a) network topology, (b) traffic matrix, (c) configuration, and (d) display of information for different instances.

Users can always check the status of the running instances and have access to useful details (Figure 44d). Furthermore, charts can be created by combining the results of completed instances. More details on Mantis can be found in

Using Mantis it is possible to create a common benchmarking environment with social characteristics, where researchers share topologies, traffic matrices and CAPEX/OPEX parameters, and evaluate their algorithms under common conditions. In this way, Mantis could also evolve into an online collaboration platform for optical network researchers, improving the comparability, quality and reliability of the results presented in various research articles and projects.

Algorithms included in Mantis

The current Mantis version includes network planning and operation algorithms for fixed- and flexgrid optical networks that can be used for both transparent (without regenerators) and translucent (with regenerators) networks. The IA-RSA (IA stands for Impairment Aware) algorithm [15] considers the planning problem of a flexgrid optical network under PLIs: it allocates BVT transponders, selects their transmission configurations and assigns routes and spectrum to the connections. Also implemented are two algorithms for planning mixed and single line rate WDM systems. Dynamic versions of these algorithms are also available; they take as input the output of the offline case and serve new demands one by one.

2 In-Operation network planning tool (PLATON)

The need to execute network re-optimization operations in order to efficiently manage and operate new-generation flexgrid-based optical networks has brought to light the need for specialized PCEs capable of performing such time-consuming computations. Examples of network re-optimization operations include optical spectrum defragmentation and re-optimization after a failure has been repaired.

The objective of such re-optimizations is to compute network reconfigurations, based on the current state of network resources, that achieve near-optimal resource utilization. Since these operations need High Performance Computing (HPC) hardware to produce a solution in practical times, specialized PCEs can be deployed in the back-end, while a PCE capable of solving common tasks, such as plain path computations, remains in the front-end. Front-end and back-end PCEs can communicate using the PCE Protocol (PCEP) as the inter-PCE communication protocol.

Back-end PCEs require high-performance computing equipment to process, in a unified computation step, the huge amount of data in both the Traffic Engineering Database (TED) and the Label Switched Path Database (LSP-DB). To deal with this problem, an HPC Graphics Processing Unit (GPU)-based cluster architecture is proposed. This architecture is capable of attending to PCE requests demanding the execution of network re-optimization tasks, performing such computations and reporting a near-optimal solution in practical times.

When a request received at the front-end PCE requires computing a specific optimization algorithm that is not among the local algorithms, the front-end PCE looks into its algorithm lookup table to find a back-end PCE able to run the algorithm with that ID. An inter-PCE request is then sent via the PCEP protocol to that back-end PCE. Optimization algorithms need non-negligible time to produce a solution, typically several minutes or even hours depending on their complexity and the size of the TED and LSP-DB. Hence, the front-end PCE cannot block waiting for a computation request to terminate; instead, back-end PCEs send a notification after the computation finishes.

The architecture of the IDEALIST specialized back-end PCE is shown in Figure 45. It consists of a cluster manager module and a number of HPC agents which run algorithms on highly parallel GPU-based hardware. The cluster manager module contains a PCE server responsible for attending to remote PCE requests and storing them in the requests database (RqDB), which holds pending computation requests. Another module, named the manager, is in charge of managing the computation queues, assigning computation requests to the HPC agents which in turn perform the requested computations. A request-response UDP-based protocol is used between the manager and the HPC agents.

Each HPC agent consists of two parts. A communications module running on the host CPU is in charge of the UDP-based protocol, and a set of optimization algorithms is available. Each optimization algorithm is separated into two blocks: one running on the host's CPU and a set of kernels running on the GPU device. The cluster might contain a number of these HPC agents, each in charge of a single GPU device, creating a highly scalable system with hundreds or even thousands of computers, each running several HPC agents.

The execution sequence used in PLATON is illustrated in Figure 46. When the PCE server receives a PCEP request (1) it creates a new entry in the RqDB (2). This database encodes a priority queue of requests used to decide the next request to be computed; the oldest request is not necessarily the first one to be computed. After committing the insertion, the RqDB triggers a notification to the manager (3) so that the latter knows that a new request has been added to the queue. When the manager receives the notification, it queries the RqDB to get the new request (4), synchronizing the copy of the priority queue that it maintains in memory (5).

[pic]

Figure 45: IDEALIST HPC-based Planning Tool Architecture (PLATON)

When an HPC agent becomes idle after computing a request (6), the manager selects a request from the priority queue and assigns it to the HPC agent, transfers the data and parameters required for the computation and informs the agent about the optimization algorithm ID to be used (7). The HPC agent informs the manager about the evolution of the algorithm it is running (9). These intermediate results are stored in the RqDB and are available to be accessed from outside PLATON (10). When the agent finalizes the computation, it transfers the final results to the manager (11), which stores them in the RqDB (12). Then, the RqDB triggers a notification to the PCE server informing it that the request has been computed and results are available (13). The PCE server queries the RqDB (14) to get the results (15) and sends them in a PCEP response to the front-end PCE that originated the request (16).

[pic]

Figure 46: Sequence Diagram

The main loop of the HPC agents consists of: a) receiving the computation input data, parameters, and optimization algorithm ID (7); b) executing the computation kernels, i.e. GPU computation functions, corresponding to that algorithm (8); c) periodically notifying the manager of computation statistics (9); and d) sending the final results to the manager once a request computation is completed (11).
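
The loop can be sketched as follows, with `channel` and `kernels` as assumed abstractions of the UDP protocol and the GPU algorithm implementations (not PLATON code):

    def hpc_agent_loop(channel, kernels):
        # Step numbers refer to Figure 46.  `channel` abstracts the
        # UDP request-response protocol with the manager and
        # `kernels[algo_id]` launches the GPU kernels of one
        # optimization algorithm.
        while True:
            job = channel.receive()                                # (7)
            run = kernels[job.algorithm_id](job.data, job.params)  # (8)
            for stats in run.progress():
                channel.notify(job.request_id, stats)              # (9)
            channel.send_result(job.request_id, run.result())      # (11)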

A specific planning tool architecture based on HPC GPU devices, PLATON, capable of solving large optimization problems related to optical network planning and re-optimization, has been presented. Although PLATON is still under development at the time of writing this deliverable, we have noticed that GPU-oriented shortest path algorithm designs in particular should be investigated, so as to take full advantage of the massive parallelism of GPUs. For this reason, we are also currently focused on devising and implementing efficient parallel shortest path algorithms that will eventually reduce the algorithms' execution times.

Table 18 presents the Gantt chart for the development of PLATON, which includes the current implementation status.

Table 18 PLATON development status

|Task |% Done |
|PLATON |18% |
|  Cluster Manager |60% |
|    Requests Database |100% |
|    Manager |100% |
|    Management Web Server |100% |
|    Web-services Server |0% |
|    PCE Server |0% |
|  HPC Agent |13% |
|    Communications |0% |
|    Optimization Framework |25% |
|Deliverable D1.2 |0% |
|Algorithms year #2 |17% |
|  Single Layer Flexgrid Network Design Problem |50% |
|  After Failure Repair Optimization (AFRO) |0% |
|  Spectrum Defragmentation (SPRESSO) |0% |
|Algorithms year #3 |0% |
|  Define Algorithm Set |0% |
|  Implement Algorithms |0% |

Problems to be implemented in PLATON

We have selected three problems to be implemented in PLATON. They cover three stages of a network's life cycle:

Single Layer Flexgrid Network Design Problem

After Failure Repair Optimization (AFRO)

Spectrum defragmentation (SPRESSO)

Single Layer Flexgrid Network Design Problem

The objective of this problem is to design a single-layer Flexgrid network to serve a given set of demands. The problem statement is as follows.

Given:

a network topology represented by a graph G(N, E), where N is the set of core locations and E the set of fibre links connecting pairs of locations,

a set S of available slots of a given spectral width for each link e ∈ E,

a set D of demands to be transported,

the BV-OXC cost, which includes a fixed cost for the common hardware and a variable cost that depends on the nodal degree and the number of local ports,

an installation cost for each fibre link actually installed and a cost for every optical amplifier (OA) to be equipped in the used fibre links.

Output: The optical network design, including the BV-OXCs and their configuration, OAs and fibres.

Objective: Minimize the expected CAPEX for the core network designed for the given set of demands.

After Failure Repair Optimization (AFRO)

The AFRO problem is a re-optimization problem for dynamic traffic scenarios, which is triggered after a fibre link that had failed has now been repaired and is ready to be used again.

In the event of a link failure, we assume that some recovery mechanism is activated to recover the optical connections affected by the failure. While the failed link remains unrepaired, it is not available for incoming connection requests. Once the failed link is repaired and active again, the traffic routing in the network is sub-optimal for at least two reasons: a) recovered optical connections that used the repaired link might follow long routes after recovery, and b) connections that arrived during the failure's time-to-repair might follow longer routes due to the temporary unavailability of the failed link.

Therefore, the presence of optical connections whose current routing was affected directly (restored connections) or indirectly (new connections) by the link failure, and the fact that the newly repaired link is currently unused, justify applying AFRO as a mechanism to minimize the use of optical resources in the network by rerouting some of the established optical connections from their current routes to shorter routes that use the repaired link; a greedy sketch is given after the formal statement below.

The AFRO problem can be formally stated as follows:

Given:

an optical network represented by a graph G(N, E), where N is the set of core locations and E the set of fibre links connecting pairs of locations,

the set of working links E' and the repaired, empty link e*, so that E = E' ∪ {e*},

a set S of frequency slots available in each link e ∈ E,

a set P of already established lightpaths that only use links in E'.

Output: a set of reallocated lightpaths P*, where each reallocated lightpath must contain e*.

Objective: Minimize the use of optical resources (e.g. the total amount of used slots)
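
A greedy sketch of this rerouting idea, with `route_via_repaired` and `slots_on` as assumed helpers (AFRO itself is an optimization problem, not this greedy pass):

    def afro(established, route_via_repaired, slots_on):
        # route_via_repaired(p): shortest route for lightpath p that
        #     contains the repaired link e*, or None if there is none.
        # slots_on(p, route): total slots p would use on that route.
        reallocated = []
        for p in established:
            candidate = route_via_repaired(p)
            if candidate and slots_on(p, candidate) < slots_on(p, p.route):
                reallocated.append((p, candidate))  # fewer slots used
        return reallocated  # rerouting plan that reduces resource usage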

Spectrum defragmentation (SPRESSO)

The SPRESSO problem defined in [42] can be formally stated as follows (a greedy sketch follows the statement):

Given:

an optical network represented by a graph G(N, E), where N is the set of core locations and E the set of fibre links connecting pairs of locations,

a set S of frequency slots available in each link e ∈ E,

a set P of already established paths,

a new path (newP) to be established in the network. A route for the path has been already selected but there is no feasible spectrum allocation,

the threshold number of paths to be reallocated.

Output:

for each path to be reallocated, its new spectrum allocation,

the spectrum allocation for newP.

Objective: Minimize the number of paths to be reallocated so as to fit newP in.
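
A greedy sketch of the reallocation loop, with assumed helpers (the actual SPRESSO algorithm of [42] solves this as an optimization problem):

    def spresso(newp_fits, blocking_paths, shift, threshold):
        # newp_fits(): True once newP has a feasible spectrum
        #     allocation on its already-selected route.
        # shift(p): hitlessly moves path p (push-pull [43]) to a
        #     nearby free spectrum position.
        moved = []
        while not newp_fits():
            if not blocking_paths or len(moved) >= threshold:
                return None          # give up: block the request
            p = blocking_paths.pop(0)
            shift(p)
            moved.append(p)
        return moved                 # the reallocated paths making room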

Conclusions

This first deliverable has provided all of the information needed to carry out a thorough investigation into the benefits of flexgrid and, more generally, Elastic Optical Networking. Reference networks, applications or Use Cases, modelling tools and algorithms, and techno-economic data are all provided here.

Of course, these project inputs will, to some extent, require refinement. Although the reference networks will now remain stable for the remainder of the project, there will be ongoing adjustment of the Use Cases. It is likely that further Use Cases will be added to the list as the project digests the results of the simulations and modelling.

Regarding CAPEX, the status is mature, at least for existing network technologies. However, cost parameters will need to be described and assigned to new devices once they are more clearly defined (SBVTs and flexgrid node parts, especially those on the A/D side allowing the connection with SBVTs). In addition, it could be useful to add to the model an integrated OTN with fixed-grid WDM as a competitor to the all-optical flexgrid option.

Regarding OPEX, there is clearly a great deal more that can be done on all of the OPEX-related aspects, from automation to energy consumption and space saving. Energy consumption has an impact both from a network architecture perspective and at the level of individual components: energy savings should come directly from the additional flexibility provided. There has already been strong input on the energy savings to be expected from adaptive data-rate transponders, but further work is expected here.

Finally, the scene has been set for a great deal of intensive algorithm-based work to provide the key tools for carrying out the comparisons required throughout Idealist. One area of particular interest is the research on large-scale optimisation, which stands to shed light on the largest problems (e.g. modelling BT's huge network of over 1000 nodes, with many other simultaneous variables to be optimised).

EON and flexgrid offer the potential for significant bandwidth efficiency increases, which will enable operators to squeeze significantly more life out of their network infrastructures; one estimate presented here suggests up to 5 years. Although this is a large benefit, and although all carriers are experiencing continuing large traffic volume increases, non-elastic fixed-grid solutions can meet these capacity needs for several years to come. In the meantime, there is much discussion about when carriers should make their networks EON-ready and how they should migrate.

Additionally, increasing thought is being given to other benefits that might emerge beyond the basic capacity boost. Already in Idealist, we are seeing interest crystallising around multi-layer protection applications in the nearer term, and around technologies such as the sliceable bit rate variable transponder in the longer term. The current perception is that elastic and flexible concepts have much to offer, but we need to continue the creative and analytical processes to find the application sweet spots.

For example, with SBVTs, a comprehensive techno-economic study is required, both for the longer-term dynamic traffic scenario and for a nearer-term, more pragmatic case in which traffic grows in a predictable, relatively static way. Are BVTs useful, and if so, what are the additional advantages of sliceability? The SBVT architecture (including the structure of the device and the architecture of the application in which it will be used) is still in the process of being properly defined, and part of that exercise involves interworking with WP2.

Overall, EON has strong potential to provide a capacity increase, flexibility in the face of future traffic dynamics, and a reduction in equipment costs and energy consumption. Which version of EON could achieve these advantages, and which carrier applications are most applicable in the medium and long terms, will require the detailed study upon which WP1 is now embarked.

Annex 1 - Detailed reference network information

Note – this annex should not be made public

The embedded file contains all the necessary reference network topology and other information needed to do simulations on Idealist networks.

[pic]

END OF DOCUMENT

-----------------------

[1] In the absence of further limitations (e.g. spectral scarcity), it is assumed here that in real network deployments optical interfaces run at their maximum capacity.
