Product Description: Network Cloud Engine V100R018C00
V100R018C00
Product Description
Issue 05
Date 2018-11-20
and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd.
All other trademarks and trade names mentioned in this document are the property of their respective
holders.
Notice
The purchased products, services and features are stipulated by the contract made between Huawei and the
customer. All or part of the products, services and features described in this document may not be within the
purchase scope or the usage scope. Unless otherwise specified in the contract, all statements, information,
and recommendations in this document are provided "AS IS" without warranties, guarantees or
representations of any kind, either express or implied.
The information in this document is subject to change without notice. Every effort has been made in the
preparation of this document to ensure accuracy of the contents, but all statements, information, and
recommendations in this document do not constitute a warranty of any kind, express or implied.
Website: http://www.huawei.com
Email: [email protected]
Purpose
This document describes the network position, highlights, architecture, configuration,
functions and features, and usage scenarios of Network Cloud Engine (NCE). With this
document, you can obtain an overall understanding of this product.
Intended Audience
This document is intended for:
Symbol Conventions
The symbols that may be found in this document are defined as follows.
Symbol Description
GUI Conventions
The GUI conventions that may be found in this document are defined as follows.
Convention Description
Command Conventions
The command conventions that may be found in this document are defined as follows.
Convention Description
Change History
Issue Date Description
Contents
2 Architecture
2.1 Solution Architecture
2.2 Software Architecture
2.3 External Interfaces
2.3.1 NBIs
2.3.1.1 XML NBI
2.3.1.2 CORBA NBI
2.3.1.3 SNMP NBI
2.3.1.4 TL1
2.3.1.5 Performance Text NBI (FTP Performance Text NBI)
2.3.1.6 Customer OSS Test NBI
2.3.1.7 RESTful NBI
2.3.2 SBIs
2.4 Southbound DCN Networking
3 Deployment Schemes
3.1 Software Deployment Modes
3.2 On-Premises Deployment
3.3 Deployment on Private Clouds
4 Configuration Requirements
4.1 Hardware Configurations for On-Premises Deployment
4.2 Hardware Configurations for Deployment on Private Clouds
4.3 Server Configurations
4.4 Client Configurations
4.5 Bandwidth Configurations
5 High Availability
5.1 Hardware Availability
5.2 Software Availability
5.3 DR Solutions
6 Security
6.1 Security Architecture
6.2 Security Functions
9 Specifications
9.1 Performance Specifications
9.2 Management Capabilities
9.2.1 Maximum Management Capabilities
9.2.2 Equivalent Coefficients
9.2.2.1 Equivalent NEs in the Transport Domain
9.2.2.2 Equivalent NEs in the IP Domain
9.2.2.3 Equivalent NEs in the Access Domain
11 Standards Compliance
A Glossary
1 Introduction
1.1 Positioning
Trends and Challenges
With the rapid development of the Internet industry and the advent of the cloud era, new
business models are emerging one after another, and enterprises are moving towards
cloudification and digitalization. The telecom industry, as a digital transformation enabler for
various industries, faces both challenges and new business opportunities.
Service cloudification results in great flexibility and uncertainty in service applications.
However, there is a huge gap between carriers' infrastructure networks and various
applications.
l A large number of legacy networks coexist with newly-built software-defined
networking/network functions virtualization (SDN/NFV) networks, making it difficult or
costly to adapt to new services. In particular, enterprise private line services suffer from a long time to market (TTM), slow customer response, and inflexible packages.
l With the migration of enterprise applications to the cloud and the development of new
services such as the telecom cloud, the network traffic in carriers' pipes is more dynamic
and unpredictable, making traditional network planning and optimization impracticable
and posing high requirements on Service Level Agreement (SLA).
l With the continuous increase in network scale and complexity, O&M is becoming more difficult. Carriers urgently need to adopt automatic deployment measures to reduce the skill requirements for O&M personnel and effectively control operating expense (OPEX) in the long term.
Therefore, an intelligent adapter layer (that is, a brand-new management, control, and
maintenance system) needs to be established between the service applications and the
infrastructure networks. The system must be able to abstract network resources and
capabilities, implement automatic and centralized scheduling, and allow application
developers to conveniently invoke various network capabilities to continuously innovate services and applications at an unprecedented pace.
Product Positioning
NCE is an innovative network cloud engine developed by Huawei. Positioned as the brain of
future cloud-based networks, NCE integrates functions such as network management, service
control, and network analysis. It is the core enablement system for network resource pooling,
network connection automation and self-optimization, and O&M automation.
NCE is located at the management and control layer of the cloud network:
l NCE manages and controls IP, transport, and access devices on lower-layer networks,
supports unified management and control of SDN and legacy networks, and supports
automation of single-___domain, multi-___domain, and cross-layer services.
NCE can also connect to a third-party management and control system to implement
cross-vendor service orchestration and automation.
l NCE also opens capabilities to support interconnection and integration with upper-layer
OSSs, BSSs, and service orchestrators to support quick customization of the application
layer.
The goal of NCE is to build an intent-driven network (IDN) that is first automated, then self-
adaptive, and finally autonomous.
1.2 Highlights
NCE is a network lifecycle automation platform that integrates management, control, and
analysis. It focuses on service automation, O&M automation, and network autonomy to
support carriers' transformations to network cloudification and digital operations.
2 Architecture
The functions of each module in the NCE software architecture are as follows:
l Web portal
Provides a graphical user interface (GUI) to ensure consistent and smooth user
experience.
l Open API gateway
Provides open northbound capabilities, which can be integrated by carriers or third-party
application systems. The open NBIs include intent-driven RESTful NBIs and traditional
NBIs such as XML, CORBA, and SNMP NBIs.
l Business scenario applications
Provide application packages for business scenario automation. Users can define service
requirements based on their business intentions without considering how the network
implements them or what resources are utilized. NCE converts these service
requirements into specific network configurations and delivers the configurations.
l O&M scenario applications
Provide application packages for network O&M automation to implement automatic E2E
lifecycle management.
l Manager
Provides traditional management capabilities (FCAPS) for device configuration, alarms,
performance, links, and QoS, as well as automated E2E service provisioning capabilities
for traditional networks.
l Controller
Provides single-___domain and multi-___domain control capabilities (including IP cross-___domain, optical cross-___domain, and IP+optical cross-layer control) for SDN networks, computes globally optimal routes based on multiple factors, and delivers the related control configurations.
l Analyzer
Provides real-time data collection, status awareness, in-depth analysis, and intelligent
prediction capabilities for network traffic and performance, identifies potential risks
based on big data analysis, and proactively provides warnings.
l Cloud platform
Provides basic services, public services, and system management functions based on the
virtual machine (VM) cluster cloud system on the OS.
2.3.1 NBIs
NCE offers network monitoring information, such as the alarm, performance, and inventory
information, for OSSs through NBIs. The NBIs support network management, control, and
analysis functions, such as service configuration and diagnostic tests. Through the NBIs, NCE
can integrate with different OSSs flexibly.
The devices of each product ___domain support different NBI functions. For details, see the
following tables.
√: supported; ×: not supported
XML (MTOSI) Alarm: √ √ √ √ √ √ √ √ ×
XML (MTOSI) Performance: √ √ √ √ √ √ √ √ ×
XML (MTOSI) Inventory: √ √ √ √ √ √ √ √ ×
XML (MTOSI) Configuration: √ √ √ √ √ √ √ √ ×
CORBA Alarm: √ √ √ √ √ √ √ √ √
CORBA Performance: √ √ √ √ √ √ √ √ √
CORBA Inventory: √ √ √ √ √ √ √ √ √
CORBA Configuration: √ √ √ √ √ √ √ √ ×
SNMP Alarm: √ √ √ √ √ √ √ √ ×
Performance text NBI Performance: √ √ √ √ √ √ √ √ ×
RESTful Inventory: × × √ √ × × × √ ×
RESTful Configuration: × × √ √ × × × √ ×
XML (MTOSI) Alarm: √ √ √ √ ×
XML (MTOSI) Performance: × × × × ×
XML (MTOSI) Inventory: √ √ √ √ ×
XML (MTOSI) Configuration: √ √ √ √ ×
CORBA Alarm: √ √ √ √ √
SNMP Alarm: √ √ √ √ √
Performance text NBI Performance: √ √ √ √ √
TL1 Diagnosis: √ √ √ √ √
TL1 Inventory: √ √ √ √ √
TL1 Configuration: √ √ √ √ ×
Customer OSS test Diagnosis: √ √ × × ×
RESTful Inventory: √ √ √ √ √
XML (MTOSI) Alarm: √ √ √ √ √ × √
XML (MTOSI) Performance: √ √ × × √ × √
XML (MTOSI) Inventory: √ √ √ √ √ × √
XML (MTOSI) Configuration: √ √ √ √ √ × √
CORBA Alarm: √ √ √ √ √ × √
SNMP Alarm: √ √ √ √ √ √ √
Performance text NBI Performance: √ √ √ √ √ × √
RESTful Alarm: √ √ × × √ × √
RESTful Performance: √ √ × × √ × ×
RESTful Inventory: √ √ × × √ × √
RESTful Configuration: √ √ × × √ × ×
RESTful Performance: √ × √ ×
RESTful Inventory: √ √ √ ×
RESTful Configuration: √ √ √ ×
Performance Indicators
Delay of response to XML requests:
l When querying a small amount of data (for example, when the number of returned managed elements is less than 1000) from NCE, the XML interface returns all information within three seconds (when the CPU usage is lower than 50%). When querying from NEs, however, the XML interface takes more time, depending on the network and NE statuses.
l When querying a large quantity of alarms, the XML interface handles at least 100 alarms per second, depending on the network and NE statuses.
Alarm notification processing capability: more than 60 records per second when three OSSs are connected.
Alarm notification transmission delay: shorter than 10s when three OSSs are connected.
Maximum invocation concurrency of the XML NBIs that involve a large amount of data: 1.
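As an illustration of how an OSS typically consumes this interface, the following minimal Python sketch posts a SOAP request to the XML (MTOSI) NBI. The endpoint URL, port, and operation name are placeholders rather than NCE's actual values; the real operations and message schemas are defined in the WSDL files delivered with the NBI.

import requests

# Placeholder endpoint and operation; consult the MTOSI WSDL files delivered with NCE
# for the actual service addresses, operation names, and message schemas.
NBI_URL = "https://nce.example.com:9997/mtosi/AlarmRetrieval"

SOAP_ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Body>
    <getActiveAlarms/>  <!-- placeholder operation name -->
  </soapenv:Body>
</soapenv:Envelope>"""

response = requests.post(
    NBI_URL,
    data=SOAP_ENVELOPE.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8"},
    verify="/path/to/nce-ca.pem",  # verify the NBI server certificate
    timeout=60,
)
response.raise_for_status()
print(response.text[:1000])  # raw SOAP response; parse it with an XML library in practice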
Transport
Complying with the TMF MTOSI 2.0 series standards, the XML NBI enables NCE to provide
alarm, performance, inventory, and configuration management on transport devices for OSSs.
l Alarm management
– Alarm reporting
– Synchronization of active alarms
– Alarm acknowledgment
– Alarm unacknowledgment
– Alarm clearance
– Collection of alarm statistics
l Performance management
– Query of historical performance data
– Query of current performance data
– Reporting of performance threshold-crossing events
– Query of performance threshold-crossing events
– PTN performance instance management (creation, deletion, suspension, enabling,
and query)
l Inventory management
– Query of physical inventory, such as NEs, subracks, slots, boards, and physical
ports
– Query of logical inventory, such as logical ports, fibers or cables, cross-connections,
and trails
– Export of inventory data
– Report of changes in inventory
l Configuration management
– E2E WDM trail management, including creating, deleting, activating, deactivating,
and modifying trails
– E2E OTN trail management, including creating, deleting, activating, deactivating,
and modifying trails
– MS-OTN E2E EPL/EPLAN/TrunkLink/SDH trail management, including creating,
deleting, activating, and deactivating trails
– MS-OTN E2E PWE3/VPLS trail management, including creating, deleting,
activating, deactivating, and modifying trails
– MSTP+ E2E PWE3/VPLS trail management, including creating, deleting,
activating, deactivating, and modifying trails
– Hybrid MSTP E2E trail management, including creating, deleting, activating,
deactivating, and modifying trails
– E2E EoO/EoW trail management, including creating, deleting, activating, and
deactivating trails
– Link (fiber and Layer 2 link) management, including creating and deleting links
– Per-NE-based service management for L2VPN/L3VPN/NativeEth of PTN/RTN
devices, including creating, deleting, activating, deactivating, and modifying
services
l Protection group management
– SNCP protection (querying the protection group information and performing
switching)
– NE tunnel APS (creating, deleting, and querying protection groups and performing
switching)
– SDH MSP (querying the protection group information and switching status)
– WDM OCP or OLP (querying the protection group information and switching
status)
– NE protection (querying the protection group information and switching status)
– E2E tunnel APS (TNP) management (querying, creating, and deleting APS) of
Hybrid MSTP NEs
Access
Complying with the TMF MTOSI 2.0 series standards, the XML NBI enables NCE to provide
alarm, performance, inventory, configuration, and diagnosis management on access devices
for OSSs.
The XML NBI supports the following functions:
l Alarm management
– Alarm reporting
– Synchronization of active alarms
– Alarm acknowledgment
– Alarm unacknowledgment
– Alarm clearance
– Collection of alarm statistics
– TCA synchronization
l Inventory management
– Query of IP DSLAM inventory (ADSL ports, SHDSL ports, VDSL2 ports, and
templates)
– Query of GPON physical inventory such as NEs, slots, boards, and physical ports
– Query of GPON logical inventory such as VLANs and services
– Query of service ports
– Query of RU information
– Query of ANCP information (including ADSL and VDSL2)
l Configuration management
– FTTH (GPON) service creation, modification, deletion, activation, and deactivation
– FTTB/FTTC (GPON) service creation, modification, deletion, activation, and
deactivation
– xDSL configuration (including ADSL and VDSL2)
– Service port management (creation, deletion, activation, and deactivation)
– RU management (creation, deletion, and modification)
– ANCP information configuration (including ADSL and VDSL2)
l Access diagnosis management
– xDSL port test (including ADSL and VDSL2)
– Port loopback
– OAM detection
– ONT management
In addition, NCE supports the XML NBI that complies with the SOAP protocol in the access
___domain to provide functions such as VDSL2, GPON, service port, and multicast configuration
and inventory queries for the OSS in a customized manner. For details, see 2.3.1.4 TL1. If an
office requires the XML interface for interconnection with the OSS, contact Huawei engineers
to customize the WSDL files and documents based on the customer's service requirements.
IP
Complying with the TMF MTOSI 2.0 series standards, the XML NBI enables NCE to provide
alarm, performance, inventory, configuration, diagnostic test, and protection group
management on IP devices for OSSs.
l Alarm management
– Alarm reporting
Performance Indicators
Notification reporting: The CORBA interface can handle at most 100 reporting notifications every second.
Settings: For settings that are not applied to NEs (for example, setting a local name for an NE), the CORBA interface handles most of them within two seconds. For settings that are applied to NEs, the CORBA interface handles most of them within five seconds.
Delay of response to CORBA requests:
l When querying a small amount of data (for example, when the number of returned managed objects is less than 1000) from NCE, the CORBA interface returns all information within five seconds. When querying from NEs, however, the CORBA interface takes more time, depending on the network and NE statuses.
l When querying alarms, the CORBA interface handles at least 100 alarms per second.
l The alarm handling capability of the CORBA NBI depends on many factors, such as the number of alarms on the network and the CPU performance and memory size of the server. In addition, the CORBA NBI sends alarms synchronously; that is, the next alarm is not sent until the OSS receives the previous alarm and responds to the CORBA NBI. Therefore, the stability of the network between NCE and the OSS and the handling capability of the OSS also affect the alarm handling capability of NCE.
l If an alarm storm occurs, the CORBA NBI may reach its handling limit. The CORBA NBI can report a maximum of 1,000,000 alarms within one hour. To ensure system stability, the CORBA NBI discards some alarms if the alarm quantity exceeds 1,000,000. You are advised to rectify network faults promptly if an alarm storm occurs. In addition, the OSS should proactively synchronize alarms at an appropriate time, for example, when the system is idle.
Transport
Complying with the TMF MTNM V3.5 series standards, the CORBA NBI enables NCE to
provide unified alarm and inventory management for transport devices. The CORBA NBI
also enables NCE to provide service configuration, performance, diagnostic test, and
protection group management for transport devices.
l Alarm management
– Alarm reporting
– Synchronization of active alarms
– Alarm acknowledgment
– Alarm unacknowledgment
– Alarm clearance
l Performance management
– Query of historical performance data
– Query of current performance data
– Reporting of performance threshold-crossing events
– Query of performance threshold-crossing events
l Inventory management
– Inventory change notification reporting
– Query of physical inventory, such as NEs, subracks, slots, boards, and physical
ports
– Query of logical inventory, such as logical ports, fibers or cables, cross-connections,
and trails
l Configuration management
– Configuration of E2E services (SDH, WDM, OTN, MSTP, ASON, and RTN) in the
transport ___domain
– Configuration of per-NE-based services (SDH, WDM, OTN, MSTP, and PTN) in
the transport ___domain
– Configuration of per-NE-based services for tunnels (MPLS tunnels and IP tunnels)
– Configuration of per-NE-based services (ATM PWE3, CES PWE3, Ethernet PWE3,
VPLS, and PWSwitch)
– Configuration of E2E services for tunnels (only for PTN devices) (static-CR tunnel)
– Configuration of E2E services (only for PTN devices) (CES PWE3 and Ethernet
PWE3)
l Diagnostic test management (RTN and PTN)
– Port loopback and alarm insertion
– Ethernet CC, LB, and LT tests
– OAM management for MPLS LSP, PW, PWE3, and VPLS services
l Protection group management (SDH, WDM, OTN, PTN, RTN, and Hybrid MSTP)
– Board protection, including querying protection groups and performing switching
– Port protection, including querying protection groups and performing switching
Access
Complying with the TMF MTNM V3.5 series standards, the CORBA NBI enables NCE to
provide unified alarm and inventory management for access devices.
The CORBA NBI supports the following functions:
l Alarm management
– Alarm reporting
– Synchronization of active alarms
– Alarm acknowledgment
– Alarm unacknowledgment
– Alarm clearance
l Inventory management
– Query of physical inventory, including NEs and boards.
IP
Complying with the TMF MTNM V3.5 series standards, the CORBA NBI enables NCE to
provide unified alarm and inventory management for IP devices.
The CORBA NBI supports the following functions:
l Alarm management
– Alarm reporting
– Synchronization of active alarms
– Alarm acknowledgment
– Alarm unacknowledgment
– Alarm clearance
l Inventory management
– Query of physical inventory, including NEs and boards.
Performance Indicators
Alarm forwarding efficiency: not less than 60 alarms per second (for three OSS connections).
SNMP request response delay: less than 5 seconds (when the CPU usage is less than 50%).
Functions
The SNMP NBI supports the following functions:
l Alarm reporting
l Synchronization of active alarms
l Alarm acknowledgment
l Alarm unacknowledgment
l Alarm clearance
l Heartbeat alarm reporting
l Setting of alarm filter criteria
l Alarm maintenance status reporting
2.3.1.4 TL1
Complying with the GR 831 standard, the TL1 NBI enables NCE to provide service
provisioning (xDSL, xPON, broadband, and narrowband services), inventory query, and
diagnostic test (line test and ETH OAM) in the access ___domain for OSSs.
The TL1 NBI supports service provisioning, inventory query, and diagnosis test functions,
and uses the default port 9819.
Performance Indicators
Response time for TL1 request commands: within two minutes (excluding test commands).
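The following minimal Python sketch shows the shape of an OSS-side TL1 session over the default port 9819 mentioned above. The host name, credentials, and the exact command are placeholders; actual TL1 verbs, parameters, and responses follow GR-831 and the NCE TL1 NBI reference.

import socket

# Placeholder host and credentials; port 9819 is the default TL1 NBI port per this document.
HOST, PORT = "nce.example.com", 9819

with socket.create_connection((HOST, PORT), timeout=30) as sock:
    # GR-831-style commands have the general form VERB-MODIFIER:TID:AID:CTAG::payload;
    # The login command below is a placeholder; see the NCE TL1 NBI reference for real syntax.
    sock.sendall(b"LGI:::100::UN=nbiuser,PWD=changeme;")
    reply = sock.recv(4096)
    print(reply.decode("ascii", errors="replace"))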
Functions
The TL1 NBI supports the following functions:
l Service provisioning
– Provisioning of xDSL (ADSL, G.SHDSL, and VDSL2) services
– Provisioning of G.fast services
– Provisioning of multicast services
– Provisioning of xPON (GPON and EPON) services
– Provisioning of CNU services
– Management of VLANs
– Management of Ethernet ports
– Management of service ports
– Management of PVCs
– Provisioning of voice (VoIP, PSTN, ISDN, and SPC) services
– Management of ACL & QoS and HQoS
l Inventory query
– Query of various resources, such as devices, xDSL (ADSL, G.SHDSL, and
VDSL2), G.fast, video, xPON (GPON and EPON), CNU, CM, VLAN, Ethernet,
service port, PVC, voice (VoIP, PSTN, ISDN, and SPC), ACL & QoS, and HQoS
resources
Performance Indicators
Maximum number of connected OSSs:
l As the FTP client, NCE transmits files to one OSS only.
l As the FTP server, NCE can be accessed by a maximum of three OSSs.
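Because NCE can act as the FTP server for performance text files, an OSS typically polls a directory and downloads new files. The sketch below uses Python's ftplib for that flow; the host, account, directory, and file names are placeholders, and FTPS or SFTP should be preferred over plain FTP where the deployment allows.

from ftplib import FTP_TLS  # plain FTP also works via ftplib.FTP; SFTP needs a separate library

# Placeholder address, credentials, and directory for the performance text files.
ftps = FTP_TLS("nce.example.com")
ftps.login("nbi_ftp_user", "changeme")
ftps.prot_p()                      # protect the data channel with TLS
ftps.cwd("/pm/text")               # placeholder directory
for name in ftps.nlst():
    with open(name, "wb") as local_file:
        ftps.retrbinary(f"RETR {name}", local_file.write)
ftps.quit()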
Functions
The performance text NBI supports the following functions:
Performance Indicators
Item Specification
Functions
The customer OSS test NBI supports the following functions:
RESTful is a software architectural style rather than a standard. It provides a set of software design guidelines and constraints for interaction between clients and servers. RESTful software has a simpler, more layered structure and makes caching easier to implement.
Performance Indicators
Maximum number of concurrent OSS connections: 10
Request response delay: When querying a small amount of data (for example, when the number of returned managed elements is less than 1000) from NCE, the RESTful interface returns all information within three seconds (when the CPU usage is lower than 50%). When querying from NEs, however, the RESTful interface takes more time, depending on the network and NE statuses.
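As a sketch of how an OSS might call the RESTful NBI, the snippet below issues an authenticated HTTPS query for inventory data. The base URL, resource path, token header, and payload structure are placeholders rather than NCE's published API; the actual resource models and authentication flow are defined in the NCE RESTful NBI reference.

import requests

BASE_URL = "https://nce.example.com:18002"   # placeholder address and port
TOKEN = "<token obtained from the authentication interface>"   # placeholder

response = requests.get(
    f"{BASE_URL}/restconf/v1/data/managed-elements",   # placeholder resource path
    headers={"X-Auth-Token": TOKEN, "Accept": "application/json"},
    verify="/path/to/nce-ca.pem",   # verify the NBI server certificate
    timeout=30,
)
response.raise_for_status()
for element in response.json().get("managed-elements", []):   # placeholder payload structure
    print(element)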
Transport
l Resource inventory
– Query of NE, board, and port data, and resource change notification reporting
l Topology
– Query of nodes
l Service inventory
– Query of OCh and ODU tunnels and client, packet (Ethernet), and SDH services
l Service provisioning and configuration
– Provisioning of OCh and ODU tunnels and client, packet (Ethernet) and SDH
services
– Service path computation
l Network optimization
– Service optimization
l Diagnostic test
– Network survivability analysis
– Fault simulation
– Resource warning
IP
l Resource inventory
– Query of NE, board, and port data, and resource change notification reporting
l Service inventory
– Query of L3VPN, EVPN, and tunnel services
l Service provisioning and configuration
– L3VPN and EVPN service provisioning
– Service path computation
– Service deletion
l Network optimization
– Service optimization
Currently, the legacy NBI IDs and the RESTful interface IDs are not unified and cannot be used together. The new RESTful interface models are unified, including the L2EVPN/L3VPN service provisioning interfaces; the NE, board, and port inventory query interfaces; and the L3VPN, EVPN, and tunnel service query interfaces. Other capabilities are planned based on market requirements.
Access
l Resource inventory
– Query of NE, board, and port data, and resource change notification reporting
In NCE V100R018C00, the RESTful interface can query only NE, board, and port data.
Super
l Resource inventory
– Query of NE, port, and link data
l Service inventory
– Query of L1/L2/L3 hybrid services
– Query of service definition templates
l Service provisioning and configuration
– Provisioning of composite services (L1/L2/L3 hybrid services)
The models and IDs of interfaces for the Super ___domain are not unified with those for the
transport and IP domains.
2.3.2 SBIs
Using SBIs, NCE can interconnect with physical-layer network devices and other
management and control systems to implement management and control functions.
The transport devices that NCE can manage include the MSTP, WDM, and RTN series.
TFTP/FTP/SFTP (all transport devices): TFTP, FTP, and SFTP are application-layer file transfer protocols in the TCP/IP suite. TFTP runs over UDP, whereas FTP and SFTP run over TCP.
l TFTP is a simple, low-overhead protocol (compared with FTP and SFTP) used to upload and download files. TFTP does not support password configuration and transfers file contents in plain text.
l FTP is a set of standard protocols used for transferring files on networks. FTP transfers passwords and file contents in plain text.
l SFTP uses the SSH protocol to provide secure file transfer and processing. For data backup using SFTP, passwords and data are encrypted during transmission.
Transport NEs use NE Software Management to implement functions such as NE upgrades, backup, and patch installation.
Syslog (all transport devices): The Syslog SBI serves as an interface for NCE to receive system logs from NEs. With the Syslog SBI, NCE can manage NE logs. Transport NEs support NE security logs.
NETCONF (OptiX OSN 9600/9800 series based on the VRP V8 platform only):
l NETCONF is used to manage network device configurations. The protocol is designed to supplement the SNMP and Telnet protocols for network configuration.
l NETCONF defines a simple mechanism for installing, manipulating, and deleting the configurations of network devices. NETCONF uses XML-based data encoding for the configuration data and protocol messages. In an automatic network configuration system, NETCONF plays a crucial role.
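For the NETCONF SBI described above, a management application would typically open an SSH-based NETCONF session and retrieve the running configuration. The sketch below uses the ncclient library; the device address and credentials are placeholders, and per the entry above only the OptiX OSN 9600/9800 series on the VRP V8 platform supports this interface.

from ncclient import manager

# Placeholder device address and credentials.
with manager.connect(
    host="192.0.2.10",
    port=830,                  # default NETCONF-over-SSH port
    username="netconf_user",
    password="changeme",
    hostkey_verify=False,      # verify host keys in production
    timeout=60,
) as session:
    reply = session.get_config(source="running")   # <get-config> on the running datastore
    print(reply)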
TL1 (WDM (NA) and NG WDM (NA) NEs only): The TL1 interfaces comply with GR-811-CORE and GR-831-CORE.
OSPF/OSPF-TE (TSDN NEs only):
l OSPF is used to obtain the topology information of TSDN NEs.
l OSPF-TE is used to obtain TE link information.
PCEP (TSDN NEs only): PCEP implements centralized control of TSDN NEs. NCE functions as the path computation element (PCE), and each TSDN NE functions as a path computation client (PCC). They communicate with each other using PCEP so that NCE can obtain device resource information, provide the centralized path computation service, and maintain the link status.
NOTE
The following products and versions support the TSDN technology:
l OSN 9800 V100R005C00 and later versions
l OSN 9600 V100R005C00 and later versions
l OSN 8800 V100R011C10 and later versions
l OSN 1800 V V100R007C00 and later versions (with TNZ5UXCMS as the system control board)
l OSN 1800 II V100R008C10 and later versions (with TNZ2UXCL as the system control board)
l OSN902 V100R002C10
The IP devices that NCE can manage include NE series routers, CX series routers, Ethernet
switches, BRASs, ATN devices, and security devices.
Telnet/STelnet (all IP devices): The Telnet and STelnet SBIs are a basic type of interface used for remote login to and management of NEs. They address the disadvantages of the SNMP SBI and allow NCE to provide more management functions.
l Telnet is a TCP/IP-based network management protocol at the application layer. Through the Telnet SBI, users can log in to an NE and run commands directly in the CLI to maintain and configure the NE. Telnet uses TCP at the transport layer to provide services for network communication. Telnet transmits communication data in plain text, which is not secure.
l STelnet provides secure Telnet services based on SSH connections. Providing encryption and authentication, SSH protects NEs against attacks such as IP address spoofing.
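The sketch below shows how a management application could run a query command over an SSH (STelnet) session, using the paramiko library. The device address, credentials, and command are placeholders; production code should verify host keys rather than auto-accepting them.

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # verify host keys in production
client.connect("192.0.2.20", port=22, username="netadmin", password="changeme")

# Placeholder query command; the available command set depends on the device and VRP version.
stdin, stdout, stderr = client.exec_command("display version")
print(stdout.read().decode("utf-8", errors="replace"))
client.close()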
TFTP/FTP/SFTP (all IP devices): TFTP, FTP, and SFTP are application-layer file transfer protocols in the TCP/IP suite. TFTP runs over UDP, whereas FTP and SFTP run over TCP.
l TFTP is a simple, low-overhead protocol (compared with FTP and SFTP) used to upload and download files. TFTP does not support password configuration and transfers file contents in plain text.
l FTP is a set of standard protocols used for transferring files on networks. FTP transfers passwords and file contents in plain text.
l SFTP uses the SSH protocol to provide secure file transfer and processing. For data backup using SFTP, passwords and data are encrypted during transmission.
Routers and switches whose VRP version is 5.7 or later use the TFTP, FTP, or SFTP SBI to synchronize data with NCE. IP NEs use NE Software Management to implement functions such as NE upgrades, backup, and patch installation.
Syslog (all IP devices): The Syslog SBI serves as an interface for NCE to receive system logs from NEs. With the Syslog SBI, NCE can manage NE logs. IP NEs support Syslog running logs.
BGP FlowSpec: BGP FlowSpec routes are a new type of BGP route that carries traffic matching rules and actions. The SDN controller delivers BGP FlowSpec routes to forwarders to implement traffic optimization.
The access devices that NCE can manage include MSAN/DSLAM, OLT, DMU, ONT, and D-CCAP devices.
SNMP (FTTx series (except ONT), MSAN series, DSLAM series, and D-CCAP series NEs): SNMP is a TCP/IP-based network management protocol at the application layer. SNMP uses UDP at the transport layer. Through the SNMP SBI, NCE can manage network devices that support agent processes.
NCE supports the SNMP SBI that complies with SNMPv1, SNMPv2c, and SNMPv3. Through the SNMP SBI, NCE can connect to devices. The SNMP SBI supports basic management functions such as auto-discovery of network devices, service configuration data synchronization, fault management, and performance management.
Telnet/STelnet (FTTx series (except ONT), MSAN series, DSLAM series, and D-CCAP series NEs): The Telnet and STelnet SBIs are a basic type of interface used for remote login to and management of NEs. They address the disadvantages of the SNMP SBI and allow NCE to provide more management functions.
l Telnet is a TCP/IP-based network management protocol at the application layer. Through the Telnet SBI, users can log in to an NE and run commands directly in the CLI to maintain and configure the NE. Telnet uses TCP at the transport layer to provide services for network communication. Telnet transmits communication data in plain text, which is not secure.
l STelnet provides secure Telnet services based on SSH connections. Providing encryption and authentication, SSH protects NEs against attacks such as IP address spoofing.
TFTP/FTP/SFTP (all access devices): TFTP, FTP, and SFTP are application-layer file transfer protocols in the TCP/IP suite. TFTP runs over UDP, whereas FTP and SFTP run over TCP.
l TFTP is a simple, low-overhead protocol (compared with FTP and SFTP) used to upload and download files. TFTP does not support password configuration and transfers file contents in plain text.
l FTP is a set of standard protocols used for transferring files on networks. FTP transfers passwords and file contents in plain text.
l SFTP uses the SSH protocol to provide secure file transfer and processing. For data backup using SFTP, passwords and data are encrypted during transmission.
Access NEs use the TFTP, FTP, or SFTP SBI to synchronize data with NCE, and use NE Software Management to implement functions such as NE upgrades, backup, and patch installation.
Syslog (FTTx series (except ONT), MSAN series, DSLAM series, and D-CCAP series NEs): The Syslog SBI serves as an interface for NCE to receive system logs from NEs. With the Syslog SBI, NCE can manage NE logs. Access NEs support Syslog operation logs.
ICMP (FTTx series (except ONT), MSAN series, DSLAM series, and D-CCAP series NEs): ICMP is a network-layer protocol that provides error reports and IP packet processing messages sent back to the source. ICMP packets are usually encapsulated in IP packets for transmission. Some ICMP packets carry error information that is sent back to NEs.
TR069 (ONTs only): TR069 is also called the CPE WAN Management Protocol. It defines a set of automatic negotiation and interaction protocols between a CPE and an auto-configuration server (ACS) to implement automatic terminal configuration, upgrade, and remote centralized management.
NCE provides SBIs that support interconnection with controllers or EMSs to implement
cross-___domain, cross-layer, and multi-vendor integration.
NCE and each managed device communicate with each other through the internal data
communication network (DCN) or external DCN. The two modes can also be used together
for DCN networking.
NOTE
For details about the bandwidth requirements of the southbound DCN, see 4.5 Bandwidth
Configurations.
In this mode, the managed network is generally divided into multiple DCN subnets. NCE
directly communicates with one NE in each subnet called the gateway NE (GNE). The
communication between NCE and the other NEs in the subnet is forwarded by the GNE. If
NCE and the GNE are in the same equipment room, they can be connected through LAN;
otherwise, they need to be connected through private lines.
l Advantages: External DCN networking allows NCE to connect to the managed network through devices other than the managed NEs. When a managed device is faulty, the other managed devices do not become unreachable from NCE because of that fault.
l Disadvantages: External DCN networking requires the construction of an independent
maintenance network that provides no service channels, which makes network
deployment expensive.
3 Deployment Schemes
NCE must be installed and deployed on VMs. Based on VM environments, the scenarios are
classified into on-premises deployment and deployment on private clouds.
B/S Mode
NCE is deployed in B/S mode. VMs are used as servers. After SUSE Linux is installed on
each VM, install the database and NCE service software. Then, users can easily access NCE
through a browser without installing traditional clients.
NOTE
For the DR system where Manager is deployed, Veritas DR software is required between the SUSE
Linux OS and database.
Distributed Mode
NCE is deployed in distributed mode. That is, application services and databases are deployed
on different VMs, and nodes in the distributed system are deployed in a differentiated manner
based on node protection schemes and deployment policies.
Nodes of the same type deployed in primary/secondary or cluster mode occupy more physical resources. To save resources, the primary/secondary or cluster deployment scheme is not used by default for some management scales, for example, 2000 equivalent NEs.
l Node deployment policies:
– N/A: Only a single VM node is deployed, and no VM combination is involved.
– Anti-affinity: VMs of the same type must be deployed on different hosts.
Data storage node (Data_Backup): Data backup node. It stores the backup files of NCE databases.
l 2000 equivalent NEs, single instance, N/A: 1 4 8 500 750
l 6000 equivalent NEs, single instance, N/A: 1 4 8 500 750
l 30,000 NEs, single instance, N/A: 1 4 8 500 750
l 50,000 NEs, single instance, N/A: 1 4 8 500 750
Service deployment node of Analyzer (Analyzer): Analysis node. It provides data collection, calculation (of time and objects), real-time performance, historical performance, and basic reports.
l 30,000 NEs, single instance, N/A: 1 60 240 400 150 0
l 50,000 NEs, single instance, N/A: 1 60 240 600 150 0
Service deployment node of Analyzer (Simulator): It provides the simulation service for what-if analysis.
l 6000 equivalent NEs, single instance, N/A: 1 32 128 150 150 0
Service deployment node of Analyzer (Simulator_Scheduler): It provides management services for what-if analysis, such as inventory synchronization, parsing and restoration, and scheduling services.
l 6000 equivalent NEs, single instance, N/A: 1 16 64 200 150 0
Transport management node (NCE OMP): NCE O&M management node. It supports service installation and deployment, node management, system commissioning, system maintenance, system monitoring, and data backup.
l 6000 equivalent NEs, single instance, N/A: 1 4 8 300 100 0
NOTE
l The management DCNs of the primary and secondary sites can be isolated from each other or not.
l The DR system requires high bandwidth. A replication link must be configured between the primary
and secondary sites.
l VMware private cloud platform: Customers provide the virtualization platform based on
VMware, and Huawei installs VMs, OSs, databases, and NCE software.
l FusionSphere+CSM private cloud platform: Customers provide the virtualization platform and VMs based on FusionSphere, and Huawei installs OSs, databases, and NCE software.
l Any private cloud platform: The virtualization software type is not limited. Customers
provide the virtualization platform, VMs, and OSs that meet the NCE resource and
configuration requirements. Huawei installs databases and NCE software.
NOTE
The DR system requires high bandwidth. A replication link must be configured between the primary and
secondary sites.
4 Configuration Requirements
NCE has specific requirements on the hardware, software, client, and bandwidth to ensure the
stable running of the system.
Principles
Plan hardware configurations under the following principles:
l Select an optimal resource usage solution based on the current network scale, capacity
expansion plan, and hardware cost.
l Select hardware configurations that match the current network scale or higher. For
example, the hardware configurations corresponding to the network scale of 6000 or
15,000 equivalent NEs can be used to deploy NCE that needs to manage 6000 equivalent
NEs, but the hardware configurations corresponding to the network scale of 6000
equivalent NEs cannot be used to deploy NCE that needs to manage 15,000 equivalent
NEs.
l If the DR system is deployed, configure the hardware of the primary and secondary sites
in the same way.
Requirements
The resources for a single blade and disk array are specified in the on-premises scenario. You
need to select required blades and disk arrays based on the network type, functional unit
combination, and network scale difference on the live network. If the minimum requirements
are not met, NCE cannot be properly installed or run.
Table 4-2 Blades and disk arrays required for NCE (Transport Domain)
Functional Unit / Network Scale / Number of Blades / Number of Disks
Table 4-3 Blades and disk arrays required for NCE (IP Domain)
l Manager+Controller+Analyzer (basic network analysis + simulation analysis): 2000–6000 equivalent NEs, 4 blades, 1 disk
l Manager+Controller+Analyzer (basic network analysis + flow collector + simulation analysis): 2000–6000 equivalent NEs, 5 blades, 1 disk
Table 4-4 Blades and disk arrays required for NCE (Access Domain)
Table 4-5 Blades and disk arrays required for NCE (Super)
l 50,000 NEs: 5 blades, 1 disk
Table 4-6 Blades and disk arrays required for NCE (IP Domain+Transport Domain+Super)
l Total: 10 blades (including a standby blade), 3 disks
Restrictions
l Do not configure CPU, memory, or storage overcommitment. Otherwise, the NCE
performance will deteriorate.
NOTE
Overcommitment is a virtualization technology that allocates more virtual resources than physical
resources to VMs. For example, if the physical memory size is 32 GB, more than 32 GB memory
can be allocated to VMs by using overcommitment.
l VM resources must be exclusively occupied. NCE VMs cannot share resources with
other VM applications.
l Reliability protection must be configured for storage resources. For example, RAID is
configured for disks.
l The management network must be isolated from the service network to improve network
security, for example, isolating the network by VLAN.
Memory 4 GB or above
Bandwidth between the primary and secondary sites of the DR system: A replication link is configured between the primary and secondary sites in the DR system for real-time data synchronization. The CIRs of the replication link bandwidth are as follows:
l If N is 6000, the CIR is 30 Mbit/s.
l If N is 15,000, the CIR is 50 Mbit/s.
l If telemetry second-level collection is deployed, an additional 20 Mbit/s needs to be added to the bandwidth.
NOTE
N indicates the number of equivalent NEs.
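The replication-link sizing above can be expressed as a small lookup, shown below for the scales listed in this table only; the function name and structure are illustrative.

def replication_link_cir_mbits(equivalent_nes: int, telemetry_second_level: bool = False) -> int:
    """CIR (Mbit/s) of the DR replication link for the scales listed in this table."""
    cir_by_scale = {6000: 30, 15000: 50}
    cir = cir_by_scale[equivalent_nes]          # scales outside this table are not defined here
    if telemetry_second_level:
        cir += 20                               # telemetry second-level collection adds 20 Mbit/s
    return cir

# Example: 15,000 equivalent NEs with second-level telemetry collection -> 70 Mbit/s
print(replication_link_cir_mbits(15000, telemetry_second_level=True))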
Bandwidth between the primary and secondary sites of the DR system: A replication link is configured between the primary and secondary sites in the DR system for real-time data synchronization. The CIRs of the replication link bandwidth are as follows:
l If N is 15,000, the CIR is 50 Mbit/s.
l If N is 30,000, the CIR is 100 Mbit/s.
l If N is 50,000, the CIR is 150 Mbit/s.
NOTE
N indicates the number of equivalent NEs.
Table 4-11 Bandwidth configuration requirements for NCE (IP Domain+Transport Domain+Super)
Bandwidth between the primary and secondary sites of the DR system: A replication link is configured between the primary and secondary sites in the DR system for real-time data synchronization. The CIR of the replication link bandwidth is 30 Mbit/s.
5 High Availability
During system running, unexpected faults may occur due to external environments,
misoperations, or system factors. For these unknown risks, NCE provides hardware, software,
and system-level availability protection solutions, which recover the system from faults to
minimize the damage to the system.
Figure 5-1 shows the NCE availability protection solutions.
Protection Mechanism
If a fault occurs on the hardware with redundancy protection, the hardware automatically
switches to the normal component to ensure that the NCE OS and application services are
running properly.
Protection Capabilities
Table 5-1 describes HA solutions and their protection capabilities at the hardware level.
Notes:
1. Recovery Point Objective (RPO): A service switchover policy that ensures minimal data loss. It takes the data recovery point as the objective and ensures that the data used for the service switchover is the latest backup data.
2. Recovery Time Objective (RTO): The maximum acceptable amount of time for
restoring a network or application and regaining access to data after an unexpected
interruption.
Protection Capabilities
Table 5-2 describes HA solutions and their protection capabilities at the software level.
Notes:
1. Recovery Point Objective (RPO): A service switchover policy that ensures minimal data loss. It takes the data recovery point as the objective and ensures that the data used for the service switchover is the latest backup data.
2. Recovery Time Objective (RTO): The maximum acceptable amount of time for
restoring a network or application and regaining access to data after an unexpected
interruption.
5.3 DR Solutions
DR solutions are provided to prevent unknown risks on the entire system and ensure secure
and stable running of NCE.
Protection Mechanism
The primary and secondary sites communicate with each other through a heartbeat link to
detect the status of the peer site in real time. The primary site uses a data synchronization link
to synchronize data to the secondary site based on the synchronization policy, ensuring data
consistency between the primary and secondary sites. When a disaster such as an earthquake,
fire, or power failure occurs at the primary site, the protection mechanism between the
primary and secondary sites in the DR system is as follows:
Protection Capabilities
Table 5-3 describes HA solutions and their protection capabilities at the system level.
Notes:
1. Recovery Point Objective (RPO): A service switchover policy that ensures minimal data loss. It takes the data recovery point as the objective and ensures that the data used for the service switchover is the latest backup data.
2. Recovery Time Objective (RTO): The maximum acceptable amount of time for
restoring a network or application and regaining access to data after an unexpected
interruption.
6 Security
NCE uses the security architecture design that complies with industry standards and practices
to ensure system, network, and application security from multiple layers.
l Service security: secure O&M, SIA authentication, security regulation compliance, and
log auditing.
l IAM (authentication and access control management): user management, role-based
access control, policy management, token management, and user access credential
management.
l Southbound and northbound security: API security, authentication and authorization,
forcible access policy, log recording and auditing, and drive security management.
l Web service security: web application firewall (WAF), certificate management, service
running environment Tomcat/JVM security, load balancing LVS&Nginx, and Redis
memory database security hardening.
l OS security: system hardening, SELinux, and TPM-based trusted boot.
l Database security: user management, access control, data protection, monitoring and
auditing, backup and restore, and load balancing.
l Basic security protection: TLS, anti-DDoS policy, interface access rate control, load
protection, attack detection, security analysis, security zone allocation, and HA solution.
User management: NCE can manage the roles, permissions, and access policies of system users.
Log management: NCE can manage operation logs, system logs, security logs, NE logs, and northbound logs, and can dump Syslog logs.
Authentication and authorization management:
l The user passwords of the management plane and O&M plane are encrypted and stored using the MD5+Salt/SHA256 irreversible algorithms.
l NCE can interconnect with authentication, authorization, and accounting (AAA) systems such as RADIUS or LDAP systems, and manage and authenticate O&M users in a unified manner.
l NCE provides SSO authentication services based on CAS and SAML and supports northbound interconnection and integration authentication.
l Digital certificates are used for identity authentication. Different certificates are used for northbound communication, southbound communication, internal communication, and interconnection with third-party systems, and these interconnections are isolated from each other. Certificate replacement and certificate lifecycle management are supported.
l The SIA token is used for access control between services.
Sensitive data security:
l The PBKDF2 algorithm is used to securely store user passwords.
l The SHA256 algorithm is used to encrypt and store sensitive data.
l In SSH communication, the RSA algorithm is used to exchange keys, and AES128-CTR, AES192-CTR, or AES256-CTR is used to encrypt data.
l In SNMPv3 communication, HMAC-MD5-96 and HMAC-SHA-96 are used for authentication and DES-CBC is used for data encryption.
Network security: Network isolation and firewall deployment are used to ensure network security, as shown in Figure 6-2.
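To illustrate the PBKDF2-based password storage mentioned above, the sketch below hashes and verifies a password with Python's standard library. The iteration count and salt length are illustrative choices, not NCE's internal parameters.

import hashlib
import hmac
import os

ITERATIONS = 100_000   # illustrative iteration count
SALT_BYTES = 16        # illustrative salt length

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the plain-text password itself is never stored."""
    salt = os.urandom(SALT_BYTES)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = hash_password("example-password")
print(verify_password("example-password", salt, digest))   # True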
Figure 6-2 NCE networking security (on-premises deployment where southbound and
northbound networks are isolated)
NCE user-facing scenarios provide cloud-based network management, control, and analysis
optimization.
System Interconnection
l Southbound interconnection: Integrated with Huawei or third-party systems to quickly
access NEs or virtual resources and obtain NE resources, alarm and performance data,
and virtual resources required for NCE service provisioning or assurance. This improves
interconnection efficiency.
– Configuring and managing southbound drivers: Before interconnecting NCE with a
southbound system, users need to import external drivers by means of driver
lifecycle management and configure SNMP parameters so that SNMP alarms can
be reported to quickly adapt to NEs and service models (resources, alarms, and
performance) of the interconnected system. This achieves quick driver access and
improves interconnection efficiency. Users can also query driver types and monitor
and delete driver instances for unified driver management.
– Interconnecting with a southbound system: Users can interconnect NCE with a
Huawei or third-party system and update certificates to manage NEs or virtual
resources. After login, users can obtain basic information of the NEs, such as NE
and port resources, alarms, and performance data, and collect VM images,
networks, and firewalls. This ensures proper NCE service provisioning and
assurance.
l Single Sign On (SSO): SSO is an access control policy between NCE and its
southbound systems or between the upper-layer system and NCE. With a single login,
users can access all mutually trusted systems. This implements seamless O&M interface
interconnection between systems and improves O&M efficiency. The SSO system
consists of servers and clients. The clients obtain certificates from the servers and deploy
them. One server can interconnect with multiple clients to achieve unified authentication.
After successfully logging in to the server, users can access all the clients without
entering the username and password.
System Configuration
l Time synchronization: NCE nodes are managed and maintained in a unified mode.
Therefore, the Coordinated Universal Time (UTC) on each node must be the same to
ensure that NCE can properly manage services and data on the nodes. An NTP-based
external clock source is required to serve as the NTP server of NCE so that the system
time can be adjusted at any time without manual intervention.
– A maximum of 10 NTP servers can be added on NCE. Only one active NTP server
can be configured, and the active NTP server is mandatory. In a disaster recovery
(DR) system, the primary and secondary sites must use the same NTP server to
ensure time consistency between the two sites.
– After an active NTP server is configured, the OMP node synchronizes time with the
active NTP server preferentially. Service nodes then synchronize time with the
OMP node.
– When the active NTP server fails, NCE selects an available NTP server from the
standby NTP servers within 15 minutes and sets it as the active NTP server. If
multiple NTP servers configured on NCE become invalid, the OMP node cannot
synchronize time with the NTP server, and service nodes will no longer synchronize
time with the OMP node.
l License management: Updating and maintaining a license allow the system to properly
run based on the features, versions, capacity, and validity period authorized in a license
file.
License management allows users to initially load, update, and routinely maintain
licenses.
– Initially loading a license
After the system is deployed, you need to load a license by importing license files
so that you can use the system properly.
– Updating a license
During O&M, you need to update a license file under any of the following
conditions:
n The license is about to expire.
n The license has expired.
n The license is invalid.
n The license control resource items or function control items do not meet
service requirements.
n The software service annual fee in the license is about to expire.
n The software service annual fee in the license has expired.
– Routine license maintenance
You need to query license information in the system from time to time, such as the
expiration time, consumption, and capacity, so that you can quickly identify and
resolve problems (for example, a license is about to expire or its capacity is
insufficient).
l Remote notification: When O&M personnel are off site (for example, on business travel
or off duty) and cannot query significant alarms and service reports, remote notification
sends SMS messages and emails to the O&M personnel.
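The active/standby NTP behavior described under time synchronization above can be
summarized with a small sketch. This is illustrative Python rather than NCE code; the
reachability check is a stand-in for the real NTP health check.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class NtpConfig:
    """Up to 10 NTP servers; exactly one is active, the rest are standby."""
    active: str
    standby: List[str] = field(default_factory=list)

def reselect_active(cfg: NtpConfig, reachable: Callable[[str], bool]) -> Optional[str]:
    """If the active server fails, promote the first reachable standby server.

    Returns the new active server, or None if every configured server is
    invalid (in that case the OMP node can no longer synchronize time and the
    service nodes stop synchronizing with the OMP node).
    """
    if reachable(cfg.active):
        return cfg.active
    for candidate in list(cfg.standby):
        if reachable(candidate):
            cfg.standby.remove(candidate)
            cfg.standby.append(cfg.active)   # demote the failed server
            cfg.active = candidate
            return candidate
    return None

# Example with a stubbed reachability check
cfg = NtpConfig(active="10.0.0.1", standby=["10.0.0.2", "10.0.0.3"])
print(reselect_active(cfg, reachable=lambda ip: ip == "10.0.0.3"))  # 10.0.0.3
```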
System Monitoring
Global monitoring capability is supported to monitor NCE resource indicators such as
services, processes, nodes, and databases. This helps conduct predictive analysis and detect
potential risks in time. For key resources, the administrator can set thresholds to trigger
alarms and handle exceptions promptly.
l Service and process monitoring: Monitors the service running status and indicators such
as the CPU usage, memory usage, and number of handles. When a process in a service
stops abnormally or becomes faulty, NCE attempts to restart the process. A maximum of
10 consecutive restarts are allowed. If this number is exceeded, an alarm is generated,
prompting users to handle the exception manually (see the sketch at the end of this
section).
l Node monitoring: Monitors node indicators such as the CPU, virtual memory, physical
memory, and disk partitions. If any resource of the node encounters an exception, the
node is displayed as abnormal. If a key resource remains abnormal within a sampling
period, an alarm is generated.
l Database monitoring: Monitors database indicators such as the space, memory, and
disks. If any resource of the database encounters an exception, the database is displayed
as abnormal. If a key resource remains abnormal within a sampling period, an alarm is
generated.
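A minimal sketch of the restart-and-alarm behavior described for service and process
monitoring above (the 10-restart limit). The hooks used here are hypothetical stand-ins, not
NCE interfaces.

```python
MAX_CONSECUTIVE_RESTARTS = 10

class ProcessMonitor:
    """Tracks consecutive restart attempts for one monitored process."""

    def __init__(self, name: str):
        self.name = name
        self.consecutive_restarts = 0

    def on_process_down(self, restart) -> None:
        """Called when the process stops abnormally or becomes faulty."""
        if self.consecutive_restarts >= MAX_CONSECUTIVE_RESTARTS:
            self.raise_alarm()
            return
        self.consecutive_restarts += 1
        restart(self.name)

    def on_process_healthy(self) -> None:
        """A stable process resets the consecutive-restart counter."""
        self.consecutive_restarts = 0

    def raise_alarm(self) -> None:
        # Placeholder: in the real system an alarm asks users to handle
        # the exception manually.
        print(f"ALARM: {self.name} exceeded {MAX_CONSECUTIVE_RESTARTS} restarts")

monitor = ProcessMonitor("alarm-service")
for _ in range(11):
    monitor.on_process_down(restart=lambda name: None)
```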
System Maintenance
l System backup and restore: Backs up and restores the dynamic data, OS, database,
management plane, or application software of NCE. Data is backed up in a timely
manner. If any backup object is abnormal, you can use the corresponding backup file to
recover the object to the normal state.
l O&M management: Provides system maintenance and management functions to help
O&M personnel learn the health status of the system during system running and reduce
running risks. If a system fault occurs, fault information can be collected for fault
demarcation and locating to facilitate repair and reduce losses.
– Health check: Checks and evaluates hardware, OSs, databases, networks, and NCE
services to learn the health status, detect abnormal check items, and determine
whether operation or running risks exist in NCE.
– Data collection: Provides data collection templates based on fault scenarios,
services, and directories. When a system fault occurs, O&M personnel can collect
logs and database tables as required and analyze and locate the fault.
– Fault demarcation and locating: Provides default locating templates for O&M
personnel to select templates based on fault scenarios for automatic fault
demarcation and locating. This helps O&M personnel quickly obtain solutions and
shorten the fault locating time.
Help System
NCE provides a layered help system design that adapts to user needs in diverse scenarios.
The help system supports anytime, anywhere, and on-demand learning. A variety of help
forms, such as tooltips, panels, pressing F1, question mark tips, and the Information Center,
are provided. All necessary information is directly displayed on the GUI. Information that is
closely related to the current operation is folded, and you can expand it if necessary.
Systematic learning information is placed in the Information Center.
Alarm severity
Alarm severities indicate the severities of faults. Alarms need to be handled depending on
their severity. Alarm severities can also be redefined, as shown in Table 7-2.
Major: Services are affected. If the fault is not rectified in a timely manner, serious
consequences may occur. Major alarms need to be handled in time; otherwise, important
services will be affected.
Minor: Indicates a minor impact on services. Problems of this severity may result in serious
faults, and therefore corrective actions are required. You need to find out the cause of the
alarm and rectify the fault.
Different handling policies apply to different alarm severities. You can change the severity of
a specific alarm as required.
NOTE
The severity of an alarm needs to be adjusted when the impact of the alarm becomes larger or smaller.
Alarm statuses
Table 7-3 lists the alarm statuses, and Figure 7-2 shows the alarm status relationships.
Clearance status (Cleared or Uncleared): The initial clearance status is Uncleared. When a
fault that causes an alarm is rectified, a clearance notification is automatically reported to
Alarm Management and the clearance status changes to Cleared. For some alarms, clearance
notifications cannot be automatically reported; you need to manually clear these alarms after
the corresponding faults are rectified. The background color of cleared alarms is green.
Validity (Valid or Invalid): The initial validity status is Valid. You can configure an
identification rule to identify alarms that do not concern you as invalid alarms. When
monitoring or querying alarms, O&M personnel can set the filter criteria to filter out invalid
alarms.
An acknowledged and cleared alarm becomes a historical alarm after a specified period, and a
non-historical alarm is called a current alarm. Table 7-4 shows the definition of an alarm.
Name Description
Historical alarms Acknowledged and cleared alarms are historical alarms. You can
analyze historical alarms to optimize system performance.
Event: Indicates a notification of status changes generated when the system or an MO is
running properly.
Alarms or events are displayed on the page when NEs, services, and interconnected third-
party systems detect their exceptions or significant status changes. Table 7-6 describes the
types of alarms and events.
Equipment alarm: Alarms caused by physical resource faults. Example: board fault alarms.
Integrity alarm: Alarms generated when requested operations are denied. Example: alarms
caused by unauthorized modification, addition, and deletion of user information.
Operation alarm: Alarms generated when the required services cannot run properly due to
problems such as service unavailability, faults, or incorrect invocation. Example: service
rejection, service exit, and procedural errors.
Physical resource alarm: Alarms generated when physical resources are damaged. Example:
alarms caused by cable damage and intrusion into an equipment room.
Security alarm: Alarms generated when security issues are detected by a security service or
mechanism. Example: authentication failures, confidential disclosures, and unauthorized
accesses.
Time ___domain alarm: Alarms generated when an event occurs at an improper time. Example:
alarms caused by information delay, an invalid key, or resource access at unauthorized times.
Over limit: Alarms or events reported when a performance counter reaches the threshold.
File transfer status: Alarms or events reported when a file transfer succeeds or fails.
Alarm merging rule To help users improve the efficiency of monitoring and handling
alarms, Alarm Management provides alarm merging rules.
Alarms with the same specified fields (nativeMoDn, moi,
alarmGroupId, alarmId, reasonId, and specialAlarmStatus)
are merged into one alarm. This rule is used only for monitoring
and viewing alarms on the Current Alarms page and takes
effect only for current alarms.
The specific implementation scheme is as follows:
l If a newly reported alarm does not correspond to any previously reported alarm that
meets the merging rule, the newly reported alarm is displayed as a merged alarm and
the value of Occurrence Times is 1.
l If a newly reported alarm B and a previously reported alarm A meet the merging
rule, alarm B and alarm A are merged into one alarm record and are sorted by
clearance status and occurrence time.
If alarm A is displayed on top, it is still regarded as the merged alarm, and the
Occurrence Times value of the merged alarm increases by one. Alarm B is regarded
as an individual alarm.
If alarm B is displayed on top, it is regarded as the merged alarm, and the
Occurrence Times value of the merged alarm increases by one. Alarm A is regarded
as an individual alarm.
In the alarm list, you can click Occurrence Times of an alarm to view the detailed
information about the merged alarm and its individual alarms.
l If a merged alarm is cleared, it will be converted into an
individual alarm. The previous individual alarms will be
sorted by clearance status and occurrence time. The first one
becomes a merged alarm.
l If a merged alarm or individual alarm is cleared and
acknowledged, the alarm will be converted to a historical
alarm and the value of Occurrence Times decreases by one.
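The merging rule above keys alarms on six fields. The sketch below uses a simplified,
assumed data model (not NCE's internal representation) to show how alarms with the same
key collapse into one record whose Occurrence Times counter grows.

```python
from collections import defaultdict
from dataclasses import dataclass

MERGE_FIELDS = ("nativeMoDn", "moi", "alarmGroupId", "alarmId",
                "reasonId", "specialAlarmStatus")

@dataclass
class Alarm:
    nativeMoDn: str
    moi: str
    alarmGroupId: str
    alarmId: str
    reasonId: str
    specialAlarmStatus: str
    occurred_at: float

def merge_key(alarm: Alarm) -> tuple:
    """Alarms sharing these six fields are merged into one record."""
    return tuple(getattr(alarm, f) for f in MERGE_FIELDS)

def merge_current_alarms(alarms: list[Alarm]) -> dict[tuple, dict]:
    merged: dict[tuple, dict] = defaultdict(lambda: {"occurrence_times": 0})
    for alarm in alarms:
        record = merged[merge_key(alarm)]
        record["occurrence_times"] += 1
        record["latest"] = alarm          # record shown on top of the group
    return merged

a1 = Alarm("NE=1", "port-1", "g1", "LOS", "r1", "none", 1.0)
a2 = Alarm("NE=1", "port-1", "g1", "LOS", "r1", "none", 2.0)
print(merge_current_alarms([a1, a2])[merge_key(a1)]["occurrence_times"])  # 2
```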
Mechanism Description
Processing of the full current alarm cache: To prevent excessive current alarms from
deteriorating system performance, Alarm Management provides a full-alarm processing rule.
When the number of current alarms in the database reaches the upper threshold, Alarm
Management applies the following two rules to move some alarms to the historical-alarm list
until the number of alarms falls to 90% of the upper threshold.
l The cleared alarms, acknowledged and uncleared ADMC alarms, acknowledged and
uncleared ADAC alarms, and unacknowledged and uncleared alarms are moved to the
historical-alarm list in sequence.
l The first reported alarms are moved to the historical-alarm list by time.
Alarm dump rule: To prevent the alarm database from accumulating excessive data, the
system processes events, masked alarms, and historical alarms every two minutes according
to the following rules:
l If the database tablespace usage reaches 80%, Alarm
Management dumps the data in the database to files
according to the sequence of occurrence time and the data
table type (event, masked alarm, or historical alarm).
l The dumped file will be deleted after 180 days.
l If the size of the dumped file exceeds 1024 MB or the total
number of files exceeds 1000, the system deletes the earliest
files.
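The dump thresholds quoted above (80% tablespace usage, 180-day retention, 1024 MB per
file, 1000 files) can be expressed as a simple housekeeping check. This is an illustrative sketch
under assumed inputs, not the actual dump implementation; the per-file size rule is only noted
in a comment.

```python
import os
import time

RETENTION_DAYS = 180
MAX_FILE_COUNT = 1000
TABLESPACE_DUMP_THRESHOLD = 0.80

def should_dump(tablespace_usage: float) -> bool:
    """Dump events, masked alarms, and historical alarms at 80% tablespace usage."""
    return tablespace_usage >= TABLESPACE_DUMP_THRESHOLD

def prune_dump_files(dump_dir: str) -> None:
    """Delete files older than 180 days, then the earliest surplus files.

    (The 1024 MB per-file limit in the product rule is omitted here for brevity;
    this sketch only shows the age and count checks.)
    """
    now = time.time()
    files = sorted(
        (os.path.join(dump_dir, f) for f in os.listdir(dump_dir)),
        key=os.path.getmtime,
    )
    fresh = []
    for path in files:
        if now - os.path.getmtime(path) > RETENTION_DAYS * 86400:
            os.remove(path)
        else:
            fresh.append(path)
    while len(fresh) > MAX_FILE_COUNT:   # earliest files are deleted first
        os.remove(fresh.pop(0))
```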
Function Description
alarm. O&M personnel can view the aggregated alarms on the alarm
details page.
l Experience
After handling an alarm, record the handling information to the
experience database for future reference or guidance. You can import
or export experience.
l Setting events as ADMC alarms
If you want to improve the significance of specific events, you can
set them to Auto Detected Manually Cleared (ADMC) alarms. These
events will be reported as alarms that are not automatically cleared.
You can monitor and view these alarms in real time on the alarm list.
l Auto acknowledge rule
After an automatic acknowledgment rule is set, Alarm Management
automatically acknowledges the current alarms in the cleared state
according to a specific rule and moves the acknowledged alarms to
the Historical Alarms page.
User Management
l User
– Information about a user includes a user name, password, and permissions.
– User admin is the default user in the system, that is, the system administrator. User
admin can manage all resources and has all operation rights. This user is attached
to both the Administrators and SMManagers roles.
– The user who has the User Management permission in the default region is a
security administrator.
– The Administrators role has all the permissions except User Management. The
user attached to this role is an administrator.
l Role
Users attached to a role have all the permissions granted to the role. You can quickly
authorize a user by attaching the user to a role, facilitating permission management.
Figure 7-3 shows role information.
Users attached to a role have all the permissions granted to the role and can manage all
the resources managed by the role. A user can be attached to multiple roles. If a user is
attached to multiple roles, this user has all the permissions granted to the roles and can
manage all the resources managed by the roles.
Default roles cannot be deleted and their permissions cannot be modified because the
permissions are granted by the system. The system provides the following default roles:
Users attached to the Administrators or SMManagers roles have the operation rights
for all resources in the system. Therefore, perform operations using these user accounts
with caution. Do not perform any operations affecting system security. For example, do
not share or distribute these user names and passwords.
Administrators The user group has all the permissions except User
Management, Query Security Log, View Online Users, and
Query Personal Security Log. The user attached to this role
is an administrator.
APIManager The user group has the permissions to call the RESTCONF
API and northbound APIs with the GET, POST, PUT,
DELETE or PATCH operations.
The role to invoke southbound APIs The user group has the Invoke Southbound APIs
permission.
NBISNMPManager The user group has the permission to configure the SNMP
alarm interface. Users attached to this role can query and
configure the interconnection parameters of the SNMP alarm
interface.
NBI User Group The user group has the permission to configure the
northbound interfaces excluding SNMP NBI, such as
CORBA, XML, OMC, TEXT NBIs.
Guest The ___domain of this user group is All Objects, and it has
operation rights for default monitor operation sets. Users in
this group can perform query operations, such as querying
statistics, but cannot create or configure objects.
Maintenance Group The ___domain of this user group is All Objects, and it has
operation rights for default maintenance operation sets. In
addition to the rights of the Guest and Operator Group
groups, users in this group have the rights to create services
and perform configurations that affect the running of the NCE
and NEs. For example, they can search for SDH protection
subnets and trails, delete composite services, and reset boards.
Operator Group The ___domain of this user group is All Objects, and it has
operation rights for default operator operation sets. In addition
to the rights of the Guest group, users in this group have
modification rights (rights to perform potentially service-
affecting operations are not included). For example, they can
change alarm severities.
uTraffic User Group When uTraffic interconnects with the NCE, uTraffic accounts
are created on the NCE to manage uTraffic operation rights on
the NCE.
Permission Management
A permission defines what operations a user can perform on what objects. Permission
elements include an operator, operation objects, and operations as shown in Figure 7-4.
l Permission
– Users act as operators.
– Operation objects include the system and resources (physical and virtual resources,
such as servers, network devices, and VMs) where users perform operations.
– Operations include application operations and device operations. Application
operations are performed on the system. Device operations are performed on
resources.
l Authorization mechanism
NOTE
l Users can perform application operations and device operations. If only managed objects are
configured for a role but no device operation is configured, users of this role can view the
managed objects after logging in to the system but do not have the operation rights for the
managed objects.
l If Assign rights to users directly is selected, permissions can be directly granted to users.
l Authorization method
The authorization method of user management grants permissions by attaching a user to
a role. After the security administrator sets role permissions (including managed objects
and operation rights), the security administrator attaches the user to a role so that the user
has the permissions of this role. If Assign rights to users directly is selected,
permissions can be directly granted to users.
User authorization allows security administrators to grant rights to all users in a post at one
time. If the employees in a post change, security administrators can remove the original user
from the role and attach the new user to the role to grant the new user the same permissions.
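A minimal sketch of the role-based authorization model described above: a user holds the
union of the operation rights and managed objects of all roles the user is attached to. The
class and method names are illustrative, not NCE APIs.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    operations: set[str] = field(default_factory=set)       # operation rights
    managed_objects: set[str] = field(default_factory=set)  # managed objects

@dataclass
class User:
    name: str
    roles: list[Role] = field(default_factory=list)

    def can(self, operation: str, obj: str) -> bool:
        """A user holds the union of the rights of all attached roles."""
        return any(
            operation in role.operations and obj in role.managed_objects
            for role in self.roles
        )

operators = Role("Operator Group", {"change_alarm_severity"}, {"NE=PTN-1"})
guests = Role("Guest", {"query_statistics"}, {"NE=PTN-1"})
user = User("alice", [guests, operators])
print(user.can("change_alarm_severity", "NE=PTN-1"))  # True
print(user.can("reset_board", "NE=PTN-1"))            # False
```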
Regions Management
Regions can be classified by geographic ___location or resource usage. Users can be authorized
based on regions.
Security administrators can create different regions based on service requirements to
implement regional rights-based management. After a region is created, the system
automatically creates a region administrator role Region name_SMManager for the new
region. Security administrators need to set parameters on the Mandate-Operation Rights and
Mandate-Managed Objects tab pages for the region administrator so that the region
administrator can manage the users, roles, objects, and operation sets in this region based on
the settings.
l When users modify their personal data, such as mobile numbers and email addresses, they are
obligated to take considerable measures, in compliance with the laws of the countries
concerned and the user privacy policies of their company, to ensure that the personal data of
users is fully protected.
l To ensure the security of personal data, such as mobile numbers and email addresses, such
data is anonymized on the GUI, and HTTPS encryption transmission channels are used.
l Resetting a User Password: If a user other than admin loses the password or cannot
change the password, this user needs to contact security administrators to reset the
password.
l You are not allowed to reset the password of user admin. If you forget the password
of user admin, it cannot be retrieved and you can only reinstall the system.
Therefore, ensure that you memorize the password of user admin.
l For account security purposes, it is recommended that third-party system access users
contact the security administrator to periodically reset their passwords.
l User monitoring: User monitoring monitors the resource access behavior of users, including
session monitoring (online status) and operation monitoring. If a user performs an
unauthorized or dangerous operation, the system allows security administrators to
forcibly log out the user. This function allows security administrators to prevent
improper user access and ensure system security.
Security Policies
Security Policies allow you to set access control rules for users. This function improves O&M
efficiency and prevents unauthorized users from performing malicious operations in the
system to ensure system security. The security policy function allows you to set account
policies, password policies, login IP address control policies, and login time control policies.
l Account policies: An account policy includes the minimum user name length and user
login policies. Appropriate setting of an account policy improves system access security.
The account policy is set by security administrators and takes effect for all users.
Scenario
Log Management is used when users need to locate and troubleshoot faults, perform routine
maintenance, trace history logs, and query operation logs across systems.
l Routine maintenance
In routine maintenance, users view logs. If there are logs recording failed or partially
successful operations, or logs in Risk level, analyze the logs and troubleshoot the faults.
l Fault locating and troubleshooting
To locate and troubleshoot faults occurring during system running, users analyze logs to
detect whether risk-level operations or operations that affect system security are
performed.
l Historical log tracing
Logs are stored in the database after being generated. The system periodically dumps
logs from the database to a hard disk for sufficient database space and periodically
deletes the dumped logs from the hard disk for sufficient disk space. To ensure the
integrity and traceability of logs, users can forward these logs to the Syslog server.
l Cross-system operation log query
If users need to query operation logs meeting the same criteria on different systems, they
can set filter criteria on one of the systems, save these criteria as a template, and import
the template to other systems.
Log types
Log Management allows the system to automatically record the information about operations
performed by users in the system and the system running status.
Log Management records five types of logs. Table 7-9 describes the log types.
Operation log: Records user operations performed in the system that do not affect system
security. Sources: user (including third-party system access users) operations, such as
exporting current alarms and creating a subnet. Purpose: trace and analyze user operations.
Severity levels: Risk, Minor, and Warning.
NE syslog run log: Syslog run logs record running information about managed NEs. Source:
system operations. Purpose: you can view the NE syslog run logs on the NCE, rather than
viewing them on each NE. The NCE allows users to browse syslog run logs of IP NEs.
Severity levels: Warning and Error.
NOTE
This function applies to NCE (IP Domain).
Log Management
When operations are performed by users in the system or events are triggered by the system,
Log Management records logs and saves the logs to the Log Management database for users
to view on the GUI. In addition, Log Management can automatically dump the logs from the
database to the hard disk.
Figure 7-7 shows the principles of Log Management.
NOTE
To modify the default conditions, contact Huawei technical support. The default dumping
parameters are as follows:
l Number of logs: ≥1,000,000
l Tablespace usage: ≥80%
l Log storage duration: ≥90 days
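The default dump conditions listed in the note can be checked with straightforward logic. The
sketch below only mirrors the three documented thresholds; the inputs are assumed to come
from the database, which is outside the sketch.

```python
MAX_LOG_COUNT = 1_000_000
MAX_TABLESPACE_USAGE = 0.80
MAX_STORAGE_DAYS = 90

def should_dump_logs(log_count: int, tablespace_usage: float,
                     oldest_log_age_days: int) -> bool:
    """Dump logs from the database to disk when any default condition is met."""
    return (
        log_count >= MAX_LOG_COUNT
        or tablespace_usage >= MAX_TABLESPACE_USAGE
        or oldest_log_age_days >= MAX_STORAGE_DAYS
    )

print(should_dump_logs(250_000, 0.82, 30))  # True: tablespace usage >= 80%
```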
Log Forwarding
Log Forwarding Settings is used when users need to trace the logs recorded by Log
Management, and query and analyze the logs recorded by Log Management and the logs of
other functions (such as NE Syslog run logs and NE Syslog operation logs) in real time.
l Users need to permanently store the logs recorded by Log Management so that they can
trace the logs to locate problems or rectify faults.
l Users need to query and analyze the logs recorded by Log Management and the logs of
other functions (such as NE Syslog run logs and NE Syslog operation logs) in real time
on Syslog servers so that they can centrally manage the logs and detect and handle
potential security risks in a timely manner.
Figure 7-8 shows the principles of Log Forwarding Settings.
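To give a feel for the forwarding described above, the sketch below sends log records to a
Syslog server with Python's standard logging module. The server address is a placeholder,
and NCE's own forwarding is configured on the GUI rather than in code.

```python
import logging
import logging.handlers

SYSLOG_SERVER = ("192.0.2.10", 514)   # placeholder Syslog server address (UDP)

logger = logging.getLogger("nce.operation_log")
logger.setLevel(logging.INFO)
handler = logging.handlers.SysLogHandler(address=SYSLOG_SERVER)
handler.setFormatter(logging.Formatter("%(name)s: %(levelname)s %(message)s"))
logger.addHandler(handler)

# Operation logs and NE syslog run logs would be forwarded as they are recorded.
logger.info("User admin exported current alarms")
logger.warning("NE 10.1.1.1 reported a syslog run log entry")
```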
management. It can manage all NEs on Huawei transport networks, IP networks, and access
networks, and can obtain third-party device information over NETCONF, SNMP, and ICMP
to manage third-party devices. This meets customer requirements for network convergence
and service growth.
Security management: Security management ensures the security of NCE through user
management, login management, rights- and ___domain-based management, and other security
policies. Security management also includes log management, which manages logs about
logins, user operations, and running of NCE to provide a comprehensive security solution.
For details, see Security Management.
NE communication parameter management: NCE communicates with managed NEs
successfully only after the connection parameters are correctly set. Users can perform the
following operations to manage NE communication parameters on NCE:
l Query and set the SNMP parameters;
l Manage the default SNMP parameter template;
l Query and set the NE Telnet/STelnet parameters;
l Configure the Telnet/STelnet parameters in batches;
l Manage the Telnet/STelnet parameter template;
l Manage the FTP/TFTP/SFTP parameter template;
l Manage the NETCONF parameter template;
l Query and set the NE NETCONF parameters;
l Set CloudOpera CSM Communication.
Topology management: In topology management, the managed NEs and their connections
are displayed in a topology view. Users can learn the network structure and monitor the
operating status of the entire network in real time by browsing the topology view.
For details, see 7.2.1.1 Topology Management.
DCN management: NCE communicates with NEs and manages and maintains network nodes
through a data communication network (DCN).
For details, see 7.2.1.2 DCN Management.
Alarm management: Alarm management enables O&M personnel to centrally monitor NE,
system service, and third-party system alarms and quickly locate and handle network faults,
ensuring normal network operation.
For details, see Alarm Management.
Inventory management: NCE provides the functions of collecting, querying, printing, and
exporting network-wide physical resources and service resources in a unified manner. This
helps users obtain resource information on the entire network in a convenient, quick, and
accurate manner to support service planning, routine maintenance, NE warranty, and network
reconstruction.
For details, see 7.2.1.4 Inventory Management.
Network Management app installation and update: The function enables users to
conveniently install and update the Network Management app when the NCE server
functions properly and communicates successfully with the NCE client.
8: User name of the logged-in user
9: Server name that is set on the client and the IP address of the server
10: Network-wide NE statistics. This area displays statistics about managed NEs on the
entire network by type.
Alarm Display
In the topology view, alarms are displayed in different colors or icons to indicate different
status of the subnets and NEs. The default method is color-coded display.
The alarm display in the topology view has the following features:
l The color of a topological node indicates the operating status (such as normal, unknown,
or offline) and alarm status of the monitored NE.
l When an NE generates multiple alarms of different severities, the color or icon that
indicates the highest alarm severity of these alarms is displayed in the topology view.
l When multiple nodes in a subnet generate alarms, the subnet is displayed in the color or
icon that indicates the highest alarm severity of these alarms.
l Users can switch to the current alarm window of an NE using the shortcut menu of the
NE node. In addition, users can query the details of current alarms on the NE Panel.
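A small sketch of the display rule above: when an NE or subnet carries alarms of several
severities, the color of the highest severity wins. The severity ordering and color values below
are assumptions made for illustration, not the GUI's actual palette.

```python
# Assumed ordering and colors; the actual palette is defined by the GUI.
SEVERITY_ORDER = {"critical": 4, "major": 3, "minor": 2, "warning": 1}
SEVERITY_COLOR = {"critical": "red", "major": "orange",
                  "minor": "yellow", "warning": "blue"}

def node_color(alarm_severities: list[str], default: str = "green") -> str:
    """Return the color of the highest-severity alarm on a node or subnet."""
    if not alarm_severities:
        return default                      # no alarms: normal operating color
    worst = max(alarm_severities, key=lambda s: SEVERITY_ORDER[s])
    return SEVERITY_COLOR[worst]

print(node_color(["minor", "major", "warning"]))  # orange
print(node_color([]))                             # green
```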
Automatic Topology-Discovery
NCE provides automatic topology-discovery to automatically add NEs to the topology view,
which helps reduce the operation expenditure (OPEX).
1. A wizard is provided to instruct users to set the parameters required for automatic
discovery, such as the NE type, SNMP parameters, and the IP address range.
2. After the parameters are set, NCE searches for the required NEs in the specified network
segments according to the preset conditions. All NEs from Huawei and other vendors
that meet the conditions will be displayed in the topology view. Meanwhile, the basic
configuration data of these NEs is uploaded, which simplifies configuration.
3. Users can pause the discovery at any time. If the discovery fails, the cause of failure is
provided when the discovery ends.
NCE supports the following automatic topology-discovery functions:
l Creating NEs in batches
– Batch creation of SNMP/ICMP-based NEs: SNMP/ICMP-based NEs involve
routers, switches, and security and access NEs. When NCE communicates with
these NEs successfully, it can search out the required NEs by IP address or by
network segment and then bulk create these NEs.
– Batch creation of transport/PTN NEs: Based on the IP address, network segment, or
network service access point (NSAP) address of a GNE, NCE can automatically
search out all the NEs that communicate with the GNE and bulk creates these NEs.
– Batch import of NEs: Security GNEs, service monitoring GNEs, and security
virtual network (SVN) series security NEs periodically send proactive registration
messages that contain the IP addresses of NEs to the NCE server. With proactive
registration management, NCE can bulk create these NEs after receiving the
messages.
l Automatically discovering NEs: NEs can be automatically discovered and created on
NCE and NE configuration data can be uploaded automatically to NCE.
NOTE
NCE provides a secure channel for discovering NEs. The channel supports the following NE versions:
l V600R009C00 and later versions of PTN 6900, NE series, and CX series NEs
l V100R007C00 and later versions of PTN series NEs (excluding PTN 7900)
l V100R005C00 and later versions of RTN 300 series NEs, V100R008C00 and later versions of RTN
900 series NEs
l Scheduling NE searches: NCE can search for specified NE types in specified network
segments at scheduled intervals and automatically add discovered NEs to the topology
view. The NE types include routers, switches, and security and access NEs.
l Automatically creating fibers/cables/microwave links or links: NCE can search for new
fibers/cables/microwave links and links in batches and automatically add them to the
topology view.
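The IP-range search used for batch NE creation can be pictured as a simple reachability
sweep. The sketch below pings every address in a segment with the system ping command
(Linux ping options assumed); it only illustrates the search step, not the SNMP-based
identification and creation that NCE performs afterwards.

```python
import ipaddress
import subprocess

def sweep_segment(cidr: str, timeout_s: int = 1) -> list[str]:
    """Return the addresses in a network segment that answer an ICMP echo."""
    reachable = []
    for host in ipaddress.ip_network(cidr).hosts():
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), str(host)],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:
            reachable.append(str(host))
    return reachable

# Candidate NEs found in the segment would then be queried over SNMP
# and added to the topology view in batches.
print(sweep_segment("192.0.2.0/30"))
```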
Querying clock attributes: Users can query the type, hop count, and port name for clocks
traced by the current NE, and view the compensation value for clocks traced by the port.
Querying the clock status: Users can query specific configurations of the clock NE. This
facilitates troubleshooting when the NE is abnormal.
Querying clock loops: Users can query clock loops for network-wide clock tracing
relationships. If there are clock loops, users can double-click a record in the query result list
to locate related NEs in the clock view. This enables users to modify incorrect clock
configurations on NEs.
Redirecting to the physical view: Users can be redirected from the clock view to the
corresponding subnet in the physical view. If the subnet does not exist in the physical view,
the root node will be displayed.
Redirecting to the clock view: Users can be redirected to the corresponding subnet in the
clock view. If the subnet does not exist in the clock view, the root node will be displayed.
Viewing master clock ID: When users hover the pointer over a clock NE where clock tracing
relationships exist, a tooltip pops up, displaying the NE's clock mode, master clock ID, port
status, and other information.
Displaying copied NEs: The clock view can display NEs copied in the physical root view and
their clock tracing relationships. Copied NEs own the same clock tracing relationships as the
source NE.
Querying clock tracing trails: NCE highlights the current clock tracing trails of clock NEs.
When a fault occurs, users do not need to draw the clock tracing trails between NEs
manually. This improves fault diagnosis efficiency.
Exporting NE clock information: Users can export the clock information about one or all
NEs to a specified directory.
Custom View
The custom view is a subset of the main topology view. The network entities can be NEs,
NCEs, and subnets. Typically, network management personnel need to customize views, and
choose network entities within their management scope from the main topology view.
In a DCN, both NCE and NEs are considered as nodes. These nodes are connected to each
other through Ethernet or data communications channels (DCCs). The DCN between NCE
and the managed network is usually divided into two parts:
l DCN between the NCE server and NEs: Usually, a LAN or WAN is adopted for DCN
communication between the NCE server and NEs. In an actual network, the NCE server
and NEs may be located on different floors of the same building, in different buildings,
or in different cities. Therefore, NCE usually communicates with NEs through an
external DCN that consists of equipment such as switches and routers. The DCN is
referred to as an external DCN.
l DCN between NEs: NEs can communicate with the U2000 in inband networking mode
or outband networking mode. As the DCN between the NCE server and NEs is external,
the DCN between NEs is referred to as an internal DCN.
Huawei's NEs support DCN networking through the following communication protocols:
l All of Huawei's transmission NEs support HWECC, and the physical transmission channels support
D1 to D3 bytes by default. If the NE ID is set, ECC communication can be conducted by only
inserting optical fibers. Because HWECC is a proprietary protocol, it cannot meet the requirement
for managing the network consisting of devices from different vendors.
l IP and OSI are standard communication protocols, which enable the management of hybrid device
networking. In addition, these two standard protocols can be adopted on networks consisting of only
Huawei's transmission devices. In the case of a hybrid network consisting of transmission NEs from
different vendors that do not support IP or OSI, Huawei provides solutions such as transparent
transmission of DCC bytes and Ethernet service channels' transparent transmission of management
information.
In addition, the DCC view is provided to display the DCC network in a topology where
relationships between NEs are intuitively shown. The DCC view offers the following DCN
management functions:
l Check the communication status and relationships between NEs based on the status of
DCC links and DCC subnets
l Synchronize data from the main topology to the DCC view, including the network-wide
DCC data and DCC subnet data
l Save the DCC view at a time point as a snapshot to facilitate maintenance and
troubleshooting
l Switch to the DCN Management window of NE Explorer for setting the DCN attributes
of NEs
NCE provides performance monitoring functions at both the NE and network levels. This
function is applicable to access NEs, router/switch NEs and transport NEs. When a
performance instance is created, NCE can collect performance data from NEs at specified
intervals.
l Monitoring NE performance. This function supports the following NE-level
performance indicators:
– CPU usage
– Memory usage
– Hard disk usage
l Monitoring network traffic. This function is used to collect traffic statistics of network
ports, including:
– Inbound traffic
– Outbound traffic
– Packet error rate
l Monitoring SLA data. This function supports multiple types of SLA data, including:
– Delay, jitter, and loss rate of ICMP, TCP, UDP, and SNMP packets
– Connection delay and download speed of Internet services such as HTTP and FTP
l Collecting interface-based traffic and performance indicators. This function supports
interface-based traffic in BGP/MPLS VPN, VPLS, and PWE3 services, and performance
indicators such as delay, packet loss rate, and jitter in BGP/MPLS VPN SLA service.
Performance indicators vary depending on the NE type.
l Setting performance thresholds. This function allows users to set thresholds for
specific performance indicators. NCE also provides default global settings for batch
configuration. The following thresholds can be set:
l Collection based on SNMP, which is used for most IP and access equipment.
NOTE
l NEs must respond to collection requests sent from NCE within 0.05s. Otherwise, the actual
performance collection capability is compromised.
l The performance collection capability listed in the preceding table is based on SNMPv1 and
SNMPv2c. The SNMPv3-based performance collection capability reaches only about two thirds of
that value. For example, on a large-scale network, the performance collection capability for
SNMPv1 and SNMPv2c is 150,000, and for SNMPv3 it is 100,000 when max/min data aggregation
is disabled.
l Bulk collection based on a file transfer protocol, which is usually used for large-capacity
performance management.
l Collection based on Qx, a protocol developed by Huawei, which can be used for MSTP,
WDM, RTN, PTN, and OTN equipment.
NOTE
An equivalent record is a basic unit to describe the performance collection capability of the PMS.
– If the performance indicator value is available, the time for generating the value and
the deviation range can be calculated.
– If the time is available, the performance indicator value at that time and the
deviation range can be calculated.
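As a simple illustration of the threshold setting described earlier in this section (collected
indicator values compared against configured upper or lower thresholds), the following sketch
raises a threshold-crossing result; the indicator names and threshold values are placeholders,
not NCE defaults.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Threshold:
    indicator: str
    upper: Optional[float] = None
    lower: Optional[float] = None

def check_sample(value: float, rule: Threshold) -> Optional[str]:
    """Return a threshold-crossing description, or None if the value is normal."""
    if rule.upper is not None and value > rule.upper:
        return f"{rule.indicator} {value} exceeds upper threshold {rule.upper}"
    if rule.lower is not None and value < rule.lower:
        return f"{rule.indicator} {value} below lower threshold {rule.lower}"
    return None

cpu_rule = Threshold(indicator="cpu_usage_percent", upper=85.0)
print(check_sample(92.5, cpu_rule))  # crossing reported; would trigger an alarm
print(check_sample(40.0, cpu_rule))  # None
```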
SLA Template
This function enables users to configure SLA parameters in a template. This saves operation
time for configuring SLA parameters when creating an instance.
NOTE
l Resource: A model of telecom resource such as device, card, port, or link in the performance
management ___domain. A resource can be either a physical or logical entity. Logical resources contain
physical resources. A resource has specific performance indicators. A performance monitoring
system collects indicator values for resources.
l Template: A collection of indicators classified into indicator groups. Users can configure templates
to manage performance monitoring tasks easily.
l Indicator group: An indicator group consists of one or more indicators with similar properties.
l Indicator: A performance indicator defines a specific aspect of performance of the associated
resource, for example, traffic, availability, and CPU usage. Performance indicator values are
calculated based on the performance data collected from the monitored resource. A performance
indicator has properties, such as the data type, precision value (only for float), maximum value, and
minimum value.
Monitoring instances enable users to collect performance data for resources of specified
equipment according to a preset monitoring template and scheduling policy. One monitoring
instance collects data for only one resource. Users can perform the following operations on
NCE:
l Create monitoring instances for the IP SLA from resources such as NEs, boards, ports,
and links to PTN and third-party equipment.
l Modify monitoring instances.
l Query monitoring instances.
l Delete monitoring instances.
l Suspend monitoring instances.
l Resume monitoring instances.
l Synchronize resources corresponding to monitoring instances.
l Query thresholds.
l Query the VPN SLA test result.
l Control the number of monitoring instances that can be created using a license.
l Export instance information.
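A sketch of the relationships just described: a monitoring instance binds exactly one resource
to a monitoring template (indicator groups) and a scheduling policy. The field names are
assumptions for illustration, not the NCE data model.

```python
from dataclasses import dataclass, field

@dataclass
class IndicatorGroup:
    name: str
    indicators: list[str] = field(default_factory=list)

@dataclass
class MonitoringTemplate:
    name: str
    groups: list[IndicatorGroup] = field(default_factory=list)

@dataclass
class SchedulingPolicy:
    start: str              # e.g. "2018-11-20T00:00"
    end: str                # e.g. "2018-12-20T00:00"
    interval_minutes: int = 15

@dataclass
class MonitoringInstance:
    resource: str           # one instance collects data for only one resource
    template: MonitoringTemplate
    policy: SchedulingPolicy
    suspended: bool = False

template = MonitoringTemplate(
    "ip-sla-basic",
    [IndicatorGroup("sla", ["delay", "jitter", "loss_rate"])],
)
instance = MonitoringInstance(
    resource="NE=PTN-1,Port=1",
    template=template,
    policy=SchedulingPolicy("2018-11-20T00:00", "2018-12-20T00:00", 15),
)
print(instance.resource, instance.policy.interval_minutes)
```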
Scheduled Task
A scheduled task specifies the time range and period for collecting performance data. Users can
configure a scheduling policy for resources when creating or modifying a monitoring
instance.
In addition, users can compare performance data in different periods in a line chart or bar
chart or compare the indicators of different resources in a chart.
Users can choose whether to view historical performance of a resource in one chart or
multiple charts.
resources can be exported to XLS, XLSX, TXT, HTML, or CSV files in a unified manner.
This helps users obtain resource information on the entire network in a convenient, quick, and
accurate manner to support service planning, routine maintenance, NE warranty, and network
reconstruction.
l Physical resources (such as equipment rooms, NEs, ports, optical/electrical modules, and
passive devices) and logical resources on the entire network can be managed or
maintained in unified and hierarchical mode. You can easily query and export various
types of resources of multi-layer attributes. You can also customize filter criteria to query
data on the live network, which improves resource maintenance efficiency.
l You can customize categories to make statistics of physical resources from multiple
aspects, and export inventory reports for equipment rooms, racks, subracks, NEs, boards,
subboards, ports, optical/electrical modules, slot usage and ONUs. This helps you learn
various types of resources on the entire network, provides reference for E2E operation
and maintenance, and improves resource usage.
You can customize categories to query network-wide resources from multiple aspects in
unified mode and make statistics for the resources. This provides reference for E2E operation
and maintenance.
l Business planning: You can make statistics for the NEs, used slots, ports, and optical
modules to learn the online NEs, slot usage, idle ports, optical splitting, and remaining
bandwidth.
l Routine maintenance: You can query racks, NEs, boards, and ports to learn the running
status of the rack power supply and NEs, software version of NEs and boards, service
configurations, and port rate.
l NE warranty: You can query NEs and boards to learn the NEs created in the specified
time, or hardware and software versions of boards.
l Network reconstruction: You can make statistics for NEs by customizing categories to
learn the NE types, quantities, and positions.
The following table provides the functions of each type of physical and logical resources,
inventory query results, and categories supported by the inventory reports.
Table 7-12 Inventory function and resource query and report statistics
(Table columns: Category, Resource, Function, Queried or Exported Inventory Information,
and Default Statistics Categories of the Inventory Report. For optical/electrical modules, for
example, the queried or exported information includes: Reference Receive Time, Receive
Status, Upper Threshold for Receive Optical Power (dBm), Lower Threshold for Receive
Optical Power (dBm), Transmit Optical Power (dBm), Reference Transmit Optical Power
(dBm), Reference Transmit Time, Transmit Status, Upper Threshold for Transmit Optical
Power (dBm), Lower Threshold for Transmit Optical Power (dBm), SingleMode/MultiMode,
Speed (Mbit/s), Wave Length (nm), Transmission Distance (m), Fiber Type, Manufacturer,
Optical Mode Authentication, Port Remark, Port Custom Column, OpticalDirection Type,
Vender PN, Model, and Rev (Issue Number).)
Application Scenarios
In conventional network O&M modes, customers have increasing requirements on batch data
query and configuration. Script compiling and maintenance functions provided by iSStar are
suitable for customers with batch operations and customization requirements. In addition, the
authorization function is also provided to ensure operation security. By using the database
operation functions, users can query information, including NE inventory, links, and service
information, in the database conveniently. In addition, users can export the obtained
information in the form of files with user-defined file name extension to learn the queried
details. NCE also supports automatic script execution as scheduled and allows users to set the
script execution time and frequency as required, automating routine maintenance.
Solution Overview
Being an enhanced tool component of NCE, iSStar is a powerful script secondary
development platform. iSStar provides an easy-to-learn programmable script language High-
Level Script Language (HSL), powerful High-level Foundation Classes (HFC), and an
integrated development environment for editing, debugging, and executing the HSL scripts.
Users can create an HSL script for the repetitive and effort-consuming daily maintenance. In
this way, daily maintenance can be performed automatically. This reduces workload and
improves work efficiency.
Basic Functions
iSStar provides the following functions:
l HSL language: The HSL is an easy-to-learn and powerful script language provided by
iSStar. The HSL provides simple and efficient syntax, abundant data types, and efficient
and extendable library functions.
l Script management: iSStar helps users operate script files, and edit and commission
scripts.
l Script task management: When a user executes a script file, script project, or script
application to implement a specific service function, iSStar creates a task. The user can
view the basic information and execution of all the tasks in the iSStar Task
Management Window and manage the execution of the tasks.
l Report export: iSStar allows a user to compile tasks into scripts for execution and
automatically exports results as XLS reports. In addition, iSStar provides a centralized
task management interface for the user to set scheduled tasks and set the script execution
time and frequency so that the tasks can be executed periodically.
management, and software library management. For security purposes, users are advised to
use SFTP as the transfer protocol between NCE and NEs.
l Saving: After the configuration is complete, the configuration data is saved in the
memory or hard disks of NEs so that the data will not be lost during restart. NCE saves
data in the following ways:
– Manually performing the save operation
– Automatically performing the save task
– Automatically performing the save policy
l Backup: Backs up NE data (such as configuration data or databases) to storage devices
other than NEs. The backup data is used for restoring NE data. If NCE has the
permissions to manage the NE and the loading, backup, or restoration operation is not
being performed on the NE, the NE will accept the request to back up the data. NCE then
transmits the contents to be backed up to the specified backup directory on the server by
using the transfer protocol. The backup data can also be backed up to the client, or to a
third-party server. The allowed size of backup files depends on the space size of the disk
where the backup directory is located. NCE backs up data in the following ways:
– Manually performing the backup operation
– Automatically performing the backup task
– Automatically performing the backup policy
l Policy management: Setting policies in advance enables NCE to perform operations on
NEs periodically or when trigger conditions are met. This is applicable to routine NE
maintenance. A policy is periodic and is used for operations that are performed
frequently, such as data saving and data backup. Users can select a policy based on the
scenarios at the sites.
l Loading: Software is loaded for NE upgrade. If NCE has the permissions to manage NE
upgrade and the loading, backup, or restoration operation is not being performed on the
NE, the NE will accept the request to load the software. NCE then transmits the contents
to be loaded to the NE by using the transfer protocol. NCE loads data in the following
ways:
– Automatically loading by creating a task
– Automatically loading through an automatic upgrade task
l Activating: A newly loaded NE can be activated to take effect. If NCE has the
permissions to manage NE upgrade and the loading, backup, or restoration operation is
not being performed on the NE, the NE will accept the request to be activated. NCE
activates the NE automatically through an automatic upgrade task.
l Restoration: An NE can be restored using the backup NE data. If NCE has the
permissions to manage NE upgrade and the loading, backup, or restoration operation is
not being performed on the NE, the NE will accept the request to restore the NE data.
NCE then transmits the contents to be restored to the NE by using the transfer protocol.
NCE restores data in the following ways:
– Manually performing the restoration operation
– Automatically performing the restoration task
l Task management: NCE encapsulates all operations into tasks. By creating an upgrade
task or a downgrade task for software or a patch, users can upgrade or downgrade the
software or patch in one-click mode or at the scheduled time. A task is not periodic and
is used for operations that are not performed frequently, such as data upgrading. Users
can select a task based on the scenarios at the sites.
l Software library management: The software used for NE upgrade can be managed in a
centralized manner. For example, users can upload NE software from the NCE client or
NCE server to the software library. In this manner, the process of loading software is
simple and fast.
l NE license management: NE license management involves querying, applying for,
installing, and changing an NE License. In addition, you can adjust the capacities defined
in the license. The license controls the validity period or functions of an NE. Therefore,
users need to view the license status and change the expired license. Otherwise, services
will be affected.
l 40G board: Boards can be expanded to 40G without interrupting services, thereby
improving the utilization of physical bandwidths.
l OTN board replacement: OTN boards can be securely and quickly adjusted or added
through a wizard.
l Optical doctor (OD) solution: This solution combines software and hardware to support
online OSNR monitoring, performance monitoring, and performance optimization for
10G, 40G, and 100G wavelengths. This solution significantly improves optical-layer
maintenance capabilities.
l Hop management: The source and sink NEs of a microwave link can be configured on
the same page, and key parameters are associated between them, improving microwave
link configuration efficiency.
l E2E service provisioning: E2E services are supported, including TDM, EoS, E-Line, E-
LAN, E-Line_E-LAN, and PWE3 services and tunnels. They can be uniformly created
and managed. In addition, E2E service templates are provided.
FTTx PnP
l Configuration script application: When maintenance personnel use commands to
configure a device, the configuration commands can be written into a batch processing
script and applied to the device from NCE. In this way, commands are issued in batches.
l Pre-deployment: For PON-upstream FTTB/FTTH devices, users can configure them in
the pre-deployment sheet provided by NCE and import the sheet to NCE. After the
device goes online, NCE automatically applies the configurations in the sheet to the
device to complete the service configuration in the deployment phase.
l Automatic recovery after replacement: If a remote MxU becomes faulty, onsite
personnel can replace it. After the replacement, NCE automatically upgrades the
software and restores configurations without requiring manual operations.
l PnP deployment: MxUs are usually installed in harsh environments (for example, in
manholes or on poles). To simplify work for deployment personnel, NCE automatically
upgrades software and applies configurations to powered-on MxUs.
Remote Diagnosis
l FTTC/B diagnosis view: This view displays E2E links from CPE UNIs to OLT
upstream ports to implement one-stop NE and link status monitoring and fault diagnosis.
With this view, FTTC/B troubleshooting is simplified.
l FTTH diagnosis view: This view displays E2E links from ONT UNIs to OLT upstream
ports to implement one-stop NE and link status monitoring and fault diagnosis. With this
view, FTTH troubleshooting is simplified.
l Remote emulation testing: Maintenance personnel can remotely diagnose faults on
voice, broadband, and other services.
l Intelligent alarm analysis: Alarm compression, correlation analysis, and other methods
are provided to reduce the alarm quantity and help users identify root causes.
l NE data verification: NE configuration data is verified against the data planned by
carriers to avoid data inconsistency and the resultant service problems.
l ONT video quality diagnosis: Faults of IPTV+cooperated OTT videos can be quickly
diagnosed and demarcated. NCE collects quality indicators in real time, displays them in
a visualized manner, and compares them to quickly demarcate faults.
l Remote Wi-Fi diagnosis: NCE supports home network management and common Wi-Fi
maintenance methods. Remote Wi-Fi monitoring and troubleshooting help reduce home
visits.
7.2.4.1 NE Management
IP NEs
NCE supports the following series of IP NEs: PTN, NE, ATN, CX, service gateway, R/AR,
switch, voice gateway, and security.
PnP
Plug-and-play (PnP) facilitates remote commissioning and basic configuration of NEs in
batches, freeing engineers from going to sites and greatly improving the deployment
efficiency.
Scenario
PnP is used for making scripts and configuring NEs during deployment. It provides a rich set
of system-defined templates, where users need to set only a few parameters to generate
basic NE scripts. DCN remote commissioning eliminates the need for planning before NE
power-on, and NEs are automatically added to NCE for management.
During deployment, PnP is mainly used for remotely commissioning newly added NEs and
bringing them online so that NCE can manage these NEs. PnP applies basic configurations to
NEs in different networking scenarios to complete batch NE deployment.
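A toy illustration of the template-driven script generation PnP relies on: a system-defined
template plus a few per-NE parameters expands into a basic configuration script. The template
content and parameter names below are invented for the example and do not reproduce NCE's
actual templates.

```python
from string import Template

# Invented system-defined template; real PnP templates are provided by NCE.
BASIC_NE_TEMPLATE = Template(
    "sysname $ne_name\n"
    "interface $mgmt_if\n"
    " ip address $mgmt_ip $mask\n"
    "snmp-agent community read $community\n"
)

def render_scripts(ne_parameters: list[dict]) -> dict[str, str]:
    """Generate one basic configuration script per NE from a few parameters."""
    return {p["ne_name"]: BASIC_NE_TEMPLATE.substitute(p) for p in ne_parameters}

scripts = render_scripts([
    {"ne_name": "PTN-A", "mgmt_if": "GigabitEthernet0/0/0",
     "mgmt_ip": "10.1.1.1", "mask": "255.255.255.0", "community": "public"},
])
print(scripts["PTN-A"])
```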
IP Network Troubleshooting
IP network troubleshooting helps users quickly locate faults in service paths, improving O&M
efficiency.
IP network troubleshooting provides the following functions:
l Unicast service path visualization: E2E service paths and backup paths are displayed in
the topology view. Backup paths support five backup modes: primary/secondary PW,
VRRP, E-APS, TE hot standby, and VPN FRR. Paths are displayed at the service, tunnel,
IP, and link layers. If a path is incomplete or used as a backup path, a message will be
displayed asking users whether to display the historical paths that are complete and not
used as backup paths.
l Multicast service path visualization: NCE can discover shared paths and shortest paths,
and paths are displayed at the service, IP, and link layers.
l Display of NE and link status: After users select an NE or link in the topology view, the
NE performance data or the link type, source, and sink are displayed. The performance
data, alarm information, and optical module information of all NEs and links in the
topology view are displayed on tab pages in the details area to help users with
preliminary fault locating.
l Fast fault diagnosis: After users click Quick Diagnosis, path and service check items are
executed layer by layer. The check results are displayed in a table. If a check item is
abnormal, it is highlighted in red and handling suggestions are provided for it. Clicking
an underscored record opens the associated window in the NE Explorer where users can
quickly rectify faults.
l Fault detection: Faults can be detected through packet comparison, port loopback tests,
ping tests, interface packet loss/bit error collection, TDM PW statistics, path detection,
smart ping, ACL traffic statistics, or fault information collection, or by using the
MultiCast Tools or IGP Source Tracing tool.
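The layer-by-layer quick diagnosis described above can be thought of as running an ordered
list of check items and flagging failures together with handling suggestions. The check items
in the sketch are placeholders for illustration.

```python
from typing import Callable

def run_quick_diagnosis(checks: list[tuple[str, Callable[[], bool], str]]) -> list[dict]:
    """Execute check items layer by layer and collect results.

    Each item is (name, check function, handling suggestion shown on failure).
    """
    results = []
    for name, check, suggestion in checks:
        ok = check()
        results.append({
            "item": name,
            "result": "normal" if ok else "abnormal",   # abnormal items are highlighted
            "suggestion": None if ok else suggestion,
        })
    return results

# Placeholder check items; the real ones cover the path and service layers.
report = run_quick_diagnosis([
    ("PW status", lambda: True, "Check PW configuration on both PEs"),
    ("Tunnel status", lambda: False, "Verify TE tunnel and reserved bandwidth"),
])
for row in report:
    print(row)
```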
IP Network Assurance
NCE provides various functions, such as network health checking, intelligent troubleshooting
based on path visualization, automatic locating of top N typical alarms, and network-level
alarm correlation. This greatly reduces skill requirements for fault locating. The average fault
locating time is reduced from more than 1 hour to 10 minutes.
l IP NE health check: IP NE health status can be checked efficiently and automatically.
Checking a single NE (optical power and IP address) takes only several seconds. The
check efficiency is greatly improved because users no longer need to manually check
configurations. In addition, this function reduces the potential faults caused by improper
basic configurations.
l PTN network health check: This tool provides a centralized GUI for checking and
viewing network running indicators. During routine maintenance, users can compare the
current and historical network running statuses to detect risks and rectify faults in
advance.
l Intelligent troubleshooting based on path visualization: NCE supports protection path
discovery, covering five protection schemes (primary/secondary PW, VPN FRR, VRRP,
E-APS, and TE hot standby) on the live network. It can also discover and display
primary and backup paths to meet fault locating requirements during service switching.
Paths are displayed by layer (including service, tunnel, route, and link layers) so that
specific fault locations can be easily identified. Path discovery in the VPLS and HVPLS
scenarios and visualized fault locating based on paths are provided. NCE automatically
determines the diagnosis process based on MBB scenarios and fault types (such as
interruption, deterioration, and clock switching). It provides up to 300 check items,
covering key MBB features and improving check accuracy and efficiency.
l Automatic locating of top N alarms: For top N alarms, NCE automatically selects
check methods, completes fault diagnosis with just one click, and provides rectification
suggestions.
l Network-level alarm correlation: By analyzing the alarms reported by IP NEs within a
period, NCE identifies root alarms and correlative alarms based on rules and displays
them to the maintenance personnel.
composite services. Specific functions include service provisioning, service monitoring, and
service diagnosis.
Service Provisioning
NCE provides a user-friendly graphical user interface (GUI) on which you can complete all
service configuration operations. Parameters for multiple NEs can be automatically generated
by using service templates. User configuration results can be previewed in the topology
before being applied.
l Dynamic Tunnel
– Provision SR-TE and RSVP-TE tunnels.
– Split and merge tunnels.
– Configure tunnel separation groups.
l Static Tunnel
– Deploy Static LSP or Static CR LSP services to implement MPLS access schemes.
– Configure the link bandwidth threshold.
During static CR LSP path calculation, the U2000 can generate a link weight based on
the remaining link bandwidth and the threshold, achieving traffic balance (a minimal
sketch of such a weight calculation follows this list).
l Dynamic L3VPN service
– Provision dynamic unicast and multicast L3VPN services.
– Provision services through templates.
– Create tunnels upon service creation.
l EVPN service
– Support P2P and MP2MP connection types.
– Provision services through templates.
– Create tunnels upon service creation.
l Static L3VPN service
– Configure L3VPN services in Full-Mesh, Hub-Spoke, HVPN, or customized mode.
– Configure VPN fast reroute (FRR) and IP FRR for L3VPN services, and bind
L3VPN services to traffic engineering (TE) tunnels.
– Configure static, Open Shortest Path First (OSPF), Border Gateway Protocol
(BGP), IS-IS, and RIP private routes.
– Configure VPN services in inter-AS OptionA or inter-AS OptionB mode.
l VPLS service
– Manage VPLS services in Label Distribution Protocol (LDP) signaling (Martini)
mode.
– Manage VPLS services in Border Gateway Protocol (BGP) signaling (Kompella)
mode.
– Manage VPLS services for interworking of different virtual switch instances
(VSIs).
l PWE3 service
– Configure static and dynamic PWE3 services.
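The bandwidth-aware link weight mentioned under Static Tunnel above can be illustrated with a small Python sketch. The formula below is an assumption made for illustration only; it is not the weight function actually used by the product.

def link_weight(remaining_bw_mbps, threshold_mbps, base_cost=1.0):
    # Assumed behavior: links at or below the configured bandwidth threshold are
    # penalized heavily, and the weight grows as remaining bandwidth shrinks, so
    # path calculation spreads static CR LSPs across lightly loaded links.
    if remaining_bw_mbps <= threshold_mbps:
        return base_cost * 100.0
    return base_cost * (1.0 + threshold_mbps / (remaining_bw_mbps - threshold_mbps))

print(link_weight(remaining_bw_mbps=120, threshold_mbps=100))  # near the threshold -> 6.0
print(link_weight(remaining_bw_mbps=900, threshold_mbps=100))  # plenty of headroom -> 1.125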
Service Diagnosis
Diagnostic tools are used to check network connectivity and locate faults. You can generate
diagnostic tasks according to selected services and directly perform operations on NEs in
topology views. Diagnostic results can be directly displayed.
l Protocol status test: NCE can check service protocol status and forwarding tables, and
display error information to help you locate faults.
The following third-party NEs are supported: Cisco ASR 9000 series 6.1.3.
Using the SNMP and ICMP protocols, NCE interacts with third-party routers to obtain device
information and implement the following functions.
Alarm management (Alarm): You can view standard alarms and some private alarms of
third-party NEs.
Definition
The Optical Service Provisioning app is a basic feature for implementing cloud-based
transport networks. It mainly applies to business-to-business (B2B) scenarios, such as
government and enterprise private lines and data center interconnections. Its functions include
service templates, agile provisioning, scheduled reservation, BOD, service visualization,
routing policies, and route computation.
Benefits
Breaking through the traditional service provisioning mode, the Optical Service
Provisioning app implements fast service provisioning and supports dynamic service
scheduling and flexible bandwidth adjustment. Resource utilization is effectively improved.
The Optical Service Provisioning app well fits customers' usage scenarios and achieves one-
stop, automated O&M of private lines.
7.3.1.3 BOD
Users can adjust service bandwidths based on actual requirements.
l NCE supports real-time and scheduled bandwidth adjustment for E-Line services.
l NCE supports real-time adjustment of network-side bandwidth for client services whose
access service type is packet and rate is 10GE LAN or 100GE.
7.3.2 OVPN
TSDN networks support the optical virtual private network (OVPN) function.
Definition
OVPNs slice TSDN resources and reserve link bandwidths for users. OVPNs can be
customized for large enterprise customers. They are logically isolated from each other,
achieving high security and reliability.
Benefits
With the OVPN function, carriers can provide virtual private networks for large enterprise
customers. Large enterprise customers can obtain high security through hard pipe isolation
without the need to construct their own physical private networks. The reserved resources can
guarantee sustainable service development. In addition, carriers can flexibly sell link
bandwidths, helping carriers improve resource utilization.
l Service Design: displays the numbers of published and unpublished service profiles and
provides the Create, Import, and Profiles shortcuts.
l Resource Management: displays the numbers of NEs managed by top 5 controllers and
provides the Discovery, Planning, Main Topology, and Preparation shortcuts.
l Service Deployment: displays the numbers of services in the Activated (Up, Down, or
Unknown) or Others state, and provides the Provisioning, Routing Policy,
Management, and Bandwidth Calendar shortcuts.
l Service Assurance: displays the total number of current service alarms by fault type and
provides the Service Assurance shortcut for viewing the list of services affected by
alarms.
During automated service provisioning, the four functions work collaboratively as follows:
l Use Service Design to design a service offline and generate a service profile.
l Use Resource Management to collect and plan resource data.
l Use Service Design to bind the offline service profile to the planned resources and
publish the profile.
l Use Service Deployment to reference the published service profile.
l Use Service Assurance to monitor service status. If the service is faulty or degraded,
you can reroute it with the help of the service deployment function.
Resource Discovery
NCE supports the following resource discovery modes:
l Manual full synchronization: Users manually create one-time or periodic tasks to
synchronize all resource and service data from controllers or EMSs.
l Manual incremental synchronization: Users manually synchronize new NE and port data
from controllers or EMSs.
l Automatic incremental synchronization: Controllers automatically report NE, port, link,
and service changes to ensure data consistency.
Resource Planning
Resource planning includes NE role definition, forwarding ___domain and subnet division, and
inter-___domain link planning.
l NE role definition: In different scenarios, an NE can play different roles, such as an
Autonomous System Boundary Router (ASBR) and Superstratum Provider Edge (SPE).
l Forwarding ___domain and subnet division: A forwarding ___domain can contain network
devices that support multiple protocols and rates. An NE can belong to multiple
forwarding domains. For example, an intermediate NE located on an access ring and a
core ring can belong to two forwarding domains.
NE roles and domains have the following functions:
– In service definition scenarios, NE roles and domains are used to define service
types so that service profiles are decoupled from specific network resources.
– In service provisioning scenarios, NE roles and domains are used to filter network-
wide resources so that only the resources required by a specific service type are
listed.
l Intra-___domain path planning: Support enhanced intra-___domain path planning. For example,
you can plan the NEs that primary and secondary paths must pass through. During
service provisioning, paths must be computed based on the planned paths in the ___domain.
l Inter-___domain link planning: In cross-___domain service provisioning scenarios, inter-___domain
links need to be manually created to specify cross-___domain routes.
l Resource facing service (RFS): configures single-___domain VPN service policies for
resources. These policies take effect only for devices in a specified management ___domain
and contain the following information:
– Connection information: basic VPN information, protection mode, routing
constraint, and port filtering rule (To define a cross-___domain service, create multiple
single-___domain policies.)
– Link information: A and Z role labels
l Service design in a different life cycle (nested service design): Service profiles can be
nested into other service profiles. Supported features are as follows:
– Service design: Each service profile can be orchestrated as a ___domain and set as the
primary ___domain.
– Resource allocation: Domain resources, extended parameters, VPN pools,
association policies, and mappings can be configured for existing service profiles.
Function Scope
The function scope of service design is determined by the combination of single-___domain
services and composite services. Based on the service type, topology, and application ___domain,
service design can provide service profiles for various scenarios.
Service Provisioning
Service provisioning helps users quickly create service instances based on service profiles and
enable and activate services in a one-stop manner. Service provisioning has the following
highlights:
l Supports various composite services, such as EPL/EVPL Layer 2 private lines, E-LAN
Layer 2 private networks, VLL services, L3VPN services, and SDH services.
l Coordinates resources on the entire network to compute the optimal service paths.
l Deploys services conveniently.
– NCE (Super) identifies the service profiles that have been published on the service
design page and adds them to the service deployment page.
– NCE (Super) simplifies the provisioning process by service type.
– NCE (Super) provides the service topology. After paths are computed, users can
view resource information, such as inter-___domain links, intra-___domain NEs, VLAN
IDs, and ports.
Service Maintenance
After service deployment, users can configure bandwidth calendars, adjust real-time
bandwidth, check E2E connectivity, activate or deactivate services, restore services, modify
tunnel policies and routing protocols, view multi-layer service topology, and browse abnormal
events.
– Management of service alarms: After configuring an SLA policy, users can view the
alarms on all provisioned services on the abnormal service monitoring page. They
can also view the service topology, check service connectivity, test service
throughput, and view private line details.
l Collect and display alarm information.
– Statistics: collects statistics on the number of alarms based on alarm types (access
point fault, tunnel fault, VPN fault, or SLA threshold-crossing), displays the top 5
tenants with the most alarms, and displays the top 5 services with the most alarms.
– Service alarm list: displays the services with alarms, including the service name,
alarm status, number of alarms, service profile name, tenant name, running status,
earliest alarm generation time, and last alarm generation time.
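As a rough illustration of the statistics described above, the following Python sketch counts alarms per tenant and per service and returns the five largest counts; the record field names are assumptions.

from collections import Counter

def top5(alarm_records, field):
    # Group alarm records by a field ("tenant" or "service") and return the top 5 counts.
    return Counter(record[field] for record in alarm_records).most_common(5)

alarm_records = [
    {"tenant": "bank-a", "service": "l3vpn-01", "type": "tunnel fault"},
    {"tenant": "bank-a", "service": "l3vpn-02", "type": "VPN fault"},
    {"tenant": "isp-b", "service": "evpl-07", "type": "SLA threshold-crossing"},
]
print(top5(alarm_records, "tenant"))   # [('bank-a', 2), ('isp-b', 1)]
print(top5(alarm_records, "service"))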
During IP+Optical multi-layer network management, the four functions work collaboratively
as follows:
l Use Multi-Layer Discovery to discover basic network resources and cross links, and set
geographic sites.
l Use Multi-Layer Provisioning to create, delete, activate, or deactivate ML links and set
constraints (explicit paths and delay constraints) and protection for ML links.
l Use Multi-Layer Monitoring to monitor multi-layer network faults and traffic.
l Use Multi-Layer Optimization to optimize networks. At present, multi-layer network
BOD and network bypass are supported.
l Manual full synchronization: Users manually create one-time tasks to synchronize all
resource data (NE, port, and link data) from controllers or EMSs.
l Manual incremental synchronization: Users manually synchronize new NE, port, and
link data from controllers or EMSs.
l Automatic incremental synchronization: Controllers automatically report NE, port, and
link changes to ensure data consistency.
Service site: Site where IP devices are connected to optical devices. This site can contain
routers, OTN devices, and ROADM devices.
ROADM site: Site that completes optical wavelength switching. This site contains only
ROADM devices.
OLA site: Site that amplifies optical signals but cannot complete optical wavelength
switching. This site contains only OLA devices.
IP site: Site where IP devices are not connected to optical devices. This site contains only
routers.
To respect the O&M habits of some users, NCE also supports NEs outside sites (that is, NEs
not within any sites).
The NCE network monitoring function supports multi-layer topology. The multi-layer
topology is constructed with geographic sites and NEs outside sites.
l The bandwidth usage of ML links is displayed in a list, and capacity can be expanded for
the IP links whose bandwidth usage has exceeded the threshold.
l The wavelength usage of fibers is displayed in a list. Used wavelengths are also
displayed in a list.
l IP and optical paths of TE tunnels at the IP layer (The topology also shows whether the
primary and backup paths of TE tunnels overlap.)
l Network BOD: NCE (Super) supports manual expansion, BC expansion, and automatic
expansion by adding member links to Eth-Trunk links. In addition, users can configure
an Eth-Trunk member link resource pool for ML link capacity expansion.
l Service/tunnel bypass: NCE (Super) can bypass faulty services and tunnels as well as the
services or tunnels with serious delays.
Definition
Optical network survivability analysis is a process of simulating a fault on a network resource
to check whether the remaining resources meet service protection requirements and identify
the services that will be interrupted or degraded. This function helps users assess whether
SLA violation risks are present in service quality and take countermeasures in advance.
Optical network survivability analysis can simulate node, link, and SRLG faults on a TSDN
network and can be triggered in the following modes:
NOTE
NCE supports survivability analysis only for the services carried by dynamic trails.
l Immediate analysis: proactively simulates one or two faults on network-wide site or fiber
resources.
l Resource warning: triggers survivability analysis periodically or automatically triggers
survivability analysis upon a resource change. Warnings will be provided for risky links.
l Fault simulation: simulates faults on key nodes or links to quickly learn the impact on
network services.
Benefits
With the optical network survivability analysis function, users can easily detect network
resource bottlenecks, identify risks in advance, and proactively take O&M measures to ensure
service reliability and avoid SLA violation, which helps build up network O&M capabilities.
Definition
To improve the satisfaction and experience of private line customers, NCE provides key event
assurance (KEA) to help users identify and diagnose faults on key services at the earliest time
possible.
l Enterprise private line services that need to be monitored in the long term (for example,
One Belt and One Road, bank, and public security services)
l Services that need to be assured in a certain period (for example, live broadcast of sports
events or summits)
Benefits
l Fast fault detection: Collects and aggregates service monitoring information. If any faults
exist, you can easily detect them.
The Key Event Assurance app lists service KPIs (current alarms and delay).
l Fast fault diagnosis: If faults occur, you can quickly diagnose them.
l Report: The assurance report provides statistics on service running status and reflects
QoS clearly.
l Rights-based management:
– Each monitor can see only the key event assurance tasks that they are responsible for
(after secondary filtering by permission). Statistics on the Dashboard are matched in
the same way.
– On the Task Management and Task Monitoring pages, all tables support data
filtering.
Specifications
Analyzer allows users to maintain and manage the home page of the Network Analysis and
Optimization app and Dashlets based on service requirements. The following operations are
supported:
Specifications
Table 7-17 provides the specifications of IP RAN analysis reports.
l NE Health Report: Collects the performance indicators (such as CPU usage and memory
usage) of NEs based on subnets, helping users quickly locate NEs with the heaviest load.
l Ring Report: Collects the performance indicators (such as the traffic rate and bandwidth
usage) of inter-ring ports, helping users quickly identify the ports with the heaviest load
on access rings.
Typical Reports
The following are some typical reports in the IP RAN solution.
l NE
l Board
Figure 7-29 Regional traffic map in the IP RAN traffic monitoring solution
Figure 7-30 Regional quality map in the TWAMP quality monitoring solution
Specifications
Regional Traffic Map
l Coloring
Regions on the regional map are displayed in different colors, depending on which
thresholds they have exceeded.
– Green indicates that the regional traffic is less than the Minor Alarm threshold.
– Yellow indicates that the regional traffic is less than the Critical Alarm threshold
but greater than the Major Alarm threshold.
– Red indicates that the regional traffic is greater than the Critical Alarm threshold.
– Gray indicates that the regions are not monitored.
l Region ranking
Regions are ranked by total traffic or bandwidth usage in descending order, and the
ranking is refreshed every hour.
l Traffic visualization
The regional IP RAN traffic monitoring map clearly displays the inbound bandwidth
usage, outbound bandwidth usage, inbound peak bandwidth usage, and outbound peak
bandwidth usage of regions, helping users identify key regions in a timely manner.
Regional Quality Map
l Coloring
Regions on the regional map are displayed in different colors, depending on the regional
service quality.
– Green indicates that the regional service quality is Good.
– Yellow indicates that the regional service quality is Medium.
– Red indicates that the regional service quality is Bad.
– Gray indicates that the regions are not monitored.
l Region ranking
Regions are ranked by service quality in the order from Bad to Good, and the ranking is
refreshed every hour.
l Indicator visualization
The regional map clearly displays quality indicators, which helps users quickly identify
the regions with poor quality.
l The physical topology integrates NE performance indicators and alarms, helping users
learn the NE condition and running status of the entire network.
l The physical topology integrates link performance indicators and alarms, helping users
learn the structure and running status of the entire network.
Specifications
l A maximum of 24 tasks can be created.
l Daily, weekly, and monthly reports are exported at 05:00 every day, 09:00 every
Monday, and 13:00 on the first day of every month, respectively.
l All data can be exported if the report format is XLS or CSV. However, if the report
format is PDF, only the first 100 records can be exported.
l If SendMethod is set to local storage, the exported reports are saved to the NCE server
by default, and only the latest 30 files are reserved. If SendMethod is set to SMTP,
users can send the exported reports to multiple email boxes.
Benefits
l NCE provides the object management function, which allows you to select performance
indicators of multiple resource objects of the same type within a specified period for
statistics collection and analysis.
l Maintenance Suite is provided for users to manage multiple objects and view
performance data or even the performance aggregation results of objects. The
aggregation results can be exported as a report. Maintenance Suite aggregates
performance data by area, service type, or device. For example, it can aggregate
performance data for multiple ports of the same device or in the same area. In addition,
Maintenance Suite allows users to flexibly set object layers for easily collecting and
querying performance data by layer.
Specifications
l Groups can be queried, created, or deleted, and objects in the same group must be of the
same type.
l Users can modify group names but cannot modify resource types. During the
modification, all objects in the group are listed. Users can add or delete objects as
required.
l The performance data of groups can be browsed and exported to reports, but cannot be
saved to the database.
Table 7-18
l PTN solution: PTN_CP, L2Link, PTN_Ethernetport, PWTrail
l OTN solution: OTN_CP, OTN_Ethernetport, L2Link, PWTrail, OTSTrail
l IP RAN solution: NE, CP, IpOpticalPort, IPLink, IPInterface, L2Link
Real-time network quality monitoring and assistance in fault demarcation and locating:
A TWAMP test can be conducted between NEs to collect network performance data. Carriers
can deploy NCE to detect service quality change trends in a visualized manner to facilitate
fault demarcation and locating.
Figure 7-42 shows the typical networking schemes.
Destination device IP address (TWAMP-capable device): N / Y
IP device: N / Y
Version mapping:
l V200R005C00: ATN910(5130), ATN910I, ATN910B, ATN950B(5131), ATN905 (NA, NA, NA, Y, N)
l V200R006C00: ATN910(5130), ATN910I, ATN910B, ATN950B(5131), ATN905 (NA, NA, NA, Y, N)
l V300R002C00 and later versions: ATN950C (NA, NA, NA, Y, Y)
l V200R006C22: ETN500 (NA, NA, NA, Y, N)
Two-way Packet Loss Ratio: Rate of lost packets to transmitted packets on a link within a
period.
Two-way Delay: Time during which a source end sends a packet to the peer end and then
receives a response from the peer end.
Two-way Jitter: Difference between two consecutive delays for packets to reach the same
destination.
One-way Packet Loss Ratio: Rate of lost packets to transmitted packets on a link from the
source to the destination or from the destination to the source within a period.
One-way Delay: Time required from sending a packet by the source to receiving the packet
by the destination or sending an acknowledgment packet by the destination to receiving the
acknowledgment packet by the source.
One-way Jitter: Delay change from the source to the destination or from the destination to
the source.
MOS: A test to obtain users' view of the network quality. The MOS provides a numerical
indication of the perceived quality of the received media from the users' perspective and is
expressed as a single number in the range of 1 to 5, where 1 indicates the lowest perceived
quality and 5 indicates the highest perceived quality. The voice quality is calculated based
on the E-Model.
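The two-way indicators defined above can be derived from raw test samples roughly as follows. The sample format and the use of a mean-based jitter are assumptions for illustration; they do not describe the exact calculation performed by the product.

def two_way_metrics(sent, received, rtts_ms):
    # sent/received: packet counts in one interval; rtts_ms: round-trip times of the
    # received packets in arrival order.
    loss_ratio = (sent - received) / sent if sent else 0.0
    delay = sum(rtts_ms) / len(rtts_ms) if rtts_ms else 0.0
    # Jitter as the mean absolute difference between consecutive round-trip delays.
    jitter = (sum(abs(b - a) for a, b in zip(rtts_ms, rtts_ms[1:])) / (len(rtts_ms) - 1)
              if len(rtts_ms) > 1 else 0.0)
    return {"loss_ratio": loss_ratio, "delay_ms": delay, "jitter_ms": jitter}

print(two_way_metrics(sent=100, received=98, rtts_ms=[10.2, 11.0, 9.8, 10.4]))
# {'loss_ratio': 0.02, 'delay_ms': 10.35, 'jitter_ms': 0.866...}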
Specifications
The following lists the specifications supported by TWAMP-based IP RAN quality
monitoring:
l Viewing real-time performance data
The real-time performance data obtained from a TWAMP test can be displayed in a line
chart to help users learn the current service quality.
Function Overview
During a trace test, packets with different time to live (TTL) are sent to the destination to
determine the IP address, packet loss rate, and delay of each hop on the path.
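A minimal sketch of such a TTL-based trace is shown below. The probe function is a stand-in for however probes are actually sent (for example, ICMP or UDP); the simulated path and loss probability are assumptions used only to make the sketch runnable.

import random

def probe(destination, ttl):
    # Placeholder: returns (responding hop IP, RTT in ms, lost flag) for one probe.
    path = ["10.0.0.1", "10.0.1.1", "10.0.2.1", destination]   # simulated forwarding path
    hop = path[min(ttl, len(path)) - 1]
    return hop, round(random.uniform(1, 20), 1), random.random() < 0.05

def trace(destination, max_hops=30, probes_per_hop=3):
    # Send probes with increasing TTL; record per-hop IP, packet loss rate, average delay.
    results = []
    for ttl in range(1, max_hops + 1):
        replies = [probe(destination, ttl) for _ in range(probes_per_hop)]
        answered = [(ip, rtt) for ip, rtt, lost in replies if not lost]
        hop_ip = answered[0][0] if answered else "*"
        loss = 1 - len(answered) / probes_per_hop
        delay = sum(rtt for _, rtt in answered) / len(answered) if answered else None
        results.append({"ttl": ttl, "hop": hop_ip, "loss": loss, "delay_ms": delay})
        if hop_ip == destination:
            break
    return results

for row in trace("10.0.3.1"):
    print(row)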
Trace Test Description
The following tables describe the related indicators, source and destination ends, and
version mapping of trace tests.
Packet loss ratio: Ratio of the number of lost packets on each hop to the total number of
packets transmitted on the forwarding path from the source to the destination within a
period of time.
Delay: Time required for each intermediate device to respond when a packet is sent along
the forwarding path from the source device to the destination device.
SNC: Y / N
Specifications
The following lists the specifications supported by network quality evaluation based on trace
tests:
l Viewing historical performance data
To learn the quality of a service path, users can view the historical performance data and
check the IP address, packet loss rate, and average two-way delay of each hop.
l Supporting periodical testing
The test period can be set to 5min, 10min, 15min, 20min, 30min, 1h, 6h, 12h, 24h, 2d,
or 7d to facilitate periodical tests of a service path.
Function Overview
NCE utilizes the path restoration function triggered by TWAMP tests to help carriers achieve
the following:
l Discover and restore hop-by-hop service paths from the alarm source to the alarm
destination.
l Based on the restored service paths, read indicators of existing or on-demand TWAMP
tests hop by hop; implement hop-by-hop fault diagnosis based on the test results.
Figure 7-47 shows the typical networking scheme.
Packet loss rate: Ratio of the number of discarded packets to the total number of packets
transmitted within a period.
Delay: Time used by a packet to be transmitted from the source to the destination.
Jitter: Variation between delays of two continuous packets towards the same destination.
Device IP (TWAMP-capable device): Y / N
Destination device IP address (TWAMP-capable device): N / Y
TWAMP-capable base station: N / Y
Specifications
The following describes the specifications supported by ATN+CX quality path restoration and
fault diagnosis:
l A TWAMP test case alarm can trigger path restoration and fault diagnosis.
If a TWAMP test case whose source is an NE generates an alarm, click the alarm device
in the physical topology. In the quality alarm panel displayed in the right pane, manually
trigger path restoration and fault diagnosis to assist fault analysis.
l Hop-by-hop fault information collection is supported.
Fault diagnosis is implemented based on the restored path, and performance indicator
values are accumulated. The fault information is displayed on each hop of the path,
including the link traffic, optical power, and queue packet loss rate. The lower pane of
each hop of the path provides a report tab page to display detailed indicators.
Function Overview
Path restoration brings the following benefits to carriers:
Based on the restored service paths, read indicators of existing or on-demand TWAMP tests
hop by hop; implement hop-by-hop fault diagnosis based on the test results.
Packet loss rate: Ratio of the number of discarded packets to the total number of packets
transmitted within a period.
Delay: Time used by a packet to be transmitted from the source to the destination.
Jitter: Variation between delays of two continuous packets towards the same destination.
Destination device IP address (TWAMP-capable device): N / Y
TWAMP-capable base station: N / Y
Specifications
The following describes the specifications supported by IP network quality path restoration
and fault diagnosis:
A TWAMP test case alarm can trigger path restoration and fault diagnosis.
If a TWAMP test case whose source is an NE generates an alarm, click the alarm device in
the physical topology. In the quality alarm panel displayed in the right pane, manually trigger
path restoration and fault diagnosis to assist fault analysis.
The dashboard provides a topology view to clearly display the traffic situations of network-
wide links and displays top 10 link TX bandwidth usage as well as top 10 tunnel rates.
Specifications
Out Direction Custom Flow Report:
l Displays basic information such as the names and traversed NEs of outbound custom
flows.
l Collects performance data such as traffic rates to help users learn traffic situations and
optimize traffic.
l Drills down to the flow link report.
Typical Reports
The following figure shows the typical reports provided in the IP network optimization
solution.
Implementation Principles
l Outbound traffic optimization
Users can specify the service flow to be optimized and the next-hop information on the
egress device to divert the traffic to idle links. In addition, users can set up a dedicated
link for VIP users to ensure the bandwidth.
l Inbound traffic optimization
Users can specify the service flow to be optimized and the routing attribute on the MAN
egress device to modify the priority of the route advertised by the MAN egress device to
the backbone device and divert the traffic to idle links on the MAN.
l Users can select a link in the topology to automatically filter custom flows that traverse
the link (the flows are displayed in the list on the right).
l Users can select a flow in the list of custom flows to display the link through which the
flow passes.
l The topology displays whether link traffic exceeds the threshold, the link status
(up/down), and the NE status.
l Users can select inbound and outbound custom flows or top N flows for optimization.
Specifications
l Traffic optimization can be implemented for top N flows only after they are configured
as custom flows.
l A maximum of 50 outbound flows can be selected for optimization, and inbound flows
do not support batch optimization.
The dashboard provides a topology view to clearly display the traffic situations of network-
wide links and displays top 10 link TX bandwidth usage as well as top 10 tunnel rates.
Specifications
The following table lists the reports supported by the MPLS network optimization solution.
l IP Link Traffic Report: Collects the traffic rate and bandwidth usage of IP links, helping
users quickly locate interfaces with heavy loads.
l IP Link Quality Report: Collects the delay, jitter, and packet loss rate of IP links, helping
users quickly identify the interfaces with the heaviest load.
Typical Reports
The following figure shows the typical reports provided in the IP network optimization
solution.
Implementation Principles
SNMP is used to collect 5-minute traffic of IP links and TE tunnels. The Controller
periodically obtains tunnel traffic of the entire network from the Analyzer for path
computation reference. During optimization, the Analyzer applies the selected link or tunnel
to the Controller for path computation, and displays the path computation result to users for
confirmation. Users can evaluate the optimization effect based on the comparison result
(tunnel path and link traffic changes) before and after path computation and network traffic
distribution to determine whether to perform optimization. If users choose to perform
optimization, the Controller modifies the LSP route on the router to optimize the network
traffic. Through traffic collection, path computation, and network optimization, the traffic
distribution of the network is optimal.
After measuring the link delay using TWAMP, NCE can apply the delay to the Controller
during link resource management. Then the Controller implements optimization based on the
link delay.
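The collection, computation, and confirmation loop described above can be outlined as follows. All function names, callbacks, and the congestion threshold are assumptions used for illustration; in the product, traffic is collected over SNMP and path computation is performed by the Controller.

CONGESTION_THRESHOLD = 0.8   # assumed 80% bandwidth usage trigger

def optimize(links, compute_path, apply_lsp, confirm):
    # links: dicts with 'id' and 'usage'; compute_path asks the Controller for a new path;
    # confirm shows the before/after comparison to the user; apply_lsp pushes the change.
    for link in links:
        if link["usage"] < CONGESTION_THRESHOLD:
            continue
        proposal = compute_path(link["id"])
        if confirm(link, proposal):
            apply_lsp(proposal)

optimize(
    links=[{"id": "CR1-CR2", "usage": 0.93}, {"id": "CR2-CR3", "usage": 0.41}],
    compute_path=lambda link_id: {"link": link_id, "new_path": ["CR1", "CR4", "CR2"]},
    apply_lsp=lambda proposal: print("apply", proposal),
    confirm=lambda link, proposal: True,
)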
Figure 7-48 Manual optimization page for the MPLS network optimization solution
Figure 7-49 Link delay configuration page for the MPLS network optimization solution
Specifications
The specifications of manual adjustment are as follows:
Implementation Principles
SNMP is used to collect 5-minute traffic of IP links and TE tunnels. The Controller
periodically obtains tunnel traffic of the entire network from the Analyzer for path
computation reference. During optimization, the Analyzer applies the selected link or tunnel
to the Controller for path computation, and displays the path computation result to users for
confirmation. Users can evaluate the optimization effect based on the comparison result
(tunnel path and link traffic changes) before and after path computation and network traffic
distribution to determine whether to perform optimization. If users choose to perform
optimization, the Controller modifies the LSP route on the router to optimize the network
traffic. Through traffic collection, path computation, and network optimization, the traffic
distribution of the network is optimal.
GUI
Figure 1-1 Maintenance policy page
Compared with other collection methods, telemetry collection can be used on larger networks
and completed in a shorter period.
Specifications
Table 7-32 provides the specifications of IP analysis reports.
Version Mapping
NE Version
Features
The Private Line Analysis and Assurance app focuses on the objects on NCE (Super). For
example, the app manages only E2E and segmented services. It does not process internal
service details. It locates problems to management domains, not to resources in the
management domains.
The app is divided into two parts: analysis and perception. Analysis is the basis. Perception
can work independently.
The app functions include:
l TP and CPE monitoring
NCE (IP Domain) or uTraffic is informed to monitor the TP and CPE indicators over a
northbound interface (NBI), and then sends the monitoring results to NCE (Super)
through Kafka.
l Private line SLA monitoring
The app sends requests for E2E private line SLA monitoring. It automatically selects
TWAMP or Y.1731 and notifies NCE (IP Domain) or uTraffic of test case creation,
through NBIs. NCE (IP Domain) or uTraffic reports the test results to NCE (Super)
through Kafka.
l Private line availability monitoring
If the packet loss rate of a private line is above the threshold in SLA detection results, the
app considers the private line unavailable and calculates its availability.
l Private line fault demarcation
Through path restoration, the app displays each hop of a private line. It can perform PW
Trace, LSP Trace, or IP Trace tests to detect the connectivity of each hop. In addition, it
shows the traffic and CPU or memory usage of devices on each hop, which facilitates
fault demarcation.
Table 7-33 KPIs of the Private Line Analysis and Assurance app
Resource type: Private line
l Private line availability: Ratio of the available time to the statistics time of a private line.
l Private line SLA: Delay, packet loss, and jitter of a private line.
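A minimal sketch of the availability calculation defined above, under the assumption that availability is derived from fixed-length SLA sampling intervals and a packet loss threshold (both values below are illustrative):

def availability(total_seconds, down_seconds):
    # Private line availability: ratio of the available time to the statistics time.
    return (total_seconds - down_seconds) / total_seconds

def down_seconds_from_sla(loss_samples, loss_threshold, sample_interval_s):
    # Treat every sampling interval whose packet loss rate exceeds the threshold as
    # unavailable time.
    return sum(sample_interval_s for loss in loss_samples if loss > loss_threshold)

samples = [0.0, 0.0, 0.35, 0.4, 0.0, 0.0]            # six 5-minute SLA samples
down = down_seconds_from_sla(samples, loss_threshold=0.3, sample_interval_s=300)
print(availability(total_seconds=6 * 300, down_seconds=down))   # about 0.667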
l CPE analysis
CPE analysis provides the following functions:
– Displaying the alarm information of CPEs
– Displaying the basic attributes, CPU usage, and memory usage of CPEs
– Displaying the KPI trends of CPEs
– Listing all the TPs under each CPE
l Availability analysis
Availability analysis provides the following functions:
– Reflecting the availability fulfillment rate (Number of tenants with normal
availability/Total number of tenants)
– Filtering data by time range or tenant
– Displaying the tenant and private line availability rankings, listing all interruption
records, and associating tenants with private lines
– Drilling down to tenant and private line details
l SLA analysis
SLA analysis provides the following functions:
– Reflecting the SLA fulfillment rate (Number of tenants with normal SLAs/Total
number of tenants)
– Displaying the SLA exceeding times in each area
– Filtering data by time range or tenant
– Drilling down to tenant and private line details
NOTE
What-if analysis is implemented in compliance with RFC standards. Simulation results (routes) may
differ from the actual routes.
The key functions of what-if analysis are topology restoration, traffic modeling and
simulation, and fault simulation.
Topology Restoration
By interconnecting with Manager and Controller, what-if analysis obtains data such as
topology inventory data, device configurations, and tunnel path computation constraints. Then
it parses and restores topology information to provide basic network data as the input of what-
if analysis.
l Supports basic topology management, such as adjusting the layout, displaying the
topology in full-screen mode, and saving the topology view.
l Displays topology statistics, including the quantities of nodes and links.
Layout: Adjusts the topology layout. The following layout modes are
supported:
l Balanced layout: All NEs are evenly distributed on the
page, and the locations of the NEs are proper.
l Circle layout: All NEs are arranged in a circle in the
topology view. This mode is unavailable when there are
more than 200 NEs.
l Layered layout: NEs are layered as they are connected.
The NEs at the top layer (first layer) can be manually
specified or automatically assigned. The NEs directly
connected to those at the first layer are put at the second
layer, the NEs directly connected to those at the second
layer are put at the third layer, and so on.
Parse Log: Views topology parsing logs, including data sources, time,
parsing results (successful or failed), statistics, and details.
Statistics include the numbers of created NEs and links, the number of
NEs that are parsed abnormally, and the number of NEs that fail to be
parsed. Users can click For more detail to learn the failure causes,
such as invalid data or failure to obtain services.
What-if analysis provides topology statistics, including the quantities of nodes and links.
Traffic Simulation
Traffic simulation: simulates service traffic when the network is normal or abnormal (for
example, links or nodes are faulty). From the traffic simulation results, users can learn the
changes on service paths, link utilization, and services on each link.
Route simulation: simulates IGP, BGP, and TE. The system analyzes route computation on IP
devices and generates protocol routes and forwarding routes for the devices. Route simulation
must be performed before traffic simulation.
NOTE
Reason why route simulation must be performed before traffic simulation: The major function of
traffic simulation is to quickly calculate E2E service paths and the load on each link based on the
forwarding tables maintained by routes (What-if analysis supports load overlaying for original links and
calculation of Layer 2 protocol header, Layer 3 protocol header, frame header, and frame trailer costs).
During traffic simulation, what-if analysis simulates routing table query behavior on the forwarding
plane of an IP network based on the route simulation results (a routing table on each router) to obtain a
complete path to the destination IP address hop by hop. The major purpose of route simulation is to
generate routing and forwarding tables. Therefore, route simulation must be performed before traffic
simulation.
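A minimal sketch of the hop-by-hop lookup described in the note above. The forwarding tables are simplified to exact-match dictionaries with a next hop per destination; the real simulation performs longest-prefix matching and also models protocol and frame overheads.

def resolve_path(src_router, dst_ip, fib):
    # fib: router -> {destination IP: next-hop router, or None if directly attached}.
    path, current = [src_router], src_router
    while True:
        next_hop = fib[current].get(dst_ip)
        if next_hop is None:                 # destination reached (directly attached)
            return path
        if next_hop in path:                 # guard against routing loops
            raise ValueError("routing loop detected")
        path.append(next_hop)
        current = next_hop

fib = {
    "PE1": {"10.1.1.1": "P1"},
    "P1": {"10.1.1.1": "PE2"},
    "PE2": {"10.1.1.1": None},
}
print(resolve_path("PE1", "10.1.1.1", fib))   # ['PE1', 'P1', 'PE2']

Once the path of each flow is known, the flow's volume can be added to every traversed link to obtain the per-link load, which is the traffic simulation output described above.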
The major differences between flow data and load data are as follows:
l Flow data: E2E service traffic data, including the IP addresses of source and destination
devices, and traffic volume.
l Load data: throughput data of each interface in a certain period.
l Changes to the flows carried on links or TE tunnels before and after faults occur, helping
users check whether the flows carried on key links have changed
l Fault statistics reports on the UI or in Excel, which contain the numbers of flow changes,
flow interruptions, tunnel interruptions, load changes, and overloaded links caused by
faults.
Performance Indicators
What-if analysis supports the following performance indicators.
NOTE
Direct 1
Static 1
OSPF 1.5
IS-IS 1.5
BGP 1.6
VPN 1.6
FIB 1
Version Requirements
What-if analysis has the following application limitations:
l Applies only to the single-AS scenario on an IP network and applies only to the IP Core
and ATN+CX IP RAN networking scenarios.
l Does not support IPv6, multicast, SR-BE, EVPN, and VXLAN scenarios.
l Does not support the networking of third-party devices.
l Does not support subnets.
Table 7-38 lists the device types and their versions supported by What-if analysis.
8 Usage Scenarios
NCE can be flexibly used in various scenarios to achieve network connection automation
and self-optimization as well as O&M automation.
Solution
As shown in Figure 2, NCE enables an intent-driven TSDN network that meets the on-
demand use requirements of enterprise private line services.
l The forwarding plane at the infrastructure layer uses OSN 9800s, OSN 8800s, and OSN
1800s (which are large-capacity MS-OTN and DWDM devices) to deliver high-
bandwidth and short-delay transparent transmission pipes.
l NCE is deployed at the management and control layer to provide one-stop, automated
private line provisioning, centralized service control, and northbound openness and
integration capabilities.
Features
The following describes the key features of NCE in the enterprise private line scenario:
l Agile provisioning of optical services
– Agile provisioning of E2E services: meets the fast service innovation and release
requirements in the Internet era, helping customers win market opportunities.
Solution
l Private line services are VPN services that carriers provide for customers to connect
different areas over a VPN (L3VPN or L2VPN). These services are typically configured
through the NMS GUI or device CLI.
– L3VPN is a virtual private network established at the network layer. It allows sites
in different locations to communicate with each other through the backbone
networks of service providers.
– L2VPN transparently transmits the Layer 2 data of users over a PSN (IP/MPLS)
network.
l The IP private line provisioning solution gives carriers a new way to manage and control
private line services. By introducing an SDN controller to the legacy network, it
automates VPN service deployment and centralizes VPN service control. The controller
provides a portal for users to configure services. NCE converts the service models to
configurations that can be automatically deployed on devices. Through centralized path
computation, the controller can produce optimal paths for the TE tunnels that carry VPN
services.
l The IP private line provisioning solution accelerates private line provisioning, computes
optimal paths for VPN services, and improves network utilization.
Capabilities
The IP private line provisioning solution has the following key capabilities:
automatically creates the required tunnels or binds previously created tunnels to the
service.
l Tunnel path computation and traffic optimization
NCE can collect physical and network topology information for the entire network and
compute tunnel paths based on the topology information as well as tunnel constraints.
Because NCE has network-wide information, it can compute optimal paths (paths with
the lowest costs or shortest delay) that meet tunnel constraints (including bandwidths,
delays, affinity attributes, SRLGs, priorities, and number of hops). The optimal paths are
then applied to forwarders to enable packet forwarding. NCE can dynamically collect
traffic information and optimize tunnels (a minimal sketch of such constrained path
computation follows this list).
l Link SLA collection
NCE allows users to configure SLA collection policies. It converts the policies to
performance data collection configurations for forwarders, which collect SLA
performance data over TWAMP. The uTraffic periodically obtains the SLA performance
data from forwarders.
l Network OAM
NCE can display the paths and statuses of services and tunnels, visualize physical and
logical topologies, and detect tunnel and path connectivity.
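A minimal sketch of the constrained path computation referenced under Tunnel path computation and traffic optimization above. It assumes a simple link list and optimizes only delay under a bandwidth constraint; the real computation also honors affinity attributes, SRLGs, priorities, and hop-count limits.

import heapq

def best_path(links, src, dst, required_bw):
    # links: (node_a, node_b, available_bw, delay_ms); returns (total_delay, [nodes]) or None.
    graph = {}
    for a, b, bw, delay in links:
        if bw >= required_bw:                        # prune links that violate the constraint
            graph.setdefault(a, []).append((b, delay))
            graph.setdefault(b, []).append((a, delay))
    queue, seen = [(0.0, src, [src])], set()
    while queue:
        delay, node, path = heapq.heappop(queue)
        if node == dst:
            return delay, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, link_delay in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (delay + link_delay, neighbor, path + [neighbor]))
    return None

links = [("PE1", "P1", 10000, 2.0), ("P1", "PE2", 10000, 3.0), ("PE1", "PE2", 100, 1.0)]
print(best_path(links, "PE1", "PE2", required_bw=1000))   # (5.0, ['PE1', 'P1', 'PE2'])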
Benefits
The IP private line provisioning solution brings the following benefits to customers:
l Accelerates E2E service deployment and shortens TTM.
l Improves network resource efficiency.
l Simplifies O&M, reduces manual errors, and shortens the Mean Time to Repair (MTTR)
without affecting services.
l Supports smooth evolution and reduces the total cost of ownership (TCO).
8.1.3.1 Introduction
The legacy private line bearer networks cannot provide high MSTP bandwidths and do not
support advanced technologies. Though OTN networks can flexibly access SDH services and
support Layer 2 Ethernet services such as E-Line and E-LAN, they are weak at Layer 3 and
cannot access private lines due to their positions.
IP RANs are strong at Layer 3 because they have routers, which is suitable for complex mesh
networking. In addition, IP RANs feature comprehensive coverage. Therefore, carriers prefer
to use IP RAN to provision private line services.
Carriers' IP RANs are deployed by city. One IP RAN is deployed in each city. If a city uses
devices from two vendors, two IP RANs will be deployed. Enterprise branches in different
cities need to interconnect with each other through different IP RAN networks. Therefore,
they need multi-___domain multi-vendor IP RAN private lines.
For private line services across different IP RANs, Option A and Option C can be used for
interconnection between IP RANs. For unified O&M, carriers prefer Option C.
Service access devices (CPEs) and IP RANs can be managed in a centralized or separate
manner. Typically, CPEs are managed independently of IP RAN networks.
Related Concepts
IP RAN: A RAN that uses the IP packet transport technology to achieve data backhaul on a
radio access network.
Dual-homing: A network topology where a customer service accesses the carrier network
using two independent PEs that work in active/standby mode. The standby connection is
activated only when the active connection is faulty.
Separate CPE management: All CPEs are put into one ___domain and all IP RAN devices are
put into another ___domain. To facilitate understanding, the source and sink access points CPE1
and CPE2 are put into different domains here.
Centralized CPE management: CPEs and IP RAN devices are put into one ___domain and
managed by the same controller.
CSG single-homing/dual-homing: The CPEs serving as service access points can be either
single-homed or dual-homed to upstream CSGs in an IP RAN ___domain. Single-homing access
refers to the scenario where a CPE connects to a single upstream CSG, while dual-homing
access refers to the scenario where a CPE connects to two upstream CSGs.
ASG single-homing/dual-homing: The CPEs serving as service access points can be either
single-homed or dual-homed to upstream ASGs in an IP RAN ___domain. Single-homing access
refers to the scenario where a CPE connects to a single upstream ASG, while dual-homing
access refers to the scenario where a CPE connects to two upstream ASGs.
l An enterprise has branches A and B in the same city and needs to apply for a private line
service to connect the branches. This is an intra-city private line service.
l An enterprise has a branch in city A and city B and needs to apply for a private line
service to connect two branches in city A and city B. This is an inter-city private line
service.
Figure 8-4 Private line service for enterprise interconnection using Option C (separated CPE
management)
NCE (Super) interconnects with Agile Controller-WAN, NCE (IP Domain), or a third-party
controller in the southbound direction to manage devices on the backbone network. NCE
(Super) interconnects with Agile Controller-WAN (IP RAN) in the southbound direction or
directly interconnects with U2000 to manage IPRAN devices. Agile Controller-WAN (IP
RAN) interconnects with U2000 in the southbound direction and synchronizes the inventory
data of the access and aggregation domains as well as CPE devices using U2000.
When CPEs and devices in the IPRAN ___domain are managed by different controllers, this
scenario is called separate CPE management.
l In this scenario, CPE1 and CSG1 as well as CPE2 and CSG3 are access points for each
other, and inter-___domain links need to be created.
l If there is no preset tunnel between CPE1 and CSG1 as well as between CPE3 and
CSG3, you can create a tunnel on NCE (Super).
l ASGs and ASBRs are access points for each other, and inter-___domain links need to be
created.
Table 8-1 Types of services supported by the cross-city private line for enterprise
interconnection
Service Type | Service Segmentation | Service Requirement | Accessed CPE | Service Template Type
Figure 8-8 CPE+MS+PW+CPE private line for interconnection in the IP RAN/IP metro
___domain
Figure 8-9 IP RAN+IP Core private line for enterprise interconnection in different cities
Figure 8-11 Inter-province private line service for enterprise interconnection (Option A)
From the perspective of users, the purpose of the service is to access the Internet. Therefore,
the service is called Internet access service.
This scenario has the following characteristics:
1. This service is a circuit cross connect (CCC) service. NCE (Super) manages only the
CCC service on CPE1.
2. A large-capacity PW is provisioned between CSG1 and ASG1. Multiple users under the
same CPE share the same PW.
3. CPE1 and CSG1 are connected through optical fibers and interconnected through a
VLAN. CSG1 uses a QinQ interface. A QinQ sub-interface can be connected to multiple
VLANs or VLAN segments. The CCC service becomes available after its VLANs match
with the VLANs of PW sub-interfaces.
4. NCE (Super) does not manage the PWs connected to the network. The existing NMS
OSS/BSS system provisions service configurations on the IP RAN and BRAS/CR.
8.1.4.1 Introduction
Enterprises deploy DCs on clouds. Enterprises need private lines for accessing the cloud DCs.
At present, the following cloud private lines are used: IP RAN+EVPN VXLAN cloud private
line, IP RAN+backbone MS-PW cloud private line, and PON+L3VPN cloud private line.
Related Concepts
QinQ: A Layer 2 tunnel protocol based on IEEE 802.1Q encapsulation. It adds a public
VLAN tag to a frame carrying only a private VLAN tag, allowing the frame to be transmitted
over the service provider's backbone network based on the public VLAN tag. This provides an
L2VPN tunnel for customers.
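A toy Python illustration of the tag stacking described above. The frame is simplified to a dictionary; this is not a real encapsulation implementation, only a picture of how the public (outer) tag is pushed in front of the private (inner) tag.

def push_outer_tag(frame, public_vlan):
    # Provider edge: add the public S-VLAN tag in front of the customer C-VLAN tag.
    return {**frame, "vlan_tags": [public_vlan] + frame["vlan_tags"]}

def pop_outer_tag(frame):
    # Remote provider edge: remove the public tag, restoring the customer frame.
    return {**frame, "vlan_tags": frame["vlan_tags"][1:]}

customer_frame = {"dst": "aa:bb:cc:dd:ee:ff", "vlan_tags": [100], "payload": b"..."}
in_provider = push_outer_tag(customer_frame, public_vlan=2000)
print(in_provider["vlan_tags"])                  # [2000, 100]: forwarded on VLAN 2000 only
print(pop_outer_tag(in_provider)["vlan_tags"])   # [100]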
In this scenario, the PON service deployment is managed by a separate access NMS. The
service management scope of NCE (Super) is BRAS-PE. The BRAS belongs to the metro
network, and the PE, which connects to the cloud, belongs to the converged cloud backbone
network.
If the IP RAN for City A and the converged cloud backbone network are managed by the
same set of NCE (IP Domain), NCE (Super) processes the service as a single-___domain L3VPN
service.
8.1.5.1 Introduction
8.1.5.1.1 Definition
The IP and optical cross-___domain cross-vendor private line feature provides carriers with self-
service design, fast service provisioning, and diverse service maintenance functions.
NOTE
The IP and optical cross-___domain cross-vendor private line feature consists of three layers:
super controller, ___domain controller, and forwarding layers.
l NCE (Super) is located at the super controller layer.
l Controllers are located at the ___domain controller layer. Huawei controllers include Agile
Controller-WAN, Agile Controller-Transport, and U2000.
l Network forwarding devices, including Huawei NE40E routers, MS-OTN devices,
Alcatel-Lucent (ALU) routers, and optical devices, are located at the forwarding layer.
NCE (Super) manages composite services. Using functions such as cross-___domain path
computation and service orchestration, NCE (Super) decomposes cross-___domain composite
services and applies them to corresponding controllers to implement the following functions:
l Service creation, modification, deletion, and query
l Real-time display, activation, and deactivation of service paths
l Bandwidth on demand (BOD)
l Bandwidth calendar
l Service status display
l Service continuity check
Users can manage composite services through the GUIs or northbound APIs provided by
NCE (Super).
8.1.5.1.2 Benefits
In the IP and optical cross-___domain cross-vendor private line feature, NCE (Super) provides
the following benefits:
Single-Domain Scenario
NCE (Super) manages a ___domain controller that manages all services within its management
___domain. The service types include the virtual leased line (VLL) and L3VPN services.
Figure 8-20 Protection scenario where the port type is MC-LAG or LAG
IP RAN+OTN Private Line for Enterprise Interconnection (with the OTN Only
as a Pipe)
As shown in Figure 5 IP RAN+OTN private line for enterprise interconnection, the carrier
uses an IP RAN and an optical aggregation network to carry enterprise private line services. One end
of the enterprise network connects to the IPRAN through the CPE, and the other end of the
enterprise network connects to the ACC NE attached to the optical aggregation network (MS-
OTN). The NE can access both mobile bearer services and enterprise private line services,
which is slightly different from the CPE that carries only the enterprise private line. The
optical aggregation network is deployed in advance as a pipe to streamline IP routes between
the ACC and IPRAN. During service provisioning, NCE (Super) manages a CPE access
___domain (including the ACC NE) and an IPRAN ___domain. Tenant services are CPE+IPRAN
+ACC multi-___domain services.
The service type is VLAN-based L2VPN on the CPE and VLL in the IP RAN ___domain and on
the ACC.
IP RAN+OTN Private Line for Enterprise Interconnection (with the OTN and
IPRAN back-to-back interworking using Option A)
As shown in Figure 6 IP RAN+OTN private line service for enterprise interconnection
(Option A), the carrier uses IP RAN and optical aggregation networks to carry enterprise
private line services. One end of the enterprise network connects to the IP RAN through the
CPE, the other end of the enterprise network connects to the CPE attached to the optical
aggregation network. The OTN connects to the IPRAN in back-to-back mode using Option A.
During service provisioning, NCE (Super) manages one CPE access ___domain, one IP RAN
___domain, and one OTN ___domain. Tenant services are CPE+IP RAN+ACC multi-___domain
services.
The service type is VLAN-based L2VPN on the CPE, VLL in the IP RAN ___domain, and EVPL
in the MS-OTN ___domain.
Figure 8-23 IP RAN+OTN private line service for enterprise interconnection (Option A)
The service type is VLAN-based L2VPN on the access CPE (including the ACC), VLL in the
IP RAN, EVPL in the MS-OTN, and VLL or L3VPN in the IP core ___domain.
8.1.6.1 Introduction
Definition
The optical multi-___domain solution is an end-to-end service provisioning and management
solution for multiple domains and vendors in the transport ___domain.
Traditional transport services are based on the end-to-end configuration of the NMS of the
vendor. The NMS can manage the networks and services of only one city or one vendor. To
meet the requirements of simplified networks, customers hope that the separated provisioning
and management experience can be transformed into service-oriented end-to-end management
without separate configuration and management of segmented networks. Therefore, the
optical multi-___domain solution was introduced.
NOTE
As shown in the preceding figure, the architecture of the optical multi-___domain private line
feature consists of three layers: Super controller, ___domain controller, and forwarding layer
l The component of the Super controller is NCE (Super).
l The ___domain controller component is the controller, and the Huawei controller is NCE
(Transport Domain).
l The forwarding layer contains network forwarding OTN devices, including Huawei OSN
9800 and OSN 1800.
In the optical multi-___domain feature, NCE (Super) provides self-service service design, quick
service provisioning, and rich service maintenance functions for carriers.
l Service design supports definition of multiple service types and the service planning
capabilities.
l As a service production system, service design implements cross-___domain composite
service provisioning of multiple controllers. Private line services for cross-___domain
customers in the OTN ___domain can be provisioned and adjusted on the live network.
l Various service maintenance functions are supported, such as displaying service views.
Benefits
The optical multi-___domain solution brings the following benefits:
l Provides end-to-end service GUIs from the user perspective to display service dynamics
clearly. The solution enables you to quickly identify the ___domain or vendor whose
services are interrupted.
l Supports design of service templates to provision multiple services using one template.
This reduces manpower and ensures consistent service provisioning.
l Reduces the difficulty of service provisioning for maintenance engineers and enables
one-click service provisioning across domains and scenarios, improving the service
provisioning efficiency.
Single-Domain Scenarios
Devices in the access, aggregation, and core networks are optical devices grouped into one
___domain. E1/STM services are carried on SDH HS.CPEs, achieving SDH private line service
provisioning.
Multi-Domain Scenarios
As shown in Figure 8-27, NCE (Super) manages multiple optical domains, and these optical
domains may belong to the same core network or one or more access (metro) networks. User
services may be in the management ___domain of one controller or across the management
domains of multiple controllers.
The accessed services can be SDH, Ethernet, SAN, OTN, video, or other services. After these
services reach the client-side ports, the client-side boards directly map these services to OTN
signals that can be transmitted on the OTN. T boards are used to interconnect the management
domains of different controllers. Client-side signals are added and dropped on T boards.
Table 8-12 Service type supported by the optical multi-___domain networking scenario
Service Type | Service Requirement | Service Template Type
8.1.7 SPTN
8.1.7.1 Introduction
8.1.7.1.1 Definition
To meet future service development requirements and cope with live network O&M
pressures, the packet transport network (PTN) must be further evolved. Forwarding-control
decoupling and the centralized management and control architecture provide superb evolution
performance for the PTN, making it possible for the PTN to combine the advanced concept of
software-defined networking (SDN) with carrier-class reliability and high service quality and
smoothly evolve to the SDN PTN (SPTN).
The overall SPTN architecture consists of three parts: forwarding layer, control layer, and
application layer.
The main SPTN components include the integrated resource management system (RMS),
integrated transport network management system (NMS), apps, app servers, super controller
NCE (Super), ___domain controller NCE (IP Domain), and forwarders.
l Forwarding layer
Services are forwarded and processed at this layer. Existing PTN devices perform service
forwarding and processing. In other words, these devices forward services based on
configurations delivered from the control layer and perform protection switching.
l Control layer
This layer provides the following functions:
– Topology management: automatically discovers adjacency relationships, link
information, and link status.
– Route selection: selects appropriate routes for link establishment.
– Connection management: establishes, removes, and maintains E2E connections based
on related protocols and signaling. If a network fault occurs, rerouting or protection
switching can be performed to restore services.
– Service order processing: accepts service orders and decomposes service requests
into specific configurations for automated service provisioning across domains.
l Application layer
Apps call the northbound APIs of the super controller at the control layer to perform
operations on the network and present QoS information.
The existing Qx or SNMP interfaces are used as the southbound interfaces between the
___domain controller and existing devices on the live network. RESTCONF interfaces are
used between the ___domain controller and super controller and between the super
controller and apps.
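The app-to-controller interaction can be sketched in a few lines of Python. The following snippet only illustrates a RESTCONF call over HTTPS; the host name, resource path, credentials, and payload fields are assumptions and do not reflect the actual NCE NBI data model.

# Minimal sketch of an app calling the super controller's RESTCONF NBI.
# The host, resource path, and payload fields are assumptions for illustration;
# consult the NCE NBI reference for the real data model.
import requests

SUPER_CONTROLLER = "https://nce-super.example.com:8443"   # hypothetical address

def create_sptn_service(order):
    """Submit a service order to the super controller over RESTCONF (sketch)."""
    url = SUPER_CONTROLLER + "/restconf/data/sptn-services:services"  # hypothetical path
    response = requests.post(
        url,
        json=order,                      # e.g. source/sink NEs and interfaces
        auth=("app-user", "app-password"),
        headers={"Content-Type": "application/yang-data+json"},
        verify=False,                    # lab setting only; use CA certificates in production
        timeout=30,
    )
    response.raise_for_status()
    return response.status_code          # e.g. 201 Created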
8.1.7.1.2 Benefits
The SPTN feature offers the following benefits:
l The SPTN system automatically completes path selection in the service provisioning
phase without manual intervention.
l E2E service path data is synchronized to the integrated RMS or integrated transport
NMS, so that the data is stored into the database and the service order can be closed.
During service provisioning, routes must be selected in compliance with the routing policy
and based on users' route selection requirements. Routing policies are divided into the
following types (a simplified path-selection sketch follows this discussion):
l Default policy: The primary and backup routes are carried over different paths. To be
specific, the primary and backup routes are carried over different devices, different
boards, and different links. The default policy is automatically applied during path
computation and does not need to be specified by users.
l Basic policy: can be based on the shortest path, network load balancing, or minimum
delay. You can specify a basic policy during path computation. If no basic policy is
specified, the shortest path policy is used by default.
l Advanced policy: Aggregation nodes are not deployed in the same equipment room, and
cannot be deployed on other access rings. The primary and backup routes are carried
over different optical cables. The advanced policy is based on inventory information on
the network and the method to obtain inventory information needs to be further
researched.
The detailed requirements for different routing policies are as follows:
l The primary and backup routes are carried over different paths: Service protection is the
most important reliability method of the PTN, ensuring that the network is protected if a
single point of failure occurs. Therefore, if APS services are configured, path selection
must first comply with the policy that the primary and backup routes are carried over
different paths. The SPTN system cannot detect boards. As a result, the policy that the
primary and backup paths are carried over different boards temporarily cannot be applied
in the SPTN system.
Figure 8-30 Primary and backup routes carried over different paths
l Shortest path: After this policy is selected, paths are selected based on the principle that
the minimum number of hops is traversed. If users do not specify any basic policy, the
shortest path policy is used by default.
l Network load balancing: After this policy is selected, paths are selected based on the link
load (user-planned CIR bandwidth) to avoid overloaded links.
Network load balancing is implemented in the following scenarios:
a. Multiple links are deployed between two nodes, and different services are load-
balanced on different links.
b. On a ring network, different services are load-balanced in different directions of the
ring.
l Low delay (not supported in the current version): The E2E service path with the
minimum delay is selected. Depending on user configurations, the link and node delay
used for path computation is either measured from the live network or calculated based
on the configured bandwidth.
NOTE
The intelligent routing policies apply only to the service provisioning phase and cannot be used during
the reroute process caused by a channel fault.
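The routing policies above can be illustrated with a minimal path-selection sketch. The graph model, cost values, and tie-breaking below are assumptions for illustration and are not NCE's internal algorithm.

# Simplified sketch of policy-based path selection (not NCE's actual algorithm).
# Links are modelled as (node_a, node_b) -> {"load": planned CIR in Mbit/s}.
import heapq

def select_path(links, src, dst, policy="shortest_path"):
    """Return a path from src to dst according to a basic routing policy.

    policy == "shortest_path":  minimise the number of hops (default basic policy).
    policy == "load_balancing": minimise accumulated link load to avoid hot links.
    """
    graph = {}
    for (a, b), attrs in links.items():
        cost = 1 if policy == "shortest_path" else attrs["load"]
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))

    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return None

# Example: a small ring where load balancing steers traffic to the lighter side.
links = {("A", "B"): {"load": 800}, ("B", "C"): {"load": 800},
         ("A", "D"): {"load": 100}, ("D", "C"): {"load": 100}}
print(select_path(links, "A", "C"))                      # shortest by hop count
print(select_path(links, "A", "C", "load_balancing"))    # avoids the loaded side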
The SPTN system introduces the layered control capability to implement E2E automated
multi-___domain service provisioning, improving the service provisioning efficiency.
l The integrated RMS generates service orders (including source and destination NE and
interface information) and delivers them to the super controller through standard
interfaces.
l The super controller interacts with the ___domain controllers to perform multi-___domain path
computation and generates the final service path based on the preset policy.
l The super controller decomposes multi-___domain services into independent ___domain
services (including source and destination NE and interface information) and delivers
the decomposed services to the corresponding ___domain controllers.
l The ___domain controllers create the corresponding ___domain services. The service creation
process includes service label allocation, binding between service tunnels and PWs, and
delivery of configurations to forwarders.
E2E multi-___domain paths are reported by the NMS to the integrated RMS, and the path data is
stored into the database.
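The decomposition step above can be illustrated with a minimal sketch. The data structures and function below are assumptions; the real super controller model is not described at this level of detail.

# Illustrative sketch of multi-___domain service decomposition (assumed data model).
# The super controller splits an E2E path at ___domain boundaries and hands each
# segment to the ___domain controller that owns it.

def decompose_service(e2e_path, ___domain_of):
    """Split an ordered list of (NE, interface) hops into per-___domain segments.

    ___domain_of maps an NE name to the ___domain controller that manages it.
    """
    segments = {}
    for hop in e2e_path:
        ___domain = ___domain_of[hop[0]]
        segments.setdefault(___domain, []).append(hop)
    # Each ___domain controller receives only the source/sink information it needs.
    return {___domain: {"source": seg[0], "sink": seg[-1]} for ___domain, seg in segments.items()}

___domain_of = {"NE1": "___domain-A", "NE2": "___domain-A", "NE3": "___domain-B", "NE4": "___domain-B"}
path = [("NE1", "GE0/1/1"), ("NE2", "GE0/2/1"), ("NE3", "GE0/3/1"), ("NE4", "GE0/4/1")]
print(decompose_service(path, ___domain_of))
# {'___domain-A': {'source': ('NE1', 'GE0/1/1'), 'sink': ('NE2', 'GE0/2/1')}, '___domain-B': ...}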
NOTE
App access security depends on the existing management and control system, not the SPTN.
Figure 8-34 Presentation of performance information related to service traffic and quality
l Permanent 1:1 protection
The 1:1 LSP protection commonly used on live networks can only restore services when
a single point of failure occurs. If multiple nodes are faulty, services cannot be protected,
affecting network stability.
If one side of the access ring is interrupted frequently due to construction, faults, or other
reasons and is not restored in a timely manner, services are interrupted once the other
side of the aggregation ring is also interrupted. If a broken access ring persists for a long
time, a large number of services may be interrupted due to aggregation- or backbone-side
device upgrades, optical cable cutovers, or optical cable interruptions.
The intelligent routing capability of the SPTN system improves network security as
follows (a simplified recovery sketch follows this list):
− If the access ring is broken and one path in the 1:1 protection group is faulty, the
traditional protection mode takes effect and the forwarding plane initiates protection
switching to recover services within 50 ms.
− If the access ring does not recover within the specified time, the controller proactively
performs route re-computation for the faulty path based on the preset routing policy. The
computation result and route selection suggestions are delivered and implemented after
being acknowledged by O&M personnel. Routes are adjusted through the human-
computer interaction interface of the NMS or ___domain controllers.
− The SPTN system reports new route information to the integrated RMS or integrated
transport NMS, and then new route data is stored into the database.
− After the faulty path recovers, the SPTN system detects that the fault is rectified and
performs path re-computation based on the preferred routing policy. The system then
switches services back to the original service path after this path is confirmed by the
O&M personnel and reports path information to the integrated RMS or integrated
transport NMS, which then stores the path data into the database.
− Rerouting results are manually confirmed to avoid uncertainty caused by the
automated process.
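The recovery behavior above can be summarized as a simple decision flow. The following sketch is illustrative only; the function names, arguments, and the 300-second timeout are assumptions, not NCE parameters.

# Simplified sketch of the SPTN fault-handling flow described above.
# Function names, arguments, and the timeout value are illustrative assumptions.

RECOVERY_TIMEOUT_S = 300   # assumed "specified time" before the controller reroutes

def handle_path_fault(fault_duration_s, recompute_route, confirm_with_operator):
    """Decide what the control layer does after 1:1 protection switching has occurred.

    fault_duration_s: how long the faulty path has remained down.
    recompute_route: callable that applies the preset routing policy.
    confirm_with_operator: callable that models manual confirmation by O&M personnel.
    """
    # Step 1: the forwarding plane has already switched traffic to the backup
    # path within 50 ms; the controller only intervenes if the fault persists.
    if fault_duration_s <= RECOVERY_TIMEOUT_S:
        return "wait for the faulty path to recover"

    # Step 2: the fault persists, so compute a new route for the faulty path.
    candidate = recompute_route()

    # Step 3: rerouting results are applied only after manual confirmation,
    # and the new route is then reported to the integrated RMS/NMS.
    if confirm_with_operator(candidate):
        return "deliver new route " + str(candidate) + " and report it to the RMS/NMS"
    return "keep running on the backup path"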
The PTN for LTE uses the L2+L3 service model, as shown in Figure 8-37. On the access and
aggregation sides, the VLL is used for access to LTE services, and the L3VPN is configured
at the core layer to differentiate services. To enhance access reliability, PW dual-homing
protection (one source and two sinks) is deployed for L2 access. VE bridging is configured on
edge devices at the core layer to forward service traffic from L2 to L3.
A complete LTE service requires access, L2-to-L3 bridging, and L3VPN configurations. The
configuration workload is heavy, and traditional manual configuration is inefficient.
In comparison, the SPTN system can automatically provision LTE services using the
following procedure (a simplified order-conversion sketch follows the procedure):
l The integrated RMS generates service work orders (source and sink NEs and interface
information) and imports the work order information to the app. The app converts the
work orders into SPTN RESTCONF packets and sends them to the super controller
through the standard interface.
l The app instructs the super controller to create an L3VPN at the core layer.
l The app notifies the super controller of the VE bridging interface.
l The app instructs the super controller to create VLL access and connect it to the L3VPN
at the core layer.
l The super controller receives the RESTCONF packets, uses the corresponding routing
policy to compute the path, and establishes the service path.
l The integrated RMS or transport NMS synchronizes and archives E2E service path data,
completing the service process.
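The conversion performed by the app can be sketched as follows. The payload layout and field names are assumptions and do not reflect the actual SPTN RESTCONF model.

# Illustrative sketch of an app converting an LTE work order into the three
# provisioning requests described above. The payload layout is assumed.

def build_requests(work_order):
    """Turn one LTE work order into L3VPN, VE bridging, and VLL access requests."""
    l3vpn = {"type": "l3vpn", "core_nes": work_order["core_nes"],
             "vrf_name": work_order["customer"]}
    ve_bridge = {"type": "ve-bridge", "edge_ne": work_order["edge_ne"],
                 "ve_interface": work_order["ve_interface"]}
    vll = {"type": "vll", "source": work_order["access_ne"],
           "sinks": work_order["dual_homing_nes"],   # one source and two sinks
           "attach_to": l3vpn["vrf_name"]}
    return [l3vpn, ve_bridge, vll]   # sent to the super controller in this order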
8.2.1 5G Bearer
5G networks feature high bandwidths, short delays, massive connections, high-precision
clocks, and intelligent O&M. As a management and control platform for 5G bearer networks,
NCE simplifies service deployment and provides complete closed-loop O&M SLA assurance
capabilities. With technologies such as big data, NCE discovers and resolves problems before
users complain, thereby implementing proactive O&M, greatly improving troubleshooting
efficiency, and reducing OPEX.
Key Capabilities
NCE provides the following capabilities for intelligent 5G network O&M:
l Full-lifecycle O&M platform: NCE supports unified management and control of
network-wide resources, including automatic device go-online, service deployment, path
computation, network self-optimization, and proactive network O&M. In addition, it
provides a unified portal to ensure consistent operation experience.
l Simplified service provisioning: NCE simplifies service provisioning to improve
efficiency.
l Centralized path computation: Based on the industry's most comprehensive routing
algorithms (hierarchical, cross-___domain, bandwidth, and delay algorithms), NCE
implements flexible path control to provide differentiated SLA assurance for different
services.
l Proactive O&M: NCE detects network topology changes in real time and drives network
re-optimization. In addition, it supports fast fault locating.
l Open network: Open NBIs and SBIs improve service and application deployment
efficiency and support the management of some third-party NEs.
Network Deployment
l Fast ONU deployment
ONU deployment and remote software commissioning are automatically implemented
based on the ONU plug-and-play (PnP) policy, which greatly improves the deployment
efficiency and reduces the network construction costs.
– Complete scenarios: ONU PnP deployment is supported in preconfiguration and
other scenarios.
– Unified configuration: A unified GUI is provided for configuring the ONU PnP
policy in each FTTx scenario, including assigning IP addresses, adding ONUs and
management channels, upgrading software, and configuring scripts. Users can
add, upgrade, and commission ONU software in a simplified manner.
– Simple process: Users can configure a PnP policy and bind it to OLTs on NCE.
After ONUs go online for the first time and report messages to NCE, NCE
automatically calls the PnP policy to deploy ONUs (a minimal policy sketch
follows this list).
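A PnP policy of the kind described above might be modeled as follows. The field names and steps are illustrative assumptions, not the NCE data model.

# Illustrative model of an ONU plug-and-play policy bound to OLTs (assumed fields).

PNP_POLICY = {
    "name": "fttx-default",
    "bound_olts": ["OLT-01", "OLT-02"],
    "ip_pool": "10.10.20.0/24",          # addresses assigned to new ONUs
    "target_software": "V800R018C00",    # upgrade target after go-online
    "config_script": "onu_base_config.txt",
}

def on_onu_online(onu, policy=PNP_POLICY):
    """Steps NCE would run automatically when a new ONU reports in (sketch only)."""
    steps = [
        ("assign ip", policy["ip_pool"]),
        ("create management channel", onu["olt"]),
        ("upgrade software", policy["target_software"]),
        ("apply script", policy["config_script"]),
    ]
    return steps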
l Fast deployment and remote acceptance of MDU services
For FTTB networks, NCE provides the PON-upstream MDU predeployment and remote
acceptance functions. These functions improve the PON-upstream MDU deployment
efficiency and reduce OPEX.
– ONU predeployment: Before powering on an ONU, create a corresponding virtual
NE and configure service data on NCE. After the ONU is powered on, NCE
automatically applies configurations to it to deploy services. Therefore, network
predeployment and device installation can be performed at the same time, halving
the network construction time.
– Predeployment sheet: Device data can be imported to NCE in batches by using a
sheet. The devices do not require on-site software commissioning and support
remote acceptance tests. Only hardware installation engineers need to visit the
sites to install devices. This greatly improves deployment efficiency and reduces
construction costs.
– Flexible authentication and comprehensive preconfiguration: ONU PnP is
implemented. Installation costs are greatly reduced. Users only need to enter the
authentication information when they replace a faulty ONU. NCE automatically
applies the configurations of the faulty ONU to the new ONU, eliminating the need
for reconfiguration.
– Remote acceptance: ONU acceptance tasks can be created offline. After ONUs go
online, NCE automatically applies configurations to them to achieve remote
acceptance.
– Optical topology sheet: By importing this sheet, users can quickly create topology
views, including creating optical splitters under ports and moving ONUs to other
optical splitters. This function improves O&M efficiency.
NCE provides a solution for fast ONU deployment. With this solution, engineers visit
sites only once (for installing devices) and complete ONU deployment in the following
five steps (a sheet-import sketch follows these steps):
a. Plan data: With an Excel-based planning tool, engineers can easily copy and paste
data when configuring a large amount of data.
b. Perform offline deployment: Import Excel sheets that contain planned data to NCE
and pre-provision services offline.
c. Install ONUs: Only the hardware installation engineers need to visit the sites to
install ONUs. There is no need for on-site software commissioning.
d. Activate services: Power on ONUs. The PnP feature takes effect.
e. Perform remote acceptance: Engineers do not need to visit the sites. Instead, NEs
automatically report acceptance results without manual intervention.
Figure 8-39 shows the FTTx predeployment flowchart.
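The offline deployment step (step b) can be sketched as a simple sheet import. The column names below are assumptions; the actual sheet template is defined by NCE.

# Illustrative sketch of importing a planned-data sheet for ONU predeployment.
# Column names are assumptions; the real sheet template is defined by NCE.
import csv

def load_predeployment_sheet(path):
    """Read planned ONU data and return one predeployment record per row."""
    records = []
    with open(path, newline="") as sheet:
        for row in csv.DictReader(sheet):
            records.append({
                "olt": row["OLT"],
                "pon_port": row["PON port"],
                "onu_sn": row["ONU SN"],          # used for authentication at go-online
                "service_profile": row["Service profile"],
            })
    return records   # each record becomes a virtual NE until the ONU powers on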
Service Provisioning
NCE provides three FTTx service provisioning methods. All the methods can provision
services quickly, regardless of whether an OSS is interconnected.
l Automatic provisioning with an OSS
NCE provides standard and open NBIs, which enable fast OSS integration for the FTTx
solution.
l Sheet-based service provisioning
NCE supports convenient and fast ONU predeployment and service preconfiguration for
the FTTx solution. Service provisioning is automated, saving the costs caused by manual
intervention.
l FTTx service template
NCE provides templates for provisioning FTTx services. Carriers can customize service
templates for their customers and provision services in a one-click manner. In this way,
services can be provisioned efficiently and accurately.
Network Maintenance
l FTTx topology management
– ODN view: displays PON ports, optical splitters, fibers, and ONUs on OLTs as well
as a complete network topology of type C protection.
8.4 Backbone
Solution
As shown in Figure 2, NCE enables an intent-driven TSDN network that meets the flexible
transmission requirements of optical backbone networks.
l The forwarding plane at the infrastructure layer uses OSN 9800s and OSN 8800s (which
are large-capacity MS-OTN and DWDM devices) to deliver high-bandwidth and short-
delay transparent transmission pipes.
l NCE is deployed at the management and control layer to provide one-stop, automated
private line provisioning, centralized service control, and northbound openness and
integration capabilities.
Features
The following describes the key features of NCE in the DCI scenario:
l Agile provisioning of optical connections
– Service provisioning takes mere minutes. TTM is shortened, and new DC services
can be developed quickly.
– Flexible bandwidth policies are provided for carriers to adjust bandwidths for
tenants at any time, meeting the requirements of frequent service migration and
scheduling between DCs. For example, by using the bandwidth reservation
function, different service departments of a tenant can use limited pipe resources in
a staggered manner.
l Northbound openness
– Through RESTful NBIs, carriers can develop customized tenant portals to achieve
e-commerce-style operations for stronger competitiveness and shorter TTM.
– Through standardized network models, service models, and NBIs, carriers can use
orchestrators to uniformly orchestrate transport and DCN networks and automate
E2E O&M.
8.4.2 IP Backbone
IP backbone network resources are visualized, adjustable, automated, and open, enabling
efficient, automatic O&M and dynamic resource adjustment.
Features
l DCI: Ubiquitous access implements interconnection of traditional DCs and cloud DCs.
Tenant-level service isolation and MPLS network optimization guarantee tenant SLAs.
l DC or MAN outbound optimization (IP backbone traffic optimization): Traffic data is
visualized from multiple dimensions. IP backbone networks are optimized based on IP,
AS, and community attributes. Link load is balanced on the IDC or MAN outbound
interfaces, and special assurance is offered to VIP customers.
8.4.3.1 Introduction
8.4.3.1.1 Definition
The SDN IP+optical solution provided by Huawei reshapes the resource configuration pattern
of traditional networks. This solution achieves effective synergy between IP and optical
networks through software-based network configuration control, simplifying network O&M
and improving network intelligence and automation.
As optical network techniques develop and the GMPLS control plane is introduced to optical
networks, these networks can now dynamically schedule resources. Most commonly used
optical network techniques are reconfigurable optical add/drop multiplexer (ROADM) and
optical transport network (OTN).
With SDN in mind, Huawei has defined a new backbone network architecture featuring SDN
IP+optical synergy in this solution. Specifically, Huawei has defined:
l A new network architecture, which consists of a series of integrated software modules
for network planning and control, network traffic analysis, policy management, and
service provisioning
l Relationships between software modules and interfaces between software modules and
network layers
l A collaboration mechanism between IP and optical networks
The SDN IP+optical solution uses the SDN architecture to effectively synergize IP/MPLS and
optical resources on backbone networks, improve resource utilization, increase O&M
efficiency, and reduce total cost of ownership (TCO).
Figure 8-43 shows this architecture, which consists of the application, network management
and control, and infrastructure network layers.
l Application layer
This layer provides interconnection with various applications, including third-party
applications, carrier BSSs/OSSs, and network and service planning tools. Currently, this
layer only supports interconnection with the network and service planning tool. This tool
provides network planning and simulation for carrier services and various application
innovation functions.
l Network management and control layer
The control layer consists of the super controller, IP controller, and optical controller.
This layer supports service automation and policy management. It detects network
topology changes in real time and calculates service forwarding paths based on the real-
time network topology to implement real-time and intelligent network control. This layer
provides NETCONF, PCEP, BGP-LS and OSPF-TE interfaces in the southbound
direction. This layer manages and controls both the IP and optical networks.
In this solution, this layer manages planned IP+optical multi-layer data, multi-layer link
protection policies, and multi-layer link disjoint policies, collects IP+optical network
topology and service information, automates network deployment based on planned data,
implements multi-layer service protection and restoration based on configured policies,
and displays visual multi-layer network and service topologies based on collected
information.
The network management layer performs basic configurations on infrastructure network
devices for Fault, Configuration, Accounting, Performance, Security (FCAPS)
management.
In the SDN IP+optical solution, the network management layer performs basic
configurations on both optical network devices and controllers to implement FCAPS
management.
l Infrastructure network layer
The infrastructure network layer consists of two sub-layers: IP layer and optical layer.
The IP layer is composed of routers, whereas the optical layer is composed of transport
devices. Devices at the infrastructure network layer receive specific forwarding entries
from the controller and establish or tear down E2E service forwarding paths based on
these entries.
Network devices use protocols such as NETCONF, PCEP, BGP-LS and OSPF-TE to
communicate with controllers, and use protocols such as NETCONF and Qx to
communicate with the NMS.
l NMS
The NMS performs basic configurations on infrastructure network devices for fault,
configuration, accounting, performance, security (FCAPS) management.
In the SDN IP+optical solution, the NMS performs basic configurations on both optical
network devices and controllers.
8.4.3.1.2 Benefits
The SDN IP+optical solution meets the requirements of backbone network traffic in the cloud
era, adapts to new information consumption modes, quickly responds to changes, reduces
O&M costs, and improves customer experience.
Specifically, this solution offers the following benefits:
l Improved network deployment efficiency
The unified management of backbone network resources improves network automation,
reduces manual workload required for service provisioning, and improves network
deployment efficiency.
l Quick response to service requirements
With interaction between the control planes of the IP and optical networks, the IP layer is
able to trigger dynamic reallocation of optical resources to quickly adapt to traffic
changes.
l High resource utilization efficiency
Multi-layer service traffic scheduling ensures real-time optimization of network resource
distribution, improving network resource utilization.
l Good network maintenance experience
Multi-layer topology display in real time simplifies service deployment and makes
service quality visible and controllable, greatly improving user experience.
Planning and import of multi-layer links: NCE (Super) imports planned multi-layer link
topology data, including multi-layer links, multi-layer link protection groups, and multi-layer
link disjoint groups. NCE (Super) converts the imported multi-layer link disjoint group data to
client service disjoint group configurations and delivers these configurations to NCE
(Transport Domain).
Network Interconnection
The IP+optical solution supports vertical multi-layer interconnection between IP services on
the backbone network and optical services. Generally, IP+optical single-___domain services refer
to IP single-___domain services plus optical single-___domain services, and IP+optical multi-___domain
services refer to IP single-___domain services plus optical multi-___domain services. An IP ___domain
is delimited by router domains and generally refers to an AS ___domain. Optical domains are
classified by device vendor; generally, transport devices from one vendor are placed in one
optical ___domain. The IP+optical solution in V100R018C00 supports only single-___domain and
single-vendor scenarios.
Specifically, this solution supports the following interconnection scenarios:
l IP (OTN board) + WDM (OTN tributary board) + WSON
Optical-layer ASON (WSON) is enabled on the WDM network. Routers use OTN
boards to interconnect with OTN tributary boards on WDM devices.
Multi-Layer Protection
The IP+optical solution provides multi-layer protection and can protect IP+optical services
with minimum resources to reduce TCO.
In an IP+optical scenario, faults can be classified into optical faults and non-optical faults
based on the locations of points of failure.
l Optical fault: refers to a fault on a device that resides on the optical layer, such as an
optical port or NE fault.
l Non-optical fault: refers to a fault on a device that does not reside on the optical layer,
such as a pigtail fault.
Figure 8-45 shows typical fault scenarios.
IP+optical multi-layer protection uses methods such as optical fiber redundancy, port
redundancy, and router redundancy to protect services against different fault scenarios. Table
8-14 describes these protection methods.
Multi-Layer Restoration by Optical ASON (MLR-O): applies to optical port faults and uses
optical fiber redundancy. After an optical fault occurs, the transport network automatically
triggers ASON rerouting.
9 Specifications
Concurrent NE upgrade ≤ 60
NOTE
Direct 1
Static 1
OSPF 1.5
IS-IS 1.5
BGP 1.6
VPN 1.6
FIB 1
Table 9-4 Bandwidth configuration requirements for NCE (Transport Domain), NCE (Access
Domain) or NCE (IP Domain)
Item Requirement
Bandwidth between the primary and secondary sites of the DR system: A replication link is
configured between the primary and secondary sites in the DR system for real-time data
synchronization. CIRs of the replication link bandwidth are as follows:
l If N is 6000, the CIR is 30 Mbit/s.
l If N is 15,000, the CIR is 50 Mbit/s.
l If telemetry second-level collection is deployed, an additional 20 Mbit/s needs to be
added to the bandwidth.
NOTE
N indicates the number of equivalent NEs.
Bandwidth between the primary and secondary sites of the DR system: A replication link is
configured between the primary and secondary sites in the DR system for real-time data
synchronization. CIRs of the replication link bandwidth are as follows:
l If N is 15,000, the CIR is 50 Mbit/s.
l If N is 30,000, the CIR is 100 Mbit/s.
l If N is 50,000, the CIR is 150 Mbit/s.
NOTE
N indicates the number of equivalent NEs.
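The listed CIR values can be expressed as a small lookup, as sketched below. Only the values listed above are covered; intermediate NE counts are not specified in this document, and the variable names in the sketch are placeholders.

# Sketch of looking up the replication-link CIR from the values listed above.
# Each table provides its own listed points; values in between are not specified here.

TABLE_9_4 = {6000: 30, 15000: 50}                     # NCE (Transport/Access/IP Domain)
SECOND_TABLE = {15000: 50, 30000: 100, 50000: 150}    # second set of listed values

def replication_cir_mbps(equivalent_nes, listed, telemetry_second_level=False):
    """Return the CIR (Mbit/s) for the DR replication link from the listed points."""
    if equivalent_nes not in listed:
        raise ValueError("equivalent NE count not listed in this document")
    cir = listed[equivalent_nes]
    if telemetry_second_level:
        cir += 20    # Table 9-4 only: extra bandwidth for second-level telemetry collection
    return cir

print(replication_cir_mbps(15000, TABLE_9_4, telemetry_second_level=True))   # 70 Mbit/s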
Table 9-7 describes the maximum management capabilities (measured by lines or ports) in
typical access scenarios (FTTH, FTTB/C, and MSAN/DSLAM). If more than one scenario is
involved, convert the management capabilities in these scenarios to equivalent NEs. The sum
of these equivalent NEs is the management capability of NCE.
Definition
l Equivalent NE: a uniform criterion used to describe and calculate the management
capabilities of NCE. This criterion is needed because different types of NEs occupy
different system resources to support different functions, features, cross-connect
capacities, and numbers of boards, ports, and channels. Therefore, different types of NEs
and ports must be converted to equivalent NEs based on the number of system resources
they occupy. An equivalent NE occupies as many system resources as an STM-1
transport NE.
l Equivalent coefficient: Resources occupied by physical NEs or ports/Resources occupied
by equivalent NEs
Calculation
The number of equivalent NEs that NCE can manage is calculated according to the following
rules:
l Basic unit of equivalent NEs: OptiX OSN 1800 I
l The equivalent coefficients of virtual and third-party NEs are 1. A preconfigured NE is
equal to a real NE. The equivalent coefficient of OEM devices is the same as that of
Huawei devices.
l Number of equivalent NEs = Number of NEs of type 1 x Equivalent coefficient of type 1
+ ... + Number of NEs of type n x Equivalent coefficient of type n
NOTE
For example, if there are 5 OptiX OSN 9500s (equivalent coefficient: 10), 10 OptiX OSN 7500s
(equivalent coefficient: 6.5), and 100 OptiX OSN 3500s (equivalent coefficient: 4.5), then:
Number of equivalent NEs in the transport ___domain = 5 x 10 + 10 x 6.5 + 100 x 4.5 = 565
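The calculation rule and the worked example above map directly to the following sketch.

# The equivalent-NE formula above, expressed directly in Python.

def equivalent_nes(inventory, coefficients):
    """inventory: {NE type: count}; coefficients: {NE type: equivalent coefficient}."""
    return sum(count * coefficients[ne_type] for ne_type, count in inventory.items())

# The worked example from the note above:
coefficients = {"OptiX OSN 9500": 10, "OptiX OSN 7500": 6.5, "OptiX OSN 3500": 4.5}
inventory = {"OptiX OSN 9500": 5, "OptiX OSN 7500": 10, "OptiX OSN 3500": 100}
print(equivalent_nes(inventory, coefficients))   # 565.0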
Table 9-8 describes the equivalent coefficients for NEs in the transport ___domain.
OptiX OSN 80 2
OptiX 155S 1
OptiX 155/622B 2
OptiX 2500 3
OptiX OTU40000 1
HUAWEI OSN902 1
NEC 5000S 1
PTP 250 1
PTP 500 1
PTP 650 1
PMP 450 1
X-1200 1
NOTE
For example, if there are 5 NE5000Es (equivalent coefficient: 10), 200 S5300s (equivalent coefficient:
1.25), and 1000 CX200s (equivalent coefficient: 0.625), then:
Number of equivalent NEs in the IP ___domain = 5 x 10 + 200 x 1.25 + 1000 x 0.625 = 925
Table 9-9 describes the equivalent coefficients for NEs in the IP ___domain.
NE05E-S/NE05E-M 0.5
NE08E-S/NE08E-M 1.0
NE20/NE20E 1.25
NE20E-S4 0.5
NE20E-S8/S16 1.0
NE20E-M2E/NE20E-M2F 0.5
NE40/NE80 5.0
NE40E-X1 0.5
NE40E-X2 1.0
NE40E-X3 1.25
NE40E-4 1.25
NE40E-X8 2.5
NE40E-8 2.5
NE40E-X16 5.0
NE40E-M2E/NE40E-M2F/ 0.5
NE40E-M2H
NE80E 5.0
NE5000E 10.0 x N
N indicates the number of
chassis.
AR150 0.125
AR200 0.125
AR1200/AR2200/AR3200/ 0.25
AR3600
NE16EX 0.25
R series 1.0
NE9000 10.0
RM9000 1.0
CE12808 8.0
CE12812 10.0
PTN6900-2/PTN6900-2-M8/ 1.0
PTN6900-2-M16
PTN6900-3 1.25
PTN6900-8 2.5
PTN6900-16 5.0
CX600-X1 0.5
CX600-X2 1.0
CX600-X3 1.25
CX600-4 1.25
CX600-X8 2.5
CX600-8 2.5
CX600-X16 5.0
CX600-16 5.0
CX600-M2E/CX600-M2F/CX600- 0.5
M2H
CX6620 10.0
CX6608 5.0
NGFW 0.5
NE40E-FW 4.0
NE80E-FW 8.0
USG9120 4.0
USG9310 4.0
USG9320 8.0
USG9520 1.5
USG9560 4.0
USG9580 8.0
USG3000 0.25
USG50 0.25
SIG9820 8.0
SIG9800-X3 1.5
SIG9800-X8 4.0
SIG9800-X16 8.0
SeMG9811-X8 4.0
SeMG9811-X16 8.0
NE80E-DPI 8.0
SVN2200 0.25
SVN5300 0.75
SVN5500 0.75
ASG2200 0.25
ASG2600 0.75
ASG2800 0.75
BGW9916 5.0
NOTE
Table 9-10 describes the equivalent coefficients for NEs in the access ___domain.
CM 1/200
Amplifier 1/40
10 Version Requirements
ZTE V300R002C00
NOTE
Rules for defining commercial versions of NE software: a.bb.cc.dd. "a" indicates the NE platform
version. Currently, 4, 5 and 8 are supported. "bb" indicates the product name. "cc" indicates the R
version of the product. To be specific, 01 represents R001, 02 represents R002, and so on. "dd" indicates
the C version of the product. Mapping principle:
1. If the mapping table provides only the information that a.bb.cc.20 is supported by version A of the
NCE, a.bb.cc.2x is supported by version A of the NCE.
2. If the mapping table provides only the information that a.bb.cc is supported by version A of the
NCE, a.bb.cc.xx is supported by version A of the NCE.
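The mapping principle can be expressed as a small check, as sketched below. Generalizing the a.bb.cc.20 to a.bb.cc.2x example to other leading digits of "dd" is an assumption.

# Sketch of the version-mapping principle described in the note above.

def is_supported(ne_version, supported_entries):
    """Check an NE version (a.bb.cc.dd) against mapping-table entries.

    An entry "a.bb.cc" matches any dd; an entry "a.bb.cc.20" matches a.bb.cc.2x
    (generalized here to matching the leading digit of dd, which is an assumption).
    """
    parts = ne_version.split(".")
    for entry in supported_entries:
        entry_parts = entry.split(".")
        if len(entry_parts) == 3 and parts[:3] == entry_parts:
            return True
        if len(entry_parts) == 4 and parts[:3] == entry_parts[:3] \
                and parts[3][0] == entry_parts[3][0]:
            return True
    return False

print(is_supported("5.51.02.23", ["5.51.02.20"]))   # True: 5.51.02.2x is covered
print(is_supported("5.51.02.35", ["5.51.02.20"]))   # False: different dd range
print(is_supported("5.51.03.10", ["5.51.03"]))      # True: any dd is covered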
NOTE
l Different from OptiX OSN 8800 T32 and other devices, OptiX OSN 8800 is a platform-based
device.
l Different from OptiX OSN 9600 U32 and other devices, OptiX OSN 9600 is a platform-based
device.
l Different from OptiX OSN 9800 U32 and other devices, OptiX OSN 9800 is a platform-based
device.
l The OptiX BWS 1600G OLA is an independent power supply subrack. It is supported by the OptiX
BWS 1600G backbone DWDM optical transmission system V100R004 and later versions.
l Rules for defining commercial versions of NE software: a.bb.cc.dd. "a" indicates the NE platform
version. Currently, 4, 5 and 8 are supported. "bb" indicates the product name. "cc" indicates the R
version of the product. To be specific, 01 represents R001, 02 represents R002, and so on. "dd"
indicates the C version of the product. Mapping principle:
1. If the mapping table provides only the information that a.bb.cc.20 is supported by version A of
the NCE, a.bb.cc.2x is supported by version A of the NCE.
2. If the mapping table provides only the information that a.bb.cc is supported by version A of the
NCE, a.bb.cc.xx is supported by version A of the NCE.
VG10 N/A V1
VG20 N/A V1
VG80 N/A V1
XE Series N/A V1
11 Standards Compliance
NCE complies with ITU-T, IETF, and TMF standards and protocols.
ITU-T G.707 Network node interface for the synchronous digital hierarchy
(SDH)
Standard/Protocol Name
ITU-T M.3101 Managed Object Conformance statements for the generic network
information model
Standard/Protocol Name
RFC1215 A Convention for Defining Traps for use with the SNMP
A Glossary
Numerics
3G See Third Generation.
802.1Q in 802.1Q A VLAN feature that allows the equipment to add a VLAN tag to a tagged frame. The
(QinQ) implementation of QinQ is to add a public VLAN tag to a frame with a private VLAN
tag to allow the frame with double VLAN tags to be transmitted over the service
provider's backbone network based on the public VLAN tag. This provides a layer 2
VPN tunnel for customers and enables transparent transmission of packets over
private VLANs.
A
ACL See Access Control List.
ADMC automatically detected and manually cleared
ADSL See asymmetric digital subscriber line.
ADSL2+ asymmetric digital subscriber line 2 plus
ANCP See Access Node Control Protocol.
API See application programming interface.
APS automatic protection switching
AS See autonomous system.
ASBR See autonomous system boundary router.
ASN.1 See Abstract Syntax Notation One.
ASON automatically switched optical network
Abstract Syntax A syntax notation type employed to specify protocols. Many protocols defined by the
Notation One (ASN.1) ITU-T use this syntax format. Other alternatives are standard text or Augmented
Backus-Naur Form (ABNF).
Access Control List A list of entities, together with their access rights, which are authorized to access a
(ACL) resource.
Access Node Control An IP-based protocol that operates between the access node (AN) and the network
Protocol (ANCP) access server (NAS), over a DSL access and aggregation network.
B
B/S Browser/Server
BFD See Bidirectional Forwarding Detection.
BGP Border Gateway Protocol
BIOS See basic input/output system.
BITS See building integrated timing supply.
BOD bandwidth on demand
BRA See basic rate access.
BRAS See broadband remote access server.
BSS Business Support System
BWS backbone wavelength division multiplexing system
Bidirectional A fast and independent hello protocol that delivers millisecond-level link failure
Forwarding Detection detection and provides carrier-class availability. After sessions are established between
(BFD) neighboring systems, the systems can periodically send BFD packets to each other. If
one system fails to receive a BFD packet within the negotiated period, the system
regards the bidirectional link as faulty and instructs the upper-layer protocol to take
actions to recover the faulty link.
basic input/output Firmware stored on the computer motherboard that contains basic input/output control
system (BIOS) programs, power-on self test (POST) programs, bootstraps, and system setting
information. The BIOS provides hardware setting and control functions for the
computer.
basic rate access An ISDN interface typically used by smaller sites and customers. This interface
(BRA) consists of a single 16 kbit/s data (or "D") channel plus two bearer (or "B") channels
for voice and/or data. Also known as Basic Rate Access, or BRI.
broadband remote A new type of access gateway for broadband networks. As a bridge between backbone
access server (BRAS) networks and broadband access networks, BRAS provides methods for fundamental
access and manages the broadband access network. It is deployed at the edge of
network to provide broadband access services, convergence, and forwarding of
multiple services, meeting the demands for transmission capacity and bandwidth
utilization of different users. BRAS is a core device for the broadband users' access to
a broadband network.
building integrated Where multiple synchronization nodes or communication devices exist, a device can be
timing supply (BITS) used to set up a clock system at the hub of the telecom network to connect the
synchronization network as a whole and provide satisfactory synchronization reference
signals to the building integrated devices. This device is called a BITS.
C
CAS See Central Authentication Service.
CBU See cellular backhaul unit.
CC See continuity check.
CCC circuit cross connect
CES See circuit emulation service.
CIR committed information rate
CLEI common language equipment identification
CLI See command-line interface.
CORBA See Common Object Request Broker Architecture.
CPE See customer-premises equipment.
CPU See central processing unit.
CSV See comma separated values.
Central Authentication A single sign-on protocol for the web. Its purpose is to permit users to access multiple
Service (CAS) applications by providing their credentials (such as user names and passwords) only
once. It also allows web applications to authenticate users without gaining access to
the users' security credentials (such as passwords). CAS also refers to a software
package that implements this protocol.
Common Object A specification developed by the Object Management Group in 1992 in which pieces
Request Broker of programs (objects) communicate with other objects in other programs, even if the
Architecture (CORBA) two programs are written in different programming languages and are running on
different platforms. A program makes its request for objects through an object request
broker, or ORB, and therefore does not need to know the structure of the program
from which the object comes. CORBA is designed to work in object-oriented
environments.
Coordinated Universal The world-wide scientific standard of timekeeping. It is based upon carefully
Time (UTC) maintained atomic clocks and is kept accurate to within microseconds worldwide.
cellular backhaul unit A network access unit used for access base transceiver stations. It provides Ethernet,
(CBU) IP, and TDM services; has multiple Ethernet and 1PPS+ToD interfaces and optionally
E1 interfaces. It is mainly applicable to backhaul in mobile base transceiver stations.
central processing unit The computational and control unit of a computer. The CPU is the device that
(CPU) interprets and executes instructions. The CPU has the ability to fetch, decode, and
execute instructions and to transfer information to and from other resources over the
computer's main data-transfer path, the bus.
circuit emulation A function with which the E1/T1 data can be transmitted through ATM networks. At
service (CES) the transmission end, the interface module packs timeslot data into ATM cells. These
ATM cells are sent to the reception end through the ATM network. At the reception
end, the interface module re-assigns the data in these ATM cells to E1/T1 timeslots.
The CES technology guarantees that the data in E1/T1 timeslots can be recovered to
the original sequence at the reception end.
comma separated A CSV file is a text file that stores data, generally used as an electronic table or by the
values (CSV) database software.
command-line A means of communication between a program and its user, based solely on textual
interface (CLI) input and output. Commands are input with the help of a keyboard or similar device
and are interpreted and executed by the program. Results are output as text or graphics
to the terminal.
continuity check (CC) An Ethernet connectivity fault management (CFM) method used to detect the
connectivity between MEPs by having each MEP periodically transmit a Continuity
Check Message (CCM).
customer-premises Customer-premises equipment or customer-provided equipment (CPE) is any terminal
equipment (CPE) and associated equipment located at a subscriber's premises and connected with a
carrier's telecommunication channel at the demarcation point ("demarc"). The demarc
is a point established in a building or complex to separate customer equipment from
the equipment located in either the distribution infrastructure or central office of the
communications service provider. CPE generally refers to devices such as telephones,
routers, network switches, residential gateways (RG), set-top boxes, fixed mobile
convergence products, home networking adapters and Internet access gateways that
enable consumers to access communications service providers' services and distribute
them around their house via a local area network (LAN).
D
D-CCAP See Distributed Converged Cable Access Platform.
DB database
DC data center
DCC data communication channel
DCI See Data Center Interconnect.
DCM See dispersion compensation module.
DCN See data communication network.
DDoS See distributed denial of service.
DES-CBC Data Encryption Standard-Cipher Block Chaining
DSLAM See digital subscriber line access multiplexer.
DWDM See dense wavelength division multiplexing.
Data Center DCI refers to network interconnection between two data centers for cross-DC service
Interconnect (DCI) transmission and migration.
Distributed Converged Distributed converged cable access platform (D-CCAP) meets the requirements of
Cable Access Platform triple-play network services over hybrid fiber coaxial (HFC) networks of multiservice
(D-CCAP) operators (MSOs) because of its unique advantages, such as high bandwidth and
support for HFC networks.
DoS See denial of service.
data communication A communication network used in a TMN or between TMNs to support the data
network (DCN) communication function.
denial of service (DoS) DoS attack is used to attack a system by sending a large number of data packets. As a
result, the system cannot receive requests from the valid users or the host is suspended
and cannot work normally. DoS attack includes SYN flood, Fraggle, and others. The
DoS attacker only stops the valid user from accessing resources or devices instead of
searching for the ingresses of the intranet.
dense wavelength The technology that utilizes the characteristics of broad bandwidth and low
division multiplexing attenuation of single mode optical fiber, employs multiple wavelengths with specific
(DWDM) frequency spacing as carriers, and allows multiple channels to transmit simultaneously
in the same fiber.
digital subscriber line A network device, usually situated in the main office of a telephone company, that
access multiplexer receives signals from multiple customer Digital Subscriber Line (DSL) connections
(DSLAM) and uses multiplexing techniques to put these signals on a high-speed backbone line.
dispersion A type of module that contains dispersion compensation fibers to compensate for the
compensation module dispersion of the transmitting fiber.
(DCM)
distributed denial of A Distributed Denial of Service attack is one in which a multitude of compromised
service (DDoS) systems attack a single target, therefore causing denial of service for users of the
targeted system. The flood of incoming messages essentially occupies the resources of
the targeted system, thereby denying service to legitimate users.
E
E-LAN See Ethernet local area network.
E-Line See Ethernet line.
E2E end to end
ECC See embedded control channel.
EDFA See erbium-doped fiber amplifier.
EPL See Ethernet private line.
EPON See Ethernet passive optical network.
EVPL See Ethernet virtual private line.
EVPN See Ethernet VPN.
EoO Ethernet over OTN
EoW Ethernet over WDM
Ethernet VPN (EVPN) Ethernet Virtual Private Network (EVPN) is a VPN technology used for Layer 2
network interconnection. EVPN uses a mechanism similar to BGP or MPLS IP VPN.
By extending BGP and using extended reachability information, EVPN enables MAC
address learning and advertisement between Layer 2 networks at different sites to be
transferred from the data plane to the control plane.
Ethernet line (E-Line) A type of Ethernet service that is based on a point-to-point EVC (Ethernet virtual
connection).
Ethernet local area A type of Ethernet service that is based on a multipoint-to-multipoint EVC (Ethernet
network (E-LAN) virtual connection).
Ethernet passive An Ethernet Passive Optical Network (EPON) is a passive optical network based on
optical network Ethernet. It is a new generation broadband access technology that uses a point-to-
(EPON) multipoint structure and passive fiber transmission. It supports upstream/downstream
symmetrical rates of 1.25 Gbit/s and a reach distance of up to 20 km. In the
downstream direction, the bandwidth is shared based on encrypted broadcast
transmission for different users. In the upstream direction, the bandwidth is shared
based on TDM. EPON meets the requirements for high bandwidth.
Ethernet private line A type of Ethernet service provided by SDH, PDH, ATM, or MPLS server layer
(EPL) networks. This service is carried over dedicated bandwidth between point-to-point
connections.
Ethernet virtual A type of Ethernet service provided by SDH, PDH, ATM, or MPLS server layer
private line (EVPL) networks. This service is carried over shared bandwidth between point-to-point
connections.
embedded control A logical channel that uses a data communications channel (DCC) as its physical layer
channel (ECC) to enable the transmission of operation, administration, and maintenance (OAM)
information between NEs.
erbium-doped fiber An optical device that amplifies optical signals. This device uses a short optical fiber
amplifier (EDFA) doped with the rare-earth element, Erbium. The signal to be amplified and a pump
laser are multiplexed into the doped fiber, and the signal is amplified by interacting
with doping ions. When the amplifier passes an external light source pump, it
amplifies the optical signals in a specific wavelength range.
F
FCAPS Fault, Configuration, Accounting, Performance, Security
FDN fixed dialing number
FIB See forwarding information base.
FPGA See field programmable gate array.
FRR See fast reroute.
FTP See File Transfer Protocol.
FTTB See fiber to the building.
FTTC See fiber to the curb.
FTTH See fiber to the home.
File Transfer Protocol A member of the TCP/IP suite of protocols, used to copy files between two computers
(FTP) on the Internet. Both computers must support their respective FTP roles: one must be
an FTP client and the other an FTP server.
fast reroute (FRR) A technology which provides a temporary protection of link availability when part of
a network fails. The protocol enables the creation of a standby route or path for an
active route or path. When the active route is unavailable, the traffic on the active
route can be switched to the standby route. When the active route is recovered, the
traffic can be switched back to the active route. FRR is categorized into IP FRR, VPN
FRR, and TE FRR.
fiber to the building A fiber-based networking scenario. There are two types of FTTB scenarios: multi-
(FTTB) dwelling unit (MDU) and business buildings. Each scenario includes the following
service types: FTTB to the MDU and FTTB to the business buildings.
fiber to the curb A fiber-based networking scenario. The FTTC scenario provides the following
(FTTC) services: asymmetric broadband services (such as digital broadcast service, VOD, file
download, and online gaming), symmetric broadband services (such as content
broadcast, email, file exchange, distance education, and distance medical care), POTS,
ISDN, and xDSL backhaul services.
fiber to the home A fiber-based networking scenario. The FTTH scenario provides the following
(FTTH) services: asymmetric broadband services (digital broadcast service, VoD, file
download, and online gaming), symmetric broadband services (content broadcast,
email, file exchange, distance education, and distance medical care), POTS, and ISDN
services.
field programmable A semi-customized circuit that is used in the Application Specific Integrated Circuit
gate array (FPGA) (ASIC) field and developed based on programmable components. FPGA remedies
many of the deficiencies of customized circuits, and allows the use of many more gate
arrays.
forwarding A table that provides information for network hardware (bridges and routers) for them
information base (FIB) to forward data packets to other networks. The information contained in a routing
table differs according to whether it is used by a bridge or a router. A bridge relies on
both the source (originating) and destination addresses to determine where and how to
forward a packet.
G
GMPLS generalized multiprotocol label switching
GNE See gateway network element.
GPON gigabit-capable passive optical network
GRE See Generic Routing Encapsulation.
GUI See graphical user interface.
Generic Routing A mechanism for encapsulating any network layer protocol over any other network.
Encapsulation (GRE) GRE is used for encapsulating IP datagrams tunneled through the Internet. GRE
serves as a Layer 3 tunneling protocol and provides a tunnel for transparently
transmitting data packets.
gateway network An NE that serves as a gateway for other NEs to communicate with a network
element (GNE) management system.
graphical user A visual computer environment that represents programs, files, and options with
interface (GUI) graphical images, such as icons, menus, and dialog boxes, on the screen.
I
IAM See Identity and Access Management.
IANA See Internet Assigned Numbers Authority.
ICMP See Internet Control Message Protocol.
IDC See Internet Data Center.
IDN See integrated digital network.
Internet Protocol The current version of the Internet Protocol (IP). IPv4 utilizes a 32-bit address which is
version 4 (IPv4) assigned to hosts. An address belongs to one of five classes (A, B, C, D, or E) and is
written as 4 octets separated by periods and may range from 0.0.0.0 through to
255.255.255.255. Each IPv4 address consists of a network number, an optional
subnetwork number, and a host number. The network and subnetwork numbers
together are used for routing, and the host number is used to address an individual host
within the network or subnetwork.
Internet Protocol An updated version of IPv4, which was designed by the Internet Engineering Task
version 6 (IPv6) Force (IETF) and is also called IP Next Generation (IPng). It is a new version of the
Internet Protocol. The difference between IPv6 and IPv4 is that an IPv4 address has 32
bits while an IPv6 address has 128 bits.
Internet service An organization that offers users access to the Internet and related services.
provider (ISP)
IoT Internet of Things
iSStar See Integration Script Star.
integrated digital A set of digital nodes and digital links that uses integrated digital transmission and
network (IDN) switches to provide digital connections between two or more defined points.
K
KPI key performance indicator
L
L2VPN Layer 2 virtual private network
L3VPN Layer 3 virtual private network
LAG See link aggregation group.
LAN See local area network.
LB See loopback.
LLDP See Link Layer Discovery Protocol.
LSA link-state advertisement
LSR See label switching router.
LTE See Long Term Evolution.
Link Layer Discovery The Link Layer Discovery Protocol (LLDP) is a Layer 2 discovery protocol defined in
Protocol (LLDP) IEEE 802.1ab. Using LLDP, the NMS can rapidly obtain the Layer 2 network topology
and changes in topology as the network scale expands.
Long Term Evolution LTE, an abbreviation for Long-Term Evolution, commonly marketed as 4G LTE, is a
(LTE) standard for wireless communication of high-speed data for mobile phones and data
terminals. It is based on the GSM/EDGE and UMTS/HSPA network technologies,
increasing the capacity and speed using a different radio interface together with core
network improvements. The standard is developed by the 3GPP (3rd Generation
Partnership Project) and is specified in its Release 8 document series, with minor
enhancements described in Release 9.
label switching router Basic element of an MPLS network. All LSRs support the MPLS protocol. The LSR
(LSR) is composed of two parts: control unit and forwarding unit. The former is responsible
for allocating the label, selecting the route, creating the label forwarding table,
creating and removing the label switch path; the latter forwards the labels according to
groups received in the label forwarding table.
link aggregation group An aggregation that allows one or more links to be aggregated together to form a link
(LAG) aggregation group so that a MAC client can treat the link aggregation group as if it
were a single link.
local area network A network formed by the computers and workstations within the coverage of a few
(LAN) square kilometers or within a single building, featuring high speed and low error rate.
Current LANs are generally based on switched Ethernet or Wi-Fi technology and run
at 1,000 Mbit/s (that is, 1 Gbit/s).
loopback (LB) A troubleshooting technique that returns a transmitted signal to its source so that the
signal or message can be analyzed for errors. The loopback can be an inloop or outloop.
M
MA maintenance association
MAC See Media Access Control.
MBB mobile broadband
MD See maintenance ___domain.
MD5 See message digest algorithm 5.
MDF See main distribution frame.
MDU See multi-dwelling unit.
ME See managed element.
MEP maintenance association end point
MIB See management information base.
MIP maintenance association intermediate point
MO managed object
MOS mean opinion score
MPLS See Multiprotocol Label Switching.
MPLS VPN See multiprotocol label switching virtual private network.
MS-PW See multi-segment pseudo wire.
MSAN multiservice access node
MSTP See multi-service transmission platform.
MTOSI Multi-Technology Operations System Interface
MTTR See Mean Time to Repair.
Mean Time to Repair The average time that a device will take to recover from a failure.
(MTTR)
Media Access Control A protocol at the media access control sublayer. The protocol is at the lower part of
(MAC) the data link layer in the OSI model and is mainly responsible for controlling and
connecting the physical media at the physical layer. When transmitting data, the MAC
protocol checks whether data can be transmitted. If the data can be transmitted,
certain control information is added to the data, and then the data and the control
information are transmitted in a specified format to the physical layer. When receiving
data, the MAC protocol checks whether the information is correct and whether the
data is transmitted correctly. If the information is correct and the data is transmitted
correctly, the control information is removed from the data and then the data is
transmitted to the LLC layer.
Multiprotocol Label A technology that uses short tags of fixed length to encapsulate packets in different
Switching (MPLS) link layers, and provides connection-oriented switching for the network layer on the
basis of IP routing and control protocols.
main distribution A device at a central office, on which all local loops are terminated.
frame (MDF)
maintenance ___domain The network or the part of the network for which connectivity is managed by
(MD) connectivity fault management (CFM). The devices in a maintenance ___domain are
managed by a single Internet service provider (ISP).
managed element A particular entity or resource in a networked system environment. It can also
(ME) represent a physical piece of equipment on the network, the components of the device
on the network, or parts of the network itself.
management A type of database used for managing the devices in a communications network. It
information base comprises a collection of objects in a (virtual) database used to manage entities (such
(MIB) as routers and switches) in a network.
message digest A hash function that is used in a variety of security applications to check message
algorithm 5 (MD5) integrity. MD5 processes a variable-length message into a fixed-length output of 128
bits. It breaks up an input message into 512-bit blocks (sixteen 32-bit little-endian
integers). After a series of processing, the output consists of four 32-bit words, which
are then cascaded into a 128-bit hash number.
multi-dwelling unit A network access unit used for multi-dwelling units. It provides Ethernet and IP
(MDU) services and optionally VoIP or CATV services; has multiple broadband interfaces on
the user side and optionally POTS ports or CATV RF ports. It is mainly applicable to
FTTB, FTTC, or FTTCab networks.
multi-segment pseudo wire (MS-PW) A collection of multiple adjacent PW segments, each of which is a point-to-point PW. Using MS-PWs to carry services saves tunnel resources and allows services to be transported across different networks.
multi-service transmission platform (MSTP) A platform based on SDH that can access, process, and transmit TDM, ATM, and Ethernet services, and that provides unified management of these services.
multiprotocol label switching virtual private network (MPLS VPN) An Internet Protocol (IP) virtual private network (VPN) based on multiprotocol label switching (MPLS) technology. It applies MPLS to network routers and switches, simplifies the routing mode of core routers, and combines traditional routing technology with label switching technology. It can be used to construct broadband intranets and extranets that meet various service requirements.
N
NBI See northbound interface.
O
OAM See operation, administration and maintenance.
OCS optical core switching
OCh optical channel with full functionality
ODN optical distribution network
ODU Optical channel Data Unit
ODUk optical channel data unit - k
OLA optical line amplifier
OLT optical line terminal
OMC See operation and maintenance center.
OMS optical multiplexing section
ONT See optical network terminal.
ONU See optical network unit.
OP operator variant algorithm configuration field
OPEX See operating expense.
OPS See optical physical section.
OSI open systems interconnection
OSN optical switch node
OSNR See optical signal-to-noise ratio.
OSPF See Open Shortest Path First.
OSPF-TE Open Shortest Path First-Traffic Engineering
OSS operations support system
OTN optical transport network
OTS See optical transmission section.
OTT over the top
OVPN Optical Virtual Private Network
Open Shortest Path First (OSPF) A link-state, hierarchical interior gateway protocol (IGP) for network routing that uses cost as its routing metric. Each router builds a link-state database describing the network topology, and this database is identical on all routers in the area.
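To illustrate how a link-state protocol turns the shared topology database into routes, the Python sketch below runs a shortest-path (Dijkstra) computation over a small hypothetical link-state database; the router names and costs are illustrative only.

    import heapq

    # Hypothetical link-state database: node -> {neighbor: link cost}
    lsdb = {
        "R1": {"R2": 10, "R3": 5},
        "R2": {"R1": 10, "R3": 2, "R4": 1},
        "R3": {"R1": 5, "R2": 2, "R4": 9},
        "R4": {"R2": 1, "R3": 9},
    }

    def spf(source):
        """Compute lowest-cost paths from source, as each router does locally."""
        cost = {source: 0}
        queue = [(0, source)]
        while queue:
            c, node = heapq.heappop(queue)
            if c > cost.get(node, float("inf")):
                continue
            for neighbor, link_cost in lsdb[node].items():
                new_cost = c + link_cost
                if new_cost < cost.get(neighbor, float("inf")):
                    cost[neighbor] = new_cost
                    heapq.heappush(queue, (new_cost, neighbor))
        return cost

    print(spf("R1"))  # -> {'R1': 0, 'R2': 7, 'R3': 5, 'R4': 8}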
OpenStack An open cloud computing management platform that consists of multiple components. It is simple to install, scales well, and provides a rich set of functions and a unified standard across different cloud environments.
operating expense (OPEX) An ongoing cost for running a product, business, or system. Also referred to as operating expenditure, operational expense, or operational expenditure.
operation and maintenance center (OMC) An element within a network management system that is responsible for the operation and maintenance of a specific element or group of elements. For example, an OMC-Radio may be responsible for managing a radio subsystem, whereas an OMC-Switch may be responsible for managing a switch or exchange. These OMCs are in turn under the control of an NMC (Network Management Centre), which controls the entire network.
operation, administration and maintenance (OAM) A set of network management functions that cover fault detection, notification, ___location, and repair.
optical network terminal (ONT) A device that terminates the fiber optical network at the customer premises.
optical network unit (ONU) A form of access node that converts optical signals transmitted via fiber to electrical signals that can be transmitted via coaxial cable or twisted pair copper wiring to individual subscribers.
optical physical section (OPS) A network segment in the physical layer of an optical network.
optical signal-to-noise ratio (OSNR) The ratio of signal power to noise power in a transmission link. OSNR is the most important index for measuring the performance of a DWDM system.
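As a worked illustration, OSNR is usually expressed in decibels as 10 x log10(signal power / noise power); real measurements also reference a noise bandwidth (commonly 0.1 nm). The power values below are hypothetical.

    import math

    # Hypothetical measured powers in milliwatts
    signal_power_mw = 1.0
    noise_power_mw = 0.001

    osnr_db = 10 * math.log10(signal_power_mw / noise_power_mw)
    print(f"OSNR = {osnr_db:.1f} dB")  # -> OSNR = 30.0 dB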
optical transmission section (OTS) A section in the logical structure of an optical transport network (OTN). The OTS allows the network operator to perform monitoring and maintenance tasks between NEs.
P
P2MP point-to-multipoint
P2P See point-to-point service.
PE See provider edge.
PER packed encoding rules
PKI See public key infrastructure.
PMS performance management system
PON passive optical network
POTS See plain old telephone service.
PPP Point-to-Point Protocol
PRA See primary rate access.
PSN See packet switched network.
PSTN See public switched telephone network.
PTN packet transport network
PVC permanent virtual channel
PW See pseudo wire.
PWE3 See pseudo wire emulation edge-to-edge.
packet switched network (PSN) A telecommunications network that works in packet switching mode.
plain old telephone service (POTS) The basic telephone service provided over traditional cabling such as twisted-pair cables.
point-to-point service (P2P) A service between two terminal users. In P2P services, senders and recipients are terminal users.
primary rate access (PRA) A standardized ISDN user-network interface structure that uses the capacity of the primary level of the digital hierarchy, that is, a digit rate of 1544 kbit/s or 2048 kbit/s. Note: The digit rate of any D-channel in this interface structure is 64 kbit/s.
provider edge (PE) A device that is located in the backbone network of the MPLS VPN structure. A PE is responsible for managing VPN users, establishing LSPs between PEs, and exchanging routing information between sites of the same VPN. A PE performs the mapping and forwarding of packets between the private network and the public channel. A PE can be a UPE, an SPE, or an NPE.
pseudo wire (PW) An emulated connection between two PEs for transmitting frames. The PW is established and maintained by the PEs through signaling protocols, and its status information is maintained by the two end PEs.
pseudo wire emulation edge-to-edge (PWE3) An end-to-end Layer 2 transmission technology. It emulates the essential attributes of a telecommunication service, such as ATM, FR, or Ethernet, in a packet switched network (PSN). PWE3 also emulates the essential attributes of low-speed time division multiplexing (TDM) circuits and SONET/SDH. The emulation closely approximates the real service.
public key infrastructure (PKI) A framework based on public key cryptography. It provides a system for creating and managing public keys and supports the efficient implementation of data encryption and key exchange.
public switched telephone network (PSTN) A telecommunications network established to provide telephone services to public subscribers. It is sometimes called POTS.
Q
QinQ See 802.1Q in 802.1Q.
R
RADIUS See Remote Authentication Dial In User Service.
RAID redundant array of independent disks
RAN See radio access network.
REG See regenerator.
REST See Representational State Transfer.
RESTCONF See RESTCONF.
RESTCONF An HTTP-based protocol that provides a programmatic interface for accessing data defined in YANG, using the datastore concepts defined in the Network Configuration Protocol (NETCONF).
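As a minimal sketch of what such a programmatic interface looks like, the Python snippet below uses the third-party requests library to retrieve a YANG-defined resource over RESTCONF. The host address, credentials, and data path are placeholders for illustration, not values taken from this product.

    import requests

    # Hypothetical RESTCONF GET for interface data defined by a YANG model.
    url = "https://192.0.2.1/restconf/data/ietf-interfaces:interfaces"
    headers = {"Accept": "application/yang-data+json"}

    response = requests.get(url, headers=headers,
                            auth=("admin", "password"), verify=False)
    response.raise_for_status()
    print(response.json())  # JSON representation of the YANG-modeled data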
RESTful A software architecture style rather than a standard. It provides a set of design guidelines and constraints for software that handles interaction between clients and servers. RESTful software is simpler, more clearly layered, and makes it easier to implement caching.
S
SAML See Security Assertion Markup Language.
SAN See storage area network.
U
UNI See user-to-network interface.
UPE user-end provider edge
URL See uniform resource locator.
V
VCSA VMware vCenter Server Appliance
VDSL2 See very-high-speed digital subscriber line 2.
VE virtual Ethernet
VIP very important person
VLAN See virtual local area network.
VLL virtual leased line
VM See virtual machine.
VP See virtual path.
VPLS virtual private LAN service
VPN virtual private network
VRF VPN routing and forwarding
VRRP See Virtual Router Redundancy Protocol.
VXLAN Virtual Extensible LAN
Virtual Router Redundancy Protocol (VRRP) A protocol designed for multicast or broadcast LANs such as Ethernet. A group of routers in a LAN (one active router and several backup routers) is regarded as a single virtual router, called a backup group. The virtual router has its own IP address, and hosts in the network communicate with other networks through this virtual router. If the active router in the backup group fails, one of the backup routers in the group becomes active and continues to provide routing services for hosts in the network.
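To make the backup-group behavior concrete, the short Python sketch below simulates the failover decision: the highest-priority available router in a hypothetical backup group owns the shared virtual IP address. The priorities and addresses are invented for illustration and do not reflect real VRRP packet handling.

    # Hypothetical backup group: each router has a VRRP priority (higher wins).
    backup_group = {
        "RouterA": {"priority": 120, "up": True},
        "RouterB": {"priority": 100, "up": True},
        "RouterC": {"priority": 90, "up": True},
    }
    virtual_ip = "10.0.0.254"  # address that hosts use as their default gateway

    def elect_active(group):
        """Pick the highest-priority router that is still up."""
        candidates = [(r["priority"], name) for name, r in group.items() if r["up"]]
        return max(candidates)[1] if candidates else None

    print(elect_active(backup_group), "owns", virtual_ip)  # RouterA owns 10.0.0.254

    # If the active router fails, a backup router takes over the virtual IP.
    backup_group["RouterA"]["up"] = False
    print(elect_active(backup_group), "owns", virtual_ip)  # RouterB owns 10.0.0.254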
VoIP voice over IP
vCPU virtualized CPU
very-high-speed digital subscriber line 2 (VDSL2) An extension of the VDSL technology, which complies with ITU G.993.2, supports multiple spectrum profiles and encapsulation modes, and provides short-distance, high-speed access solutions for next-generation FTTx access services.
virtual local area network (VLAN) A logical grouping of two or more nodes that are not necessarily on the same physical network segment but that share the same IP network number. VLANs are often associated with switched Ethernet.
virtual machine (VM) A software simulation of a complete computer system that runs in an independent environment and provides all the functions of a hardware system. A physical machine can be virtualized into multiple VMs as required, which allows multiple operating systems to run on the same physical machine; each operating system can be partitioned and configured independently, and users can switch between them. Like a physical computer, a virtual machine runs an operating system and applications, and multiple virtual machines can run concurrently on the same host.
virtual path (VP) A bundle of virtual channels, all of which are switched transparently across an ATM network based on a common VPI.
W
WAN wide area network
WDM wavelength division multiplexing
X
xDSL x digital subscriber line