Neptune (Hybrid) Reference Manual
Version 7.0
Catalog No: X93305
Drawing No: 417006-2710-093-A00
September 2018
Rev01
ECI's qualification lab is accredited by A2LA for competence in electrical testing according to
the International Standard ISO IEC 17025-2005 General Requirements for the Competence of
Testing and Calibration Laboratories.
Related documents
Neptune General Description
Neptune What's New
Neptune System Specifications
NPT-1010 Installation and Maintenance Manual
NPT-1020 Installation and Maintenance Manual
NPT-1030 Installation and Maintenance Manual
NPT-1050 Installation and Maintenance Manual
NPT-1200 Installation and Maintenance Manual
EMS-NPT Documentation Suite
LCT-NPT Documentation Suite
LightSOFT® Documentation Suite
Revision history
Revision 1 | Section: N/A | Description: New
The second variant, Packet NPT, is equipped with a central Ethernet/MPLS switch and supports TDM
services through Circuit Emulation Service (CES). NPT is equipped with a broad mix of Ethernet and TDM
interfaces, supporting both packet and TDM based services over a converged packet infrastructure.
Both Hybrid and Packet NPT comply with all MEF CE2.0 service standards, as well as offering extensive
synchronization, protection, and resiliency schemes. Whether the network traffic is transported over legacy
equipment, supporting only TDM, or over packet equipment, supporting only Ethernet, NPT can provide
the optimal solution. As the network evolves, there is no need for costly replacements of existing
infrastructure or cumbersome external adaptive boxes.
NPT's flexible traffic handling architecture offers the most cost-efficient traffic handling in a mixed TDM and
packet environment while supporting all transport attributes. The result is the lowest TCO throughout the
network life cycle and over the course of the network transition from TDM to packet. The same holds when
building new carrier Ethernet and packet-based transport networks.
The NPT's value propositions include:
Lowest TCO
Flexible multi-service (Packet, Optics, TDM)
Cost-effective scalability through the modular architecture
Dual-stack MPLS, offering seamless, service-optimized interworking
Transport grade service assurance
Performance: Predictable and guaranteed
Availability: Carrier grade redundancy and protection
Security: Secure and transparent
E2E control
Intuitive GUI: Easy point-and-click operation
Unified multi-layer NMS: Enabling smooth, converged control
Visibility: Providing extensive OAM for E2E SLA visibility
This reference manual describes the shelf layout and system architecture of each platform in the Neptune
family. Additional sections describe the I/O cards, transceiver modules, and system features available
through the STMS management system.
NOTE: Some of the Neptune platforms can optionally be configured with an EXT-2U expansion
unit. This modular approach maximizes efficiency while minimizing expense.
For enhanced readability, descriptions of the EXT-2U slot layout are not repeated in each
Neptune platform section. The information is provided in the EXT-2U expansion unit section and
referenced from other sections.
Neptune platforms have been designed to facilitate simple installation and easy maintenance. Hot
insertion of cards and modules is allowed to support quick maintenance and repair activities without
affecting traffic. The cage design and mechanical practice of all platforms conform to international
mechanical standards and specifications.
NOTE: All installation instructions, technical specifications, restrictions, and safety warnings
are provided in the Neptune Installation and Maintenance Manuals. See these manuals for
specific instructions before beginning any Neptune platform installation.
Used in many sub network topologies, NPT-1200 can handle a mixture of P2P, hub, and mesh traffic
patterns. This combined functionality means that operators benefit from improved network efficiency and
significant savings in terms of cost and footprint.
The NPT-1200 platform is housed in a 243 mm deep, 442.4 mm wide, and 88.9 mm high equipment cage
with all interfaces accessible from the front of the unit. The platform includes:
Two slots for redundant matrix cards (CPS/CPTS/XIO, in slots XSA and XSB)
One slot for a controller card (MCP1200, in slot MS) that provides the following functionalities:
Alarm indications and monitoring
Timing and synchronization interfaces (T3/T4)
In-band management interfaces (10/100BaseT)
Module cage with seven I/O slots (TS1-TS7), for installing I/O cards of any type, including PDH, SDH,
Ethernet Layer 1, Ethernet Layer 2/MPLS
Compact flash card (NVM)
Two power-supply module slots (INF_1200 or AC_PS-1200, in slots PSA and PSB)
One fan unit slot
Traffic connector to the (optional) EXT-2U expansion unit with three additional slots
The NPT-1200 is fed from -48 VDC. Two INF_1200 modules can be configured in two power supply module
slots for redundant power supply. The AC_PS-1200 module offers a 100-240 VAC power source option. The
NPT-1200 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. Typical power consumption
for the NPT-1200 is less than 500 W. Power consumption is monitored through the management software.
For more information about power consumption requirements, see the Neptune Installation and
Maintenance Manual and the Neptune System Specifications.
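The power-budget arithmetic implied above can be illustrated with a short sketch. The per-module wattages below are hypothetical placeholders; only the 500 W typical figure and the 650 W high-power INF_1200 rating come from this manual.

```python
# Hypothetical power-budget check for an NPT-1200 shelf (illustrative values only).
# The per-module wattages below are placeholders, not figures from this manual.
MODULE_DRAW_W = {
    "MCP1200": 20,
    "matrix cards (x2)": 2 * 60,
    "I/O cards (7 slots)": 7 * 35,
    "FCU_1200": 30,
}

TYPICAL_SHELF_LIMIT_W = 500   # typical consumption quoted for the NPT-1200
INF_1200_CAPACITY_W = 650     # high-power INF_1200 (HW revision D02 and above)

total = sum(MODULE_DRAW_W.values())
print(f"Estimated shelf draw: {total} W")
print(f"Within typical 500 W envelope: {total <= TYPICAL_SHELF_LIMIT_W}")
print(f"Within a single INF_1200 feed:  {total <= INF_1200_CAPACITY_W}")
```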
The following figure identifies the slot arrangement in the NPT-1200 platform.
Figure 2-3: NPT-1200 slot layout
For a complete list of the modules that can be configured in each NPT-1200 slot, see NPT-1200 Tslot I/O
modules.
All cards support live insertion. All cards are connected using a backplane that supports one traffic
connector to connect the NPT-1200 and the EXT-2U. The NPT-1200 platform provides full 1+1 redundancy
in power feeding, cross connections, and the TMU, as well as 1:N redundancy in the fans.
The MCP1200 main controller card is the most essential card of the system, creating virtually a complete
standalone native packet system. Moreover, it accommodates one service traffic slot for flexible
configuration of virtually any type of PDH, SDH, and Ethernet interfaces. This integrated flexible design
ensures a very compact equipment structure and reduces costs, making Neptune an ideal native choice for
the access and metro access layers.
Built-in test
The BIT hardware and its related software assist in the identification of any faulty card or module.
The BIT outputs provide:
Management reports
System reset
Maintenance alarms
Fault detection
Protection switch for the main switching card
Dedicated test circuits implement the BIT procedure under the control of an integrated software package.
After the platform is switched on, a BIT program is automatically activated for both initialization and
normal operation phases. Alarms are sent to the EMS-NPT if any failures are detected by the BIT.
BIT testing covers general tests, including module presence tests and periodic sanity checks of I/O module
processors. It performs traffic path tests, card environment tests, data tests, and detects traffic-affecting
failures, as well as failures in other system modules.
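The BIT behavior described above can be outlined, very loosely, as a periodic check loop. The following sketch is an illustration under assumed check names and reporting hooks; it is not the actual BIT implementation.

```python
# Minimal sketch of a periodic built-in-test (BIT) cycle as described above.
# Check names, return conventions, and the alarm hook are illustrative assumptions.
def module_present(slot):            # placeholder presence test
    return True

def io_processor_sane(slot):         # placeholder periodic sanity check
    return True

def raise_alarm(message):            # stands in for reporting a failure to the EMS-NPT
    print("ALARM:", message)

def bit_cycle(slots):
    for slot in slots:
        if not module_present(slot):
            raise_alarm(f"slot {slot}: module missing")
            continue
        if not io_processor_sane(slot):
            raise_alarm(f"slot {slot}: I/O processor sanity check failed")

if __name__ == "__main__":
    # One pass over the seven NPT-1200 Tslots; a real BIT repeats periodically.
    bit_cycle([f"TS{i}" for i in range(1, 8)])
```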
NOTES:
The NPT-1200 (with CPS100/320) supports in-band and management communication
channel (MCC) connections for PB and MPLS (a policer/shaper sketch follows this note):
4 Mbps policer for the PB UNI that connects to the external DCN
10 Mbps shaper for MCC packets to the MCP
No rate limit on the MNG port, which runs at up to 100 Mbps full duplex
The following routing protocols are supported via DCN and in-band:
-- IPv4: OSPFv2, static routes
-- IPv6: Over management VLAN, static routes
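To illustrate the difference between the 4 Mbps policer (which discards excess traffic) and the 10 Mbps shaper (which delays it), the following token-bucket sketch applies both to the same offered load. The rates come from the note above; the bucket depth, packet size, and arrival interval are arbitrary illustrative assumptions.

```python
# Illustrative token-bucket sketch contrasting a 4 Mbps policer (drops excess)
# with a 10 Mbps shaper (delays excess). Bucket depths and traffic are made up.
class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, size, now):
        # Refill tokens for the elapsed time, then try to spend them.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

policer = TokenBucket(rate_bps=4_000_000, burst_bytes=16_000)    # PB UNI toward DCN
shaper = TokenBucket(rate_bps=10_000_000, burst_bytes=16_000)    # MCC packets to MCP

packet, t = 1_500, 0.0
for i in range(20):
    t += 0.0005   # a 1500-byte packet every 0.5 ms (24 Mbps offered load)
    if not policer.conforms(packet, t):
        print(f"policer: packet {i} dropped")          # policer discards excess traffic
    if not shaper.conforms(packet, t):
        print(f"shaper: packet {i} queued for later")  # shaper delays instead of dropping
```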
To support reliable timing, the platform provides multiple synchronization reference options. Up to four
timing references can be monitored simultaneously:
1PPS and ToD interfaces, using external timing input sources
2 x 2 MHz (T3) external timing input sources
2 x 2 Mbps (T3) external timing input sources
STM-n line timing from any SDH interface card
E1 2M PDH line timing from any PDH interface card
Local internal clock
Holdover mode
SyncE
1588v2 – Master, Slave, transparent, and boundary clock
In these platforms, any timing signal can be selected as a reference source. The TMU provides direct control
over the source selection (received from the system software) and the frequency control loop. The
definition of the synchronization source depends on the source quality and synchronization mode of the
network timing topology (set by the EMS-NPT or LCT-NPT):
Synchronization references are classified at any given time according to a predefined priority and
prevailing signal quality. The synchronization subsystem synchronizes to the best available timing
source using the Synchronization Status Marker (SSM) protocol. The TMU is frequency-locked to this
source, providing internal system timing. The platform is synchronized to this central timing source.
The platform provides synchronization outputs for the synchronization of external equipment within
the exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to
synchronize any peripheral equipment or switch.
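The quality-and-priority selection described above can be sketched as follows. The QL ranking follows common G.781 usage, and the candidate references and priorities are assumptions for illustration; they do not represent a specific configuration.

```python
# Illustrative selection of a synchronization reference by SSM quality level and
# configured priority. Lower rank means better quality; candidates are invented.
QL_RANK = {"PRC": 1, "SSU-A": 2, "SSU-B": 3, "SEC": 4, "DNU": 99}

candidates = [
    # (name, SSM quality received on the interface, configured priority, signal OK?)
    ("T3 external 2 MHz", "SSU-A", 1, True),
    ("STM-16 line, port 1", "PRC", 2, True),
    ("E1 PDH line timing", "SEC", 3, True),
    ("STM-1 line, port 4", "PRC", 4, False),   # failed signal, excluded
]

usable = [c for c in candidates if c[3] and c[1] != "DNU"]
best = min(usable, key=lambda c: (QL_RANK[c[1]], c[2]))  # quality first, then priority
print("Selected reference:", best[0])   # -> STM-16 line, port 1 (PRC beats SSU-A)
```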
The platform supports SyncE synchronization, which is fully compatible with the asynchronous nature of
traditional Ethernet. SyncE is defined in ITU-T standards G.8261, G.8262, G.8263, and G.8264.
The IEEE 1588 Precision Time Protocol (PTP) provides a standard method for high precision synchronization
of network connected clocks. PTP is a time transfer protocol enabling slave clocks to synchronize to a
known master clock, ensuring that multiple devices operate using the same time base. The protocol
operates in master/slave configuration using UDP packets over IP or multicast packets over Ethernet. IEEE
1588v2 (G.8265.1/G.8275.1) is supported in the platform, providing Ordinary Clock (OC) and Boundary
Clock (BC) capabilities.
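The basic two-way time-transfer arithmetic used by PTP can be illustrated with four timestamps (Sync sent at t1, received at t2; Delay_Req sent at t3, received at t4), assuming a symmetric path. The timestamp values below are invented for the example.

```python
# Basic IEEE 1588 two-way time-transfer arithmetic (illustrative, symmetric path).
# Timestamps are made-up nanosecond values.
t1 = 1_000_000_000      # master: Sync transmit time
t2 = 1_000_000_650      # slave:  Sync receive time (includes offset + path delay)
t3 = 1_000_001_000      # slave:  Delay_Req transmit time
t4 = 1_000_001_350      # master: Delay_Req receive time

offset = ((t2 - t1) - (t4 - t3)) / 2            # slave clock minus master clock
mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2

print(f"offset from master: {offset} ns")            # 150.0 ns
print(f"mean path delay:    {mean_path_delay} ns")   # 500.0 ns
```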
2.5.1 INF_1200
The INF_1200 is a DC power-filter module that can be plugged into the NPT-1200 platform. Two INF_1200
modules are needed for power feeding redundancy. It performs the following functions:
Single DC power input and power supply for all modules in the NPT-1200
Input filtering function for the entire NPT-1200 platform
Adjustable output voltage for fans in the NPT-1200
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply when under-/over-voltage is detected
High-power INF for up to 550 W, or 650 W with INF_1200 HW revision D02 and above
Figure 2-5: INF_1200 front panel
2.5.2 AC_PS-1200
The NPT-1200 can be configured with the AC_PS-1200, a 100-240 VAC power source utilizing an external
power line connection through a power conversion module to implement AC/DC conversion. It performs
the following functions:
Single AC power input and power supply for all modules in the NPT-1200
Input filtering function for the entire NPT-1200 platform
Adjustable output voltage for fans in the NPT-1200
High-power AC power supply for up to 420 W (100-120 VAC and 45°C max working temperature) or
480 W (220-240 VAC and 55°C max working temperature)
Figure 2-6: AC_PS-1200 front panel
2.5.3 FCU_1200
The FCU_1200 is a pluggable fan control module with eight fans for cooling the NPT-1200 platform. The
fans’ running speed can be set to 16 different levels. The speed is controlled by the MCP1200 according to
the temperature of the installed cards.
Figure 2-7: FCU_1200 front panel
2.5.4 FCU_1200B
The FCU_1200B is a pluggable fan control module with eight fans for cooling the NPT-1200 platform. The
unit features enhanced PWM (Pulse Width Modulation), which helps optimize cooling efficiency and
increases the fan operating life.
The FCU_1200B is fully backward compatible with the FCU_1200, so it can replace the FCU_1200 in any
version. When the FCU_1200B is installed in NPT-1200 V6.0 or older versions, it behaves the same as the
FCU_1200, with 16 fan speed levels set by adjusting the fan power supply voltage. When the FCU_1200B is
installed in NPT-1200 V6.1 or higher versions, the PWM feature is enabled and fan speed control is based on
the PWM duty cycle, with a total of 8 settable levels.
By default, the fan speed is controlled by software according to the temperature of the installed cards; a
“force turbo” mode is supported for maintenance purposes.
Figure 2-8: FCU_1200B front panel
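The temperature-driven fan control described above can be sketched as a simple mapping from measured card temperature to a speed level, for both the 16 voltage-controlled levels (FCU_1200 behavior) and the 8 PWM duty-cycle levels (FCU_1200B with V6.1 or higher). The temperature thresholds and linear mapping are assumptions for illustration; the actual control curve is implemented in the MCP software.

```python
# Illustrative mapping from card temperature to a fan-speed level, for 16 voltage
# levels (FCU_1200 behavior) or 8 PWM levels (FCU_1200B on V6.1 or higher).
# The 25-65 C control range below is an assumption, not a documented figure.
def fan_level(temp_c, levels, force_turbo=False):
    if force_turbo:                 # "force turbo" maintenance override
        return levels
    low, high = 25.0, 65.0          # assumed control range in degrees Celsius
    if temp_c <= low:
        return 1
    if temp_c >= high:
        return levels
    fraction = (temp_c - low) / (high - low)
    return 1 + round(fraction * (levels - 1))

for t in (20, 40, 55, 70):
    print(t, "C ->", "voltage level", fan_level(t, 16), "| PWM level", fan_level(t, 8))
```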
2.5.5 MCP1200
The MCP1200 card is the main processing card of the NPT-1200. It integrates functions such as control,
communications and overhead processing. It provides:
Control-related functions:
Communications with and control of all other modules in the NPT-1200 and EXT-2U through the
backplane (by the CPU)
Communications with the EMS-APT, LCT-APT, or other NEs through a management interface
(MNG) or DCC, or MCC, or VLAN
Routing and handling of up to 32 x RS DCC, 32 x MS DCC (total 32 channels), and two clear
channels
Alarms and maintenance
Fan control
Overhead processing, including overhead byte cross connections, OW interface, and user channel
interface
External timing reference interfaces (T3/T4), which provide the line interface unit for one 2 Mbps
T3/T4 interface and one 2 MHz T3/T4 interface
The MCP1200 supports the following interfaces:
MNG and T3/T4 directly from its front panel
RS-232, OW access, housekeeping, alarms, and V.11 through a concentrated SCSI auxiliary I/F
connector (on the front panel)
NOTE: Failure of the MCP1200 does not affect any existing packet traffic on the platform.
In addition, the MCP1200 has LED indicators and one reset pushbutton. As the NPT-1200 is a front-access
platform, all its interfaces, LEDs, and pushbutton are located on the front panel of the MCP1200.
NOTE: An MCP30_ICP can be used to distribute the concentrated auxiliary connector into
dedicated connectors for each function.
NOTE: ACT, FAIL, MJR, and MNR LEDs are combined to show various failure reasons during
the system boot. For details, see the Troubleshooting Using Component Indicators section in
the NPT-1200 Installation, Operation, and Maintenance Manual.
2.6.1 CPTS100
The CPTS100 is a powerful, high capacity, non-blocking dual cross connect matrix. It includes a TDM matrix
for native-SDH switching and a packet switch to support native packet-level switching.
Legacy TDM-level cross connects through the TDM matrix (as in the XIO cards) consume much of the
available bandwidth, and the bandwidth allocation is static. In the CPTS100, bandwidth is dynamically
allocated, ensuring high flexibility and efficient utilization of this limited resource. Furthermore, when slots
must be unassigned and reassigned, the matrix uses a sophisticated bandwidth rearrangement algorithm to
achieve the best bandwidth utilization.
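The benefit of dynamic allocation over static allocation can be illustrated with a simple comparison. The fabric bandwidth, per-slot peak, and demand figures below are arbitrary and do not represent CPTS100 values, and the comparison is not the card's actual rearrangement algorithm.

```python
# Simplified contrast between static and dynamic bandwidth allocation toward the
# switch fabric. All numbers are illustrative assumptions, not CPTS100 figures.
FABRIC_BW = 100          # Gbps available toward the fabric (assumed figure)
SLOT_MAX = 20            # Gbps a slot could need at its peak (assumed figure)

demands = {"TS1": 3, "TS2": 10, "TS3": 0, "TS4": 18, "TS5": 5, "TS6": 0, "TS7": 12}

static_used = SLOT_MAX * len(demands)     # static: every slot reserves its peak
dynamic_used = sum(demands.values())      # dynamic: only actual demand is allocated

print(f"static reservation: {static_used} Gbps (exceeds fabric: {static_used > FABRIC_BW})")
print(f"dynamic allocation: {dynamic_used} Gbps of {FABRIC_BW} Gbps")
```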
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic-affecting.
The CPTS100 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.
2.6.2 CPS100
The CPS100 is a powerful, high capacity, non-blocking switching card. It includes a pure packet switch that
supports native packet-level switching.
In the CPS100, bandwidth is dynamically allocated, ensuring high flexibility and efficient utilization of this
limited resource. Furthermore, when slots must be unassigned and reassigned, the matrix uses a
sophisticated bandwidth rearrangement algorithm to achieve the best bandwidth utilization.
A functional diagram of the CPS100 matrix is shown in the following figure.
Figure 2-12: CPS100 functional diagram
The CPS100 matrix includes the following main components and functions:
Packet switch with 100 Gbps capacity (72 Gbps TM), providing:
Management and internal control, in addition to user traffic switching
Non-blocking data switch fabric
Guaranteed CIR
Eight CoS with differentiated services
P2P MPLS internal links via the packet switch
Any slot to any slot connectivity
Any card installed in any slot
Synchronization: IEEE 1588v2 Master, Slave, Transparent, and Boundary Clock
Two SFP+ based 10GbE aggregate ports with OTN framing option (OTU-2e FEC/EFEC)
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic affecting.
The CPS100 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.
2.6.3 CPTS320
CPTS320 dual matrix cards are centralized packet and TDM switches that support any-to-any direct data
card connectivity as well as native TDM switching capacity. These matrix cards, designed for use in the
NPT-1200 metro access platform, offer a choice of capacity and configuration options, including:
All Native Ethernet packet switch, supporting native packet-level switching with a capacity of up to
320 Gbps with 240 Gbps TM, providing:
Management and internal control, in addition to user traffic switching
Non-blocking data switch fabric
P2P MPLS internal links via the packet switch
Any slot to any slot connectivity
Any card installed in any slot
HO/LO nonblocking TDM cross connections, enabling native SDH/SONET switching with a capacity of
up to 40G (256 x VC-4 fully LO traffic)
5G HEoS connectivity between the packet and TDM matrix
Aggregate ports:
1 x STM-64 XFP based interface
2 x STM-16/STM-4/STM-1 SFP based configurable interfaces
4 x 10 GbE SFP+ based interfaces
Comprehensive range of timing and synchronization capabilities (IEEE 1588v2, SyncE)
The following figure shows the traffic flow in an NPT-1200 configured with a CPTS320 matrix card.
Figure 2-14: CPTS320 traffic flow
NOTE: The CPTS320 HEoS functionality is supported as of V6.0; make sure to use the version
with HEoS FIX.
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade is non-traffic-affecting.
The CPTS320 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.
2.6.5 CPS320
The CPS320 is a centralized packet switch that supports any-to-any direct data card connectivity. This
switch card, designed for use in the NPT-1200 metro access platform, offers a choice of capacity and
configuration options, including:
Ethernet packet switch, supporting native packet-level switching with a 320G switching capacity and
up to 240G traffic management (MPLS processing), providing:
Management and internal control, in addition to user traffic switching
Non-blocking data switch fabric
P2P MPLS internal links via the packet switch
Traffic management including:
Guaranteed CIR
Two CoS (within the switch)
E2E flow control
Any card installed in any slot
Any slot to any slot connectivity
Four SFP+ based 10GE aggregate ports with OTN framing option (OTU-2e FEC/EFEC)
Comprehensive range of timing and synchronization capabilities (ToD, 1pps)
NPT-1200 platforms support smooth migration from CPS100 to the higher-capacity CPS320 cards.
The following figure shows the traffic flow in an NPT-1200 configured with a CPS320 matrix card.
Figure 2-17: CPS320 traffic flow
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic affecting.
The CPS320 has an RJ-45 connector marked TOD/1PPS that provides timing and synchronization
input/output signals, supporting IEEE 1588v2 standard.
2.6.6 XIO64
The XIO64 card is the cross-connect matrix card with one aggregation line interface for the NPT-1200. It
also includes the TMU. The NPT-1200 should always be configured with two XIO64 cards for the cross-
connect matrix and TMU redundancy. The XIO64 has a cross-connect capability of 40 Gbps.
In addition, the XIO64 provides one STM-64 aggregate line interface based on the XFP module. The XFP
housing on the XIO64 panel supports STM-64 optical transceivers with a pair of LC optical connectors. The
card also supports OTN (with FEC) by a unique XFP type, the OTRN_xx.
Figure 2-19: XIO64 front panel
2.6.7 XIO16_4
The XIO16_4 is the cross-connect matrix card with four aggregation line interfaces for the NPT-1200. It also
includes the TMU. The NPT-1200 should always be configured with two XIO16_4 cards for the cross-
connect matrix and TMU redundancy. The XIO16_4 has a cross-connect capability of 40 Gbps.
In addition, the XIO16_4 provides four STM-16/4/1 aggregate line interfaces based on the SFP modules. The
SFP housings on the XIO16_4 panel support STM-1, STM-4, and STM-16 (colored, non-colored, and BD)
optical transceivers, each with a pair of LC optical connectors. The type of the interface can be configured
separately for each port through the management.
Figure 2-20: XIO16_4 front panel
NOTES:
Failure of the MCP1200 does not affect any existing traffic on the platform.
The NPT-1200 platform must be configured with identical switching card types.
The NPT-1200 platform with CPTS100/CPS100 supports max. 48 x GbE or max. 10 x 10
GbE.
The NPT-1200 platform with CPTS320/CPS320 supports max. 64 x GbE or max. 32 x 10 GbE
(MBP-1200 HW revision must be >= B01).
This fully redundant Packet Optical Access (POA) platform offers enhanced MPLS-TP data network
functionality, including full traffic and IOP protection and the complete range of Ethernet based services
(CES, MoE, and PoE).
The NPT-1050 is designed around a centralized dual matrix card that supports any-to-any direct data card
connectivity as well as native TDM switching capacity. The platform can be configured with the MCPTS100
matrix card (100G packet switch + 15G TDM switch) or MCPS100 switching card (100G packet switch).
MCPTS100 cards provide a TDM capacity of up to 15G (96 x VC-4 fully LO traffic).
The NPT-1050 is a 1U base platform housed in a 243 mm deep, 465 mm wide, and 44 mm high equipment
cage with all interfaces accessible from the front of the unit. The platform includes the following
components:
Redundant dual matrix cards (MXS A, MXS B) for robust provisioning of the following functionalities:
All native packet switching (MCPTS) or pure packet switching (MCPS).
HO/LO nonblocking TDM cross connections (MCPTS).
Two SFP+ based 10 GbE interfaces (MCPTS and MCPS).
Two SFP based GE interfaces (MCPTS and MCPS).
One SFP based STM-16/STM-4/STM-1 interface (MCPTS).
Comprehensive range of timing and synchronization capabilities (T3/T4, ToD, and 1pps).
In band management interfaces.
Three I/O card slots (TS1-TS3), for processing a comprehensive range of traffic interfaces, including
PDH/Async, SDH/SONET, Ethernet Layer 1, and Ethernet Layer 2/MPLS. The Tslots can be configured
for 2.5G, 20GbE, or 40GbE service.
Traffic connector for the (optional) EXT-2U expansion unit.
Fan unit (FCU_1050, in slot FS) with alarm indications and monitoring.
Power supply (PSA, PSB), available in two modes:
-48 VDC power feed (INF_B1UH), configured in two power supply module slots for external
power line connection, with a dual power feed for redundancy.
100-240 VAC power source (AC_PS-1050) utilizes an external power line connection through a
power conversion module to implement AC/DC conversion.
The NPT-1050 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform
design makes this platform suitable for street cabinet use, withstanding temperatures up to 70°C. Typical
power consumption for the NPT-1050 is less than 250 W. Power consumption is monitored through the
management software. For more information about power consumption requirements, see the NPT-1050
Installation and Maintenance Manual and the Neptune System Specifications.
The following figure identifies the slot arrangement in the NPT-1050 platform.
Figure 3-2: NPT-1050 slot layout
For a complete list of the modules that can be configured in each NPT-1050 slot, see NPT-1050 Tslot
modules.
All cards support live insertion. All cards are connected using a backplane that supports one traffic
connector to connect the NPT-1050 and the EXT-2U. The NPT-1050 platform provides full 1+1 redundancy
in power feeding, cross connections, and the TMU, as well as 1:N redundancy in the fans.
The NPT-1050 main controller card (MCPS) is the most essential card of the system, creating virtually a
complete standalone native packet system. NPT-1050 control and communication functions include:
Internal control and processing
Communication with external equipment and management
Network element (NE) software and configuration backup
Built-in Test (BIT)
Built-in test
The BIT hardware and its related software assist in the identification of any faulty card or module.
The BIT outputs provide:
Management reports
System reset
Maintenance alarms
Fault detection
Protection switch for the main switching card
Dedicated test circuits implement the BIT procedure under the control of an integrated software package.
After the platform is switched on, a BIT program is automatically activated for both initialization and
normal operation phases. Alarms are sent to the EMS-NPT if any failures are detected by the BIT.
BIT testing covers general tests, including module presence tests and periodic sanity checks of I/O module
processors. It performs traffic path tests, card environment tests, data tests, and detects traffic-affecting
failures, as well as failures in other system modules.
NOTES:
The NPT-1050 (with MCPS100) supports in-band and management communication
channel (MCC) connections for PB and MPLS:
4 Mbps policer for the PB UNI that connects to the external DCN
10 Mbps shaper for MCC packets to the MCP
No rate limit on the MNG port, which runs at up to 100 Mbps full duplex
The following routing protocols are supported via DCN and in-band:
-- IPv4: OSPFv2, static routes
-- IPv6: Over management VLAN, static routes
Synchronization references are classified at any given time according to a predefined priority and
prevailing signal quality. The synchronization subsystem synchronizes to the best available timing
source using the Synchronization Status Marker (SSM) protocol. The TMU is frequency-locked to this
source, providing internal system timing. The platform is synchronized to this central timing source.
The platform provides synchronization outputs for the synchronization of external equipment within
the exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to
synchronize any peripheral equipment or switch.
The platform supports SyncE synchronization, which is fully compatible with the asynchronous nature of
traditional Ethernet. SyncE is defined in ITU-T standards G.8261, G.8262, G.8263, and G.8264.
The IEEE 1588 Precision Time Protocol (PTP) provides a standard method for high precision synchronization
of network connected clocks. PTP is a time transfer protocol enabling slave clocks to synchronize to a
known master clock, ensuring that multiple devices operate using the same time base. The protocol
operates in master/slave configuration using UDP packets over IP or multicast packets over Ethernet. IEEE
1588v2 (G.8265.1/G.8275.1) is supported in the platform, providing Ordinary Clock (OC) and Boundary
Clock (BC) capabilities.
3.5.1 INF_B1UH
The INF_B1UH is a DC power-filter module that can be plugged into the NPT-1050 platform. Two INF_B1UH
modules are needed for power feeding redundancy. It performs the following functions:
Single DC power input and power supply for all modules in the NPT-1050
Input filtering function for the entire NPT-1050 platform
Adjustable output voltage for fans in the NPT-1050
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply when under-/over-voltage is detected
High-power INF for up to 450 W
Figure 3-4: INF_B1UH front panel
3.5.2 AC_PS-1050
The AC_PS-1050 is a 100-240 VAC power source utilizing an external power line connection through a
power conversion module to implement AC/DC conversion. This module occupies two power slots, working
in a non-redundant mode. The AC_PS-1050 performs the following functions:
Single AC power input and power supply for all modules in the NPT-1050
Input filtering function for the entire NPT-1050 platform
Adjustable output voltage for fans in the NPT-1050
High-power AC power supply for up to 420 W (100-120 VAC and 45°C max working temperature) or
480 W (220-240 VAC and 55°C max working temperature)
Figure 3-5: AC_PS-1050 front panel
3.5.3 FCU_1050
The FCU_1050 is a pluggable fan control module with four fans for cooling the NPT-1050 platform. The
fans’ running speed can be set to 16 different levels. The speed is controlled by the MCPS/MCPTS according
to the temperature of the installed cards.
In addition, the FCU_1050 includes the ALARM interface connector of the NPT-1050 platform.
Figure 3-6: FCU_1050 front panel
MCPTS100: Main controller processor (MCP) and dual high-capacity, nonblocking 4/4/3/1 HO/LO
cross-connect matrix card; provides system control and management. This central packet and TDM
switch supports 100G packet switching, 15G TDM, 96 VC-4 x 96 VC-4 as ADM-16/MADM-16, as well as
timing control. The card also supports 1 x STM-16/4/1 SFP based port, 4 x GE CSFP based, or 2 x GE
SFP based ports, and 2 x 10 GbE SFP+ based interfaces.
MCPS100: Main controller processor (MCP) and central packet switching card; provides system
control and management. This central packet switch supports 100G packet switching, including timing
control. The card also supports 4 x GE CSFP based, or 2 x GE SFP based ports, and 2 x 10 GbE SFP+
based interfaces.
AIM100: Aggregate interface module, configured together with a single switching card to provide
additional 4 x 1G/10G ports in 1+0 (nonredundant) configuration.
The following sections detail each card functionality.
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic-affecting.
Aggregate ports:
2 x 10 GbE SFP+ based interfaces
4 x GbE CSFP based interfaces
2 x GbE SFP based interfaces
Comprehensive range of timing and synchronization capabilities (ToD, 1pps)
The following figure illustrates the traffic flow in an NPT-1050 configured with an MCPS100 switching card.
Figure 3-9: MCPS100 traffic flow
NOTE: During an upgrade, a different card version or release can be installed in the platform.
With appropriate planning, the upgrade can be non-traffic-affecting.
3.6.4 AIM100
The AIM100 is an aggregate interface module (AIM) for the aggregate (MCPS/MCPTS) slot in a
non-redundant configuration. The card makes it possible to achieve the maximum number of interfaces
with one MCPS/MCPTS card in a non-redundant installation. The AIM100, designed for use in the NPT-1050
metro access platform, offers a choice of configuration options, including:
Aggregate ports:
2 x 10 GbE SFP+ based interfaces
4 x GbE CSFP based interfaces
2 x GbE SFP based interfaces
1 x STM-1/4/16 SFP based interface
NOTE: The PME1_21, PME1_21B, and PME1_63 modules only support balanced E1 interfaces.
For unbalanced E1 interfaces, use the xDDF-21, an external balanced-to-unbalanced
conversion unit.
The NPT-1030 paves the way for service provisioning without sacrificing equipment reliability, robustness,
and hard QoS (H-QoS). Thus, both operators and service providers benefit from the best of both worlds: the
cost-effectiveness and universality of Ethernet and H-QoS, and the scalability and survivability of TDM.
Used in many subnetwork topologies, NPT-1030 can handle a mixture of point-to-point, hub, and mesh
traffic patterns. This combined functionality means operators benefit from improved network efficiency
and significant savings in terms of cost and footprint. The NPT-1030 platform:
Increases the number of STM-1 interfaces, or upgrades from STM-1 to STM-4/STM-16 easily and
smoothly.
Allows you to start very small and attain ultra-high expandability in a build-as-you-grow® fashion by
combining the standard NPT-1030 unit with an expansion unit.
Aggregates traffic arriving over Ethernet, PCM low-bitrate interfaces, E1, E3, and STM-1 directly over
STM-1/STM-4/STM-16 and GbE.
Is suitable for indoor and outdoor installations.
Supports an extended operating temperature range up to 70°C.
The NPT-1030 is a compact (1U) base platform housed in a 243 mm deep, 440 mm wide, and 44.4 mm high
equipment cage with all interfaces accessible from the front of the unit. The NPT-1030 can be installed in
2,200 mm or 2,600 mm ETSI racks or in 19” racks.
Figure 4-2: NPT-1030 front view
NOTE: The MCP slot can be equipped with the MCP30B and an NVM (CF).
The following figure identifies the slot arrangement in the NPT-1030 platform.
Figure 4-3: NPT-1030 platform slots layout
For a complete list of the modules that can be configured in each NPT-1030 slot, see NPT-1030 Tslot I/O
modules.
All cards support live insertion. All cards are connected using a backplane that supports one traffic
connector to connect the NPT-1030 and the EXT-2U. The NPT-1030 platform provides full 1+1 redundancy
in power feeding, cross connections, and the TMU, as well as 1:N redundancy in the fans. Failure of the
MCP30B does not affect any existing traffic on the platform.
The NPT-1030 main controller card (MCP30B) is the most essential card of the system, creating virtually a
complete standalone native packet system. Moreover, it accommodates one service traffic slot for flexible
configuration of virtually any type of PDH, SDH, and Ethernet interfaces. This integrated flexible design
ensures a very compact equipment structure and reduces costs, making NPT an ideal native choice for
the access and metro access layers.
NPT-1030 control and communication functions include:
Internal control and processing
Communication with external equipment and management
Network element (NE) software and configuration backup
Built-in Test (BIT)
Built-in test
The BIT hardware and its related software assist in the identification of any faulty card or module.
The BIT outputs provide:
Management reports
System reset
Maintenance alarms
Fault detection
Protection switch for the main switching card
Dedicated test circuits implement the BIT procedure under the control of an integrated software package.
After the platform is switched on, a BIT program is automatically activated for both initialization and
normal operation phases. Alarms are sent to the EMS-NPT if any failures are detected by the BIT.
BIT testing covers general tests, including module presence tests and periodic sanity checks of I/O module
processors. It performs traffic path tests, card environment tests, data tests, and detects traffic-affecting
failures, as well as failures in other system modules.
NOTE: The NPT-1030 supports in-band and DCN management connections for PB and MPLS:
4 Mbps policer for the PB UNI that connects to the external DCN
No rate limit on the MNG port, which runs at up to 100 Mbps full duplex
In the XIO30_16, the matrix core uses 96 VC-4 equivalents (4/4/3/1) and provides an STM-4 or STM-16
optical interface.
In the XIO30Q_1&4, the matrix core uses 96 VC-4 equivalents (4/4/3/1) and provides four STM-1 or
STM-4 optical interfaces.
NOTE: Typically, the NPT-1030 platform must be configured with two XIO cards. However, in
pure optical configurations that only include OBC cards, the XIO card is not required.
4.6.1 INF_B1U
The INF_B1U is a DC power-filter module for high-power applications that can be plugged into the NPT-
1030 platform. Two INF_B1U modules are needed for power feeding redundancy. It performs the following
functions:
High-power INF for up to 200 W for more than one DMXE_22_L2, DMGE_4_L2, or DMCES1_4 module
Single DC power input and power supply for all modules in the NPT-1030
Input filtering function for the entire NPT-1030 platform
Adjustable output voltage for fans in the NPT-1030
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply when under-/over-voltage is detected
CAUTION: When more than one DMXE_22_L2, DMGE_4_L2, or DMCES1_4 card is installed in
the NPT-1030, an INF_B1U must be configured in the platform.
4.6.2 AC_PS-B1U
The AC_PS-B1U is an AC power module that can be plugged into the NPT-1030 platform. It performs the
following functions:
Converts AC power to DC power for the NPT-1030
Filters input for the entire NPT-1030 platform
Supplies adjustable output voltage for fans in the NPT-1030
Supplies up to 180 W, with an AC input range of 100-240 VAC
4.6.3 FCU_1030
The FCU_1030 is a pluggable fan control module for high power applications with four fans for cooling the
NPT-1030 platform. The FCU_1030 fans provide cooling air in an environment that dissipates up to 200 W,
and are intended to work in conjunction with DMGE_4_L2 modules. The fans’ running speed can be low,
normal, or turbo. The speed is controlled by the MCP30B according to the environmental temperature and
fan failure status.
The following figure shows the front panel of the FCU_1030.
Figure 4-7: FCU_1030 front panel
4.6.4 MCP30B
The MCP30B is the second generation of MCP30 cards and serves as the main processing card of the NPT-
1030. It integrates functions such as control, communication, and overhead processing. It provides the
following functions:
Control-related functions:
Communications with and control of all other modules in the NPT-1030 and EXT-2U through the
backplane (by the CPU)
Communications with the EMS-APT, LCT-APT, or other NEs through a management interface
(MNG) or DCC
Routing and handling of up to 32 x RS DCC, 32 x MS DCC (total 32 channels), and two clear
channels
Alarms and maintenance
Fan control
Accommodates the Compact Flash (CF) memory (NVM)
Overhead processing, including overhead byte cross connections, OW interface, and user channel
interface
External timing reference interfaces (T3/T4), which provide the line interface unit for one 2 Mbps
T3/T4 interface and one 2 MHz T3/T4 interface
The MCP30B supports the following interfaces:
MNG and T3/T4 directly from its front panel
RS-232, OW access, housekeeping alarms, and V.11 through a concentrated SCSI auxiliary I/F
connector (on the front panel)
In addition, the MCP30B has LED indicators and one reset push button. As the NPT-1030 is a front-access
platform, all its interfaces, LEDs, and push button are on the front panel of the MCP30B.
Figure 4-8: MCP30B front panel
NOTE: An MCP30 ICP can be used to distribute the concentrated auxiliary connector into
dedicated connectors for each function.
NOTE: ACT, FAIL, MJR, and MNR LEDs are combined to show various failure reasons during
the system boot. For details, see the Troubleshooting Using Component Indicators section in
the NPT-1030 Installation, Operation, and Maintenance Manual.
NOTE: The connectivity of the XIO30_4 to the Tslots (TS1, TS2, and TS3) is limited to 6 x
VC-4s. There are no limitations for hardware Rev. B00 and above.
The total NPT-1030 capacity accommodating two XIO30_4 cards is 2.5 Gbps. The capacity is
distributed as follows: 1 slot with 622 Mbps and 2 slots with 2 x 622 Mbps.
XIO30Q_1&4: In addition to the 15 Gbps cross-connect matrix and TMU, this card provides four
STM-1/STM-4 compatible aggregate line interfaces based on the SFP modules. The interface rate,
STM-1 or STM-4, is configurable per port from the management. The SFP housings on the XIO30Q_1&4
panel support STM-1 and STM-4 optical transceivers with a pair of LC optical connectors (bidirectional
STM-1 and STM-4 Tx/Rx over a single fiber using two different lambdas). STM-1 electrical SFPs with
coaxial connectors are also supported.
XIO30_16: In addition to the 15 Gbps cross-connect matrix and TMU, this card provides one STM-4/16
aggregate line interface based on the SFP module. The SFP housing on the XIO30_16 panel supports
STM-4 or STM-16 optical transceivers with a pair of LC optical connectors (bidirectional STM-4 and
STM-16 Tx/Rx over a single fiber using two different lambdas).
The total NPT-1030 capacity accommodating two XIO30Q_1&4 or two XIO30_16 cards is 15 Gbps. The
capacity is evenly distributed between the three I/O slots and is 2.5 Gbps per slot.
The slot capacity is depicted in the following figure.
Figure 4-10: NPT-1030 with two XIO30Q_1&4 or two XIO30_16 slot capacity
The following figures show the front panel of the XIO30 cards.
Figure 4-11: XIO30_4 front panel
The panels of the XIO30_4, XIO30Q_1&4, and XIO30_16 include the LED indications described in the
following table.
Description | Card | Tslots (TS #1 to TS #3, with XIO30)
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21 TS1-TS3
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21B TS1-TS3
Electrical PDH E1 interface Tslot module with 63 interfaces PME1_63 TS1-TS3
Electrical PDH E3/DS-3 interface Tslot module PM345_3 TS1-TS3
2 x STM-1 ports SDH interface card SMD1B TS1-TS3
2 x STM-4 ports SDH interface card SMD4 TS2-TS3
4 x STM-1 ports SDH interface card SMQ1 TS1-TS3
4 x STM-1 or STM-4 ports SDH interface card SMQ1&4 TS1-TS3
1 x STM-4 port SDH interface card SMS4 TS1-TS3
1 x STM-16 port SDH interface card SMS16 TS1-TS3
Electrical Ethernet interface module with L1 functionality DMFE_4_L1 TS1-TS3
Optical Ethernet interface module with L1 functionality DMFX_4_L1 TS1-TS3
Electrical/optical GbE interface module with L1 functionality DMGE_1_L1 TS1-TS3
Electrical/optical GbE interface module with L1 functionality DMGE_4_L1 TS1-TS3
Electrical Ethernet interface module with L2 functionality DMFE_4_L2 TS1-TS3
Optical Ethernet interface module with L2 functionality DMFX_4_L2 TS1-TS3
Electrical/optical GbE interface module with L2 functionality DMGE_4_L2 TS2-TS3
Optical 10 GbE and GbE interface module with L2 functionality DMXE_22_L2 TS2-TS3
CES services for STM-1/STM-4 interfaces module DMCES1_4 TS1-TS3
NOTE: The PME1_21, PME1_21B, and PME1_63 modules only support balanced E1 interfaces.
For unbalanced E1 interfaces, use the xDDF-21, an external balanced-to-unbalanced
conversion unit.
The NPT-1020 can be fed by 24 VDC, -48 VDC, or 100-240 VAC. In DC power feeding, two INF modules can
be configured in two power supply module slots for redundant power supply. AC power feeding requires
the use of a conversion module to implement AC/DC conversion.
The NPT-1020 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” racks. The rugged platform
design also makes this platform a good choice for street cabinet use, withstanding temperatures up to
70°C.
The NPT-1020 can also be configured as an NPT-1020E, when combined with the EXT-2U expansion unit, as
illustrated in the following figure.
Figure 5-2: NPT-1020 platform with expansion unit
Typical power consumption for the NPT-1020 is 50 W. Power consumption is monitored through the
management software. For more information about power consumption requirements, see the Neptune
Installation and Maintenance Manual and the Neptune System Specifications.
Built-in test
The BIT hardware and its related software assist in the identification of any faulty card or module.
The BIT outputs provide:
Management reports
System reset
Maintenance alarms
Fault detection
Protection switch for the main switching card
Dedicated test circuits implement the BIT procedure under the control of an integrated software package.
After the platform is switched on, a BIT program is automatically activated for both initialization and
normal operation phases. Alarms are sent to the EMS-NPT if any failures are detected by the BIT.
BIT testing covers general tests, including module presence tests and periodic sanity checks of I/O module
processors. It performs traffic path tests, card environment tests, data tests, and detects traffic-affecting
failures, as well as failures in other system modules.
NOTES:
NPT-1020 supports in band and DCN management connections for PB and MPLS:
4 Mbps policer for the PB UNI that connects to the external DCN
10 Mbps shaper for MCC packets to the MCP
No rate limit on the MNG port, which runs at up to 100 Mbps full duplex
The following routing protocols are supported via DCN and in-band:
-- IPv4: OSPFv2, static routes
-- IPv6: Over management VLAN, static routes
The NPT-1020 does not support Order Wire (OW)
Synchronization references are classified at any given time according to a predefined priority and
prevailing signal quality. The synchronization subsystem synchronizes to the best available timing
source using the Synchronization Status Marker (SSM) protocol. The TMU is frequency-locked to this
source, providing internal system and SDH line transmission timing. The platform is synchronized to
this central timing source.
The platform provides synchronization outputs for the synchronization of external equipment within
the exchange. The synchronization outputs are 2 MHz and 2 Mbps. These outputs can be used to
synchronize any peripheral equipment or switch.
The platform supports SyncE synchronization, which is fully compatible with the asynchronous nature of
traditional Ethernet. SyncE is defined in ITU-T standards G.8261, G.8262, G.8263, and G.8264.
The IEEE 1588v2 Precision Time Protocol (PTP) (G.8265.1) provides a standard method for high precision
synchronization of network connected clocks. PTP is a time transfer protocol enabling slave clocks to
synchronize to a known master clock, ensuring that multiple devices operate using the same time base. The
protocol operates in master/slave configuration using UDP packets over IP.
5.5.1 INF-B1U
The INF-B1U is a -48 VDC power-filter module for high-power applications that can be plugged into the
NPT-1020 platforms. Two INF-B1U modules are needed for power feeding redundancy. It performs the
following functions:
High-power INF for up to 200 W
Single DC power input and power supply for all modules in the NPT-1020
Input filtering function for the entire NPT-1020 platforms
Adjustable output voltage for fans in the NPT-1020
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply when under-/over-voltage is detected
Figure 5-5: INF-B1U front panel
5.5.2 INF-B1U-24V
The INF-B1U-24V is a 24 VDC power-filter module for high-power applications that can be plugged into the
NPT-1020 platforms. Two INF-B1U-24V modules are needed for power feeding redundancy. It performs the
following functions:
Feed power supply for all modules in the NPT-1020 products
Input filtering function for the entire NPT-1020 platforms
Adjustable output voltage for fans in the NPT-1020
Support of fan power loss alarm and LED display
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply in the event of under-/over-voltage
Single DC power input: 18 VDC to 36 VDC
Maximum power consumption of 85 W (the CPS50 card is not supported)
The front panel of the INF-B1U-24V is shown in the following figure.
Figure 5-6: INF-B1U-24V front panel
5.5.3 INF-B1U-D
The INF-B1U-D is a DC power-filter module that can be plugged into the NPT-1020 platforms. It performs
the following functions:
Dual DC power input and power supply for all modules in the NPT-1020
Input filtering for the entire NPT-1020 platforms
Adjustable output voltage for fans in the NPT-1020
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply when under-/over-voltage is detected
Figure 5-7: INF-B1U-D front panel
5.5.4 AC_PS-B1U
The AC_PS-B1U is an AC power module that can be plugged into the NPT-1020 platforms. It performs the
following functions:
Converts AC power to DC power for the NPT-1020
Filters input for the entire NPT-1020 platform
Supplies adjustable output voltage for fans in the NPT-1020
Supplies up to 180 W power, with an AC input range of 100-240 VAC
The NPT-1020 supports the following range of non-blocking cross connection configurations:
16 VC-4 x 16 VC-4 as ADM-1/4 XC
10 Gbps packet for Ethernet and MPLS-TP switching
60 Gbps packet for Ethernet and MPLS-TP switching with the CPS50 card installed in the Tslot
5.6.1 CPS50
CPS50 is a T-slot 60 Gbps central packet switch card for the NPT-1020 with up to 4 x 10GbE aggregate ports
or 2 x 10GbE aggregate ports plus 4 x GbE ports. The card supports the following main functions:
60 Gbps packet switching capacity with MPLS-TP and PB functionality
Flexible port type definition for front panel ports:
Two SFP+ based 10GbE ports, each can be configured as 10GBase-R or 10GBase-W with EDC
support
Two SFP+/SFP/CSFP compatible cages, each of which can be configured as:
1 x 10 GbE port with SFP+ (10GBase-R/10GBase-W with EDC support)
1 x GbE port with SFP (1000Base-X)
2 x GbE ports with CSFP (1000Base-X)
Summary of supported port assignments in the CPS50 (see the sketch after this list):
4 x 10GbE
3 x 10GbE + 2 x GbE
2 x 10GbE + 4 x GbE
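The port assignments listed above follow from filling each of the two flexible cages at its maximum, either 1 x 10GbE (SFP+) or 2 x GbE (CSFP), on top of the two fixed 10GbE ports. The short sketch below reproduces the summary; it is an illustration, not a configuration tool.

```python
from itertools import product

# Reproduce the supported CPS50 port assignments by filling each of the two
# flexible cages at its maximum: either 1 x 10GbE (SFP+) or 2 x GbE (CSFP).
# Illustrative only; see the CPS50 description above for the full per-cage options.
CAGE_MAX = {"SFP+": (1, 0), "CSFP": (0, 2)}   # (10GbE ports, GbE ports) per cage

mixes = set()
for a, b in product(CAGE_MAX.values(), repeat=2):
    mixes.add((2 + a[0] + b[0], a[1] + b[1]))  # 2 fixed 10GbE ports + flexible cages

for ten_gbe, gbe in sorted(mixes, reverse=True):
    print(f"{ten_gbe} x 10GbE" + (f" + {gbe} x GbE" if gbe else ""))
```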
When the CPS50 is assigned and its switch engine is enabled, the 10G switch on the base card is
disabled, and the 12 built-in GbE ports of the base card and the Ethernet buses of the three E-slots are
connected to the switch core of the CPS50.
A CPS50 card can be inserted and replaced without affecting the traffic flow.
The following figure shows the front panel of the CPS50.
Figure 5-8: CPS50 front panel
Type | Designation
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21
Electrical PDH E1 interface Tslot module with 21 interfaces PME1_21B
Electrical PDH E1 interface Tslot module with 63 interfaces PME1_63
Electrical PDH E3/DS-3 interface Tslot module PM345_3
2 x STM-1 electrical or optical ports SDH interface card SMD1B
1 x STM-4 port SDH interface card SMS4
CES services for STM-1/STM-4 interfaces module DMCES1_4
Electrical GbE interface module with direct connection to the packet switch DHGE_4E
Optical GbE interface module with direct connection to the packet switch DHGE_8
CES multi-service card with 16 x E1/T1 interfaces MSE1_16
CES multiservice card with 8 x E1/T1 and 2 x STM-1/OC-3 interfaces MSC_2_8
CES multi-service module for 4 x OC3/STM-1 or 1 x OC12/STM-4 interfaces MS1_4
CES multi-service module with 32 x E1/T1 interfaces MSE1_32
Central packet switching card CPS50
NFV module with 4 x GbE front panel ports for Virtual Network Functions NFVG_4
NPT-1010AC, where a 100-240 VAC power source utilizes an external power line connection
through a power conversion module to implement AC/DC conversion.
Figure 6-2: NPT-1010AC front panel
The NPT-1010 can be installed in 2,200 mm or 2,600 mm ETSI racks or in 19” or 23" racks. Up to two NPT-
1010 platforms can be installed in the width of an ETSI or 23" rack using a dedicated mounting platform.
One unit can be installed in the width of a 19" rack, using mounting brackets. The rugged platform design
also makes this platform suitable for street cabinet use, withstanding temperatures up to 70°C.
NOTE: ACT, FAIL, MJR, and MNR LEDs are combined to show various failure reasons during
the system boot. For details, see the Troubleshooting Using Component Indicators section in
the NPT-1010 Installation, Operation, and Maintenance Manual.
The four SFP housings on the NPT-1010 support four types of SFP module:
GE SFP optical transceivers with a pair of LC optical connectors
Electrical GE SFP transceivers with an RJ-45 connector
Bidirectional GE SFP optical transceivers with one LC optical connector (bidirectional GE Tx/Rx over a
single fiber using two different lambdas)
Colored GE SFP optical transceivers with a pair of LC optical connectors (colored C/DWDM SFP)
NOTE: NPT-1010 supports in band and DCN management connections for PB and MPLS:
4 Mbps policer for the PB UNI that connects to the external DCN
3 Mbps shaper for MCC packets to the MCP
No rate limit on the MNG port, which runs at up to 100 Mbps full duplex
The following routing protocols are supported via DCN and in-band:
IPv4: OSPFv2, static routes
IPv6: Over management VLAN, static routes
6.3.1 TMSE1_8
The TMSE1_8 is a CES and timing module that provides Circuit Emulation Services (CES) for up to 8 x E1/T1
interfaces. It supports the SAToP and CESoPSN standards and has a SCSI 36-pin connector for connecting
the customer E1/T1 signals. It also provides the Time of Day (ToD) and 1PPS signals for supporting Ethernet
timing per IEEE 1588v2 standard.
The front panel of the TMSE1_8 is shown in the following figure.
Figure 6-5: TMSE1_8 front panel
6.3.2 TM10
The TM10 is optional in the NPT-1010 mini slot. It provides the Time of Day (ToD) and 1PPS signals for
supporting Ethernet timing per IEEE 1588v2 standard.
The front panel of the TM10 is shown in the following figure.
Figure 6-6: TM10 front panel
7.1.1 PME1_21
The PME1_21 is a Tslot module with 21 x E1 (2.048 Mbps) balanced electrical interfaces. The PME1_21 can
be configured in any Tslot and supports retiming of up to 8 x E1s.
The cabling of the PME1_21 module is directly from the front panel with a 100-pin SCSI female connector.
PME1_63 enables easy expansion of I/O slots equipped with PME1_21 modules by additional 42 x E1
interfaces, while significantly reducing the cost per E1 interface. This is done by removing the working
PME1_21, replacing it with a PME1_63, and connecting it with appropriate cables. The I/O slot must then
be reassigned through the management as a PME1_63. The attributes (including cross connects, trails, and
so on) of the first 21 x E1s are retained as they were in the replaced PME1_21. For a detailed description of
this procedure, see the corresponding IMM.
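The reassignment step can be pictured with a small data-model sketch in which the attributes of the first 21 x E1 ports carry over unchanged and 42 new ports are added. The attribute structure used here is an assumption for illustration only and does not reflect the management system's actual data model.

```python
# Illustrative model of reassigning an I/O slot from PME1_21 to PME1_63: the
# attributes of the first 21 E1 ports are retained and 42 new ports are added.
# The attribute fields below are assumptions for illustration only.
def reassign_pme1_21_to_pme1_63(slot):
    assert slot["card"] == "PME1_21" and len(slot["e1_ports"]) == 21
    slot["card"] = "PME1_63"
    # Existing ports 1..21 keep their cross-connects, trails, and other attributes.
    for n in range(22, 64):
        slot["e1_ports"][n] = {"admin": "down", "cross_connect": None}
    return slot

slot = {"card": "PME1_21",
        "e1_ports": {n: {"admin": "up", "cross_connect": f"VC-12 #{n}"} for n in range(1, 22)}}
slot = reassign_pme1_21_to_pme1_63(slot)
print(slot["card"], len(slot["e1_ports"]))    # PME1_63 63
print(slot["e1_ports"][1])                    # original attributes retained
```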
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of E1 interfaces are listed in the following table.
NOTE: The PME1_21 supports only balanced E1s directly from its connectors. For unbalanced
E1s, configure an xDDF 21, an external DDF with E1 balanced-to-unbalanced conversion.
7.1.2 PME1_21B
The PME1_21B is a Tslot module with 21 x E1 (2.048 Mbps) balanced electrical interfaces. The PME1_21B
can be configured in any Tslot and supports retiming of up to 21 x E1s.
The cabling of the PME1_21B module is directly from the front panel with a 100-pin SCSI female connector.
PME1_63 enables easy expansion of I/O slots equipped with PME1_21B modules by additional 42 x E1
interfaces, while significantly reducing the cost per E1 interface. This is done by removing the working
PME1_21B, replacing it with a PME1_63, and connecting it with appropriate cables. The I/O slot must then
be reassigned through the management as a PME1_63. The attributes (including cross connects, trails, and
so on) of the first 21 x E1s are retained as they were in the replaced PME1_21B. For a detailed description
of this procedure, see the corresponding IMM.
NOTES:
The PME1_21B supports only balanced E1s directly from its connectors. For unbalanced
E1s, configure an xDDF 21, an external DDF with E1 balanced-to-unbalanced conversion.
The PME1_21B is backward-compatible in the Neptune product line from V1.2. In the BG
product line it is backward-compatible from V14.
When the card is installed in a platform running an earlier supported version, it simulates
the behavior of a PME1_63 card, but with 21 x E1s only. The management system
displays PME1_63 instead of PME1_21B.
When trying to assign it as a PME1_21, a "Card-Underutilized" warning appears. This
alarm can be ignored.
In the inventory information, the card is displayed as PME1_63. This is normal for this card.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of E1 interfaces are listed in the following table.
7.1.3 PME1_63
The PME1_63 is a Tslot module with 63 x E1 (2.048 Mbps) balanced electrical interfaces. The PME1_63 can
be configured in any Tslot and supports retiming of up to 63 x E1s. It supports LOS inhibit functionality (very
low sensitivity signal detection), meaning that the LOS alarm is masked for signals down to a level of
-20 dB.
The cabling of the PME1_63 module is directly from the front panel with one dense unique 272-pin VHDCI
female connector.
PME1_63 enables easy expansion of I/O slots equipped with PME1_21 modules by additional 42 x E1
interfaces, while significantly reducing the cost per E1 interface. This is done by removing the working
PME1_21, replacing it with a PME1_63, and connecting it with appropriate cables. The I/O slot must then
be reassigned through the management as a PME1_63. The attributes (including cross connects, trails, and
so on) of the first 21 x E1s are retained as they were in the replaced PME1_21. For a detailed description of
this procedure, see the corresponding IMM.
NOTE: The PME1_63 supports only balanced E1s directly from its connectors. For unbalanced
E1s, configure an xDDF 21, an external DDF with E1 balanced-to-unbalanced conversion.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of E1 interfaces are listed in the following table.
7.1.4 PM345_3
The PM345_3 is a Tslot module with 3 x E3/DS-3 (34 Mbps/45 Mbps) unchannelized electrical interfaces.
Each interface can be configured independently as E3 or DS-3 by the EMS-APT or the LCT-APT. The
PM345_3 can be configured in any Tslot.
The cabling of the PM345_3 module is directly from the front panel with six DIN 1.0/2.3 connectors.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of E3/DS-3/STS-1 interfaces are listed in the following table.
7.1.5 SMD1B
The SMD1B is an SDH interface card used to expand ring closures and SDH tributaries. It provides two STM-
1 ports, which can be optical or electrical.
The SMQ1 enables easy expansion of I/O slots equipped with SMD1B modules by an additional two STM-1
interfaces, while significantly reducing the cost per STM-1 interface. This is done by removing the working
SMD1B, replacing it with an SMQ1, and connecting it with the appropriate fibers. The I/O slot must then be
reassigned through the management as an SMQ1. The attributes (including cross-connects, trails, and so on)
of the first two STM-1s are retained as they were in the replaced SMD1B. For a detailed description of this
procedure, see the corresponding IMM.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1 interfaces are listed in the following table.
7.1.6 SMQ1
The SMQ1 is an SDH interface card used to expand ring closures and SDH tributaries. It provides four STM-1
ports, which can be optical or electrical.
The SMQ1 enables easy expansion of I/O slots equipped with SMD1B modules by an additional two STM-1
interfaces, while significantly reducing the cost per STM-1 interface. This is done by removing the working
SMD1B, replacing it with an SMQ1, and connecting it with the appropriate fibers. The I/O slot must then be
reassigned through the management as an SMQ1. The attributes (including cross-connects, trails, and so on)
of the first two STM-1s are retained as they were in the replaced SMD1B. For a detailed description of this
procedure, see the corresponding IMM.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1 interfaces are listed in the following table.
7.1.7 SMQ1&4
The SMQ1&4 is an SDH interface card used to expand ring closures and SDH tributaries. It provides four
configurable STM-1 or STM-4 ports, which can be optical or electrical for STM-1 configuration. The interface
rate is configurable per port from the management.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-1/STM-4 interfaces are listed in the following table.
7.1.8 SMS4
The SMS4 is an SDH interface card used to expand ring closures and SDH tributaries. It provides one STM-4
port.
The maximum number of modules that can be installed in the supported platforms and the resulting total
number of STM-4 interfaces are listed in the following table.
7.1.9 SMD4
The SMD4 is an SDH interface card used to expand ring closures and SDH tributaries, providing two STM-4
ports. The SMD4 is only installed on NPT-1030 platforms, and is only applicable in an ADM16 or QADM-1/4
(4 x ADM-1/4) system.
Platform Max SMD4 modules Max STM-4 interfaces Installed into Tslots
NPT-1030 2 4 TS2 and TS3
7.1.10 SMS16
The SMS16 is an SDH interface card used to expand ring closures and SDH tributaries. It provides one STM-
16 SFP-based port.
Neptune platforms enable CES and CEP emulation, providing TDM transport over PSNs for backhaul
applications offering a wide range of new broadband data services. These boost the advantages inherent in
packet based networks, including flexibility, simplicity, and cost effectiveness. Neptune platforms support:
CESoPSN and SAToP for E1/T1 interfaces with encapsulation support for CES over MPLS-TP
(CESoMPLS) and CES over Ethernet (CESoETH)
CESoPSN and SAToP for STM-1/4 channelized and OC-3/12 interfaces with encapsulation support for
CES over MPLS-TP (CESoMPLS) and CES over Ethernet (CESoETH)
CEP services based on VC-3, VC-4, and VC-4-4c (SDH), and on STS-1, STS-3c, and STS-12c (SONET)
For cellular operators managing 2G, 2.5G, and 3G base stations connected to the BSC/RNC via multiple
E1/T1 lines, Neptune enables lower cost transport between these locations, replacing more expensive
leased E1/T1 lines.
At the hub or BSC/RNC sites, the Neptune functions as a carrier class multiservice aggregator, optimizing
cellular backhaul by multiplexing various TDM services into a single ChSTM-n. STM-1/OC-3 support includes
channelized STM-1/OC-3 with up to 63 x VC-12 channels for SDH or 84 x VT1.5 channels for SONET.
7.2.2 MSE1_16
The MSE1_16 is a CES multiservice card that provides CES for up to 16 x E1/T1 interfaces. It supports the
SAToP and CESoPSN standards and has a SCSI 100-pin female connector on the front panel for connecting
the E1/T1 customer signals. Connectivity to the packet network is made by direct 1.25G SGMII connection
to the central packet switch on CPS card through the backplane.
The card supports MSP1+1 protection in the following modes:
MSP1+1 protection between two STM-1/OC-3 ports, intra-card
MSP1+1 protection between STM-1/OC-3 ports, cross-card
STM-1/OC-3 ports are either protected or non-protected; mixed configurations are not supported.
Beginning from V6.0, CEP (RFC-4842) is supported on the 2 x STM-1/OC-3 interfaces for VC-4, STS-1, STS-3c.
A mixture of channelized VC-4 (E1 CES) and VC-4 clear channel (VC-4 CEP) is supported on per VC-4 basis.
The cabling of the MSE1_16 module is directly from the front panel with a 100-pin SCSI female connector.
Figure 7-14: MSE1_16 front panel
7.2.3 MSE1_32
MSE1_32 is a CES multiservice card that provides CES for up to 32 x E1/T1 balanced interfaces. It supports
the SAToP and CESoPSN standards and has two SCSI 68-pin female connectors on the front panel for
connecting the E1/T1 customer signals. Connectivity to the packet network is made by direct 1.25G SGMII
connection to the central packet switch on CPS card through the backplane.
NOTES: Two external xDDF-21 units are required to connect 32 x E1/T1 unbalanced interfaces
to the MSE1_32.
Platform Max. MSE1_32 modules Max. E1/T1 interfaces Installed into slots
NPT-1020 1 32 All Tslots
NPT-1050 3 96 All Tslots
NPT-1200 5 160 Any Tslot except for TS5
The cabling of the MSE1_32 module is directly from the front panel with two 100-pin SCSI female
connectors.
Figure 7-15: MSE1_32 front panel
7.2.4 MSC_2_8
The MSC_2_8 is a CES multiservice card that provides CES for up to 8 x E1/T1 and 2 x STM-1/OC-3
interfaces. It supports the SAToP and CESoPSN standards and has two SFP housings for connecting STM-
1/OC-3 customer signals and a 36-pin SCSI female connector for connecting E1/T1 customer signals on the
front panel.
Connectivity to the packet network is made by direct 1.25G SGMII connection to the central packet switch
on the CPS card through the backplane.
The card supports MSP1+1 protection in the following modes:
MSP1+1 protection between two STM-1/OC-3 ports, intra-card
MSP1+1 protection between STM-1/OC-3 ports, cross-card
STM-1/OC-3 ports are either protected or non-protected; mixed configurations are not supported. Beginning
from V6.0, CEP (RFC-4842) is supported on the 2 x STM-1/OC-3 interfaces for VC-4, STS-1, and STS-3c. A
mixture of channelized VC-4 (E1 CES) and VC-4 clear channel (VC-4 CEP) is supported on a per VC-4 basis.
Table 7-27: MSC_2_8 modules and STM-1/OC-3 and E1/T1 interfaces per platform
Platform   Max. MSC_2_8 modules   Max. STM-1/OC-3 interfaces   Max. E1/T1 interfaces   Installed into slots
NPT-1020   1   2   8   All Tslots
NPT-1050   3   6   24   All Tslots
NPT-1200   6   12   48   All Tslots except TS5
7.2.5 DMCES1_4
The DMCES1_4 is a CES multiservice module that provides Circuit Emulation Services (CES) for up to 4 x
channelized STM-1 interfaces, or a single STM-4 interface. It supports the SAToP and CESoPSN standards
and has four SFP housings on the front panel for connecting STM-1 or STM-4 customer signals. In total, the
module can support up to 252 E1 CES services.
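The 252-service figure follows directly from the SDH multiplexing structure quoted elsewhere in this manual: each channelized STM-1 carries up to 63 VC-12 (E1) channels, and the module terminates up to four of them. The following minimal sketch, using only those multiplexing ratios, illustrates the arithmetic.

# Minimal capacity-check sketch for the DMCES1_4 (illustrative only).
# Assumes the standard SDH ratio of 63 x VC-12 (E1) per channelized STM-1,
# as quoted elsewhere in this manual.

E1_PER_CHSTM1 = 63          # VC-12 (E1) channels per channelized STM-1
STM1_PER_STM4 = 4           # a channelized STM-4 carries 4 x STM-1 payloads

def max_e1_ces_services(chstm1_equivalents: int) -> int:
    """Return the maximum number of E1 CES services for a given number
    of channelized STM-1 equivalents terminated on the module."""
    return chstm1_equivalents * E1_PER_CHSTM1

# Four channelized STM-1 client ports:
print(max_e1_ces_services(4))                  # 252
# A single channelized STM-4 on port P1 (other ports disabled):
print(max_e1_ces_services(1 * STM1_PER_STM4))  # 252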
Connectivity to the packet network is made through one of the following options:
Direct 1.25G SGMII connection to central packet switch on CPS cards through backplane.
Connection to 3rd party device (router/switch) through SFP based GbE port on the front panel,
working in standalone mode with CESoETH and CESoIP/UDP encapsulation.
Each client port can be configured to support an STM-1 interface. Port No. 1 can also be configured to
support channelized STM-4; when this is the case, the other three ports are disabled.
NOTE: The front panel GbE port is not needed when the module is installed in the supported platforms; in
that case, the connection to the packet network is made through the backplane.
7.2.6 MS1_4
The MS1_4 is a CES multiservice card that provides Circuit Emulation Services (CES) for up to 4 x STM-1
interfaces, or a single STM-4 interface. It supports the SAToP and CESoPSN standards and has four SFP
housings on the front panel for connecting STM-1 or STM-4 customer signals. In total, the card can support
up to 252 E1 CES services. In addition, it supports the CESoETH and CESoMPLS emulation formats.
NOTE: STM-4 interface is supported only in the leftmost port (P1) of the MS1_4.
Platform   Max. MS1_4 modules   Max. STM-1/OC-3 interfaces   Installed into slots
NPT-1020   1   4   All Tslots
NPT-1050   3   12   All Tslots
NPT-1200   6   24   All Tslots except TS5
NOTES:
All modules have a handle to enable easy removal and insertion. The handle has been
removed from the illustrations in this section so as not to obscure the front panel
markings.
MPLS and Ethernet data cards, with optical ports, incorporate SFP transceivers with LC
connectors. Purchase these SFPs only through your local sales representative.
Each MPLS card includes an Ethernet switch, an MPLS switch, and an SDH mapper. A powerful Network
Processor Unit (NPU) incorporated in each card fulfills the functions of Ethernet and MPLS switches. The
NPU is software programmable, allowing the cards to work as an Ethernet Provider Bridge (QinQ) switch
and/or as Ethernet Provider Bridge plus MPLS switch.
7.3.2 DMFE_4_L1
The DMFE_4_L1 is an EoS processing module with L1 functionality. It provides 4 x 10/100BaseT LAN
interfaces, and four EoS WAN interfaces. The total WAN bandwidth is up to 4 x VC-4. The DMFE_4_L1 can
be configured in any Tslot.
The cabling of the DMFE_4_L1 module is directly from the front panel with four RJ-45 connectors.
Figure 7-19: DMFE_4_L1 front panel
7.3.3 DMFX_4_L1
The DMFX_4_L1 is an EoS processing module with L1 functionality. It provides four optical FE (also referred
to as FX) LAN interfaces for the insertion of SFP transceivers, and four EoS WAN interfaces. The total WAN
bandwidth is up to 4 x VC-4. The DMFX_4_L1 can be configured in any Tslot.
7.3.4 DMGE_1_L1
The DMGE_1_L1 is an L1 data Tslot module with one GbE interface on the LAN side and one EoS interface
on the WAN side. The total WAN bandwidth is 4 x VC-4. The DMGE_1_L1 supports electrical or optical
inputs (both inputs are internally connected to the GbE interface) as follows:
RJ-45 connector for connecting electrical signals
SFP housing for connecting optical signals
7.3.5 DMGE_4_L1
The DMGE_4_L1 is an EoS processing module with L1 functionality. It provides four GbE LAN interfaces for
the insertion of SFP transceivers, and four EoS WAN interfaces. Both electrical and optical GbE interfaces
are supported by insertion of different types of SFP - copper SFP for electrical GbE with RJ45 connector, and
optical SFP for optical GbE with LC connectors. The total WAN bandwidth is up to 16 x VC-4.
7.3.6 DMFE_4_L2
The DMFE_4_L2 is an EoS/MoT processing module with L2 functionality (MPLS ready). It provides 4 x
10/100BaseT LAN interfaces and 8 x EoS WAN interfaces. The total WAN bandwidth is up to 4 x VC-4. The
DMFE_4_L2 can be configured in any Tslot. The cabling of the DMFE_4_L2 module is directly from the front
panel with four RJ-45 connectors.
7.3.7 DMFX_4_L2
The DMFX_4_L2 is an EoS/MoT processing module with L2 functionality (MPLS ready). It provides four
optical FX LAN interfaces for the insertion of SFP transceivers, and 8 x EoS WAN interfaces. The total WAN
bandwidth is up to 4 x VC-4. The DMFX_4_L2 can be configured in any Tslot.
7.3.8 DMGE_2_L2
The DMGE_2_L2 is an L2 data Tslot module with two GbE interfaces on the LAN side and 64 x EoS interfaces
on the WAN side. The module supports MPLS by appropriate licensing. The total WAN bandwidth is 14 x
VC-4. The DMGE_2_L2 supports electrical or optical GbE by insertion of different types of SFP – copper or
optical.
DMGE_4_L2 enables easy expansion of I/O slots equipped with DMGE_2_L2 modules by additional two GbE
interfaces, while significantly reducing the cost per GbE interface. This is done by removing the working
DMGE_2_L2, replacing it with a DMGE_4_L2, and connecting it with appropriate fibers. The I/O slot must
then be reassigned through the management as a DMGE_4_L2. The attributes of the first two GbE
interfaces are retained as they were in the replaced DMGE_2_L2. For a detailed description of this
procedure, see the corresponding IMM.
7.3.9 DMGE_4_L2
The DMGE_4_L2 is an L2 data Tslot module with four GbE interfaces on the LAN side and 64 x EoS or up to
30 x MoT interfaces on the WAN side. The module supports MPLS by appropriate licensing. The total WAN
bandwidth can be configured to 16 x VC-4. The DMGE_4_L2 supports electrical or optical GbE by insertion
of different types of SFP – copper or optical.
DMGE_4_L2 enables easy expansion of I/O slots equipped with DMGE_2_L2 modules by additional two GbE
interfaces, while significantly reducing the cost per GbE interface. This is done by removing the working
DMGE_2_L2, replacing it with a DMGE_4_L2, and connecting it with appropriate fibers. The I/O slot must
then be reassigned through the management as a DMGE_4_L2. The attributes of the first two GbE
interfaces are retained as they were in the replaced DMGE_2_L2. For a detailed description of this
procedure see the corresponding IMM.
NOTE: Expanding slot capacity from two GbE (with a DMGE_2_L2) to four GbE interfaces (with
a DMGE_4_L2) is not relevant for the NPT-1030, as it doesn't support the DMGE_2_L2.
NOTE: It is highly recommended to install the DMGE_4_L2 close to the fan units (FCUs) in TS2
and TS3 of the NPT-1030 and TS2, TS3, TS4, and TS7 of the NPT-1200.
7.3.10 DMGE_8_L2
The DMGE_8_L2 is an L2 data Tslot module with 8 GbE interfaces on the LAN side and 96 x EoS or up to 60 x
MoT interfaces on the WAN side. The module occupies a double slot in the Tslot module space, and must
be installed in two adjacent horizontal slots. A spacer between the slot pair must be removed to enable the
installation of the DMGE_8_L2. The procedure for removing this spacer is described in the NPT-1200
Installation, Operation, and Maintenance Manual.
The module supports MPLS by appropriate licensing. The total WAN bandwidth is 32 x VC-4. The
DMGE_8_L2 has two combo ports and six optical ports. The combo ports support direct connection of
electrical signals through dedicated RJ-45 connectors, or optical signals through SFP housings. The other six
ports are SFP-based and enable electrical or optical GbE by insertion of different types of SFP – copper or
optical. The DMGE_8_L2 supports in-band management over MOE by the NPT-1200.
TIP: It is more efficient to install one DMGE_8_L2 instead of two DMGE_4_L2 cards.
7.3.11 DMXE_22_L2
The DMXE_22_L2 is an L2 data Tslot module with two 10GbE and 2 x GbE interfaces on the LAN side and 64
x EoS or up to 30 x MoT interfaces on the WAN side. The module occupies a single slot in the Tslot module
space. The module supports MPLS by appropriate licensing. The total WAN bandwidth is 16 x VC-4. The card
supports 1588v2 master, slave, and transparent modes.
NOTES:
The DMXE_22_L2 supports a unique TM mechanism. Make sure to read about the card
TM functionalities and features before implementation; see DMXE_22_L2 Traffic
Management (TM).
The DMXE_22_L2 supports in-band management over MoE by the NPT-1200.
The 10 GbE ports use SFP+ (OTP10_xx) transceivers that provide 10 GbE connectivity in a small form factor,
similar in size to legacy SFPs. The SFP+ enables design modules with higher density and lower power
consumption.
The two GbE ports are SFP-based and enable electrical or optical GbE through insertion of different types of
SFP – copper or optical.
Table 7-52: DMXE_22_L2 modules, GbE, and 10 GbE interfaces per platform
7.3.12 DMXE_48_L2
The DMXE_48_L2 is an L2 (MPLS-ready) data Tslot module with four 10GbE and 8 x GbE interfaces on the
LAN side and 96 x EoS/MoT interfaces on the WAN side. The module occupies a double slot in the Tslot
module space and can be installed only in slot pairs TS1+TS2 and TS6+TS7 of the NPT-1200. A spacer
between each of these slot pairs must be removed to enable the installation of the DMXE_48_L2. The
procedure for removing this spacer is described in the NPT-1200 Installation, Operation, and Maintenance
Manual. The module supports MPLS by appropriate licensing. The total WAN bandwidth is 32 x VC-4. The
card supports 1588v2 master, slave, and transparent modes.
The 10 GbE ports use new SFP+ (OTP10_xx) transceivers that provide 10 GbE connectivity in a small form
factor, similar in size to the legacy SFPs. The SFP+ enables design modules with higher density and lower
power consumption relative to XFP transceivers.
The eight GbE ports are SFP-based and enable electrical or optical GbE by insertion of different types of SFP
– copper or optical.
NOTES:
The DMXE_48_L2 is supported only in the NPT-1200 platform with XIO matrix (up to two
modules in slots TS1+TS2 and TS6+TS7).
The DMXE_48_L2 supports in-band management over MoE by the NPT-1200.
Because the DMXE_48_L2 occupies a double slot, it can be installed only in two adjacent horizontal slots
(TS1+TS2 and TS6+TS7) in the NPT-1200. Therefore, a maximum of two DMXE_48_L2 modules can be
installed in the NPT-1200, totaling 8 x 10 GbE and 16 x GbE (electrical/optical) interfaces per platform.
7.3.13 DHGE_4E
The DHGE_4E is a data hybrid card that supports up to 4 x 10/100/1000BaseT ports connected to the packet
switching matrix, as well as PoE+ functionality.
NOTES:
When the DHGE_4E is installed in the NPT-1020, only optics cards are supported by the
EXT-2U.
PoE+ notes:
When the DHGE_4E is configured with PoE, the main power feeding voltage must be
less than 58 VDC.
The maximum PoE power consumption of the DHGE_4E card is 62 W; any mixture of PD
devices is allowed, up to a total of 62 W.
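As a quick aid for planning PD mixtures against that 62 W limit, the minimal sketch below sums the per-port PD power demands and flags a violation. The per-device wattages in the example are illustrative placeholders (typical PoE/PoE+ class values), not figures from this manual.

# Illustrative PoE budget check for a DHGE_4E card (62 W total PoE budget).
# The per-device power values below are hypothetical examples; use the
# actual negotiated PD power of your devices when planning.

POE_BUDGET_W = 62.0   # maximum PoE power consumption of the DHGE_4E card

def check_poe_budget(pd_loads_w):
    """Return (total_power, within_budget) for a list of per-port PD loads in watts."""
    total = sum(pd_loads_w)
    return total, total <= POE_BUDGET_W

# Example mixture on the four ports: two IP phones (~7 W), one camera (~13 W),
# one PoE+ access point (~25 W) -- hypothetical values.
total, ok = check_poe_budget([7.0, 7.0, 13.0, 25.0])
print(f"Total PD load: {total:.1f} W, within 62 W budget: {ok}")   # 52.0 W, True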
The cabling of the DHGE_4E module is directly from the front panel with four RJ-45 connectors. Front panel
ports and indicators for the DHGE_4E card are illustrated in the following figure.
Figure 7-30: DHGE_4E front panel
7.3.14 DHGE_8
The DHGE_8 is a data hybrid card that supports up to 8 x GbE/FX ports connected to the packet switching
matrix (CSFP for 8 ports, SFP for 4 ports).
NOTES:
Support for dense 100Base-X was added as of V7 with CTFE_xx transceivers.
When the DHGE_8 is installed in the NPT-1020, only 4 x GbE interfaces are supported on
the card and only TDM and EoS/MoT cards are supported by the EXT-2U.
The cabling of the DHGE_8 module is directly from the front panel with four SFP or CSFP transceivers. The
card ports are grouped in pairs: P1~P5, P2~P6, P3~P7, and P4~P8. Each pair can house one SFP or one CSFP.
Each SFP supports one optical GbE/FX port, totaling 4 x GbE/FX ports in a card. Each CSFP supports two
GbE/FX ports, totaling 8 x GbE/FX ports in a card. A mixture of SFP and CSFP transceivers in the same card is
also supported.
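The pairing rule (P1~P5, P2~P6, P3~P7, P4~P8) simply offsets the second port of each transceiver position by the number of positions. The hypothetical helper below makes that mapping explicit; the function name and structure are illustrative only and are not part of any management interface.

# Illustrative mapping of DHGE_8 transceiver positions to front-panel ports.
# Position i (1..4) serves port i with an SFP, or ports i and i+4 with a CSFP.

NUM_POSITIONS = 4  # physical transceiver positions on the DHGE_8

def ports_for_position(position: int, transceiver: str):
    """Return the GbE/FX port numbers served by a transceiver position."""
    if not 1 <= position <= NUM_POSITIONS:
        raise ValueError("DHGE_8 has positions 1..4")
    if transceiver == "SFP":
        return [position]                             # one port per SFP
    if transceiver == "CSFP":
        return [position, position + NUM_POSITIONS]   # paired port, e.g. P1~P5
    raise ValueError("transceiver must be 'SFP' or 'CSFP'")

print(ports_for_position(1, "CSFP"))  # [1, 5]
print(ports_for_position(3, "SFP"))   # [3]
# A mixed population of SFPs and CSFPs yields between 4 and 8 active ports.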
Figure 7-31: DHGE_8 front panel
7.3.15 DHGE_16
The DHGE_16 is a data hybrid card that supports up to 8 x 10/100/1000BaseT ports and 8 x GbE/FX ports
with connection to the packet switching matrix (CSFP support for 8 optical ports, SFP for 4 optical ports).
The module supports MPLS by appropriate licensing. The card supports 1588v2 master, slave, and
transparent modes.
Table 7-59: DHGE_16 modules, 1000Base-X/100Base-FX and 10/100/1000BaseT interfaces per platform
The module occupies a double slot in the Tslot module space. A spacer between the slot pairs must be
removed to enable the installation of the DHGE_16. The procedure for removing this spacer is described in
each platform's Installation, Operation, and Maintenance Manual.
Ports P1 to P8 are RJ-45 connectors for 8 x 10/100/1000BaseT electrical interfaces. Ports P9 to P16 are
grouped in pairs: P9~P13, P10~P14, P11~P15, and P12~P16. Each pair position can house one SFP or one
CSFP transceiver, supporting one 1000Base-X/100Base-FX port (for SFP) or two bidirectional 100/1000Base-
X ports (for CSFP).
Figure 7-32: DHGE_16 front panel
7.3.16 DHGE_24
The DHGE_24 is a data hybrid card that supports up to 24 x GbE/FX ports connected to the packet switching
matrix (CSFP/SFP support).
Table 7-61: DHGE_24 modules, 1000Base-X/100Base-FX and 10/100/1000BaseT interfaces per platform
The module occupies a double slot in the Tslot module space. A spacer between the slot pairs must be
removed to enable the installation of the DHGE_24. The procedure for removing this spacer is described in
the platform Installation, Operation, and Maintenance Manuals.
The card ports are grouped in pairs: P1~P13, P2~P14, P3~P15, and so on, to P12~P24. Each pair position can
house one SFP or one CSFP transceiver, supporting one 1000Base-X/100Base-FX port (for SFP) or two
bidirectional 1000Base-X ports (for CSFP).
Figure 7-33: DHGE_24 front panel
7.3.17 DHXE_2
The DHXE_2 is a data hybrid card that supports up to 2 x 10GbE ports connected to the packet switching
matrix. Each 10GbE interface can be configured as 10GBase-R, 10GBase-W, or 10GBase-R over OTU2e. The
card supports FEC and EFEC (I4, I7) with OTU2e wrapping.
Platform   Max. DHXE_2 modules   Max. 10GbE interfaces   Installed into slots
NPT-1050   3   6   All Tslots
NPT-1200 (with CPS100)   3   6   All Tslots except TS5
NPT-1200 (with CPS320)   6   12   All Tslots except TS5
The cabling of the DHXE_2 module is directly from the front panel with two SFP+ transceivers. The card has
two positions for installing SFP+ transceivers.
Figure 7-34: DHXE_2 front panel
7.3.18 DHXE_4
The DHXE_4 is a data hybrid card that supports up to 4 x 10GbE ports connected to the packet switching
matrix.
The cabling of the DHXE_4 module is directly from the front panel with four SFP+ transceivers. The card has
four positions for installing SFP+ transceivers.
Figure 7-35: DHXE_4 front panel
7.3.19 DHXE_4O
The DHXE_4O is a data hybrid card that supports up to 4 x 10GbE/OTU2e ports with OTN wrapping,
connected to the packet switching matrix. Each 10GbE interface can be configured as 10GBase-R, 10GBase-
W, or 10GBase-R over OTU2e. The card supports FEC and EFEC (I4, I7) with OTU2e wrapping.
The cabling of the DHXE_4O module is directly from the front panel with four SFP+ transceivers. The card
has four positions for installing SFP+ transceivers.
Figure 7-36: DHXE_4O front panel
The card's features and functions are described in the following section.
7.4.1 NFVG_4
The NFVG_4 card is a common Tslot card for Neptune platforms that can implement various VNFs
(embedded NFV solution). NFVG_4 is a single slot NFV card with four GE ports. It can be installed in any
Neptune platform with Tslots. The maximum connection bandwidth to the central packet switch is 4 x 1 GbE;
this may be reduced by NP NIF resource limitations (due to the dynamic allocation mechanism).
The following figure shows the NFVG_4 general view.
Figure 7-37: NFVG_4 general view
GE4/MNG (Port 4) on the front panel is used for management. The management can be IB (in-band) or
OOB (out-of-band). When Port 4 is used for IB management (always with untagged traffic), data networks
can also be created on it (in this case tagged traffic is mandatory).
The following figure shows the NFVG_4 front panel.
Figure 7-38: NFVG_4 front panel
The VNF traffic processing is based on an Intel x86 (E3-1105Cv2) CPU with a DH8903CC PCH (Platform
Controller Hub), 2 x 8 GB DDR-3, and a 64 GB MLC SSD. This system can process the packets from 4 x GbE
ports arriving through the i350 Ethernet controller. The GbE lanes can come from the front panel ports (SFP
based) or from internal backplane SGMII ports. To support flexible routing, all GE lanes from the panel and
backplane are connected to the matrix (X-point), which supports traffic path provisioning.
The FPGA block mainly implements the control interfaces that allow the MCP to manage the NFVG_4 card
in a Tslot of Neptune platforms, as well as the timing interfaces to/from the CPS/CIPS. The NFVG_4 includes
an IDPROM that the MCP reads via IIC to identify the card.
The NFVG_4 block diagram includes the following main parts:
Power supply
Traffic subsystem
Control subsystem
Timing
Backplane and front panel Interfaces
Because the NFVG_4 has to be supported in all Neptune platforms with Tslots (excluding the NPT-1020), and
different Neptune platforms have different control interfaces, the control interface of the card must be
flexible and compatible with all supported Neptune platforms.
NOTE: In general it is recommended to install NFVG_4 cards as close as possible to the cooling
fans of the platform.
Backplane connectivity of Neptune platforms should be considered when planning a system with NFVG_4
cards. The connectivity of the platforms is as follows:
NPT-1020: 2 x 1 GbE
NPT-1050: 4 x 1 GbE in all Tslots
[Recommended to install up to two cards in slots TS2/TS3]
NPT-1200 with CPS320:
2 x 1 GbE in TS2/3/4/7
4 x 1 GbE in TS1/6
NPT-1200 with CPS100, CPTS100, or CPTS320 matrix card: 4 x 1 GbE in all Tslots
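For planning purposes, the backplane GbE lane count per Tslot can be captured in a small lookup, as sketched below. The table simply restates the figures listed above; the function and dictionary names are illustrative only and do not correspond to any management interface.

# Illustrative lookup of NFVG_4 backplane connectivity (1 GbE lanes per Tslot),
# restating the figures listed above for planning purposes.

BACKPLANE_GBE_LANES = {
    ("NPT-1020", None): {"*": 2},                        # 2 x 1 GbE
    ("NPT-1050", None): {"*": 4},                        # 4 x 1 GbE in all Tslots
    ("NPT-1200", "CPS320"): {"TS1": 4, "TS6": 4, "*": 2},  # 4 lanes in TS1/TS6, 2 elsewhere
    ("NPT-1200", "CPS100"): {"*": 4},                    # also CPTS100/CPTS320 matrix cards
}

def nfvg4_backplane_lanes(platform, matrix, tslot):
    """Return the number of 1 GbE backplane lanes available to an NFVG_4 card."""
    slots = BACKPLANE_GBE_LANES[(platform, matrix)]
    return slots.get(tslot, slots["*"])

print(nfvg4_backplane_lanes("NPT-1200", "CPS320", "TS1"))  # 4
print(nfvg4_backplane_lanes("NPT-1200", "CPS320", "TS3"))  # 2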
The NFVG_4 supports the following working modes:
1. Local:
Input and output traffic via the front panel ports
No backplane connectivity to the NPT platform
The NPT provides power and basic management
2. Service port on NFV card:
The panel port is a service port for the NPT platform
Traffic flows to/from the NPT ports and the NFV panel ports
Normal mode: possible NFV processing
Transparent mode: no NFV processing is done
UNI, NNI, MoE, CESoETH, CESoMPLS
CES is a particular case of MPLS/PB
SCADA may be over ETH or TDM
TDM traffic cannot be analyzed (GFP demapping would be required)
3. Inline NFV:
The Neptune service runs between the NPT ports
The NFV is inline data path (using the NFV card backplane connectivity)
Per Port mode: All VSIs on the port are NFV enabled
Per VSI mode: Some VSIs on a port are NFV enabled
4. Mirroring:
The NPT service runs between the NPT ports
Ports are mirrored to the NFV card (using the NFV backplane connectivity)
NOTE: Mirroring applications will be supported in later versions of the NFV cards.
The Compact Small Form Factor Pluggable (CSFP) optical transceiver is a bidirectional single-fiber optical
module designed for high density platforms. This module supports a new technology that combines two
single-fiber duplex/bidirectional transmissions in an SFP form factor, doubling the port density of current
equipment and line cards. Moreover, the CSFP has very low power consumption and complies with
green environmental technology. The CSFP can work with bidirectional SFPs as well as regular
unidirectional SFPs (with a Y-cable). CSFP transceivers are available at two rates:
CTGbE for GbE interfaces
CTFE for FX interfaces (supported as of V7.0)
The transceiver modules are used for the entire spectrum of interfaces, including intraoffice, short, and
long ranges, and the interchangeable transceiver components are utilized throughout the product line. The
standardized modular design of the transceiver components facilitates network maintenance and upgrades.
Instead of replacing an entire circuit board, a single module can be removed or replaced, a considerable
cost savings.
All transceivers provide power monitoring capabilities. The SFPs for STM-1/STM-4/STM-16/GE/10GE/100GE
have the added ability to use low-cost colored interfaces (C/DWDM), further reducing maintenance costs.
Transceivers provide a significant advantage for the cards used in Neptune platforms.
Figure 7-41: Transceiver examples
Neptune platforms offer ETR-1 SFP electrical transceivers that enable full duplex STM-1 electrical (155
Mbps) SDH transport over coaxial cables. The interface is fully compliant with ITU-T G.703 signal
specifications. This electrical transceiver is interchangeable with STM-1 optical SFP modules, providing easy
migration to STM-1 electrical interfaces. With this SFP module, any system that already supports STM-1
optical SFPs can also support STM-1 electrical.
Neptune platforms also offer ETGbE SFP electrical transceivers that enable full duplex 1000BaseT electrical
traffic over CAT5E SFTP copper cables. The interface is fully compliant with 1000BaseT and GbE standards
as per IEEE 802.3. This electrical transceiver is interchangeable with GbE optical SFP modules, providing
easy migration to 1000BaseT electrical interfaces. With this SFP module, any system that already supports
GbE optical SFPs can now also support 1000BaseT.
The ETGbE enables a mix of electrical and optical interfaces on the same module for DMGE_4_L1,
DMGE_2_L2, DMGE_4_L2, DMGE_8_L2, DMXE_22_L2, DMXE_48_L2, DHGE_8, DHGE_16, and DHGE_24
optical modules.
Neptune's comprehensive support for both electrical and optical transceiver modules improves equipment
flexibility, reducing inventory and spare part expenses. Switching between optical and electrical SFPs can be
done in the field while the system is in operation to optimize the system port types for the application.
Neptune platforms support hot insertion that is non-traffic-affecting.
For detailed information about the transceiver options available with Neptune platforms, please refer to
the Neptune System Specifications.
NOTE: For help with slot or card reassignments and product replacement or upgrades,
contact ECI customer support.
NOTE: The EXT-2U expansion unit can be combined with most of the Neptune platforms.
For easier reading, the slot layout is not repeated in the sections describing each of those
platforms. The reader is simply referred back to this slot layout description.
The EXT-2U expansion unit is housed in a 243 mm deep, 465 mm wide, and 88 mm high equipment cage
with all interfaces accessible from the front of the unit. The expansion unit includes its own independent
power supply and fan unit, for additional reliability and security. The platform includes the following
components:
Three multipurpose slots (ES1 to ES3) for any combination of extractable traffic cards. PCM, TDM,
ADM, Ethernet, and CES traffic are all handled through cards in these traffic slots. All interfaces are
configured through convenient SFP modules, supporting up to 2.5G or 2GbE traffic per slot. Each slot
in the EXT-2U has a TDM capacity of up to 16 x VC-4s; the total capacity of the EXT-2U is 48 x VC-4s.
Two slots for INF power supply units. There are two units for system redundancy. Note that the INF
modules are extractable in the EXT-2U.
One FCU fan unit consisting of multiple separate fans to support cooling system redundancy.
The following figure shows the slot layout for the EXT-2U platform.
Figure 9-2: EXT-2U slot layout
Typical power consumption of the EXT-2U is less than 150 W. Power consumption is monitored through the
management software. For more information about power consumption requirements, see the
corresponding Neptune platform Installation and Maintenance Manual and the Neptune System
Specifications.
9.1.1 INF_E2U
The INF_E2U is a DC power-filter module that can be plugged into the EXT-2U platform. Two INF_E2U
modules are needed for power feeding redundancy. The module performs the following functions:
Single DC power input and power supply for all modules in the EXT-2U
Input filtering function for the entire EXT-2U platform
Adjustable output voltage for fans in the EXT-2U
Indication of input power loss and detection of under-/over-voltage
Shutting down of the power supply when under-/over-voltage is detected
Supplies up to 503 W of power
Figure 9-3: INF_E2U front panel
9.1.2 AC_PS-E2U
The AC_PS-E2U is an AC power module that can be plugged into the EXT-2U platform. It performs the
following functions:
Converts AC power to DC power for the EXT-2U
Filters input for the entire EXT-2U platform
Supplies adjustable output voltage for fans in the EXT-2U
Supplies up to 180 W of power, with an AC input range of 100-240 VAC
Figure 9-4: AC_PS-E2U front panel
NOTE: When using the MPoE_12G with PoE+ functionality with AC_PS-E2U feeding, check the
power consumption calculation. Only one card of this type is allowed.
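Because the AC_PS-E2U supplies at most 180 W, a shelf populated with an MPoE_12G delivering PoE+ power should be checked against that limit. The sketch below shows the kind of check intended; the per-card consumption figures are hypothetical placeholders and must be replaced with the values from the Neptune System Specifications.

# Illustrative EXT-2U power budget check for AC feeding (AC_PS-E2U: up to 180 W).
# All per-card consumption figures below are hypothetical placeholders; take
# the real values from the Neptune System Specifications before planning.

AC_PS_E2U_LIMIT_W = 180.0

def check_ext2u_ac_budget(card_loads_w, poe_load_w=0.0):
    """Sum card consumption and delivered PoE power, compare to the AC supply limit."""
    total = sum(card_loads_w) + poe_load_w
    return total, total <= AC_PS_E2U_LIMIT_W

# Hypothetical example: MPoE_12G (~30 W), one PE1_63 (~20 W), FCU (~10 W),
# plus 60 W of PoE+ power delivered to attached devices.
total, ok = check_ext2u_ac_budget([30.0, 20.0, 10.0], poe_load_w=60.0)
print(f"Total draw: {total:.1f} W, within AC_PS-E2U limit: {ok}")  # 120.0 W, True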
9.1.3 FCU_E2U
The FCU_E2U is a pluggable fan control module with four fans for cooling the EXT-2U platform. The fans’
running speed can be low, medium, or turbo and is controlled by the corresponding MCP card in the base
platform according to the environmental temperature and fan failure status.
Figure 9-5: FCU_E2U front panel
9.2.1 PE1_63
The PE1_63 is an electrical traffic card with 63 x E1 (2 Mbps) balanced electrical interfaces that supports
retiming of up to 63 x E1s. A maximum of three PE1_63 cards can be installed in one EXT-2U platform. The
PE1_63 supports LOS inhibit functionality (very low sensitivity signal detection), which means that the LOS
alarm is masked for signals down to a level of -20 dB.
The cabling of the PE1_63 card is directly from the front panel with three twin 68-pin VHDCI female
connectors.
Figure 9-6: PE1_63 front panel
9.2.2 P345_3E
The P345_3E is an electrical traffic card with 3 x E3/DS-3 (34 Mbps/45 Mbps) electrical interfaces. A
maximum of three P345_3E cards can be installed in one EXT-2U platform.
The cabling of the P345_3E card is directly from the front panel with DIN 1.0/2.3 connectors.
Figure 9-7: P345_3E front panel
NOTE: ACTIVE, FAIL, and ALARM LEDs are combined to show various failure reasons during
the expansion card boot. For details, see the Troubleshooting Using Component Indicators
section in the NPT-1200 Installation, Operation, and Maintenance Manual.
9.2.3 S1_4
The S1_4 card is an SDH expansion card with four STM-1 (155 Mbps) interfaces (either optical or electrical).
Each SFP housing in the S1_4 supports three types of SFP modules, as follows:
SFP STM-1 optical transceivers with a pair of LC optical connectors. Interfaces can be S1.1, L1.1, or
L1.2, depending on the SFP module.
SFP STM-1 electrical transceivers with a pair of DIN 1.0/2.3 connectors.
SFP STM-1 optical transceivers with one LC optical connector (bidirectional STM-1 TX/RX over a single
fiber using two different lambdas). The wavelength of the Tx laser can be 1310 nm (BD3) or
1550 nm (BD5).
The four STM-1 interfaces in the S1_4 can be assigned using these three SFP module types independently.
Figure 9-8: S1_4 front panel
NOTE: ACTIVE, FAIL, and ALARM LEDs are combined to show various failure reasons during
the expansion card boot. For details, see the Troubleshooting Using Component Indicators
section in the NPT-1200 Installation, Operation, and Maintenance Manual.
9.2.4 S4_1
The S4_1 is an SDH expansion card with one STM-4 (622 Mbps) interface. The SFP housing in the S4_1
supports the following SFP modules:
SFP STM-4 optical transceivers with a pair of LC optical connectors. Interfaces can be S4.1, L4.1, or
L4.2, depending on the SFP module.
Figure 9-9: S4_1 front panel
NOTE: ACTIVE, FAIL, and ALARM LEDs are combined to show various failure reasons during
the expansion card boot. For details, see the Troubleshooting Using Component Indicators
section in the Neptune Installation, Operation, and Maintenance Manuals.
9.2.5.1 OM_BA
The OM_BA is a single channel booster amplifier module with constant output power for links up to 10
Gbps. The OM_BA can be installed in the Optical base card (OBC) wide sub-slots. Up to two modules can be
installed in each OBC, totaling six modules in an EXT-2U platform.
The module has two LC connectors: Rx (input), and Tx (output), protected by a spring-loaded cover.
9.2.5.2 OM_PA
The OM_PA is a single channel amplifier working in Channel 35 of the C-band for links up to 10 Gbps. The
amplifier works in a constant power mode and provides a power output of -15 dBm. The OM_PA can be
installed in the Optical base card (OBC) wide sub-slots. Up to two modules can be installed in each OBC,
totaling six modules in an EXT-2U platform.
The module can be connected in two link applications:
Receives optical signals from an SFP/XFP transmitter, with the preamplifier connected before the
receiver. In this mode the module is capable of delivering signals over 80 to 120 km.
Includes a booster amplifier after the SFP/XFP transmitter, with the preamplifier connected before the
receiver. In this option the total power budget enables the amplifier to deliver signals over 120 km to
180 km.
Indicator   Function
Tx Active   Green indicator, lights when the module's output power is at a normal level.
LOS         Loss of signal indicator, normally off. Lights red when the stage input signal is missing or is too low for normal operation.
AC          Green indicator, lights when the card is powered and running normally.
FL          Red indicator, lights when a general fault condition is detected.
The module has two LC connectors: Rx (input), and Tx (output), protected by a spring-loaded cover.
9.2.5.3 OM_ILA
The OM_ILA is a DWDM amplifier working in the C-band for links up to 44/88 channels. It is a fixed 21 dB
gain EDFA based DWDM amplifier for links of up to 500 km with up to 80 channels. The OM_ILA can be
installed in the Optical base card (OBC) wide sub-slots. Up to two modules can be installed in each OBC,
totaling six modules in an EXT-2U platform.
The OM_ILA provides the following main functions:
Operation as a preamplifier, booster, or inline amplifier
Output power of 16 dBm with a gain of 21 dB
Minimum input power of -24 dBm
Monitoring and alarms
Support for DWDM filters (Mux/DeMux or OADM) in a separate Artemis shelf
Indicator   Function
Tx Active   Green indicator, lights when the module's output power is at a normal level.
LOS         Loss of signal indicator, normally off. Lights red when the stage input signal is missing or is too low for normal operation.
AC          Green indicator, lights when the card is powered and running normally.
FL          Red indicator, lights when a general fault condition is detected.
The module has two LC connectors: Rx (input), and Tx (output), protected by a spring-loaded cover.
9.2.5.4 OM_LVM
The OM_LVM is a DWDM two stage VGA amplifier working in the C-band for links up to 44/88 DWDM
channels. The module includes a 20.5 dBm variable gain EDFA with mid-stage access (MSA). The OM_LVM
can be installed in the Optical base card (OBC) wide sub-slots. Up to two modules can be installed in each
OBC, totaling six modules in an EXT-2U platform.
The OM_LVM provides the following main functions:
Operation as a preamplifier, booster, or inline amplifier
Output power of 20.5 dBm with a variable gain of 15 to 30 dB
Minimum input power of -28 dBm
Monitoring and alarms
Support for DWDM filters (Mux/DeMux or OADM) in a separate Artemis shelf
Indicator   Function
OUT         Green indicator, lights when the EDFA output power is at a normal level.
IN          Red indicator, lights when the EDFA input power is missing or is too low for normal operation.
ACT         Green indicator, lights when the card is powered and running normally.
FAIL        Red indicator, lights when a general fault condition is detected.
9.2.5.5 OM_DCMxx
The OM_DCMxx is a micro dispersion compensation module used to correct excessive dispersion on long
fibers. The OM_DCMxx is available for several distance ranges: 40, 80, and 100 km (xx in the module name
designates the distance in km). The OM_DCMxx can be installed in the Optical base card (OBC) narrow sub-
slot. One module can be installed in the OBC, totaling three modules in an EXT-2U platform.
The module has two LC connectors: Rx (input), and Tx (output), protected by a spring-loaded cover.
9.2.6 MXP10
The MXP10 is a muxponder base card supporting up to 12 built-in (CSFP-based) client interfaces, which are
multiplexed into a G.709 multiplexing structure and sent via two OTU-2/2e line interfaces. It can be installed
in the Eslots of EXT-2U platforms; up to three MXP10 cards can be installed in an EXT-2U.
The MXP10 can also be configured to operate as a transponder where it can map any 10 GbE/STM-64/FC-
800/FC-1200 signal into an OTU2/2e line.
In addition, the MXP10 has an optical module slot for installing an OM_AOC4. This module expands the
client interface capacity by 4 additional ports, totaling 16 client ports per MXP10.
Any of the client interfaces can be configured to accept an STM-1, STM-4, STM-16, GbE, FC/FC2/FC4, OTU-
1, or HD-SDI signal. The card has integrated cross-connect capabilities, providing more efficient utilization
of the lambda. Any of the signals can be added or dropped at each site, while the rest of the traffic
continues on to the next site. Broadcast TV services can be dropped and continued (duplicated), eliminating
the need for external equipment to provide this functionality.
Hardware protection is supported using a pair of MXP10 cards, configured in slots ES1 and ES2 of the EXT-
2U. In protection mode, each service is connected to both MXP10 cards by splitters/couplers. A traffic or
equipment failure triggers a switch to the protection card.
The MXP10 is a single Eslot card with the following main features:
12 CSFP-based client ports, software configurable to support GbE, FC/FC2/FC4, STM-1, STM-4, STM-
16, and OTU-1 services
Client interfaces can be expanded by four, by installing an OM_AOC4 module in the card's optical module slot
Two independent SFP+ based OTU-2/2e line ports
Can be used as a multi-rate combiner up to OTU-2/2e
Can be used as a multi OTU-1 transponder – up to 5
Can operate as two separate muxponders with sets of eight clients multiplexed into one OTU-2 line
Can operate as 5 separate 2.5G muxponders, with up to 5 clients multiplexed into an OTU-1 line
Regeneration mode is supported for OTU-2 (single) and OTU-1 (up to 5)
Any mix of functionality is supported, as long as the occupied resources do not exceed the MXP10 OTN
capacity of 40G (see the sketch after this list)
Per port HW protection
Supports G.709 FEC for OTU-1, and G.709 FEC and EFEC (I.4 and I.7) for OTU-2, as well as ignore-FEC modes
towards the line
Supports Subnetwork Connection Protection (SNCP) mechanisms
Complies with ITU-T standards for 50 GHz and 100 GHz multichannel spacing (DWDM)
Supports two GCC channels, one for each OTU-2 interface, to allow management over the OTN interface
Supports in-service module insertion and removal without any effect on other active ports
Supports interoperability with Apollo AoC cards
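A minimal sketch of the capacity rule referenced in the feature list above: the resources occupied by the configured client functions must not exceed the card's 40G OTN capacity. The per-function rates used here are approximate ODU payload rates chosen only for illustration; the actual resource accounting of the MXP10 may differ.

# Illustrative MXP10 resource check against its 40G OTN capacity.
# The per-function rates are approximate ODU-level figures used only for
# illustration; the card's internal resource accounting may differ.

OTN_CAPACITY_GBPS = 40.0

APPROX_RATE_GBPS = {      # rough ODU-level rates per configured function
    "ODU-1": 2.5,         # e.g. a 2.5G muxponder group or OTU-1 transponder
    "ODU-2": 10.0,        # e.g. one OTU-2/2e muxponder or transponder
}

def within_mxp10_capacity(functions):
    """functions: list of configured functions, e.g. ['ODU-2', 'ODU-2', 'ODU-1'].
    Returns (total_gbps, fits) against the 40G OTN capacity."""
    total = sum(APPROX_RATE_GBPS[f] for f in functions)
    return total, total <= OTN_CAPACITY_GBPS

# Example: two OTU-2 muxponders plus four OTU-1 transponders -> 30 Gbps, fits.
print(within_mxp10_capacity(["ODU-2", "ODU-2", "ODU-1", "ODU-1", "ODU-1", "ODU-1"]))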
The cabling of the MXP10 card is directly from the front panel. It includes six positions for installing CSFP
client transceivers; the client ports are paired as follows: P1~P7, P2~P8, P3~P9, P4~P10, P5~P11, and P6~P12.
Each position can house one CSFP, and each CSFP supports two configurable ports, totaling 12 client ports on
the base card. In addition, the MXP10 has two positions for installing SFP transceivers that serve the line ports.
Figure 9-16: MXP10 front panel
9.2.6.1 OM_AOC4
The OM_AOC4 is an optical ADM-on-a-card module for installation in the MXP10 card. It enables the MXP10
capacity to be expanded by four client ports.
The OM_AOC4 module provides 4 client ports; each port can be configured to operate as one of the
following interfaces:
STM-1/STM-4/STM-16
GbE
FC1/2/4/8
HD-SDI
ODU-1
When operating in the base card, each port supports the same functionality as the client ports incorporated
on the MXP10.
The following figure shows the front panel of the OM_AOC4.
Figure 9-17: OM_AOC4 front panel
9.2.7 DHFE_12
The DHFE_12 is a data hybrid card that supports up to 12 x FE ports with connection to the packet
switching matrix. The cabling of the DHFE_12 module is directly from the front panel with RJ-45 based
connectors.
NOTES:
When installed in an NPT-1020, the DHFE_12 card can support up to 8 x FE ports.
When installed in an NPT-1020 with a CPS50 card, the DHFE_12 card can support up to 12
x FE ports.
When installed in an NPT-1200 or NPT-1050, the base unit max GE fan out is decreased by
16 ports.
The DHFE_12 is not supported in NPT-1800, NPT-1300, NPT-1200 with
MCIPS320/MCIPS560, and NPT-1050 with MCIPS300.
9.2.8 DHFX_12
The DHFX_12 is a data hybrid card that supports up to 12 x 100Base-FX ports with connection to the packet
switching matrix. The cabling of the DHFX_12 module is directly from the front panel with SFP based slots.
NOTES:
When installed in an NPT-1020, the DHFX_12 card can support up to 8 x 100Base-FX ports.
When installed in an NPT-1020 with a CPS50 card, the DHFX_12 card can support up to 12
x 100Base-FX ports.
When installed in an NPT-1200 or NPT-1050, the base unit max GE fan out is decreased by
16 ports.
The DHFX_12 is not supported in NPT-1800, NPT-1300, NPT-1200 with
MCIPS320/MCIPS560, and NPT-1050 with MCIPS300.
9.2.9 MPS_2G_8F
The MPS_2G_8F is an EoS metro Ethernet L2 switching card with MPLS capabilities. It includes 8 x
10/100BaseT LAN interfaces, 2 x GbE/FE combo LAN interfaces, and 64 EoS WAN interfaces. The total WAN
bandwidth is up to 4 x VC-4. A maximum of three MPS_2G_8F cards can be installed in one EXT-2U
platform.
Figure 9-20: MPS_2G_8F front panel
9.2.10 MPoE_12G
The MPoE_12G is a metro Ethernet L2 switching card with MPLS capabilities and Power over Ethernet (PoE)
support. It can be installed in the EXT-2U, providing four GbE/FX and eight 10/100/1000BaseT interfaces
with Power over Ethernet functionality (IEEE 802.3af and IEEE 802.3at). It provides Layer 1 and Layer 2 with
MPLS-TP switch functionality (64 EoS WAN interfaces) over native Ethernet (MoE) and SDH (MoT) virtual
concatenated streams. It is suitable for powering IP phones, IP cameras, and RF all-outdoor units directly
from the Ethernet port.
The card supports 1588v2 master, slave, and transparent modes. It provides up to 64 EoS WAN interfaces
and the total WAN bandwidth is up to 4 x VC-4. A maximum of three MPoE_12G cards can be installed in
one EXT-2U platform.
9.2.11 DMCE1_32
The DMCE1_32 is a CES multiservice card that provides CES for up to 32 x E1 interfaces. It supports the
SAToP and CESoPSN standards and has two SCSI 68-pin connectors for connecting the E1 customer signals
on the front panel.
Connectivity to the packet network is made through one of the following options:
Direct 1.25G SGMII connection to the central packet switch on CPS cards through the backplane.
Connection to 3rd party device (router/switch) through the combo GbE port on the front panel,
working in standalone mode with CESoETH and CESoIP/UDP encapsulation.
Connectivity to the packet network is thus made either through the backplane connection (to the central
packet switch) or through the combo GbE port on the front panel.
9.2.12 SM_10E
The SM_10E is a multiservice access card platform that introduces various 64 Kbps, N x 64 Kbps PCM
interfaces, and DXC1/0 functionality. It provides the mappers for up to 44 E1s, and a DXC1/0 with a total
capacity of 1,216 DS-0 x 1,216 DS-0. There are three module slots, each of which accommodates traffic
bandwidth of six E1s per slot. Through the configuration of different types of traffic modules, the SM_10E
can provide up to 24 channels of different types of PCM interfaces, such as FXO, FXS, 2W, 4W, 6W, E&M,
V.24, V.35, V.11, Omni, V.36, RS-422, RS-449 C37.94, EoP, and codirectional 64 Kbps interfaces. A maximum
of three SM_10E cards can be installed in one EXT-2U platform.
The SM_10E base card has no external interfaces. Each traffic module for the SM_10E has its own external
interfaces on its front panel.
Figure 9-23: SM_10E front panel
9.2.13 EM_10E
The EM_10E is a multiservice access card that introduces various 64 Kbps, N x 64 Kbps PCM interfaces, and
DXC1/0 functionality. It provides the mappers for up to 16 E1s, and a DXC1/0 with a total capacity of 589
DS-0 x 589 DS-0. There are three module slots, each of which accommodates traffic bandwidth of six E1s
per slot. Through the configuration of different types of traffic modules, the EM_10E can provide up to 24
channels of different types of PCM interfaces, such as FXO, FXS, 2W, 4W, 6W, E&M, V.24, V.35, V.11, Omni,
V.36, RS-422, RS-449 C37.94, and codirectional 64 Kbps interfaces. A maximum of three EM_10E cards can
be installed in one EXT-2U platform.
The EM_10E base card has no external interfaces. Each traffic module for the EM_10E has its own external
interfaces on its front panel.
Figure 9-24: EM_10E front panel
Three traffic module slots are available in an EM_10E card to accommodate the various types of PCM traffic
modules.
The following EM_10E/SM_10E traffic modules are supported:
SM_FXO_8E : Traffic module for eight FXO interfaces.
SM_FXS_8E : Traffic module for eight FXS or FXD interfaces. Each interface can be set to FXS or FXD
independently.
SM_EM_24W_6E : Traffic module for six 24W E&M interfaces. Each interface can be set to 2W, 4W,
6W, 2WE&M, or 4WE&M independently.
SM_V24E : Traffic module for V.24 interfaces (RS232) that supports three modes: Transparent (eight
channels), Asynchronous with controls (four channels), and Synchronous with controls (two
channels). Both point-to-point and point-to-multipoint services are supported.
SM_V35_V11 : Traffic module for two V.35/V.11/V.24/V.36/RS-422/RS-449 (64 Kbps only) compatible
interfaces with full controls. Each interface can independently be configured as V.35 or V.11/X.24 or
V.24 64 Kbps.
SM_CODIR_4E : Traffic module for four codirectional 64 Kbps (G.703) interfaces.
SM_OMNI_E : Traffic module for one OmniBus 64 Kbps interface.
SM_IO18 : Traffic module for 18 input/output configurable ports (dry contacts) for utilities
teleprotection interfaces.
SM_C37.94S/SM_C37.94D : Traffic module for two teleprotection (IEEE C37.94) interfaces. Includes
support for two SFP-based interfaces.
SM_C37.94 : Traffic module for two teleprotection (IEEE C37.94) interfaces.
SM_EOP : SM_10E traffic module for Ethernet data. It supports standard EoP functionality of E1 VCAT,
GFP and LCAS.
Additional types of EM_10E traffic modules will be supported in future versions. Each EM_10E traffic
module can be inserted into any of the three module slots in the EM_10E. All EM_10E traffic modules
support live insertion.
Each module provides corresponding traffic interfaces through a SCSI-36 connector on its front panel. The
cabling of these interfaces can be directly via the SCSI-36 connector, or via the corresponding ICP that
connects the SCSI-36 connector through a special cable.
Figure 9-25: Example of an EM_10 traffic module
9.2.13.1 SM_FXO_8E
SM_FXO_8E is a traffic module with eight FXO interfaces for the SM_10E/EM_10E card. Up to three
modules can be configured in one SM_10E/EM_10E card, totaling 24 FXO interfaces. The SM_FXO_8E
provides telephone line interfaces for the central office side.
9.2.13.2 SM_FXS_8E
SM_FXS_8E is a traffic module with eight FXS or FXD interfaces for the SM_10E/EM_10E card. Each
interface can be set to FXS or FXD independently. Up to three modules can be configured in one
SM_10E/EM_10E card, totaling 24 FXS or FXD interfaces. The SM_FXS_8E provides telephone line interfaces
for the remote side.
9.2.13.3 SM_EM_24W_6E
SM_EM_24W_6E is a traffic module with six 2W/4W/6W E&M interfaces for the SM_10E/EM_10E card. It
provides two wire and four wire voice frequency interfaces, with ear and mouth signaling interfaces. Each
interface can be set to 2W, 4W, 6W, 2WE&M, or 4WE&M independently. Up to three modules can be
configured in one SM_10E/EM_10E card, totaling 18 2W, 4W, 6W, 2WE&M, or 4WE&M interfaces.
9.2.13.4 SM_V24E
SM_V24E is a traffic module with V.24 interfaces (RS-232) for the SM_10E/EM_10E. V.24 is a low bit rate
data interface, also known as RS-232. The module supports three modes:
Transparent mode, with eight channels
Asynchronous mode with controls, with four channels
Synchronous mode with controls, with two channels
The SM_V24E supports a wide range of bit rates in two grades (low and high) and three operating modes as
described in the following table.
9.2.13.5 SM_V35_V11
SM_V35_V11 is a traffic module with two V.35/V.11/V.24/V.36/RS-422/RS-449 64 Kbps compatible
interfaces with full controls. Each interface can independently be configured as V.35 or V.11/X.24 and can
be mapped to unframed E1 or N x 64K of framed E1 (the interface rate N is configurable). Up to three
modules can be configured in one SM_10E/EM_10E card, totaling six V.35/V.11/V.24 64 Kbps interfaces.
9.2.13.6 SM_CODIR_4E
SM_CODIR_4E is a traffic module with four codirectional 64 Kbps (per ITU-T G.703) interfaces for the
SM_10E/EM_10E.
9.2.13.7 SM_OMNI_E
SM_OMNI_E is a traffic module with OmniBus functionality and four 2W/4W interfaces for the
SM_10E/EM_10E. Each interface can be set to 2W or 4W mode by the management.
Omnibus is a special interface for railway applications, featuring P2MP communications. This interface is very
similar in nature to SDH OW.
9.2.13.8 SM_IO18
The SM_IO18 is a sub module of the SM_10E/EM_10E, which provides 18 dry contact ports and is used for
substation alarm monitoring and control. Each port can be defined as input or output by configuration:
Input port:
Port name and severity are configurable
Monitor type is configurable (alarm or event)
Output port:
Supports manual control
Supports automatic control by associating an input port
9.2.13.9 SM_C37.94S/SM_C37.94D
SM_C37.94S/D sub modules provide two teleprotection interfaces per IEEE C37.94 for the
SM_10E/EM_10E. The interfaces enable transparent communications between different vendors'
teleprotection equipment and multiplexer devices, using multimode optical fibers.
In general, teleprotection equipment is employed to control and protect different system elements in
electricity distribution lines.
Traditionally, the interface between teleprotection equipment and multiplexers in high-voltage
environments at electric utilities was copper-based. This media transfers the critical information to the
network operation center. These high-speed, low-energy signal interfaces are vulnerable to intra-
substation electromagnetic and frequency interference (EMI and RFI), signal ground loops, and ground
potential rise, which considerably reduce the reliability of communications during electrical faults.
The optimal solution is based on optical fibers. Optical fibers don't have ground paths and are immune to
noise interference, which eliminates data errors common to electrical connections.
NOTES:
SM_C37.94S supports two SFP based C37.94 interfaces (OTR2M_MM and OTR2M_SM,
which should be ordered separately).
The SM_C37.94D submodule works with two C37.94D oversampling interfaces (OTR2MD)
based on DC-coupled SFPs, designed for non-standard low baud rate optical interfaces.
9.2.13.10 SM_C37.94
The SM_C37.94 module provides two teleprotection interfaces per IEEE C37.94 for the EM_10E/SM_10E.
The interfaces enable transparent communications between different vendors' teleprotection equipment
and multiplexer devices, using multimode optical fibers.
In general, teleprotection equipment is employed to control and protect different system elements in
electricity distribution lines.
Traditionally, the interface between teleprotection equipment and multiplexers in high-voltage
environments at electric utilities was copper-based. This media transfers the critical information to the
network operation center. These high-speed, low-energy signal interfaces are vulnerable to intra-
substation electromagnetic and frequency interference (EMI and RFI), signal ground loops, and ground
potential rise, which considerably reduce the reliability of communications during electrical faults.
The optimal solution is based on optical fibers. Optical fibers don't have ground paths and are immune to
noise interference, which eliminates data errors common to electrical connections.
9.2.13.11 SM_EOP
The SM_EOP is a traffic module with two Ethernet interfaces for the SM_10E. It provides two 10/100BaseT
interfaces and supports EoP functionality, including N x E1 virtual concatenation, GFP-F encapsulation, and
LCAS. It also supports N x 64K HDLC encapsulation. The total bandwidth of the SM_EoP is four E1s.
Using MPLS-TP as a transport layer for metro Carrier Ethernet services, rather than using Ethernet as both
the transport and service layer, enhances the Ethernet service, enabling it to meet a complete carrier-class
standard. MPLS-TP addresses all key attributes defined by the MEF for Carrier Ethernet:
Hard Quality of Service (QoS), with guaranteed end-to-end (E2E) Service Level Agreements (SLAs) for
business, mobile, and residential users that enables efficient differentiated services, allowing service
providers (SPs) to tailor the level of service and performance to the requirements of their customers
(real-time, mission-critical, BE, etc.), as well as assuring the necessary network resources for
Committed Information Rate (CIR) and Excess Information Rate (EIR).
Reliability, with a robust, resilient network that can provide uninterrupted service across each path.
This includes network protection of less than 50 msec using link/node Fast ReRoute (FRR) and
meeting a five 9s standard of E2E service availability.
Scalability of both services and bandwidth, ranging from megabits to hundreds of gigabits with variable granularity, and hundreds of thousands of flows, supporting controlled scalability for both the number of elements and the number of services on the network.
End to End Service Management through a single comprehensive Network Management System
(NMS) that provisions, monitors, and controls many network layers simultaneously. Advancement in
the management of converged networks takes advantage of the “condensed” transport layer for
provisioning and troubleshooting while presenting operators with tiered physical and technology
views that are familiar and easy to navigate. The comprehensive NMS simplifies operations by
allowing customers and member companies to monitor and/or control well-defined and secure
resource domains with partitioning down to the port.
Security, with a safe environment that protects subscribers, servers, and network devices, blocking
malicious users, Denial of Service (DoS), and other types of attacks. Use of provider network
constraints, as well as complete traffic segregation, ensures the highest level of security and privacy
for even the most sensitive data transmissions.
Figure 10-1: Carrier class Ethernet requirements
MPLS-TP is both a subset and an extension of MPLS, already widely used in core networks. It bridges the
gap between packet and transport worlds by combining the efficiency of packet networks with the
reliability, carrier-grade features, and OAM tools traditionally found in SDH transport networks. MPLS-TP
builds upon existing MPLS forwarding and MPLS-based pseudowires, extending these features with in-band
active and reactive OAM enhancements, deterministic path protection, and a network management-based
static provisioning option. To strengthen transport and management functionality, MPLS-TP excludes
certain functions of IP/MPLS, such as label-switched path (LSP) merge, Penultimate Hop Popping (PHP) and
Equal Cost Multi Path (ECMP).
As the following figure illustrates, MPLS-TP is both a subset of MPLS and an extension of MPLS, tailored for
transport networks.
Figure 10-2: Relationship of MPLS-TP to IP/MPLS
As part of MPLS, MPLS-TP falls under the umbrella of the IETF standards. RFC 5317 outlined the general
approach for the MPLS-TP standard and has been followed by more than 10 additional requirement and
framework RFCs. There are also many more working group documents in the editor's queue or in late-stage
development. Although the technology is not yet fully standardized, operators are comfortable enough with its status to have begun rolling out networks based on MPLS-TP.
MPLS-TP is supported across product lines, enabling E2E QoS assurance across network domains. As a
leader in MPLS-TP technology, we are participating in the standards development process as it unfolds. Our
MPLS-TP components are designed to be future proof, capable of incorporating and supporting new
standard requirements as they are defined.
To meet these evolving demands, telecommunications is moving from voice PSTN to VoIP, from TDM
leased lines to Ethernet VPNs, from TDM-based 2G and 2.5G mobile networks to 3G/4G data networks, and
from simple BE HSI access to advanced triple play networks for SMB, enterprise, and home use.
Today's operator challenge is to build an infrastructure that maximizes bandwidth capacity while
minimizing costs. Network upgrades must be relatively smooth and painless, maintaining revenue flow
from legacy services and not requiring a major change to operations. Operators must be able to provide a
carrier class standard of service with more bandwidth at less cost per bit, and still get a satisfactory ROI.
This is where our solutions come into play. Our All-Native transport solution combines native TDM service
with native Ethernet service and MPLS-TP connectivity, for an NG All-Native Packet-OTS platform that
provides cost efficient transport of both TDM and Ethernet services with the scalability, reliability, and the
strict QoS requirements dictated by modern communication applications.
MPLS-based data services are the basis for the profitable triple play, enterprise, wholesale, and mobile
customer services that are in such demand today. These services are provided in Neptune platforms using
PB technology, PW technology, or mixtures of both technologies. Examples of these services, with simple
explanations of the Neptune implementation features, are provided in the following sections.
E-Tree (Rooted-Multipoint) for point-to-multipoint (P2MP) multicast tree connectivity, designed for
BTV/IPTV services. These include:
Ethernet Private Tree (EP-Tree): In its simplest form, an E-Tree service type provides a single
root for multiple leaf UNIs. Each leaf UNI only exchanges data with the root UNI. This service is
useful and enables very efficient bandwidth use for BTV or IPTV applications, such as
multicast/broadcast packet video. With this approach, separate copies of the packet need to be sent only to leaves that do not share the same branch of the tree.
Ethernet Virtual Private Tree (EVP-Tree): An EVP-Tree is an E-Tree service that provides
rooted-multipoint connectivity across a shared infrastructure supporting statistical multiplexing
and over-subscription. EVP-Tree is used for hub and spoke architectures in which multiple
remote offices require access to a single headquarters, or multiple customers require access to
an internet SP's point of presence (POP).
E-Tree services may be implemented, for example, through an MPLS Rooted-P2MP Multicast Tree
that provides an MPLS drop-and-continue multicast tree on a shared P2MP multicast tree tunnel,
supporting multiple Digital TV (DTV)/IPTV services as part of a full triple play solution. LightSOFT
provides full support for classic E-Tree functionality as of the current release.
E-Access (Ethernet Access) for Ethernet services between UNI and E-NNI endpoints, based on
corresponding Operator Virtual Connection (OVC) associated endpoints. Ethernet services defined
within the scope of this specification use a P2P OVC which associates at least one OVC endpoint as an
E-NNI and at least one OVC endpoint as a UNI. These services are typically Ethernet access services
offered by an Ethernet Access Provider. The Ethernet Access Provider operates the access network
used to reach SP out-of-franchise subscriber locations as part of providing E2E service to subscribers.
Figure 11-1: MEF definitions for Ethernet services
The Neptune product line supports the full set of MEF services, including E2E QoS, C-VLAN translation, flow
control, and Differentiated Services Code Point (DSCP) classification.
Sites that belong to the same MPLS VPN expect their packets to be forwarded to the correct destinations.
This is accomplished through the following means:
Establishing a full mesh of MPLS LSPs or tunnels between the PE sites.
MAC address learning on a per-site basis at the PE devices.
MPLS tunneling of customer Ethernet traffic over PWs while it is forwarded across the provider
network.
Packet replication onto MPLS tunnels at the PE devices, for multicast-/broadcast-type traffic and for
flooding unknown unicast traffic.
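For illustration only, the following minimal Python sketch (not the product implementation; all class and field names are hypothetical) shows how these mechanisms combine at a PE: the VSI learns source MAC addresses per site, forwards known unicast frames to a single AC or PW, and replicates unknown unicast, broadcast, and multicast traffic, applying split horizon so that frames received from the PW mesh are never flooded back into it.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    src_mac: str
    dst_mac: str
    is_bcast_or_mcast: bool = False

class Vsi:
    def __init__(self, acs, pws):
        self.fib = {}    # MAC address -> attachment circuit (AC) or PW
        self.acs = acs   # local attachment circuits (customer sites)
        self.pws = pws   # full mesh of PWs/tunnels to the other PEs

    def forward(self, frame, ingress):
        self.fib[frame.src_mac] = ingress          # per-site MAC learning
        egress = self.fib.get(frame.dst_mac)
        if egress is not None and not frame.is_bcast_or_mcast:
            return [egress]                        # known unicast: one copy
        # Unknown unicast / broadcast / multicast: replicate. Split horizon:
        # frames received from a PW are flooded to local ACs only.
        flood = [ac for ac in self.acs if ac != ingress]
        if ingress not in self.pws:
            flood.extend(self.pws)
        return flood
```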
The following figure illustrates a P2MP multicast tree with PE1 as the source PE (root), P1 as a transit P, PE2
as a transit PE (leaf PE), and PE3, PE4, and PE5 as the destination or leaf PEs. The link from PE1 to P1 is
shared by all transit and destination leaf PEs; therefore the data plane sends only one packet copy on that
link.
Figure 11-4: P2MP multicast tunnel example
The following figure illustrates a second example of a P2MP multicast tree arranged over a multi-ring
topology network. The multicast tunnel paths are illustrated in both a physical layout and a logical
presentation. In this example, PE1 is the source PE (root); P1 and P2 are transit Ps; PE2, PE3, PE5, and PE6
are transit leaf PEs; and PE4 and PE7 are destination leaf PEs.
Figure 11-5: P2MP multicast tunnel example - physical and logical networks
The full triple play solution, incorporating P2MP multicast tunnels, star VPLS, and IGMP snooping, is
illustrated in the following figure. The P2MP multicast tunnels carry IPTV content in an efficient
drop-and-continue manner from the TV channel source, headend router, and MSER, through the root PE
(PE1) to all endpoint leaf PEs. The VPLS star carries all other P2P triple play services, such as VoIP, VoD, and
HSI. The VPLS star also carries the IGMP messages both upstream (request/leave messages from the
customer) and downstream (query messages from the router). IGMP snooping is performed at the
endpoint leaf PEs to deliver only the IPTV channels requested by the user. This allows scalability in the
number of channels, as well as freeing up bandwidth for other triple play services.
Figure 11-6: Triple play network solution for IPTV VoD VoIP and HSI services
IGMP-aware MP2MP VSIs augment the network elements illustrated in the preceding figure by combining
multicast and unicast traffic on the same interfaces, and reducing multicast traffic towards subscribers at
the domain edge. This approach uses standard VPLS mechanisms for intra-domain delivery. Multicast
delivery is implemented through ingress replication across a full mesh of PWs, filtered based on subscriber
requests to eliminate unnecessary traffic. These elements are highlighted in the following figure.
On the management plane, this approach is implemented through an enhanced VSI configuration that
includes enabling IGMP proxy functionality. Upstream (host) and downstream (router) AC (link) and peer
(node) must be explicitly configured as IGMP-aware, and assigned their own IP addresses and subnet
masks. On the control plane, IGMP proxy is implemented through configuring one instance per VSI,
including the corresponding upstream and downstream node and interface parameters. IGMP queries and
responses are handled at the control plane level. On the data plane, traffic received from an IGMP-aware
AC or peer is separated and handled according to its type (IGMP traffic, non-IGMP routable IP multicast, or
other MP2MP VSI traffic).
For example, the following figure illustrates a network reference model for IGMP-aware VSI.
Figure 11-8: Simple network reference model for IGMP-aware VSI
This diagram shows an IP/MPLS domain representing a single AS with an IGP (IS-IS or OSPF) running on all
intra-AS links. An MP2MP L2VPN service (VPLS) is set up between some PEs, with a full mesh of PWs set up
between all VSIs representing this service in each of the affected NEs using tLDP.
An edge multicast router is connected to one of the PEs of an MP2MP L2VPN (VPLS) service. Multiple
subscribers to this content are connected to other PEs participating in this VPLS instance via access LANs.
Each subscriber indicates its interest in one or more IPTV channels using IGMPv3, with each IPTV channel
mapped to exactly one SSM Multicast Channel.
The VSI representing the VPLS service in question in each of the affected PEs is marked as IGMP-aware. Its
relevant ACs are marked as Upstream or Downstream. Each PW that connects the VSI that is directly
connected through the edge multicast router to a VSI that is directly connected to a subscriber LAN is
treated as an Upstream interface in the former and as a Downstream interface in the latter. An IGMP Proxy
instance is associated with this VSI and treats its Downstream and Upstream ACs and PWs as if they were
Upstream and Downstream.
When an Ethernet frame is received from the Upstream AC or PW associated with an IGMP-aware VSI, it is
checked to see whether it belongs to one of the following traffic types:
IGMP packets. These are identified by Ethertype being IPv4 and IP protocol number being IGMP. The
IGMP packets are trapped to the IGMP Proxy instance for processing.
Routable IP multicast packets. These are identified by Ethertype being IP, IP protocol number being
different from IGMP, and Destination IP address being a routable IP multicast address. The routable IP
multicast packets undergo normal VPLS flooding, subject to additional filtering based on the contents
of the Group Membership DB built by the corresponding IGMP Proxy instance.
All other packets. These frames receive normal VSI forwarding in accordance with the L2 FIB of the VSI
created by the normal MAC Learning process.
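The following minimal Python sketch illustrates this three-way classification (the constants are the standard IANA/IEEE values, not values taken from this manual, and the function is a simplification of the data plane behavior).

```python
import ipaddress

ETHERTYPE_IPV4 = 0x0800   # frames carrying IPv4
IP_PROTO_IGMP = 2         # IANA protocol number for IGMP

def classify_frame(ethertype: int, ip_proto: int, dst_ip: str) -> str:
    if ethertype != ETHERTYPE_IPV4:
        return "other"               # normal VSI forwarding via the L2 FIB
    if ip_proto == IP_PROTO_IGMP:
        return "igmp"                # trapped to the IGMP Proxy instance
    if ipaddress.ip_address(dst_ip).is_multicast:
        return "routable-multicast"  # VPLS flooding, filtered against the
                                     # Group Membership DB
    return "other"

# Example: an IGMPv3 membership report and an IPTV video packet
print(classify_frame(0x0800, 2, "224.0.0.22"))    # -> igmp
print(classify_frame(0x0800, 17, "232.1.1.1"))    # -> routable-multicast
```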
With this network model, these rules result in the following handling of routable multicast traffic
transmitted by the IPTV Content Server:
Unicast traffic is forwarded as in a normal MP2MP VSI. For example:
Unicast traffic generated by triple play services (such as VoIP, internet access, or VoD traffic)
Fast delivery of the baseline picture after selecting a new IPTV channel by the subscriber
Each routable IP multicast packet received from the server by the directly-connected PE would be
forwarded (using ingress replication) to all PEs connected to the subscriber LANs that have requested
the corresponding Multicast Channel.
The PE that is directly connected to the subscriber LANs forwards each routable IP multicast packet received from its single Upstream PW to all subscriber LANs where subscribers have requested this channel. The packet is not sent to LANs where no subscriber has requested the channel.
Attach/Detach VLAN
Neptune Layer 2 cards enable the provider to add a VLAN tag to incoming untagged frames. This VLAN tag, known as the PVID (Port VLAN ID), is maintained throughout the network. The PVID enables the operator to identify different clients arriving from different ports, even after being multiplexed in point-to-multipoint (P2MP) configurations. The PVID is detached from frames egressing the same port that was configured to attach and detach the PVID.
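As an illustration of the attach/detach operation (a simplified sketch based on the standard IEEE 802.1Q tag format, not the card's implementation), the following Python functions insert and strip a tag carrying the PVID.

```python
def attach_pvid(frame: bytes, pvid: int, pcp: int = 0) -> bytes:
    """Insert an IEEE 802.1Q tag (TPID 0x8100) carrying the port PVID into an
    untagged frame, directly after the destination and source MAC addresses."""
    tci = (pcp << 13) | (pvid & 0x0FFF)
    tag = (0x8100).to_bytes(2, "big") + tci.to_bytes(2, "big")
    return frame[:12] + tag + frame[12:]

def detach_pvid(frame: bytes, pvid: int) -> bytes:
    """Strip the tag again on egress if it carries the configured PVID."""
    if frame[12:14] == b"\x81\x00":
        vid = int.from_bytes(frame[14:16], "big") & 0x0FFF
        if vid == pvid:
            return frame[:12] + frame[16:]
    return frame
```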
C-VLAN bundling
VLAN bundling carries the traffic of multiple VLANs. Multiple customer C-VLANs can map through a single
Ethernet service on the UNI. All-to-one bundling is a special case whereby all customer VLANs map to a
single Ethernet service at the UNI.
TIP: Neptune platforms allow you to configure both TM models within a single port,
increasing the service options available to network operators. Some of the port LSPs can be
configured with Strict TM, and other LSPs in the same port can be configured with DiffServ
TM.
Flow control with frame buffering (802.3x) reduces traffic congestion. When the input buffer
memory on an Ethernet port is nearly full, the data card sends a 'Pause' packet back to the traffic
source, requesting a halt in packet transmission for a specified time period. After the period has
passed, traffic transmission is resumed. This approach gives the overloaded input buffer a little
'breathing room' while the card clears out the input data and sends it on its way. The following figure
illustrates an NE sending a 'Pause' packet to the link partner.
Figure 13-2: Pause frame example
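As a rough illustration of the mechanism (a sketch based on the standard IEEE 802.3x frame format, not on this card's internals), the following Python function builds a PAUSE frame; the requested pause time is expressed in quanta of 512 bit times.

```python
PAUSE_DST = bytes.fromhex("0180C2000001")        # reserved MAC control address
MAC_CONTROL_ETHERTYPE = bytes.fromhex("8808")    # MAC control EtherType
PAUSE_OPCODE = bytes.fromhex("0001")             # PAUSE operation code

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """pause_quanta: 0-65535; each quantum equals 512 bit times on the wire."""
    payload = PAUSE_OPCODE + pause_quanta.to_bytes(2, "big")
    frame = PAUSE_DST + src_mac + MAC_CONTROL_ETHERTYPE + payload
    return frame.ljust(60, b"\x00")              # pad to minimum size (pre-FCS)

# A port asking its link partner to pause for the maximum time:
pause = build_pause_frame(bytes.fromhex("00AABBCCDDEE"), 0xFFFF)
```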
13.1.1 DMXE_22_L2 TM
The DMXE_22_L2 card has a unique and intelligent Traffic Management (TM) function, which enables reliable provisioning of different SLA levels. For example, policer profiles that encapsulate the bandwidth parameters defined for Ethernet services are one of the tools used by the TM, allowing greater flexibility when managing different customer scenarios.
The TM has a simple architecture that provides a high capacity infrastructure (up to 10 GbE) for access rings with a small amount of capacity per node.
The following are the basic building blocks for access applications, covering the egress 10 GbE traffic flow and traffic management.
Ingress classification
On ingress, all traffic is classified into two groups:
High CoS: CIR-only traffic
Low CoS: all other traffic
Egress scheduling
In general, strict priority is implemented between High CoS and Low CoS traffic.
Either High CoS or Low CoS traffic can reach the 10 Gbps line rate, with burst handling provided by a tail-drop algorithm.
The 10 GbE port egress queue has a threshold as shown in the following figure.
Figure 13-3: 10 GbE port egress queue threshold
Low CoS traffic is checked against the 10 GbE port egress queue threshold. If the threshold has been reached, the packet is discarded.
High CoS traffic is colored by a 10 Gbps token bucket. If the packet is colored green, it is admitted to the egress queue. If the packet is colored red, the egress queue threshold is checked; if the threshold has been reached, the packet is discarded.
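A minimal Python sketch of this admission logic follows (illustrative only; the TokenBucket class and all names and thresholds are hypothetical stand-ins for the hardware mechanisms).

```python
import time

class TokenBucket:
    """Single-rate token bucket; rate in bits per second, burst in bytes."""
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0          # refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def consume(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True                     # enough tokens: packet is green
        return False                        # not enough tokens: packet is red

def admit(cos: str, pkt_len: int, queue_depth: int, threshold: int,
          bucket: TokenBucket) -> bool:
    if cos == "high":
        if bucket.consume(pkt_len):
            return True                     # green High CoS: always admitted
        return queue_depth < threshold      # red High CoS: threshold check
    return queue_depth < threshold          # Low CoS: tail drop at threshold
```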
Auto shaping, which provides rate limiting and burst smoothing. Optional manual shaping allows the user to configure committed and excess rate limits per CoS on non-MPLS ports.
Auto Weighted Fair Queuing (WFQ) scheduling, which ensures that bandwidth is distributed fairly between individual queues. Optional manual scheduling allows the user to configure a weight per CoS per switch.
In addition to automatic WRED, PACKET supports user-configurable (manual) WRED profiles. Each CoS
within every port can use any one of these profiles. PACKET WRED is hierarchical, meaning it is applied on
multiple levels (flow or tunnel, CoS, port). A packet is queued for transmission only if the WRED decision at
all three levels is Pass or when it is in guaranteed range. Otherwise the packet is dropped.
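The following Python sketch is a simplified model of that decision (the parameter names are hypothetical; the real profiles are configured via management): a classic WRED curve is evaluated at each of the three levels, and the packet is queued only if every level passes or the packet falls within its guaranteed range.

```python
import random

def wred_pass(avg_depth: float, min_th: float, max_th: float,
              max_drop_prob: float) -> bool:
    """One WRED decision (applied per flow/tunnel, per CoS, and per port)."""
    if avg_depth <= min_th:
        return True                       # below min threshold: never drop
    if avg_depth >= max_th:
        return False                      # above max threshold: always drop
    drop_p = max_drop_prob * (avg_depth - min_th) / (max_th - min_th)
    return random.random() >= drop_p      # linearly rising drop probability

def enqueue(levels, in_guaranteed_range: bool) -> bool:
    """levels: (avg_depth, min_th, max_th, max_drop_prob) tuples for the
    flow/tunnel, CoS, and port levels."""
    if in_guaranteed_range:
        return True
    return all(wred_pass(*lvl) for lvl in levels)
```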
13.2.5 Shaping
Dual-rate token bucket shaping provides both maximum BW limits and smoothing. Shaping is applied at the
port and CoS level with the following objectives:
Rate limiting for high-CoS traffic, thereby avoiding starvation of low-CoS traffic.
Marking excess traffic (in excess of the guaranteed quota). This marking serves as input to the WFQ
scheduler, allowing it to distinguish between guaranteed and excess bandwidth usage.
Smoothing the output rate before transmission to the line.
Each element is assigned values for CIR/CBS and PIR/PBS to determine the element's committed and excess
rates and burst size limits.
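The following Python sketch is illustrative only (the field names are hypothetical and the real shaper operates in hardware): a dual-rate token bucket classifies each departure as committed (within CIR/CBS), excess (within PIR/PBS, marked for the WFQ scheduler), or deferred to smooth the output rate.

```python
from dataclasses import dataclass, field
import time

@dataclass
class ShapedElement:
    cir_bps: float            # committed rate, bits per second
    cbs_bytes: float          # committed burst size
    pir_bps: float            # peak rate, bits per second
    pbs_bytes: float          # peak burst size
    c_tokens: float = field(init=False)
    p_tokens: float = field(init=False)
    last: float = field(init=False)

    def __post_init__(self):
        self.c_tokens = self.cbs_bytes
        self.p_tokens = self.pbs_bytes
        self.last = time.monotonic()

    def classify(self, pkt_len: int) -> str:
        now = time.monotonic()
        dt = now - self.last
        self.last = now
        self.c_tokens = min(self.cbs_bytes, self.c_tokens + dt * self.cir_bps / 8)
        self.p_tokens = min(self.pbs_bytes, self.p_tokens + dt * self.pir_bps / 8)
        if self.c_tokens >= pkt_len and self.p_tokens >= pkt_len:
            self.c_tokens -= pkt_len
            self.p_tokens -= pkt_len
            return "committed"      # guaranteed bandwidth, preferred by WFQ
        if self.p_tokens >= pkt_len:
            self.p_tokens -= pkt_len
            return "excess"         # marked as excess for the WFQ scheduler
        return "deferred"           # held back to smooth the output rate
```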
13.5 Policing
High granularity policing and priority marking (802.1p) per SLA enables the provider to control the amount
of bandwidth for each individual user and service. Two-rate three-color policing enhances the service
offering, combining high priority service with BE traffic for the same user. Policer profiles, encapsulating the
bandwidth parameters defined for Ethernet services, allow greater flexibility when managing different
customer scenarios. Bandwidth allocations and traffic priority can be configured per ingress or egress ports,
as well as per EVC and per CoS. This hierarchical approach is illustrated in the following figure.
Figure 13-9: Traffic management with policer profiles
These MPLS cards implement two-rate three-color dual token bucket policing that supports 1000 profiles defining rate limitations, achieving a notable combination of efficiency and flexibility. Intelligent bandwidth management improves handling of bursty traffic. Bandwidth management profiles are extended, based on MEF 5 standards. Traffic policing is configured in two stages, in this order:
1. Configure policing profiles.
2. Assign policing profiles to service flows.
Each policing profile can be assigned to multiple service flows, each of which is defined by service identifier,
ingress port, and CoS. Supported traffic categories include guaranteed and BE traffic. Users configure the
following parameters when defining policers:
Guaranteed traffic is defined through two types of BW values:
Committed Information Rate (CIR) in kilobits per second, defining the SLA's average guaranteed
transmission rate commitment.
Committed Burst Size (CBS) in kilobytes, defining the maximum number of bytes that can be
carried in a single transmission burst of CIR traffic.
Best Effort traffic is defined through two types of BW values:
Excess Information Rate (EIR) in kilobits per second, defining the SLA's average best effort transmission rate. EIR traffic has lower priority and may be discarded in case of network congestion.
Excess Burst Size (EBS) in kilobytes, defining the maximum number of bytes that can be carried in a single transmission burst of EIR traffic.
Within the traffic categories, packets are marked with one of three colors. Packet color is marked in the
MPLS EXP bits upon mapping into MPLS tunnels:
Green packets meet the requirements for guaranteed CIR/CBS traffic. Green packets have the least
risk of being discarded in times of traffic congestion.
Yellow packets meet the requirements for EIR/EBS traffic. Yellow packets have a greater risk of being discarded in times of traffic congestion.
Red packets do not meet the requirements for either traffic category and are discarded.
Note that yellow packets are not discarded automatically. Yellow packets simply have a slightly higher risk
of being discarded by one of the filtering mechanisms during periods of traffic congestion. Users can
change and redefine their green and yellow packet preferences via WRED profile configuration.
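For illustration, the following Python sketch implements a color-blind two-rate policer over the CIR/CBS and EIR/EBS parameters described above (a simplified model; the cards implement this in hardware and profile handling differs).

```python
import time

class TwoRatePolicer:
    """Color-blind two-rate, three-color policer using CIR/CBS and EIR/EBS."""
    def __init__(self, cir_kbps, cbs_kbytes, eir_kbps, ebs_kbytes):
        self.cir = cir_kbps * 1000 / 8.0     # committed rate, bytes/sec
        self.eir = eir_kbps * 1000 / 8.0     # excess rate, bytes/sec
        self.cbs = cbs_kbytes * 1000.0       # committed burst, bytes
        self.ebs = ebs_kbytes * 1000.0       # excess burst, bytes
        self.c_tokens = self.cbs
        self.e_tokens = self.ebs
        self.last = time.monotonic()

    def color(self, pkt_bytes: int) -> str:
        now = time.monotonic()
        dt = now - self.last
        self.last = now
        self.c_tokens = min(self.cbs, self.c_tokens + dt * self.cir)
        self.e_tokens = min(self.ebs, self.e_tokens + dt * self.eir)
        if self.c_tokens >= pkt_bytes:
            self.c_tokens -= pkt_bytes
            return "green"          # conforms to CIR/CBS: guaranteed traffic
        if self.e_tokens >= pkt_bytes:
            self.e_tokens -= pkt_bytes
            return "yellow"         # conforms to EIR/EBS: may be discarded
        return "red"                # exceeds both profiles: discarded
```

In this simplified model, the resulting color would then be written to the MPLS EXP bits when the packet is mapped into its tunnel, as described above.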
The policer also sets each flow's CoS by setting the priority bit to the appropriate value assigned by the
user. PACKET cards support eight classes of service, CoS0-CoS7, where CoS7 has the highest priority and
CoS0 has the lowest. The CoS value is attached to each packet in the flow and maintained as long as the
packet travels within the network.
Per VLAN (EVC): A single ingress BW profile is applied to all ingress service frames for a specific EVC.
This BW profile attribute is associated with each VLAN (EVC) in the UNI port. The following figure
illustrates how the BW profiles are assigned per EVC.
Figure 13-11: Traffic port/VLAN based
Per Ingress UNI Port: A single ingress BW profile is applied to all ingress service frames for a specific
UNI port. This BW profile attribute is independent of the EVCs in the UNI port. The following figure
illustrates how the BW profiles are assigned per UNI.
Figure 13-12: Traffic policer port based
1 For NPT-1200 with MCIPS320 and NPT-1800.
E2E OAM can be achieved by combining the various OAM techniques, as illustrated in the following figure.
Ethernet link OAM can be used to monitor and localize failure at the connection point between the
customer and the NE. MPLS tunnel OAM can be used to monitor the connections along the provider's MPLS
network. Service OAM provides E2E service monitoring.
Figure 14-1: E2E OAM model for a mobile backhaul network
BFD provides proactive E2E tunnel CC (Continuity Check), CV (Connectivity Verification), and Remote Defect
Indication (RDI):
Continuity Check (CC): Continuously monitors the integrity of the continuity of the path. In addition to
failure indication, detection of Loss of Continuity may trigger the switch over to a backup LSP.
Connectivity Verification (CV): Monitors the integrity of routing of the path between sink and source
for any connectivity issues, continuously or on-demand. Detection of unintended continuity blocks
the traffic received from the misconnected transport path.
Remote Defect Indication (RDI): Enables an End Point to report to its peer a fault or defect condition
that it detects on a path.
NE platforms work with BFD according to IETF RFC 5880, using the CC mechanism for pro-active monitoring
of MPLS-TP LSPs. Similar to other transport technologies, Neptune provides sub-50 msec protection
switchover in case of forwarding path failure, triggered by BFD's consistent failure detection method.
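As a simple numeric illustration (based on the RFC 5880 asynchronous-mode formula, not on platform-specific timers), the detection time is the remote Detect Multiplier times the agreed receive interval; if no CC packet arrives within that window, Loss of Continuity is declared and can trigger the protection switch.

```python
def detection_time_us(remote_detect_mult: int,
                      local_required_min_rx_us: int,
                      remote_desired_min_tx_us: int) -> int:
    """RFC 5880 asynchronous-mode detection time at the local end."""
    return remote_detect_mult * max(local_required_min_rx_us,
                                    remote_desired_min_tx_us)

def loss_of_continuity(last_cc_rx_us: int, now_us: int,
                       detect_time_us: int) -> bool:
    """Declared when no CC packet has arrived within the detection time."""
    return (now_us - last_cc_rx_us) > detect_time_us

# Example: 3.33 ms intervals and a detect multiplier of 3 give ~10 ms,
# well inside a sub-50 msec protection switchover budget.
print(detection_time_us(3, 3330, 3330))   # -> 9990 microseconds
```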
Loopback: A request/response protocol similar to the classic IP Ping tool. MEPs send Loopback
Messages (LBMs) to verify connectivity with another MP (MEP or MIP) within a specific MA. The
target MP generates a Loopback Reply Message (LBR) in response. LBMs and LBRs are used to verify
bidirectional connectivity, and are initiated by operator command. The path of a typical loopback
sequence is illustrated in the following figure.
Figure 14-7: Loopback protocol
Link Trace: Another request/response protocol similar to the classic IP Traceroute tool. Link trace may
be used to trace the path to a target MP (MEP or MIP) and for fault isolation. MEPs send multicast
Link Trace Messages (LTMs) within a specific MA to identify adjacency relationships with remote MPs
at the same administrative level. When an MP receives an LTM, it completes one of the following
actions:
If the NE is aware of the target MP destination MAC address in the LTM frame and associates
that address with a single egress port, the current MP generates a unicast Link Trace Reply (LTR)
to the initiating MEP and forwards the LTM to the target MEP destination MAC address.
Otherwise the LTM frame is relayed unchanged to all egress ports associated with the MA
except for the port from which the message was received.
The path of a short link trace sequence is illustrated in the following figure.
Figure 14-8: Link trace
CFM Alarm Management: Various types of CFM alarms can be received at the service level when
Alarms functionality is enabled for an MA.
Neptune platforms provide protection at every network level. Network operators can choose from a range
of Network-level, MPLS-TP, PB, and equipment protection schemes, creating a protective structure
tailored to their specific network configuration and functionality.
In this H-VPLS network, the dual-homed PE has configured spoke PWs to H-VPLS gateways PE1 and PE2.
One of the PEs is currently active, linked to the dual-homed PE via the primary PW. The primary PW is given priority by
the EMS and is responsible for forwarding traffic to the peer H-VPLS domain. Failure of an H-VPLS gateway
PE generates an OAM defect, which in turn triggers the dual-homed PE to select a new primary PW. A
hold-off timer can be used to mask temporary server layer faults.
Another option to trigger PW redundancy is to use the PW status from the gateway PE. The end-to-end PW traverses two H-VPLS domains, and tunnel OAM is maintained over each domain. Hence, in case of a failure in Domain #2 that is not recovered by the tunnel protection, the gateway PE marks the PW as down and generates a defect status message towards the pivot node, triggering a PW redundancy (PWR) switch.
A PW switchover requires an FDB flush at PE1, PE2, and the far H-VPLS domain. This is achieved by the
transmission of CCN messages between data cards that indicate for which PE(s) the FDB entries should be
deleted (see Configuring CCN).
PW Redundancy can also be used for load balancing between the H-VPLS gateways. By configuring some
PEs with the primary PWs toward PE1 (where PE1 becomes the default H-VPLS gateway), and other PEs
with primary PWs toward PE2, the traffic load can be reasonably balanced between the two gateway PEs.
NOTE: In dual-homing to H-VPLS topology, BFD must be used to monitor the status of the
remote PE and the status of the transport layer, in order for the pivot PE to select the
appropriate PW. BFD should therefore be enabled on the tunnel carrying the PW (see
Configuring MPLS-TP Linear Protection).
15.1.4 Multi-segment PW
An L2VPN multisegment pseudowire (MS-PW) is a set of two or more PW segments that function as a single
PW, as illustrated in the following figure.
Some routers participating in the PW segments are identified as switching provider edge (S-PE)
routers, which are located at the switching points connecting the tunnels of the participating PW
segments.
Some routers participating in the PW segments are identified as terminating provider edge (T-PE)
routers, which are located at the MS-PW endpoints.
The S-PE routers can switch the control and data planes of the preceding and succeeding PW
segments.
MS-PWs can span multiple cores or autonomous systems of the same or different carrier networks.
Figure 15-6: Stitching PE
MS-PW service enables a hierarchical network structure for data networks, similar to H-VPLS
capabilities.
MS-PW functionality improves scalability, facilitates multi-operator deployments and facilitates use of
different control plane techniques in different domains.
These are valuable capabilities in network configurations that must typically be able to integrate static PW
segments in the access domains and signaled PW segments in the IP/MPLS core.
Signaling gateways (SGW) are used to tie PW segments together into a single connection (stitching) at
a given point.
This functionality is implemented within a single platform located at the border of two network
domains. The two domains may both be static, both dynamic, or one static and one dynamic.
Network interworking enables LSP and service stitching, interaction between the data planes and E2E
OAM.
MPLS-TP and IP/MPLS domains can be connected through SGWs. In PW-based backhaul, this is
implemented through multisegment PWs (MS-PWs), including:
Static MPLS-TP segments
Dynamic IP-MPLS segments
Gateway interconnections or "stitches" of both types of segments
In LightSOFT Hybrid products, MS-PWs are used to stitch together static MPLS-TP segments. With IP/MPLS NEs (MCIPS320), MS-PWs can also be configured as SGWs, stitching together static and dynamic segments.
MS-PWs make it possible to offer a single E2E service that seamlessly spans network domains, simplifying
service management and OAM.
Figure 15-8: PW switching point
NOTE: This network solution model is based on RFC 6073 and RFC 6718, using PW-R in a linear
protection scheme.
The following figure illustrates a VPLS service, configured as an E-Tree (P2MP) service with the root nodes
located in the network core and the leaf nodes located on the access NEs. Note that a service can be
provisioned for groups of access nodes, where all access nodes are either located in the same physical
access ring or all participating access rings are homed to the same aggregation node. Services can also be
provisioned for specific access nodes.
Figure 15-9: Basic VPLS service configuration
Within this network, the network operator can configure multiple MS-PW path segments, running between
the core, aggregation, and access nodes. The network operator can also define corresponding PW
redundancy pairs for these PW segments.
Figure 15-10: MS-PW with PW redundancy
Bidirectional E-LSP tunnels carry each PW segment. BFD is used to monitor the LSP operational status.
MPLS-TP 1:1 linear protection is provided to protect against multiple link failures.
This model provides robust, efficient protection for link and node failures. For example, the first layer of
protection covers link failures, based on MPLS-TP 1:1 linear protection and LSP BFD. The second layer of
protection covers aggregation node failure, based on PW-R and E2E PW VCCV (BFD). An alternative
implementation would use LSP OAM per PW segment and propagation of the segment status, using PW
status messages at the S-PE.
In Protection LAG, the participating ports may both be located on a single card, on different cards within the same NE (inter-card LAG, IC-LAG), or on different cards installed in two different NEs (multichassis LAG, MC-LAG). Participating ports are configured as master ports and share the same LAG identification key. Each port must be provided with the global PE ID of the corresponding partner port.
When working with multiple ports, the member ports are organized into active and standby port
groups, providing both node and link protection as well as load sharing between ports in the active
group. MC-LAG can be integrated with other MPLS-TP protection mechanisms, supporting service
interworking towards the network either through PW redundancy or as a CCN trigger, as relevant. For
example, MC-LAG can be configured in the Ethernet segment and PW-R in the MPLS segment.
MC-LAG improves the performance of data networks and provides higher network protection with improved reliability. It extends the link-level redundancy capabilities of link aggregation and adds support for device-level redundancy. This is achieved by allowing one end of the link aggregated port group to be dual-homed into two different devices.
In the MC-LAG protection scheme, the CE behaves as a normal LAG device from the perspective of hashing and traffic distribution. The PE1 and PE2 devices communicate with each other, exchanging LAG messages via multi-chassis LACP (mLACP) over the Inter-Chassis Communication Protocol (ICCP).
As a result of this communication, the group of member ports on a PE is either Active or Standby. When the member ports are active, load sharing is applied locally between the ports. Since ports can be active on only one PE, the two PEs exchange port status information between them, so PE1 knows whether the LAG on PE2 is up or down, and vice versa. If there are multiple ports in the LAG, the LAG Link Down Threshold is used to decide whether the LAG is up or down. A local decision is made on each PE whether to activate its local ports or keep them on standby. Equipment failure of the peer PE is detected via OAM (MPLS-TP BFD) and triggers the local LAG to activate its ports.
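A minimal Python sketch of this per-PE decision follows (illustrative only; the threshold comparison, preference flag, and function names are assumptions, since the exact rules are implementation specific).

```python
def lag_is_up(links_up: int, link_down_threshold: int) -> bool:
    """Assumes the LAG is declared down once the number of working member
    links falls to or below the configured LAG Link Down Threshold."""
    return links_up > link_down_threshold

def local_port_group_state(links_up: int, link_down_threshold: int,
                           peer_alive: bool, peer_lag_up: bool,
                           locally_preferred: bool) -> str:
    if not lag_is_up(links_up, link_down_threshold):
        return "standby"    # local LAG unusable: let the peer take over
    if not peer_alive or not peer_lag_up:
        return "active"     # peer failure detected (e.g. via MPLS-TP BFD)
    return "active" if locally_preferred else "standby"
```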
When a failure is detected, the system reacts by triggering a switchover from the Active PE to the Standby
PE.
A link failure along LSP1 automatically triggers 1:1 protection and traffic is redirected to LSP2, the
original protection path.
Figure 15-14: Automatic restoration: phase 2 (link failure)
The NMS now recalculates and downloads a new path, restoring traffic to most of the original LSP1
route while bypassing the link failure.
Figure 15-15: Automatic restoration: phase 3 (NMS recalculates)
If multiple link failures are detected in the original LSP, LightSOFT dynamically restores the relevant tunnels
by configuring alternative routes, working link by link and taking all active failures into account when
performing restoration. As the participating links are repaired, LightSOFT reverts the tunnels where
possible to the original links.
Network restoration is a dynamic, flexible feature that intelligently chooses the most efficient route, based
on the current network status, correlating all affected tunnels and identifying the most efficient route for
the current network functional topology. As link failures are fixed, LightSOFT efficiently reverts the affected
tunnels, correlating the tunnels and repaired links and completing either full or partial reversions.
Automatic network restoration can be configured for protected and unprotected tunnels, for either one or
both main and protection paths. Operators can choose how they prefer to optimize resource usage, either
maximizing disjoint route selection or focusing on resource sharing to minimize resource utilization.
Network restoration provides protection from multiple network failures, since new LSP paths are
dynamically prepared and ready for use before they are needed.
You can view the tunnel status in the Tunnel List window. In the event of a failure, a dotted line indicates
the original path of the tunnel and a solid line of the same color indicates the active (restoration) path.
Figure 15-16: Tunnel restoration
15.2.1 SNCP
SNCP provides independent trail protection for individual subnetworks connected to the Neptune Product
Line platforms. Combined with the system’s drop-and-continue capability, SNCP is a powerful defense
against multifailure conditions in a mesh topology. By integrating SNCP into the Neptune Products,
operators achieve superior traffic availability figures. Therefore, SNCP is extremely important for leased
lines or other traffic requiring superior SLA availability.
SNCP/N and SNCP/I at any VC level (VC-4, VC-3, VC-12) are supported. The SNCP mode can be configured
through the EMS-APT/LCT-APT per VC. Automatic SNCP switching is enabled, without operator intervention
or path redefinition. The Neptune Product Line can support path protection by TDM based matrices, such
as the XIOxx, CPTSxxx, and MCPTSxxx. The result is exceptionally fast protection switching in less than 30 msec, with typical switching taking only a few milliseconds. Protection switching is performed via the cross-connect matrix in the XIOxx, CPTSxxx, and MCPTSxxx cards.
15.2.2.1 MSP
MSP is designed to protect single optical links. This protection is most suitable for appendage TM/star links
or for 4-fiber links in chain topologies.
The Neptune supports MSP in all optical line cards (STM-1, STM-4, STM-16, and STM-64). MSP 1+1
unidirectional and bidirectional modes are supported. MSP 1+1 is implemented between two SDH
interfaces (working and protection) of the same bitrate that communicate with two interfaces on another
platform. As with SNCP and path protection, in MSP mode the Neptune provides protection for both fiber
and hardware faults.
The following figure shows a 4-fiber star Neptune with all links protected. This ensures uninterrupted service even in the case of a double fault. The Neptune automatically performs MSP switching within
50 msec.
Figure 15-18: MSP protection modes
15.2.2.2 MS-SPRing
In addition to SNCP protection that may also be implemented in mesh topologies, the Neptune supports
MS-SPRing that provides bandwidth advantages for selected ring-based traffic patterns.
Two-fiber MS-SPRing supports any 2.5 Gbps and/or 10 Gbps rings closed by the Neptune via XIO30_16/XIO64/XIO16_4/CPTS100/CPTS320 cards, in compliance with applicable ITU-T standards. Protection switching is fully automatic and performed in less than 50 msec.
NOTES:
In the NPT-1030 and NPT-1200 products, MS-SPRing is supported by the following card
sets:
XIO30_16
XIO64
XIO16_4
CPTS100
As explained in this section, MS-SPRing is a network protocol that runs on the ring
aggregate cards. The PDH, STM-1, STM-4, and data cards (electrical and optical) that serve
as drop cards connected to the client are not part of the MS-SPRing ring protocol.
However, all client services can be delivered via MS-SPRing on Neptune networks through
the drop cards and the SDH aggregate cards that create the MS-SPRing protection ring.
MS-SPRing can support LO traffic arriving at the nodes in the same way it does HO traffic.
In MS-SPRing modes, the STM-n signal is divided into working and protection capacity per MS. In case of a
failure in one MS of the ring, the protection capacity loops back the affected traffic at both ends of the
faulty MS. The platform supports the full squelching protocol to prevent traffic misconnections in cases of
failure at isolated nodes. Trails to be dropped at such nodes are muted to prevent their being delivered to
the wrong destination.
MS-SPRing is particularly beneficial in ring applications with uniform or adjacent traffic patterns, as it offers
significant capacity advantages compared to other protection schemes.
The following figure shows a Neptune in a 2-fiber MS-SPRing. In this configuration, two fibers are
connected between each site. Each fiber delivers 50% of the active and 50% of the shared protection
traffic. For example, in an STM-16 ring, 8 VC-4s are active and 8 VC-4s are reserved for shared protection.
In the event of a fiber cut between sites A and D, traffic is transported through sites B and C on the black
portion of the counterclockwise fiber. The switch in traffic is triggered by the APS protocol that transmits
control signals over the K1 and K2 bytes in the fiber from site D to site A.
Figure 15-19: Two-fiber protection
The preceding figure portrays two endpoints linked by main and protection paths. Two links are configured
between the two paths, represented by the X shape link topology in the center of the figure. The first fiber
cut on the main path (labeled A), triggers a switch at both endpoints from the main path to the protection
path. A second fiber cut on the protection path (labeled B), triggers a switch at the appropriate points from
the protection path back to the main path. After each fiber cut, the optical equipment used at the DRI
configured nodes at either end of the DRI links must also switch their internal Rx/Tx settings accordingly.
OCH protection is currently the most popular optical protection method for the optical layer. The
mechanism transports each optical channel in two directions, clockwise and counterclockwise. The shortest
path is defined as the main or working channel; the longer path as the protection channel.
The main benefit of OCH protection is its ability to separately choose the shortest path as the working path
for each channel. There are no dedicated working and protection fibers. Each fiber carries traffic with both
working and protection signals in a single direction.
The OCH 1+1 protection scheme provides separate protection for each channel. For SDH, GbE, 10G, and
40G, protection switching is based on PM parameters. Switching criteria can be Loss of Signal (LOS), Loss of
Frame (LOF), or Degraded Signal (SD). The switch-to-protection mode is automatic when a malfunction is
detected in a single channel. This is very convenient as users can choose the channels for protection and
the main or protection paths. Switch-to-protection time in the OCH 1+1 protection scheme is less than
50 msec.
Common units
The Neptune provides 1+1 and 1:1 protection of the power supply, central switches, and fan units.
With traditional Fast IOP, a link failure between DM #1 and the router would result in traffic loss, since DM
#2 remains designated as standby. This means that the router would not be able to find any route available
for traffic. To prevent this loss of traffic, the links are configured over splitter/coupler cables that link both cards to the router ports (as illustrated in the figure Fast IOP: 1+1 Card Protection).
DM cards resolve this problem through the use of eIOP, by adding LOS as an IOP trigger on selected LAN
ports. With eIOP, a failure on the link to the active DM card triggers an IOP switchover. DM #2 becomes
active and activates transmissions on the LAN ports. The router detects this link is now up and
sets/advertises a new traffic route. Traffic is restored.
With eIOP, the splitter/coupler cable is no longer required. A regular fiber cable can be used between the
DM cards and the router, as illustrated in the preceding figure. This frees a port on each DM card to carry
additional traffic.
The purpose of the protection module is to replace a malfunctioning I/O card automatically with the
redundant I/O card. When the protection is activated, the protection module disconnects the external
ports connected to the electrical protection module of the malfunctioning I/O card and connects them to
the redundant card. In parallel, the matrix card switches the traffic from the malfunctioning card slot to the
protection slot (the slot of the redundant I/O card).
The following tables list the various tributary protection options for the platforms.
15.4.3.1 TP63_1
The TP63_1 provides 1:1 protection for two PE1_63 cards installed in the EXT-2U platform and PME1_63
cards in the base unit. It is activated by the MCP1200, enabling a single I/O backup card to protect the main
(working) I/O card when a failure is detected.
15.4.3.2 TPS1_1
The TPS1_1 provides 1:1 protection for up to four high rate interfaces. It is activated by the MCP1200
according to the corresponding platform it is installed on, enabling a single I/O backup module to protect
the main (working) card when a failure is detected.
The TPS1_1 is connected as follows:
The traffic connectors on the protection I/O module are connected to the PROTECTING CARD1 coaxial
8W8 connector on the TPS1_1.
The traffic connectors on the active I/O module are connected to the PROTECTED CARD2 coaxial 8W8
connector on the TPS1_1.
The traffic cables from the DDF are connected to the CUSTOMER CONNECTION connectors on the
TPS1_1.
15.4.3.3 TPEH8_1
The TPEH8_1 provides 1:1 protection for up to eight electrical Ethernet interfaces (10/100/1000BaseT). It is
activated by the MCP1200, enabling a single I/O backup module to protect the main (working) card when a
failure is detected.
The card design also supports the protection of two separate modules, each with up to four electrical
Ethernet ports. The markings on the TPEH8_1 are divided into two groups that indicate such an option.
The TPEH8_1 is connected as follows:
The customer's Ethernet traffic is connected to the four RJ-45 connectors marked CUSTOMER
CONNECTION 1.
The protected (operating) module is connected to the SCSI connector marked PROTECTED CARD 1.
The protecting (standby) module is connected to the SCSI connector marked PROTECTING CARD 1.
The second group of connectors marked with the suffix 2 is connected similarly for protecting a
second set of four electrical Ethernet interfaces.
Layer 2 Control Protocol (L2CP) flooding protection: Neptune platforms protect against L2CP flooding
sent by malicious users. Protection is implemented by limiting the number of L2CP frames which may
be received from data ports through a combination of BPDU blocking, CFM, IGMP policing, and link
and tunnel OAM rate limiters.
MAC flooding protection: Another typical DoS that may be attempted by malicious users is MAC
flooding. In our equipment, MAC addresses are learned through Forwarding Information Base (FIB)
tables which are optimized for fast lookup of destination addresses. The data cards work with an FIB
quota system to forestall MAC DoS attacks by limiting the number of MAC addresses available for
each VPN.
Dynamic ARP Inspection (DAI): A method of protection against address resolution protocol (ARP)
spoofing attacks. It intercepts, logs, and discards ARP packets with invalid IP-to-MAC address bindings.
This capability protects the network from certain man-in-the-middle attacks.
ARP enables IP communication within a Layer 2 broadcast domain by mapping an IP address to a MAC
address. Spoofing attacks occur because ARP allows a response from a host even when an ARP
request is not actually received. After an attack occurs, all traffic from the device under attack first
flows through the attacker's system and then flows to the router, switch, or host. An ARP spoofing
attack affects the devices connected to your Layer 2 network by sending false information to the ARP
caches of the devices connected to the subnet. Sending false information to an ARP cache is known as
ARP cache poisoning.
DAI ensures that only valid ARP requests and responses are relayed. NPT checks each ARP packet
received against the binding table. If no IP-MAC entry in the table corresponds to the information in
the ARP packet, DAI drops the ARP packet and the local ARP cache is not updated with the
information in that packet. DAI also drops ARP packets when the IP address in the packet is invalid.
ARP probe packets are not subjected to dynamic ARP inspection; NPT always forwards such packets. A simplified sketch of this check appears after this list.
Management tools: The data cards, through the LightSOFT NMS and the EMS software, support a full
range of features to keep your network running smoothly and protect it from unauthorized and
malicious use. Supported features include:
Cluster solution that provides high availability and load balancing, essential features for large
networks and/or mission-critical management.
Remote Database Replication (RDR), a field-proven flexible redundancy mechanism providing
full network management backup capabilities for Disaster Recovery Plans (DRPs).
Customer Network Management (CNM), enabling SPs to lease exclusive network resources to
customers for self-management. This sophisticated scheme allows both the autonomous end
customer and the SP to concurrently manage alarms and performance, provision services, and
handle maintenance operations on the resources.
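The following minimal Python sketch (referenced from the Dynamic ARP Inspection item above; the packet fields and return values are hypothetical simplifications) shows the relay/drop decision against the binding table.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class ArpPacket:
    sender_ip: str
    sender_mac: str

def dai_check(pkt: ArpPacket, binding_table: dict) -> str:
    """Relay only ARP packets whose sender IP-to-MAC binding matches the
    binding table; ARP probes (sender IP 0.0.0.0) are always forwarded."""
    if pkt.sender_ip == "0.0.0.0":
        return "forward"                   # ARP probe: never inspected
    try:
        ipaddress.ip_address(pkt.sender_ip)
    except ValueError:
        return "drop"                      # invalid IP address in the packet
    if binding_table.get(pkt.sender_ip) != pkt.sender_mac:
        return "drop"                      # no matching IP-MAC binding
    return "forward"                       # valid: relay and update ARP cache
```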
Type: Designation
E1 Digital Distribution Frame with balanced-to-unbalanced conversion: xDDF-21
Power distribution and alarm panel for Neptune platforms installed in racks: RAP-BG
Power distribution and alarm panel for Neptune platforms installed in racks, with alarm distribution support: RAP-4B
Power distribution and alarm panel for Neptune platforms installed in racks: xRAP-100
Power distribution unit (PDU): PDU99
Fiber Storage Tray (FST): FST
Optical Distribution Frame (ODF): ODF
Optical Patch Panel (OPP): OPP
ICPs for auxiliary interfaces in the MCP30: ICP_MCP30
ICPs for the traffic modules in the SM_10E: SM_10E ICPs
AC power platform (for NPT-1050): AC_CONV_UNIT
AC power module for the AC_CONV_UNIT (for NPT-1050): AC_CONV_MODULE
AC/DC power converter up to 2550 W (for NPT-1200): AC/DC-DPS850-48-3 power system
Cable guiding accessories: Cable guide frame, PME1_63 cable guide and holder, Fiber guide for ETSI A racks, and Cable stack tray
Cables: Cables
17.1 RAP-4B
The RAP-4B is a power distribution and alarm panel for ECI platforms installed in racks.
NOTE: The RAP-4B supports operation with BG, XDM (100, 300, 900), 9600 series, and
OPT9603 platforms.
NOTE: The maximum power that can be supplied by the RAP-4B to a single platform is not
more than 1.1 kW.
The circuit breakers are installed during the RAP-4B installation. To prevent accidentally changing a
circuit breaker state, the circuit breakers can be reached only after removing the RAP-4B front cover.
The circuit breaker state (ON/OFF) can be seen through translucent covers.
Bay alarm indications: The RAP-4B includes three alarm indicators, one for each alarm severity. When
alarms of different severities are received simultaneously, the different alarm indications light
simultaneously.
NOTE: BG platforms support only two alarm indications, Major and Minor.
A buzzer is activated whenever a Major or Critical alarm is present in an XDM platform or a Major
alarm in a BG or 9600 series platform connected to the RAP-4B.
Connection of alarms from up to four platforms, with max. four alarm inputs and two alarm outputs.
The following figure shows the front panel of the RAP-4B, and the table lists the functions of the front panel
components corresponding to the figure callout numbers.
Figure 17-1: RAP-4B front panel
The RAP-4B alarm connectors are on its circuit board, as shown in the following figure. The table lists the
connector functions. The index numbers in the table correspond to those in the figure.
Figure 17-2: RAP-4B alarm connectors
17.2 RAP-BG
The RAP-BG is a DC power distribution panel for BG and other telecommunication platforms installed in
racks. It distributes power for up to four NPT series platforms installed on the same rack. The nominal DC
power voltage is -48 VDC, -60 VDC, or 24 VDC. Since NPT series platforms can use redundant power
sources, the RAP-BG supports connection to two separate DC power circuits.
Each DC power circuit of each platform is protected by a circuit breaker, which also serves as a power
ON/OFF switch for the corresponding circuit. The required circuit breakers are included in the installation
parts kit supplied with the NPT series platforms, and therefore their current rating is in accordance with the
order requirements. The maximum current that can be supplied to a platform fed from the RAP-BG is 16A.
The circuit breakers are installed during the RAP-BG installation. To prevent accidental changing of a circuit
breaker state, the circuit breakers can be reached only after opening the front cover of the RAP-BG. The
circuit breaker state (ON or OFF) can be seen through translucent covers.
The following figure shows the front panel of the RAP-BG, and the table lists the functions of the front
panel components as indicated by the figure callouts.
Figure 17-3: RAP-BG front panel
17.3 xRAP-100
The xRAP-100 is a power distribution and alarm panel for different ECI communication platforms installed
in racks. The xRAP-100 performs the following main functions:
Power distribution for up to four platforms: The nominal DC power voltage is -48 VDC or -60 VDC.
Since most ECI platforms can use redundant power sources, the xRAP-100 supports connection to two
separate DC power circuits. The internal circuits of the xRAP-100 are powered whenever at least one
power source is connected. The presence of DC power within the xRAP-100 is indicated by a POWER
ON indicator.
Each DC power circuit of each platform is protected by a circuit breaker, which also serves as a power
ON/OFF switch for the corresponding circuit. The required circuit breakers are included in the
installation parts kit supplied with the platforms, and therefore their current rating is in accordance
with the order requirements. The 5-pin high power connector supplies power to one platform. The 3-
pin connector supplies power to three platforms.
The xRAP-100 is designed to support one high powered and three regular platforms, or four regular
platforms.
The circuit breakers are installed during the xRAP-100 installation. To prevent accidental changing of a
circuit breaker state, the circuit breakers can be reached only after opening the front cover of the
xRAP-100. The circuit breaker state (ON or OFF) can be seen through translucent covers.
Bay alarm indications: The xRAP-100 includes four alarm indicators, one for each alarm severity.
When alarms of different severities are received simultaneously, the different alarm indications light
simultaneously.
A buzzer is activated whenever a Major or Critical alarm is present in the platforms installed in the
rack.
Connection of alarms from up to four platforms, each one with a maximum of four alarm inputs and
two alarm outputs.
The following figure shows the front panel of the xRAP-100, and the table lists the functions of the front
panel components as indicated by the figure callouts.
The xRAP-100 connectors are on its circuit board, as shown in the following figure. The table lists the
connector functions. The index numbers in the table correspond to those in the figure.
Figure 17-5: xRAP-100 connectors
17.4 PDU99
The PDU99 is a power distribution unit installed in the racks for loads (platforms) that consume up to 9.6
kW.
Figure 17-6: PDU99 general view
IMPORTANT: When connecting the input power cable to the PDU99 you must use a
shrinkable tube to isolate the neck of the ring terminal.
Each input source is monitored by a green LED. The two LEDs, located at the middle of the main
board, enable the user to view the status of each input source.
The nominal DC power voltage is -48 VDC; however, the input voltage can range from -40.5 VDC to -72 VDC. The internal circuits of the PDU99 are powered whenever at least one power source is connected.
The required circuit breakers are included in the installation parts kit supplied with the platforms, and
therefore their current rating is in accordance with the order requirements.
Alarm indications. The PDU99 includes alarm indication of a tripped circuit breaker by a Red LED and
a dry contact per source. The Red LED and the corresponding relay will be activated only if there is a
load connected to the relevant circuit breakers.
The following figure shows the front panel of the PDU99, and the table lists the functions of the front panel
components corresponding to the figure callouts. Front panel components are accessed by lifting the
PDU99 front cover.
NOTE: The description in the table below refers to left side (source A) components of the
PDU99. The right side (source B) is a mirror image of the left side and the description is
therefore identical.
The AC/DC-DPS850-48-3 system has connectors for connecting the load, batteries, AC source, and the
system alarms, at the rear of the unit. It also includes circuit breakers that protect the power supply against
load overcurrent at the battery and rectifier outputs.
The CSU-502 module provides control and monitoring functions. It is supplied preconfigured, ready for
immediate use. System voltage, load current, status, and alarm settings can be displayed and changed on the LCD display.
The AC/DC-DPS850-48-3 platform is preconfigured for fast installation and setup. All system settings are
fully software-configured and stored in transferable configuration files for repeated one-step system setup.
The AC/DC-DPS850-48-3 platform is supplied with two kits of brackets for installation in 19" or ETSI racks.
The following main features are supported by the AC/DC-DPS850-48-3 power system:
19"/ETSI power platform for 48 VDC @ 2250 W (max.) in non-redundant application
Single phase 100-240 VAC input source
Three DPR-850B-48 rectifier units
Light weight plug-in modules for simple installation and maintenance
Hot swappable rectifier and control modules
Front access to the circuit breakers and control module for simplified operation and maintenance
To aid understanding, enlarged views of sections of the rear panel are provided in the following figures.
The AC/DC-DPS850-48-3 detailed battery and load connections are shown in the following figure.
The AC/DC-DPS850-48-3 AC source and alarm connections are shown in the following figure.
The following table describes the AC/DC-DPS850-48-3 rear panel component functions (connections).
17.6 ICP_MCP30
Due to limited space on the MCP30 or MCP1200 panel, there is a single connector on the front panel for the following auxiliary interfaces: External Alarms, RS-232, OW, and V.11. The ICP_MCP30 is configured to distribute the concentrated Auxiliary connector into dedicated connectors for each function. If none of these interfaces is used in your application, there is no need to install the ICP_MCP30. If only an External Alarms interface is used, there is also no need to install the ICP_MCP30, since ECI provides a special alarm cable leading only to the External Alarms interface.
The ICP_MCP30 is connected to the MCP30 or MCP64 using a back-to-back cable.
Figure 17-12: ICP_MCP30 general view
J3 (SCSI-36): Connector for the special cable connecting the ICP to the applied SM_10E module.
CH1-CH4 (RJ-45):
  SM_FXO_8E: FXO interface, channels #1 to #4
  SM_FXS_8E: FXS interface, channels #1 to #4
  SM_EM_24W6E: 2/4-wire E&M interface, channels #1 to #4
  SM_CODIR_4E: Codirectional 64 Kbps interface, channels #1 to #4
CH5-CH6 (RJ-45):
  SM_FXO_8E: FXO interface, channels #5 to #6
  SM_FXS_8E: FXS interface, channels #5 to #6
  SM_EM_24W6E: 2/4-wire E&M interface, channels #5 to #6
  SM_CODIR_4E: Empty
CH7-CH8 (RJ-45):
  SM_FXO_8E: FXO interface, channels #7 to #8
  SM_FXS_8E: FXS interface, channels #7 to #8
  SM_EM_24W6E: Empty
  SM_CODIR_4E: Empty
NOTE: ICP-V24F supports connection to V.24 interfaces with standard female connectors.
17.8 AC_CONV_UNIT
The AC_CONV_UNIT is an AC power platform that can be mounted separately in the rack. It performs the
following functions:
Converts AC power to DC power
Filters input for the NPT-1600CB platform
Provides backup for AC power
17.9 AC_CONV_MODULE
The AC_CONV_MODULE is an AC power module that can be plugged into the AC_CONV_ UNIT. It performs
the following functions:
Converts AC power to DC power for the NPT-1600CB only
Filters input for the NPT-1600CB platform
Provides up to 130 W of power
Figure 17-26: AC_CONV_MODULE front panel
All fiber connections are made on a swing-out tray that opens to the right at 90° and houses the splicing
trays, optical adapter panels, and the fiber support. Left-side tray opening is available on request. The
swing-out tray enables quick and easy access to all internal parts for connection or maintenance activities.
The fiber connections are protected by a front cover, which latches to the assembly and prevents
unintended disconnection of fibers.
Figure 17-29: ODF front panel
Optical terminal fibers can enter the ODF from the right or left side and be connected to the optical
adapters from one side. Pigtails connect to the adapters from the other side. Excess length of pigtails and
patch cords is threaded on a fiber support that maintains the minimum bend radius to prevent fiber breaks.
A durable and robust tube leads the external fiber cable to the swing-out tray and protects the fibers
from breaking. The adapters are arranged on panels in groups of four or two (depending on the total
number of ports). A large space between the adapters enables easy access to each individual fiber and
quick reconfiguration.
17.13 xDDF-21
The PME1_21 supports only balanced E1s directly from its connectors. For unbalanced E1s, an external DDF
with E1 balanced-to-unbalanced conversion must be configured.
When unbalanced 75Ω interfaces are required, the xDDF-21 patch panel enables connection and
conversion of these interfaces to the balanced 120Ω interfaces of the PME1_21.
The xDDF-21 is 1U high and can be installed in ETSI A and ETSI B racks, as well as in 19” racks. It has a
capacity of 21 E1 lines.
The following figure shows a general view of the patch panel. The channel numbers of the various
connectors are marked on the patch panel, and the inside of the cover contains a label for cable
identification (illustrated in the following figure). The customer’s cables are connected to the connectors
inside the patch panel, while the cable leading from the PME1_21 connector is connected to the SCSI
connectors at the rear of the xDDF-21. A special split cable is available to convert the output from the
PME1_21 to SCSI connector pairs at the back of the xDDF-21.
The xDDF-21 can be supplied with BT43, DIN1.6/5.6, or BNC connectors for connecting to the customer’s
traffic cables.
Figure 17-31: xDDF-21 patch panel for unbalanced E1 interfaces
17.15 Cables
The product line platforms are supplied with a number of cables, as described in the following table.
LightSOFT offers on-demand service provisioning, pinpoint bandwidth allocation, and dramatic reductions
in the equipment and operating costs that multiple management systems often entail. It does this by
providing complete network management from a single platform, including configuration, fault
management, performance management, administrative procedures, maintenance operations, and security
control. Within one integrated management system, LightSOFT's network manager enables you to fully
control all your NEs regardless of their manufacturer, and view the complete network at a glance. Multiple
operators can simultaneously configure the network without any conflicts.
Network provisioning, particularly in the data era, has become very complex. For example, tunnels must be
pre-provisioned with various protection schemes. Service configuration requires setting multiple
parameters for each service. LightSOFT offers a number of powerful automation tools to ease the
provisioning process, thereby saving valuable OPEX for service providers. More services can be created in
the same amount of time, which is directly reflected in revenues. Automation tools include automatic
creation of tunnels per network or per service, automatic creation of bypass tunnels, reusable templates,
automatic configuration of the tunnels needed for mesh topologies, and more.
LightSOFT's comprehensive, E2E perspective supports comprehensive definition of MPLS tunnels, Ethernet
services, and SDH and optical trails, for primary and protection paths. LightSOFT supports all types of trails
and links (MoT, MoE, EoS, ETY, SDH, optical), protection schemes, and user constraints. Simply point and
click to connect any two endpoints, even in the most complex topology. LightSOFT provides powerful trail
reconstruction options to reconcile discrepancies between different layers, as well as batch traffic
management capabilities. LightSOFT provides smoothly integrated management for packet, optical, and
MSPP-based platforms.
LightSOFT functions at the NML while EMS-NPT functions at the EML. A northbound interface can connect
either EMS-NPT or LightSOFT to your Operations Support System (OSS).
At the NEL, the Neptune features the local craft terminal (LCT) system, providing fast, easy connectivity to
the NE and enabling access to configuration and management functions through a user-friendly GUI as well
as an efficient CLI.
LightSOFT offers the important advantage of resource optimization, since LightSOFT’s sophisticated
pathfinding algorithm can be programmed to account for criteria such as shared risk link group (SRLG), link
cost, length, and minimum hops. In this way, the operator benefits from a built-in optimization tool that
eliminates the need for cumbersome offline planning and optimization tools.
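As a rough illustration of this kind of constraint-aware path computation, the following sketch runs a shortest-path search over link costs while excluding links whose SRLGs overlap those already used by a primary path. All node, link, and SRLG names are hypothetical, and this is not LightSOFT's actual pathfinding code; it only illustrates the criteria discussed above.

import heapq

# Illustrative sketch only (not LightSOFT's algorithm): shortest path by link cost,
# skipping links whose SRLGs overlap those traversed by the primary path.
def find_path(links, src, dst, excluded_srlgs=frozenset()):
    # links: iterable of (node_a, node_b, cost, srlg_set)
    graph = {}
    for a, b, cost, srlgs in links:
        if srlgs & excluded_srlgs:
            continue  # enforce SRLG diversity against the primary path
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))
    queue = [(0, src, [src])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, link_cost in graph.get(node, []):
            if nbr not in visited:
                heapq.heappush(queue, (cost + link_cost, nbr, path + [nbr]))
    return None  # no SRLG-diverse path exists

# Hypothetical topology: primary path A-B-C; protection is forced onto A-D-C.
links = [("A", "B", 10, {"srlg-1"}), ("B", "C", 10, {"srlg-2"}),
         ("A", "D", 15, {"srlg-3"}), ("D", "C", 15, {"srlg-4"})]
primary = find_path(links, "A", "C")                           # (20, ['A', 'B', 'C'])
protection = find_path(links, "A", "C", {"srlg-1", "srlg-2"})  # (30, ['A', 'D', 'C'])

Minimizing hop count or length instead is just a matter of changing the link weights, which is essentially what a policy-driven pathfinder does with the criteria listed above.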
LightSOFT’s northbound interface (NBI) was developed according to MTNM (also known as TMF-814), the
leading industry standard, which is based on CORBA. It is rich in functionality and allows LightSOFT to be
integrated under any OSS for alarm reporting and for equipment and service inventory retrieval. The single point of
integration and standards-based approach means that any new equipment or version deployed under
LightSOFT does not require additional integration efforts.
LightSOFT offers easy fault management as well. A key element in network availability is identifying the
source of a problem affecting the network. While automatic protection and restoration mechanisms usually
get the service up and running again quickly, it is essential to identify and correct the cause of the failure to
prevent additional problems in the future. As networks become more complex, the risk of making
configuration mistakes increases. LightSOFT allows network operators to enjoy full control of the network,
while simultaneously providing tools for preventing mistakes. LightSOFT also features advanced tools for
detecting inconsistencies in the network between NEs, EMSs, and the NMS. Services which are altered at
the NE or EMS level for any reason are instantly identified, classified according to their degree of
non-conformance, and clearly displayed. Without these tools, it would take many hours of detective work
to detect and correct such inconsistencies.
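The following fragment is a conceptual sketch of such a consistency check: the service attributes expected by the NMS are compared with those actually read back from the NE or EMS, and any discrepancy is classified by severity. The attribute names and severity labels are hypothetical and are not the actual LightSOFT classification.

# Conceptual sketch only: classify a discrepancy between the NMS view of a service
# and the view uploaded from the NE/EMS. Attribute names and severity labels are
# hypothetical illustrations, not the real LightSOFT categories.
def classify_service(nms_view, ne_view):
    if ne_view is None:
        return "missing on NE"             # provisioned in the NMS but absent on the NE
    diffs = {k for k in nms_view if ne_view.get(k) != nms_view[k]}
    if not diffs:
        return "consistent"
    if diffs & {"endpoints", "tunnel_id"}:
        return "major non-conformance"     # traffic-affecting attributes differ
    return "minor non-conformance"         # e.g., only description or CoS differs

print(classify_service({"endpoints": ("NE1", "NE2"), "cir_mbps": 100},
                       {"endpoints": ("NE1", "NE2"), "cir_mbps": 50}))  # minor non-conformance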
Figure 18-3: Inconsistent Services Notification
18.3 EMS-NPT
The EMS-NPT provides full-feature support for Neptune and BroadGate platforms. It functions at the
element management layer (EML) of our TMN-based network management scheme. EMS-NPT has been
designed as an open system in compliance with the CORBA MTNM standard, allowing it to be integrated
smoothly and operate under a third-party NMS TMN umbrella system.
Element management in EMS-NPT provides a state-of-the-art GUI with a superior user experience, and is
carrier grade in both functionality and scalability.
EMS-NPT is based on Java together with a relational database, allowing it to run on multiple platforms (e.g.,
Microsoft Windows, SUN™ Solaris, VMWare Virtualized Servers) and support multiple operators
concurrently.
Its robust architecture enables the EMS to support hundreds of NEs at a time, letting customers
expand their network without installing additional EMSs.
The EMS-NPT supports all the FCAPS categories, providing the complete set of management functions
(Fault, Configuration, Accounting/Administration, Performance, and Security).
Figure 18-6: EMS-NPT main topology view
EMS-NPT applications provide a complete set of FE utilities (ping ne, logout user, change neid, etc.) to help
field engineers monitor and troubleshoot basic network operations. For more information, see the EMS-
NPT User Manual.
The EMS-NPT supports two modes: integrated and standalone. In integrated mode, the EMS-NPT provides
full EML functions and is integrated with LightSOFT, which provides NML functionality. This mode is
suitable for large networks and those containing products from multiple vendors in addition to Neptune
NEs. In standalone mode, the
EMS-NPT provides full EML functionality and several NML functions, including network topology
management and E2E trail management. Standalone mode is a very low-cost solution for small networks
containing only Neptune NEs.
Figure 18-7: View performance history
EMS-NPT features
The EMS-NPT supports an extended set of sophisticated features, including:
Network configuration and management
Software downloads and upgrades
Service configuration and management
User-friendly GUI simplifies card and sub-card slot assignments
Moving cards between slots without deleting existing traffic and configuration
Performance monitoring and fault management
Hourly exports of historical PM counters, in XML and CSV format
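Because the historical PM counters are exported as plain CSV (or XML) files, they can be post-processed with ordinary scripts. The snippet below is a minimal sketch that assumes a hypothetical CSV layout with ne_name, port, counter, and value columns; the actual export schema is defined by EMS-NPT and may differ.

import csv

# Minimal sketch, assuming hypothetical columns "ne_name", "port", "counter", "value".
def worst_ports(csv_path, counter="ES"):
    """Return (value, ne_name, port) tuples for one PM counter, worst first."""
    rows = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["counter"] == counter:
                rows.append((int(row["value"]), row["ne_name"], row["port"]))
    return sorted(rows, reverse=True)

# Example: print the ten ports with the most Errored Seconds from one hourly export.
# for value, ne, port in worst_ports("pm_export_example.csv")[:10]:
#     print(ne, port, value)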
18.3.1 NE simulator
EMS-NPT applications installed on Windows systems offer a useful NE Simulator that simulates the
behavior of a real NE under current network conditions. NE behavior can also be simulated under different
circumstances.
Figure 18-8: NE Simulator
18.3.2 Auto-discovery
The EMS-NPT supports the following auto-discovery capabilities:
Automatic card assignment
Automatic NE recognition
Automatic topology link discovery
Card assignment can be performed either manually or automatically. In automatic mode, cards and
modules inserted into managed NEs in the field are automatically recognized by the EMS-NPT and assigned
as a background task according to user-defined tables. This feature can also be applied manually to selected
NEs. The end result is the same: you no longer need to assign each card or module, since physical insertion
automatically triggers this action.
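A simplified way to picture such a user-defined assignment table is sketched below. The card types, slot handling, and default configurations are hypothetical and serve only to illustrate the idea of mapping a detected card type to an automatic assignment action.

# Illustrative sketch of a user-defined assignment table: detected card type -> action.
# Card names and default configurations are hypothetical.
ASSIGNMENT_TABLE = {
    "PME1_21":   {"auto_assign": True, "default_config": "e1-balanced"},
    "SM_FXS_8E": {"auto_assign": True, "default_config": "fxs-default"},
}

def on_card_inserted(ne_id, slot, card_type):
    rule = ASSIGNMENT_TABLE.get(card_type)
    if rule and rule["auto_assign"]:
        return f"{ne_id}: slot {slot} assigned as {card_type} ({rule['default_config']})"
    return f"{ne_id}: slot {slot} left unassigned; operator action required"

print(on_card_inserted("NE-17", 4, "PME1_21"))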
With automatic NE recognition, each NE appears on the screen, eliminating the need to create it manually.
New NEs are automatically transferred to LightSOFT or any other NMS via the CORBA interface.
The automatic topology discovery feature is based on a new implementation of the J0 byte. When
activated, SIO-to-SIO or OSC bidirectional links (in SDH networks) are automatically identified by the EMS-
NPT and uploaded to the NMS layer via the MTNM interface. LightSOFT automatically displays such links
when managing EMS-NPT, eliminating the need to manually define topology links at the NMS level. In
addition, an EMS-level list of links is provided for viewing and deleting automatically created links.
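Conceptually, the discovery correlates the J0 section trace transmitted on one port with the trace received on another; when both directions match, the two ports are reported as a bidirectional link. The sketch below illustrates the matching idea only; the port records and field names are hypothetical, and the real mechanism is internal to EMS-NPT.

# Conceptual sketch: infer bidirectional links from matching J0 traces.
# Port records and field names are hypothetical.
def discover_links(ports):
    by_id = {p["id"]: p for p in ports}
    by_tx = {p["tx_j0"]: p["id"] for p in ports}
    links = set()
    for p in ports:
        peer_id = by_tx.get(p["rx_j0"])
        if peer_id and peer_id != p["id"]:
            peer = by_id[peer_id]
            if peer["rx_j0"] == p["tx_j0"]:    # both directions match
                links.add(tuple(sorted((p["id"], peer_id))))
    return sorted(links)

ports = [{"id": "NE1/SIO-1", "tx_j0": "NE1-S1", "rx_j0": "NE2-S3"},
         {"id": "NE2/SIO-3", "tx_j0": "NE2-S3", "rx_j0": "NE1-S1"}]
print(discover_links(ports))   # [('NE1/SIO-1', 'NE2/SIO-3')]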
18.4.1 LCT-NPT
The LCT-NPT is the PC-based platform field tool. The easy-to-use GUI provides direct connection to
deployed NEs using an Ethernet interface.
The LCT-NPT supports all site functionalities: installation, NE commissioning (including slot assignment, IP
routing, and DCC ports configuration), port and XC provisioning, and troubleshooting. The LCT-NPT also
supports alarm and event management, inventory, PM, security management, system administration, and
log management.
The system provides you with a clear view and control of NE internals, cards and objects, status, and
configuration. Access from the LCT is password-protected. The intuitive Java-based interface is simple to
use and runs on Windows platforms.
The LCT-NPT and EMS-NPT utilize the same GUI, offering the same 'look and feel'.
Figure 18-9: Platform view as displayed in the LCT-NPT
EN 60870-2-2 (1996): Telecontrol equipment and systems – Part 2: Operating conditions – Section 2:
Environmental conditions (3k6).
EN 60950-1: Information technology equipment – Safety – Part 1: General requirements
EN 61000-4-2:1995 +A1:98+A2:2001 Electrostatic Discharge (ESD) Immunity test
EN 61000-4-3:2008 Electromagnetic compatibility (EMC), Section 3: Radiated, radio frequency,
electromagnetic field immunity IEC test
EN 61000-4-4: 2008 Electromagnetic compatibility (EMC), Section 4: Electrical fast transient/burst
immunity test
EN 61000-4-5: 2006 Electromagnetic compatibility (EMC), Section 5: Surge immunity test
EN 61000-4-6: 2007 Electromagnetic compatibility (EMC), Section 6: Immunity to conducted
disturbances, induced by radio-frequency fields
EN 61000-6-2: Electromagnetic compatibility (EMC) - Part 6-2: Generic standards - Immunity for
industrial environments
EN 61000-6-4: Electromagnetic compatibility (EMC) - Part 6-4: Generic standards - Emission standard
for industrial environments
EN 61000-6-5 (2001) Generic standards – Immunity for power station and substation environments
EN 61850-3 (2002) Communication network and systems in substations – Part 3: General
requirements
ETR 114: Functional Architecture of SDH Transport Networks.
ETR 275: Considerations on Transmission Delay and Transmission Delay value for components on
connections supporting speech communication over evolving digital networks.
FTZ 1TR9: Deutsche Telekom A.G. EMC Requirements.
FTZ 153 TL 1, Part 1: Synchronous Multiplexing Equipment (SM) for Synchronous Multiplex Hierarchy.
RFC 1332: The PPP Internet Protocol Control Protocol (IPCP), May 1992
RFC 1493: Definition of Managed Objects for Bridges.
RFC 1542: Clarifications and Extensions for the Bootstrap Protocol
RFC 1570: PPP LCP Extensions, W. Simpson, January 1994
RFC 1643: Definitions of Managed Objects for the Ethernet-like Interface Types.
RFC 1661: The Point-to-Point Protocol (PPP), July 1994
RFC 1662: PPP in HDLC-like framing, July 1994
RFC 1757: Remote Network Monitoring Management Information Base.
RFC 1812: Requirements for IP version 4 Routers, June 1995
RFC 1823: LDAP Application Program Interface (API).
RFC 1850: OSPF Version 2 Management Information Base, F. Baker and R. Coltun, November 1995
RFC 1901: Introduction to Community-based SNMPv2.
RFC 1902: Structure of Management Information for Version 2 of SNMPv2
RFC 1903: Textual Conventions for Version 2 of SNMPv2
RFC 1904: Conformance Statements for Version 2 of SNMPv2
RFC 1905: Protocol Operations for Version 2 of SNMPv2
RFC 1907: Management Information Base (MIB) for SNMPv2
RFC 1908: Coexistence between Version 1 and Version 2 of the Internet-standard Network Management Framework
RFC 2058: RADIUS Authentication and Authorization
RFC 2108: Definitions of Managed Objects for IEEE 802.3 Repeater Devices using SMIv2.
RFC 2131: Dynamic Host Configuration Protocol
RFC 2132: DHCP Options and BOOTP Vendor Extensions
RFC 2138: Remote Authentication Dial In User Service (RADIUS)
RFC 2212: Specification of guaranteed quality of service
RFC 2236: Internet Group Management Protocol, Version 2
RFC 2251: Lightweight Directory Access Protocol (v3) [specification of the LDAP on-the-wire protocol].
RFC 2252: Lightweight Directory Access Protocol (v3): Attribute Syntax Definitions.
RFC 2253: Lightweight Directory Access Protocol (v3): UTF-8 String Representation of Distinguished
Names.
RFC 2254: The String Representation of LDAP Search Filters.
RFC 2255: The LDAP URL Format.
RFC 2256: A Summary of the X.500(96) User Schema for use with LDAPv3.
RFC 2328: OSPF Version 2, J. Moy, April 1998
RFC 2401: Security Architecture for the Internet Protocol.
RFC 2409: Internet Key Exchange Protocol (IKE).