DN 0528036
The information in this document applies solely to the hardware/software product (“Product”) specified
herein, and only as specified herein.
This document is intended for use by Nokia Solutions and Networks' customers (“You”) only, and it may not
be used except for the purposes defined in the agreement between You and Nokia Solutions and Networks
(“Agreement”) under which this document is distributed. No part of this document may be used, copied,
reproduced, modified or transmitted in any form or means without the prior written permission of Nokia
Solutions and Networks. If you have not entered into an Agreement applicable to the Product, or if that
Agreement has expired or has been terminated, You may not use this document in any manner and You
are obliged to return it to Nokia Solutions and Networks and destroy or delete any copies thereof.
The document has been prepared to be used by professional and properly trained personnel, and You
assume full responsibility when using it. Nokia Solutions and Networks welcomes Your comments as part of
the process of continuous development and improvement of the documentation.
This document and its contents are provided as a convenience to You. Any information or statements
concerning the suitability, capacity, fitness for purpose or performance of the Product are given solely on
an “as is” and “as available” basis in this document, and Nokia Solutions and Networks reserves the right
to change any such information and statements without notice. Nokia Solutions and Networks has made all
reasonable efforts to ensure that the content of this document is adequate and free of material errors and
omissions, and Nokia Solutions and Networks will correct errors that You identify in this document. But,
Nokia Solutions and Networks' total liability for any errors in the document is strictly limited to the correction
of such error(s). Nokia Solutions and Networks does not warrant that the use of the software in the Product
will be uninterrupted or error-free.
NO WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
ANY WARRANTY OF AVAILABILITY, ACCURACY, RELIABILITY, TITLE, NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, IS MADE IN RELATION TO THE
CONTENT OF THIS DOCUMENT. IN NO EVENT WILL NOKIA SOLUTIONS AND NETWORKS BE
LIABLE FOR ANY DAMAGES, INCLUDING BUT NOT LIMITED TO SPECIAL, DIRECT, INDIRECT,
INCIDENTAL OR CONSEQUENTIAL OR ANY LOSSES, SUCH AS BUT NOT LIMITED TO LOSS OF
PROFIT, REVENUE, BUSINESS INTERRUPTION, BUSINESS OPPORTUNITY OR DATA THAT MAY
ARISE FROM THE USE OF THIS DOCUMENT OR THE INFORMATION IN IT, EVEN IN THE CASE OF
ERRORS IN OR OMISSIONS FROM THIS DOCUMENT OR ITS CONTENT.
This document is Nokia Solutions and Networks’ proprietary and confidential information, which may not be
distributed or disclosed to any third parties without the prior written consent of Nokia Solutions and
Networks.
Nokia is a registered trademark of Nokia Corporation. Other product names mentioned in this document
may be trademarks of their respective owners, and they are mentioned for identification purposes only.
Only trained and qualified personnel may install, operate, maintain or otherwise handle this
product and only after having carefully read the safety information applicable to this product.
The safety information is provided in the Safety Information section in the “Legal, Safety and
Environmental Information” part of this document or documentation set.
Nokia Solutions and Networks is continually striving to reduce the adverse environmental effects of its
products and services. We would like to encourage you as our customers and users to join us in working
towards a cleaner, safer environment. Please recycle product packaging and follow the recommendations
for power use and proper disposal of our products and their components.
If you have questions regarding our Environmental Policy or any of the environmental services we offer,
please contact Nokia Solutions and Networks for additional information.
Table of Contents
This document has 234 pages
4 Migrating AoIP..............................................................................13
List of Figures
Figure 1 Changing Iu interconnection from ATM to IP - before migration.........10
Figure 2 Changing Iu interconnection from ATM to IP - after migration............ 11
Figure 3 SCTP configuration for Iu over IP.......................................................12
Figure 4 Example of a network without pool areas (non-pooled network)........20
Figure 5 Possible pool area configurations (Multipoint Iu)................................21
Figure 6 Possible pool area configurations (Multipoint A)................................ 21
Figure 7 Creating a pool when RAN independent multipoint A/Iu support in MGW is used–primary MSS........ 27
Figure 8 IP CAC at IP-based route level configuration with UPDs in the MSS..... 32
Figure 9 IPnwR usage...................................................................................... 33
Figure 10 L2 connectivity....................................................................................37
Figure 11 Step 1: Migration for SWU-2...............................................................37
Figure 12 Step 2: Preparations for migrating SWU1.......................................... 39
Figure 13 Step 3: Migrating SWU-1....................................................................40
Figure 14 L3 migration is complete.....................................................................41
Figure 15 Before migration................................................................................. 42
Figure 16 Step 1: Migrating SWU2..................................................................... 42
Figure 17 Step 2: Migrating SWU1..................................................................... 44
Figure 18 L2 to L3 migration is complete for the Control LAN............................45
Figure 19 Step 1: Migrating SWU 2.................................................................... 45
Figure 20 Step 2: Making the interface SWO to OMU......................... 47
Figure 21 Step 3: Migrating SWU-1....................................................................48
Figure 22 L3 migration is complete.....................................................................49
Figure 23 Before migration................................................................................. 49
Figure 24 Step 1: L2 to L3 migration, user plane via own SWU pair.................. 50
Figure 25 Step 2: Making unit switchover (SWO) for user plane traffic.............. 51
Figure 26 Step 3: Migrating SWU1..................................................................... 52
Figure 27 Migration complete............................................................................. 53
Figure 28 Before migration................................................................................. 54
Figure 29 Step 1: Preparations for migrating SWU1.......................................... 54
Figure 30 Step 2: L3 configuration for SWU1..................................................... 55
Figure 31 Step 3: Recovering O&M traffic via SWU1......................... 56
Figure 32 Step 4: L3 configuration for SWU0..................................................... 57
Figure 33 Step 5: Normalizing O&M traffic......................................................... 59
Figure 34 L2 to L3 migration, for the Billing LAN................................................ 61
Figure 35 L3 migration for Billing LAN is completed...........................................65
Figure 36 Step 1: L2 connectivity in use, preparations for SWU40 and SWU41.... 66
List of Tables
Table 1 Changes in Feature Migration for MSS/VLR........................................ 8
Table 2 Error text - clear core mapping........................................................... 34
Table 3 Internal LANs and IPv4 subnets in the example.................................80
Table 4 External LANs and IPv4 subnets .......................................................80
Table 5 MSS internal LANs and IPv4 subnets in the example........................ 89
Table 6 MSS external LANs and IPv4 subnets in the example....................... 90
Table 7 MGW IPv4 subnets.............................................................................90
Table 8 L3 AHUB subnets in the example prior to the migration...................101
Table 9 Internal LANs and IPv4 subnets in the example...............................101
Table 10 External LANs and IPv4 subnets .....................................................103
Table 11 Internal LANs and IPv4 subnets in the example (H.248 LB)............ 121
Table 12 External LANs and IPv4 subnets (H.248LB).................................... 122
Table 13 MGW IPv4 subnets...........................................................................123
Table 14 Internal LANs and IPv4 subnets in the example...............................156
Table 15 External LANs and IPv4 subnets..................................... 158
Table 16 Internal LANs and IPv4 subnets in the example - multi-homing.......187
Table 17 Internal LANs and IPv4 subnets in the example - single-homing..... 189
Table 18 External LANs and IPv4 subnets - multi-homing.............................. 190
Table 19 External LANs and IPv4 subnets - single-homing............................ 190
The RNC used in the MML example procedures is a Nokia WCDMA RNC (RU-release).
Other vendors' network elements can also be used as long as the required SW features
are supported.
• The network element upgrades have been done. See the network element product
documentation for the required SW and HW upgrades.
• MGW has been created and connections (M3UA, H.248) have been established, that
is, MGW has been integrated properly and registered successfully to MSS.
• Iu-over-ATM configuration exists and is functioning.
• The necessary SW licences have been acquired. The IP-based Iu-CS interface is
optional and supported by the MGW Iu IP 3G port capacity and MGW Iu ATM IP 3G
port type capacity licences. Check with the RNC vendor whether a licence is required.
In the Multipoint Iu environment, one RNC can be connected to different MSC Servers
using either ATM or IP. However, check with the RNC vendor whether the RNC supports
this kind of configuration.
See also chapter Planning Iu over IP in MSS System Network Planning.
Summary
The figures below illustrate the process of changing Iu interconnection from ATM to IP.
Figure 1 Changing Iu interconnection from ATM to IP - before migration
Figure 2 Changing Iu interconnection from ATM to IP - after migration
The following steps give an overview of the procedure. For detailed instructions, see
the network element configuration instructions in the network element product
documentation libraries.
Procedure
• Plan the replacement of ATM units by IP units if necessary, and re-dimension if
necessary.
• Install the MGW Iu IP 3G port capacity licence.
• Check the current IP user plane capacity and add new capacity as needed.
• After installing the MGW Iu IP 3G port capacity licence and adding IP user plane
capacity in the MGW, the corresponding procedures can be carried out in the MSC
Server and the RNC.
When Iu over IP is working:
• Remove the ALCAP configuration from the MGW.
• Remove the RANAP link towards RNC and MSS.
• Remove the obsolete ATM resources.
• Plan and create the User Plane Destinations (UPD) in MSS. See chapter
Planning of control and user plane analyses in MSS System Network Planning.
• Plan and create the SCTP associations.
The recommendation is to create at least two SCTP associations from each
MSS to each RNC and to place them in one SCTP association set. This
arrangement provides resiliency against a computer unit failure; an illustrative
layout is sketched after this procedure. See figure SCTP configuration for Iu over IP below.
• Connect the RANAP signaling directly to the MSC Server using the SCTP
associations created in the previous step, instead of going via MGW:
– In the MSS, create a new signaling Link Set towards RNC.
– Attach this signaling Link Set to the same existing signaling Route Set that
already includes the link via MGW (added as signaling route).
– Allow load sharing between the direct IP route and the route via MGW with equal priority.
– Do the corresponding step also in the RNC.
– Block the RNC in the MSS. Existing calls to RNC will be released by the
MSS. This step is necessary also when the RNC is multi-homed via multiple
pMGWs.
– Change the UPDR of the RNC data to point to the new IP-based UPD. From
this point on, the user plane uses IP for the new calls.
– Unblock the RNC.
– Remove the RANAP link via MGW.
– Remove the ATM based UPD from the User Plane Analysis.
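As an illustrative sketch of the recommended SCTP arrangement (unit names are examples taken from the figure below): association 1 terminates in signaling unit ICSU-0/SIGU-0 and association 2 in ICSU-1/SIGU-1, and both associations belong to one SCTP association set towards the RNC. If the unit terminating one association fails, RANAP signaling continues over the association terminating in the other unit.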
Expected outcome
The figure below shows what the final configuration should look like. The RNC in this
example is a Nokia WCDMA RNC (RU release):
Figure 3 SCTP configuration for Iu over IP (ICSU and SIGU denote signaling units: ICSU-0/ICSU-1, SIGU-0/SIGU-1)
4 Migrating AoIP
Purpose
MSS System supports A-over-IP (AoIP) functionality when the transcoder is in the MGW.
These instructions apply to scenarios without multipoint, with RAN independent
multipoint, and with RAN dependent multipoint.
Earlier, in the A interface between BSC and MGW, control plane and user plane
connections were handled over TDM. With the A-over-IP (AoIP) support it is possible
to connect the GSM RAN to the core network with IP connectivity. In other words, both the
user plane and the control plane for A are transported over IP. BSSAP signaling can be
routed directly between the BSC and the MSS instead of going through the MGW. A direct
signaling link saves MGW resources. In both TDM- and IP-based A interfaces, user
plane traffic goes between the BSC and the MGW.
• The network element upgrades have been done. See the network element’s product
documentation for the required SW and HW upgrades.
• MGW has been created and connections (M3UA, H.248) have been established, that
is, MGW has been integrated properly and registered successfully to MSS.
• A or Ater interface configuration exists and is functioning.
• The necessary SW licenses have been acquired.
AoIP is an optional feature supported by the MSS AoIP TC in MGW
support and MGW A over IP port capacity TC in MGW licenses. Check with the
BSC vendor whether a license is required.
• The MSS Enhanced TFO and TrFO optimization and MGW TrFO features provide
additional benefits.
• Multipoint A requires its own licenses if the multipoint solution is applied.
In the Multipoint A environment, one BSC can be connected to different MSC Servers
using either TDM or IP. However, check with the BSC vendor whether the BSC supports
this kind of configuration.
See also section Planning A over IP (AoIP) in MSS System Network Planning and TFO
and TrFO Guidelines for MSS System in MSS System documentation.
Summary
The TDM transmission may be used in parallel with IP transmission. The TDM transport
may also be removed after a successful migration.
The following steps give an overview of the procedure. For detailed instructions, see
the configuration instructions in each network element's product documentation.
Some of the steps are done once per MSS; others are needed for each
migrated BSC.
When a new BSC is added to the MSS, it is not considered migration but feature
activation, as defined in the Feature 1895: A over IP (AoIP) support in MSC Server feature
activation manual available in M-release Product Documentation.
This migration procedure applies only when the BSC is carrying live TDM traffic.
Procedure
• Plan the replacement of TDM units by IP units if necessary, and re-dimension if
necessary.
• Install licenses.
• Check the current IP user plane capacity and add new capacity as needed.
a) Configure the needed MGWs as AoIP capable by activating the AOIP CAPABILITY
parameter among the PROVISIONED MGW CAPABILITIES. This step is needed
only if the optional enhanced TFO and TrFO feature is available.
ZJGG:NAME=MGWPAR0:TYPE=PROVPARAM::::::NUMBER=54:DWORDVALUE=1;
b) Define codec modification support to the MGWs by activating the CODEC
MODIFICATION CAPAB parameter among the PROVISIONED MGW
CAPABILITIES. This step is needed only if the optional enhanced TFO and TrFO
feature is available.
ZJGG:NAME=MGWPAR0:TYPE=PROVPARAM::::::NUMBER=50:DWORDVALUE=1;
It is harmless to perform this step for all existing vMGWs at once.
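For example, assuming a second vMGW named MGWPAR1 (a hypothetical name), the same two parameters would be activated with:
ZJGG:NAME=MGWPAR1:TYPE=PROVPARAM::::::NUMBER=54:DWORDVALUE=1;
ZJGG:NAME=MGWPAR1:TYPE=PROVPARAM::::::NUMBER=50:DWORDVALUE=1;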
a) Modify one BSSAP version (here version number 21) so that AoIP is supported
but IP transport is not yet used for live traffic. For this version, configure the
necessary BSCs as AoIP capable by activating the
AOIP_TC_IN_MGW_SUPPORTED_BSSAP parameter.
ZEDT:VER=21:F,64,1;
b) For this migration phase BSSAP version, configure the needed load division
between the IP and the TDM connections so that the IP load share is 0%, that is,
change the AOIP_TRANSPORT_TYPE_PERCENTAGE_BSSAP parameter.
ZEDT:VER=21:P,8,0;
c) Modify another BSSAP version (here version number 20) so that AoIP is
supported and IP transport is used for live traffic. For this version, configure the
necessary BSCs as AoIP capable by activating the
AOIP_TC_IN_MGW_SUPPORTED_BSSAP parameter.
ZEDT:VER=20:F,64,1;
d) For this AoIP capable BSSAP version, configure the needed load division
between the IP and the TDM connections. If both TDM and IP connections are
available and load division is needed between these transport types, configure
the needed load division percentage with the
AOIP_TRANSPORT_TYPE_PERCENTAGE_BSSAP parameter. Note that the
percentage indicates IP transport's share of the traffic.
ZEDT:VER=20:P,8,50;
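Once the migration has been verified, the IP share can be raised further with the same parameter; for example, to move all traffic of this BSSAP version to IP transport:
ZEDT:VER=20:P,8,100;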
7 Setting the maximum allowed codec number in the Supported Codec List via
BICC.
Increase the maximum allowed codec number for BICC signaling by modifying the
MAX_CODEC_NUMBER_BICC PRFILE parameter. The default value is 8, which
should suffice in most cases.
ZWOC:53,12,20;
9 Setting the maximum allowed codec number in the Supported Codec List via
SIP.
Increase the maximum allowed codec number for SIP signaling by modifying the
MAX_CODEC_NUMBER_SIP PRFILE parameter. The default value is 8, which
should suffice in most cases.
ZWOC:53,19,20;
Set the AOIP_ENHANCE_TRANSP_SEL PRFILE parameter:
ZWOC:53,59,2;
Value 2 of the AOIP_ENHANCE_TRANSP_SEL parameter requires the MSS Enhanced
TFO and TrFO license to be activated.
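To verify the PRFILE values set above, the parameters can be interrogated; the command below is a sketch only, assuming the standard PRFILE interrogation command of the WO command group applies here:
ZWOI:53,59;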
a) For the BSC under migration, change the BSSAP version (AoIP active but load
share 0%).
ZEDV:,NAME=BSC1:21;
b) Configure the relevant UPD to the AoIP capable BSC.
ZEDG:NAME=BSC1:NUPD=UPD1;
c) Define the GSM full rate codec mode set.
ZE9P:TYPE=FR,:OM=N,SCSADD=0&2&4&7;
d) Define the narrowband codec mode set.
ZE9P:TYPE=NB,:OM=N,SCSADD=0&2&4;
e) Define the wideband codec mode set. The wideband codec mode set can be created
when the WB AMR support license is in the ON/CONF state.
ZE9P:TYPE=WB,:SCSADD=0;
f) Attach the mode sets to the AoIP capable BSC; see also the notes after this step.
ZEDG:NAME=BSC1:FRMSET=1,HRMSET=2,WBMSET=64;
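Two related notes. First, the half-rate mode set referenced above (HRMSET=2) is assumed to have been created analogously to the other mode sets; a hypothetical sketch, assuming the E9P syntax also accepts the HR type and example modes 0 and 2:
ZE9P:TYPE=HR,:OM=N,SCSADD=0&2;
Second, once AoIP operation has been verified with the migration phase version, the BSC can be moved to the AoIP capable BSSAP version defined earlier (here version 20), reusing the EDV command from sub-step a):
ZEDV:,NAME=BSC1:20;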
• Set up the MTP and SCCP configuration towards the target MSS.
• The states of MTP and SCCP subsystems should be active.
Routing configuration:
• Circuit group(s) and route(s) must exist between the BSC and MSSs.
• Note that the route number(s) for the transferred BSC must be the same as the route
numbers used in the source MSS; otherwise, the transferred routes have to be
deleted and re-created with the correct route numbers.
Summary
MSS System supports the Multipoint Iu and A features.
This feature overcomes the strict hierarchy which restricts the connection of a RAN node
to just one CN node (core network node, that is, MGW and MSC Server). This restriction
results from routing mechanisms in the RAN nodes which differentiate only between
information to be sent to the CN nodes, and which do not differentiate between multiple
CN nodes in each domain. The Multipoint Iu/A feature introduces a routing mechanism (and
other related functionality) which enables the RAN nodes to route information to different
CN nodes.
This feature further introduces the concept of pool areas, which is enabled by the routing
mechanism in the RAN nodes. A pool area is comparable to an MSS service area in that it is
a collection of one or more RAN node service areas. Unlike an MSS service area, however, a
pool area is served by multiple CN nodes in parallel, which share the traffic of this area
among themselves. Furthermore, pool areas may overlap, which is not possible for
service areas.
The Multipoint Iu/A features enable a few different application scenarios, each with certain
characteristics. Service provision by multiple CN nodes within a pool area enlarges
the served area compared to the service area of one CN node. Configuring
overlapping pool areas allows the overall traffic to be separated according to different MS
movement patterns, for example, pool areas that each cover a separate residential area
while all of them cover the same city centre. Other advantages of multiple CN nodes in a pool
area include the possibility of capacity upgrades by adding CN nodes to the pool area, and
increased service availability, as other CN nodes may provide services if one
CN node in the pool area fails.
From the migration point of view, RNC interconnection is performed as usual. Then
Multipoint Iu is handled as a new feature integration.
An MS is served by one dedicated CN node of a pool area as long as it is in the radio
coverage of the pool area. The figure below shows most of the possible pool area
configurations. It contains CS pool area 1 (RAN areas 1, 2, 5, 6 served by MSSs 1, 2, 3)
and CS pool area 2 (RAN areas 2, 3, 6, 7 served by MSSs 4, 5, 6). In addition, MSS 7
serves RAN areas 4 and 8 without any usage of the Multipoint Iu/A feature. The
possibility to configure overlapping pool areas is shown by CS pool areas 1 and 2.
The pool areas of the CS domain may be configured in an identical way to CS pool area
2, or they may be configured differently, as shown by CS pool area 1. The number and
capacity of CN nodes is configured independently for each pool area. The use of Feature
1449: Multipoint Iu in MSC Server Concept and Feature 1564: Multipoint A interface may be
configured in parts of the network only.
Figure 5 Possible pool area configurations (Multipoint Iu)
Figure 6 Possible pool area configurations (Multipoint A)
The following briefly outlines the steps needed for multipoint configuration. The
preparation steps and additional use cases for multipoint configuration management are
described in more detail in section Pooling and Multipoint configuration and management
in Pooling and Multipoint Guidelines for MSS System.
Steps
1 Create a pool.
Sub-steps
a) Plan the pool area configuration, including, for example, NRI length and
allocation.
b) Check whether relevant multipoint features are active in MSS.
c) Configure own and neighboring NRI values to all MSC Servers.
d) Create MGW routing and signaling data for the pool area.
e) Choose the location areas that will belong to the pool in each MSC/MSS.
f) Export the BSC/RNC radio network data configuration from all MSCs/MSC
Servers for importing it to the primary MSC/MSS.
a) Import the radio network configuration back to the temporary files of the primary
MSC/MSS.
c) Activate the configuration.
d) Add the remaining user plane and control plane definitions for the added radio
network configuration.
f) Export the configuration for the secondary MSCs/MSSs. Execute steps 2b-2e in
every secondary MSC/MSS.
Example: Configuring Multipoint A
Note that this example is intended for the CS core. It describes only the main steps and
MML commands that are needed. Careful network planning is required before executing
the detailed steps and MML commands, as this is a big change for the whole core network.
All the parameter values here are examples; however, all the parameters have to be defined.
a) Configure the primary MSS.
• Define the name of the pool area used and the length of the used NRI inside
the pool area (E3M)
ZE3M:POOLNAME=MULTIA,NRILEN=3;
• Add the own MSS to the pool area. Define the used NRIs and that this MSS
is the primary MSS (E3A)
ZE3A:TYPE=OWN,MSSNAME=MLEGE:CONFSEL=P,NRI=123;
• Add the information needed for own MSS off-loading mode (E3E)
ZE3E::MNRI=3,NBLAC=999,NBMCC=262,NBMNC=03;
• Set the TMSI re-allocation to 'ON' in the location update with a new visitor,
IMSI attach, location update, and periodic location update (MXN)
ZMXN:IND=1::TNEW=Y,TIMSI=Y,TLOC=1,TPER=1;
• Activate the TMSI allocation in the MSS (MXM)
ZMXM::TMUSE=Y;
d) Import the pool area configuration of the primary MSS into the target MSS.
• Create an empty directory for the transfer files in the target MSS (IWL)
ZIWL::WSB,DEF::MULTIA,,,PA;
• Import the radio network from the transfer files (E3Y)
ZE3Y:DNAME=MULTIA;
e) Activate the new radio network configuration in the target MSS (E3V)
ZE3V:;
After the activation, the imported BSC remains in LOCKED state.
f) Configure the Multipoint A in the BSC.
• Define the NRI to SPC mapping tables in the BSC (E7A)
ZE7A:220::NRI=123:;
• Define the default SPC of the primary MSS in the BSC (E7M)
ZE7M:SPC=220:MCC=262,MNC=03,CNID=123:;
• Define the default SPC of the target MSS in the BSC (E7A)
ZE7A:397:MCC=262,MNC=03,CNID=124:NRI=234:;
g) Change the administrative state of the BSC to UNLOCKED in the MSS (EDS)
ZEDS:NO=88:U:;
Summary
This process is relevant only when Feature 1778: RAN independent multipoint
A/Iu support in MGW is used.
In this example, a multipoint configuration with the RAN independent multipoint A/Iu support
in MGW feature, as shown in the figure below, is implemented in a non-pooled network.
Before the implementation the network is non-pooled; after the implementation it is pooled.
In some non-pooled network implementations, signaling connectivity is already enabled
through two MGWs for resiliency, but there are also non-pooled networks where
signaling goes through only one MGW. The same approach is possible with the RAN
independent multipoint A/Iu support in MGW feature: it is possible to use more than
one MGW in an MGW cluster to provide MGW resiliency. Alternatively, only one MGW
may be used in the configuration. If MGW-level resiliency is not required in the
network, the MGW resiliency related steps in the description below are not required.
When MGW resiliency is implemented in the network, it is recommended that signaling
links and user plane routes are split between the (two or more) MGWs that will be the
redundant MGWs before the configuration of RAN independent multipoint A/Iu support in
MGW is started. This preparatory step is required so that the necessary configurations
are available. The signaling links and user plane routes through redundant MGW are not
in use until they are activated.
For further information about control and user plane routing, see User Plane Routing as
well as Control Plane Routing documents in M-release Product Documentation.
When implementing the pool configuration, the import/export procedure may be utilized
for radio network configuration handling as described earlier in this document, but the
migration towards RAN independent multipoint A/Iu support in MGW can also be done in
smaller steps. The minimum is that the BSCs/RNCs that belong to one LA are migrated,
as illustrated in the example below.
Note: A BSC/RNC can be included in an MGW cluster only when it is taken as
part of an MSS pool area. This is because, from the MSS and MGW viewpoint, the
BSSAP/RANAP signaling handling in the RAN independent multipoint feature is
fundamentally different from the normal point-to-point A/Iu interface. For BSCs/RNCs that
are not yet part of the MSS pool area, the MGW shall be used as an STP for BSSAP/RANAP
signaling, as in the normal point-to-point A/Iu interface configurations.
The alternative RAN node configurations are described in more detail in the
Pooling and Multipoint Guidelines for MSS System document. The same document shows
the signaling configuration and uplink/downlink signaling examples for the
RAN independent multipoint solution.
Figure 7 Creating a pool when RAN independent multipoint A/Iu support in MGW is
used–primary MSS
The following steps give an overview of the procedure. For detailed instructions, see
the configuration instructions in each network element's Product
Documentation.
Steps
• Check that Feature 1449: Multipoint Iu in MSC Server Concept or Feature 1564:
Multipoint A Interface has been activated in MSS.
See appropriate Feature Activation Manuals in M-release Feature
Documentation.
• Check that Multipoint A and Feature 1778: RAN independent multipoint A/Iu
support in MGW have been activated both in the MSS and in the MGW.
See RAN Independent Multipoint A/Iu Support in MGW, Integrating MGW into the
MSC Server System and Configuration Data Management in MGW in MGW
Product Documentation.
• To be able to make the necessary MSS configurations, the license in the MSS
must be set into the CONFIG state.
• Prepare the radio network configuration in primary MSS for pool configuration,
see chapter Preparatory steps for multipoint configuration in Pooling and
Multipoint Guidelines for MSS System.
• Create the GT analysis in the MSS for the BSCs/RNCs so that the addresses
point to the MGW.
• For the RNCs, create a unique Iu signaling connection identifier range for each
MSS.
• In the MGW, create the configuration for RAN independent multipoint A/Iu
support in MGW feature including, for example, addressing for BSCs/RNCs and
NRI configuration for the MSSs as described in Integrating MGW into the MSC
Server System and Configuration Data Management in MGW in MGW Product
Documentation.
• Create the MGW cluster configuration comprising two (or more) redundant
MGWs with IWF additional point codes.
• In the MSS, modify the BSC/RNC addressing from the earlier SPC
addressing to GT addressing.
6 Modify Addressing
• In the BSCs/RNCs, modify the SPC address to point to the MGW cluster SPC
instead of the MSS SPC address.
• In the BSC, modify the user plane route address to point towards the MGW
cluster SPC.
• Activate the new BSSAP links for the A-interface and/or RANAP and ALCAP
links for Iu-CS over ATM.
• Create the IWF-SPC for RANAP/BSSAP signaling.
• Create the alternative GT addressing for the BSCs/RNCs in the MSS via the
redundant MGW.
• Activate signaling links via the redundant MGW in the BSCs/RNCs (towards the
MGW cluster SPC).
• Activate the additional signaling link.
Expected outcome
The created pool, with RAN independent multipoint A/Iu support in MGW used (primary
MSS), is illustrated in the figure above.
In the MSS, the support for IP-based route level IP CAC is a generic feature that does
not need to be activated separately.
The IP-based routes in the MGW can only be configured on NPGEP IP interface units, so IP
CAC at IP-based route level also requires NPGEP and cannot be used with the IPFGE IP
interface unit. For more information on how to create a multiple isolated IP networks (IP-
based route) configuration in the network elements, see MGW and M-release Product
Documentation.
Summary
Alternative 1: IP CAC at MGW level
You can configure, per MGW, the maximum number of simultaneously used IP
terminations in a physical MGW. If the CAC limit is reached, the MGW rejects the H.248
request. The MSC Server then either rejects the call or performs alternative routing via
another MGW, another IP route, or another user plane medium.
Alternative 2: IP CAC at IP-based route level
The IP Connection Admission Control (IP CAC) at IP-based route level feature enables
you to monitor and limit the user plane traffic and IP bandwidth according to IP-based
routes. With the IP CAC at IP-based route level feature you can also monitor the quality
(packet loss) of the user plane traffic on IP-based routes, and limit the traffic if the
maximum allowed packet loss is reached. Packet loss can result from IP traffic overload
or a fault in the IP network, for example. The principle is described in the figure below.
Figure 8 IP CAC at IP-based route level configuration with UPDs in the MSS.
(The figure maps UPDs 1-6 in the MSS, each carrying an IPnwR value 1-6, to the
corresponding IP-based routes and IP CAC configurations 1-6 in the MGWs;
UPD6/IPnwR6 is used towards the interconnecting operator B.)
One way of using the IP-based routes is to have some IP-based routes in the operator's
own network, one IP-based route for each pMGW-pMGW interconnection. In addition, an
IP-based route can be configured for each interconnecting operator, on a UPD basis. If one
MSS uses more than one pMGW for such an interconnection, the same IPnwR value must
be configured in multiple pMGWs, resulting in different IP addresses for the user plane.
Furthermore, when the UPD uses BICC or SIP, the number of CICs should be aligned with
the IP-based route limits. Unlike the CIC configuration, IP-based route level CAC
configuration can also limit (or measure) intra-MSS connections.
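A hypothetical example following the figure above: UPD1 in the MSS carries IPnwR=1, which the MGW maps to IP-based route 1 with IP CAC configuration 1, while UPD6 carries IPnwR=6 for the interconnection towards operator B. A call routed with UPD6 is then admitted or rejected against the bandwidth and packet loss limits of IP-based route 6 only, without affecting the other routes.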
The configuration logic is as follows:
• For the incoming traffic, the used UPDR is obtained from the incoming ephemeral
circuit group.
• For the outgoing traffic, the digit analysis result is a Destination (DEST). The DEST
contains one or more subdestinations (SDEST). The subdestinations may use load
sharing or they may be alternative routes. Each SDEST contains one route (or
special route). A route contains one or more circuit groups. For ephemeral cases the
CGR contains a UPDR.
• In all of the above cases, the UPDR is an input attribute in the user plane analysis
and the result record contains a UPD. The UPD contains
interconnecting/preceding/succeeding BNC characteristics and IPnwR parameters.
• If an MSS controls two physical MGWs, an MSS-wide parameter defines the IPnwR for
the IP interconnection. It is controlled by the JFQ MML command.
• For the ephemeral resources, the interconnecting, preceding, and succeeding BNC
characteristics define whether ATM, IPv4, or IPv6 is to be used.
• MSS sends the BNC characteristics and the IPnwR over H.248 signaling to the MGW
if IPv4 or IPv6 bearer is used.
• IPnwR points to an IP-based route and to a DSP parameter set in the MGW. DSP
parameter set defines the relevant DSP, QoS, and RTP parameters.
• IP-based route set defines a list of IP-based routes, which can be selected through
the H.248 interface.
• For each NPGEP Ethernet interface, the IP address range to be used can be defined
on an IP-based route basis. If the same IP-based route is attached to multiple Ethernet
interfaces, the interface is selected randomly. The BNC characteristics define
whether an IPv4 or IPv6 address is selected.
Figure 9 IPnwR usage
(The figure shows the analysis chain in the MSS: DEST -> SDEST -> Route -> CGR ->
UPDR -> UPD -> BNC + IPnwR, with the interconnecting IPnwR set by the JFQ MML
command. The IPnwR is carried over H.248 to the MGW, where the IP-based route set
maps each IPnwR to an IP-based route and a DSP parameter set, and each NPGEP
Ethernet interface has per-route IP address ranges.)
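As a hypothetical walk-through of the chain above (all numeric values are examples only): digit analysis returns a DEST; the selected SDEST yields a route whose circuit group carries UPDR 5; user plane analysis with UPDR 5 returns UPD 5, whose BNC characteristics select IPv4 and whose IPnwR value is 2; the MSS sends the BNC characteristics and IPnwR 2 to the MGW over H.248; and the MGW selects IP-based route 2 and DSP parameter set 2 for the termination.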
Both IP CAC at MGW level and at IP-based route level allow alternative routing. The
different situations are seen as different release codes.
The following table maps the error texts to the given clear codes.
The End-of-selection (EOS) analysis result for these reason codes should be alternative
routing (ALTROU).
Purpose
L3 connectivity provides simple and stable connectivity towards any IP network and
easier management and configuration compared to L2/Ethernet connectivity. This results
in increased network stability and better interoperability in multivendor site infrastructure.
L3 connectivity allows multiple L3 uplinks to be used, which makes it possible to
physically segregate O&M and control plane traffic. However, the principles of traffic
segregation remain unchanged by L3 migration. For further information about selecting
L2 or L3 connectivity, see the section LAN topologies and L2/L3 connectivity models for
MSS System elements in Site Connectivity Guidelines for MSS System.
If the Multiple Isolated IP Networks/IP Realm feature is enabled, physical separation of
the traffic can be achieved by having dedicated L3 uplinks (up to four L3 uplinks can be
used) from L3 ESB24-D for each group of IP realms. For further information, see MSS
System Network Planning and Feature 1903: MSC Server-IP Realm in M-release
Feature Documentation.
For the L3 connectivity model for MSS and MGW, single-hop BFD support with VRRP
uplink tracking is required in the MSS and MGW L3 SWUs for TCP/UDP-type traffic,
including SCTP single-homing. Floating backup static routes are also required in the
multilayer site switches towards the MSS and MGW, with single-hop BFD for faster
backup static routing convergence. VRRP or single-hop BFD is not used for control
plane signaling with SCTP multihoming transport.
For more information, see section Multilayer site switch/router requirements in Site
Connectivity Overview for MSS System.
Bi-directional forwarding detection (BFD) provides a low-overhead option for the
detection of faults on any channel, including Ethernet, direct physical links, virtual circuits,
tunnels, Multiprotocol Label Switching (MPLS) Label-switched Paths (LSPs), multi-hop
routed channels, and indirect channels. The BFD protocol is a simple "Hello" protocol:
two systems transmit BFD packets periodically over a path between them, and if one of
the systems does not receive BFD packets from the adjacent system for a defined period
of time, one of the components in the bidirectional path is assumed to have failed.
However, note that only single-hop BFD is used with ESA40-A and ESB24-D having L3
connectivity to the site.
For further information about using single-hop BFD, see section Redundancy of LAN in
Site Connectivity Guidelines for MSS System.
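As a worked example of the detection behaviour (interval and multiplier values are illustrative only): if both systems agree on a transmit interval of 300 ms and a detect multiplier of 3, a path failure is declared after roughly 3 x 300 ms = 900 ms without received BFD packets. Use the values given in the site connectivity configuration instructions.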
It should also be noted that the Internet Protocol Director Units (IPDUs) for the SIP and
H.248 load balancers, introduced previously in the MSS, make it mandatory for the
control plane LAN to support L3 connectivity. The IPDUs for the SIP and H.248
load balancers require L3 connectivity irrespective of whether they are used together or
independently. However, such considerations are outside the scope of this procedure.
Before you start
Before starting the migration to L3 connectivity, it is recommended to take a backup of
the configuration so that fallback is possible in case of problems.
Prior to the migration, check that the user plane and the control plane are working.
Otherwise, troubleshooting after the migration may be difficult, since one might look for
the problem in the migration steps rather than in the pre-existing fault.
L2 to L3 migration is supported in the standalone MSS for the control plane and the
management plane (O&M) including the billing LAN.
L2 to L3 migration is supported in the MGW for the control plane, the management plane
(O&M) and the user plane with NP2GE/-A plug-in unit types. Dedicated, or shared, L3
ESA40-A SWU units can be used for control plane and O&M connectivity. Furthermore,
a dedicated ESA40-A pair is required for user plane with L3 connectivity.
L2 to L3 migration must be totally transparent to the Mobile Packet Backbone Network
(MPBN), including the Provider (P) and Provider Edge (PE) routers, and is thus only
visible to the site switches at the core site. Furthermore, the L2 to L3 migration must also
be transparent to all the other network elements peering with the L3-migrated NE. Ensure
that the Open Shortest Path First (OSPF) configuration does not use any of the VLANs
that are planned to be migrated to L3. OSPF-based dynamic routing convergence towards
the MSS System elements is currently not supported with either the L2 or the L3
connectivity model; therefore, static routing configurations are always used instead of
OSPF dynamic routing between the MSS or MGW and the multilayer site
switches/routers.
Ensure that the L3 and console cables are available and that L3 licenses have been
purchased. Check that the SWUs are running the correct software levels. Optionally, L3-
capable SWU variants can also be purchased.
Note: Migration should occur during low traffic periods to minimize disruption to the
network and the level of service provided.
The supported L3 connectivity models for MSS and MGW are described in the Site
Connectivity Guidelines for MSS System.
Figure 10 L2 connectivity
Procedure
1 Step 1: Migration for SWU-2
Figure 11 Step 1: Migration for SWU-2
Sub-steps
a) Check the state of HSRP/VRRP from the multilayer site switches. Ensure that
Site Switch 1 is the active switch for O&M.
c) Remove the SWU2 unit and replace it with a new L3-capable variant.
Note: The cable from Site Switch 2 to SWU2 must now be connected.
d) Configure the /30 transport network between Site Switch 2 and SWU2. Ensure
the connection by pinging SWU2 from Site Switch 2. (A console cable is
required.) A worked addressing example is given after this list.
e) Configure the O&M VLAN interface and VRRP (BFD) to SWU2. Use the same
IPv4 addressing that was used in Site Switch 2.
Note: If the same IPv4 addressing is not used, there will be changes required in
the configuration of, for example, first-hop routing, for the O&M units of the NE
migrated to L3.
f) Configure the SCTP secondary path(s) VLAN interface to SWU2. Use the same
IPv4 addressing that was used in Site Switch 2.
Note: If the same IPv4 addressing is not used, there will be changes required in
the configuration of, for example, the signaling units of the NE that is being
migrated to L3.
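As a worked addressing example for the /30 transport network (addresses are illustrative only): a /30 subnet such as 192.168.100.0/30 contains four addresses, of which two are usable for hosts; 192.168.100.1 could be assigned to the Site Switch 2 interface and 192.168.100.2 to the SWU2 interface, with 192.168.100.0 as the network address and 192.168.100.3 as the broadcast address. Each L3 uplink gets its own /30 subnet.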
2 Step 2: Preparations for migrating SWU1
Figure 12 Step 2: Preparations for migrating SWU1
Sub-steps
a) Make the interface switchover (SWO) to the Operation and Maintenance Unit
(OMU) (el0 to el1).
b) Shut down the O&M VLAN interfaces in Site Switch 1.
d) Check the connections from the signaling and the O&M units by pinging the O&M
address from a network management host and the signaling unit IP addresses
from a peer core network node (MSS/MGW).
e) After the IP connection from the signaling unit is working again, check the state
of the SCTP secondary path(s).
3 Step 3: Migrating SWU-1
Figure 13 Step 3: Migrating SWU-1
Sub-steps
Note: The cable must be connected from Site Switch 1 to SWU1 now.
b) Configure the /30 transport network between Site Switch 1 and SWU1. Check the
connection by pinging SWU1 from Site Switch 1. (A console cable is required.)
c) Add VRRP cables between the SWUs. See Use of A and M links and LAN
connections in MGW in MGW Site Documentation, and Engineering for MGW in
MGW Product Documentation.
d) Configure the O&M VLAN interface and VRRP (BFD) to SWU1. Use the same
IPv4 addressing that was used in Site Switch 1.
Note: If the same IPv4 addressing is not used, there will be changes required in
the configuration of, for example, first-hop routing, for the signaling units of the
NE that is being migrated to L3.
e) Configure the SCTP primary path(s) VLAN interface to SWU1. Use the same
IPv4 address that was used in Site Switch 1.
Note: If the same IPv4 addressing is not used, there will be changes required in
the configuration of, for example, first-hop routing, for the signaling units of the
NE that is being migrated to L3.
g) Shut down the SCTP primary path(s) VLAN interfaces in Site Switch 1.
h) Add static routes to Site Switch 1 and SWU1. In this phase, O&M traffic goes
from Site Switch 1 to SWU1 to SWU2 to the active OMU el1.
i) Verify the connections from the signaling and the O&M units by pinging the O&M
address from a network management host and the signaling unit IP addresses
from a peer core network node (MSS/MGW).
j) Make the interface switchover (SWO) to the OMU (el1 to el0). Test the connection.
k) After the IP connection from the signaling units is working again, check the state
of the SCTP primary path(s).
l) Add backup static routes, with single-hop BFD tracking, to both Site Switches 1
and 2 towards the O&M subnet. (The routing logic is summarized after this list.)
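The routing logic of sub-steps h) and l) can be summarized as follows (a sketch of the intent, not vendor-specific configuration): the primary static route towards the O&M subnet points via SWU1 and is tracked with single-hop BFD, while the backup (floating) static route points via SWU2 with a lower preference (higher administrative distance), so it is installed into the routing table only when BFD declares the primary next hop unreachable.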
After L3 migration
Figure 14 L3 migration is complete
8.1.2 Dedicated L3 SWU units for MGW Control LAN and O&M
Before you start
Figure 15 Before migration
Note: L3 migration can be undertaken for the Control Plane, O&M, or both.
A configuration where ESA24 and ESA40-A pairs are mixed is not supported. If L2 is
used for the ISU units and L3 is only used for O&M, ESA40-A units must also be added
for the ISUs.
Procedure
1 Step 1: Migrating SWU2
Figure 16 Step 1: Migrating SWU2
Sub-steps
a) Check the state of the SCTP primary and secondary paths and ensure that all
the Control Plane (CP) traffic is using the SCTP primary path (Site Switch 1).
b) Shut down the SCTP secondary path's VLAN interface in Site Switch 2. The
SCTP secondary path failure alarm is triggered.
c) Remove the secondary path's SWU2 and replace it with a new L3-capable variant.
Note: The cable from Site Switch 2 must be connected to SWU2 now.
d) Configure the /30 transport network between Site Switch 2 and SWU2. (A
console cable is required.) Check the connection by pinging SWU2 from Site
Switch 2.
e) Configure the SCTP secondary path VLAN interface to SWU2. Use the same IPv4
address that was used in Site Switch 2.
Note: If the same IPv4 addressing is not used, there will be changes required
in the configuration of, for example, first-hop routing, for the signaling units of the
NE migrated to L3.
f) Create the required port-based VLAN tagging to SWU2.
g) Add static routes to Site Switch 2 and SWU2. The SCTP secondary path failure
alarms are cancelled.
h) Ensure that the connections from the signaling units are working by pinging the
signaling unit IP addresses from a peer core network node (MSS/MGW).
i) After the IP connection from the signaling unit is working again, check the state
of the SCTP secondary paths. Ensure that all SCTP primary and secondary
paths are active.
2 Step 2: Migrating SWU1
Figure 17 Step 2: Migrating SWU1
Sub-steps
a) Shut down the SCTP primary path's VLAN interface in Site Switch 1.
b) Remove the primary path's SWU unit, and replace it with a new L3-capable
variant.
Note: The cable from Site Switch 1 must be connected to SWU1 now.
c) Configure the /30 transport network between Site Switch 1 and SWU1. (A
console cable is required.) Ensure the connection by pinging SWU1 from Site
Switch 1.
d) Configure the SCTP primary paths VLAN interface to SWU1. Use the same IPv4
address that was used in Site Switch 1.
Note: If the same IPv4 addressing is not used, there will be changes required in
the configuration of, for example, first-hop routing for the signaling units of the
NE migrated to L3.
e) Create the required port-based VLAN tagging to SWU1.
f) Add static routes to Site Switch 1 and SWU1.
g) Check the connections from the signaling units by pinging the signaling unit IP
addresses from a peer core network node (MSS/MGW).
h) After the IP connection from the signaling unit is working again, check the state
of the SCTP primary paths. Ensure that all SCTP primary and secondary paths
are active.
Figure 18 L2 to L3 migration is complete for the Control LAN
Procedure
1 Step 1: Migrating SWU 2
Figure 19 Step 1: Migrating SWU 2
Sub-steps
a) Check the state of the HSRP/VRRP from the Site Switches. Ensure that Site
Switch 1 is the active switch for O&M.
b) Shut down the O&M VLAN interface in Site Switch 2.
c) Remove SWU2 and replace it with a new L3-capable variant.
Note: The cable from Site Switch 2 must be connected to SWU2 now.
d) Configure the /30 transport network between Site Switch 2 and SWU2. Check the
connection by pinging SWU2 from Site Switch 2. (A console cable is required.)
e) Configure the O&M VLAN interface and VRRP (BFD) to SWU2. Use the same
IPv4 addressing that was used in Site Switch 2.
Note: If the same IPv4 addressing is not used, there will be changes required in
the configuration of, for example, first-hop routing, for the O&M units of the NE
migrated to L3.
f) Create the required port-based VLAN tagging to SWU2.
g) Add static routes to SWU2.
2 Step 2: Making the interface SWO to OMU
Figure 20 Step 2: Making the interface SWO to OMU
Sub-steps
a) Make the interface switchover (SWO) to the OMU (el0 to el1).
b) Shut down the O&M VLAN interface in Site Switch 1.
c) Add static routes to Site Switch 2.
d) Verify that the connections to the O&M units are working by pinging the O&M
address from a network management host.
3 Step 3: Migrating SWU 1
Figure 21 Step 3: Migrating SWU-1
Sub-steps
a) Remove SWU1 and replace it with a new L3-capable variant.
Note: The cable from Site Switch 1 must be connected to SWU1 now.
b) Configure the /30 transport network between Site Switch 1 and SWU1. Check the
connection by pinging SWU1 from Site Switch 1. (A console cable is required.)
c) Add VRRP cables between the SWUs. See Use of A and M links and LAN
connections in MGW in MGW Site Documentation and Engineering for MGW in
MGW Product Documentation.
d) Configure the O&M VLAN interface and VRRP (BFD) to SWU1. Use the same
IPv4 addressing that was used in Site Switch 1.
Note: If the same IPv4 addressing is not used, there will be changes required in
the configuration of, for example, first-hop routing, for the O&M units of the NE
that is being migrated to L3.
f) Add static routes to Site Switch 1 and SWU1. In this phase, O&M traffic goes
from Site Switch 1 to SWU1 to SWU2 to the active OMU el1.
g) Ensure that the connection from the O&M unit is working by pinging the O&M
address from a network management host.
h) Make the interface switchover (SWO) to the OMU (el1 to el0). Test the
connection.
i) Add backup static routes with single-hop BFD tracking to both Site Switches 1
and 2 towards the O&M subnet.
Figure 22 L3 migration is complete
Figure 23 Before migration
Procedure
1 Step 1: User plane via own SWU pair
Figure 24 Step 1: L2 to L3 migration, user plane via own SWU pair
Sub-steps
a) Check the state of HSRP/VRRP from the Site Switches. Ensure that Site Switch
1 is the active switch for all the User Plane (UP) interfaces.
b) Add a new L3 UP-SWU pair to the MGW.
Note: The cables from Site Switch 2 must be connected to SWU2 now.
c) Configure the /30 transport network between the Site Switches and the
UP-SWUs.
d) Add VRRP cables between the SWUs. See Use of A and M links and LAN
connections in MGW in MGW Site Documentation and Engineering for MGW in
MGW Product Documentation.
Note: If the same IPv4 addressing is not used, there will be changes required in
the configuration of, for example, first-hop routing, for the user plane units of the
NE being migrated to L3.
h) Disconnect the SP-EX NPGEP cables and connect them to UP-SWU2. Add
cables between SP-EX NPGEP and UP-SWU2. Check the /30 transport network
connection by pinging UP-SWU1 from Site Switch 1 and UP-SWU2 from Site
Switch 2.
i) Shut down the UP VLAN interfaces from Site Switch 2.
2 Step 2: Making unit switchover (SWO) for user plane traffic
Figure 25 Step 2: Making unit switchover (SWO) for user plane traffic
Sub-steps
a) Make the unit switchover (SWO) to the (WO-EX) NPGEP units.
d) Check that the connections from the UP units are working by pinging the UP unit
addresses from a peer core network node (MGW).
e) After the IP connection from the UP unit is working again, check that the user plane
traffic is flowing.
3 Step 3: Migrating SWU1
Figure 26 Step 3: Migrating SWU1
Sub-steps
a) Disconnect the SP-EX NPGEP cables and connect them to UP-SWU1. Add
cables between SP-EX NPGEP and UP-SWU1.
b) Add static routes to Site Switch 1.
c) Make the NPGEP switchover. Test the IP connection by pinging the UP unit
address from a peer core network node (MGW).
d) After the IP connection from the UP unit is working again, check that the user
plane traffic is flowing.
e) Add backup static routes, with single-hop BFD tracking, to both Site Switches 1
and 2 towards the UP subnet(s).
Figure 27 Migration complete
Figure 28 Before migration (AP: Alternate Port, RP: Root Port)
Note: LAN Device Integration (LDI) aspects, when L2 LDI is migrated to L3 LDI or
when L3 LDI is deployed for the first time, are outside the scope of this description.
Procedure
1 Step 1: Preparations for migrating SWU1
Figure 29 Step 1: Preparations for migrating SWU1
Sub-steps
a) Check that the HSRP active router R2 (or VRRP master) for egress traffic from
the O&M VLAN is located in the L2/L3 (DCN) Site Switch 1.
b) Shut down the O&M VLAN interface in the L2/L3 (DCN) Site Switch 2.
c) Remove the cross-uplink cable in SWU1 towards the L2/L3 Site Switch 1.
d) Optionally, remove the uplink cable in SWU1 towards the L2/L3 Site Switch 2.
Remove SWU1 and replace it with a new L3-capable variant (ESB26 or ESB24-
D).
2 Step 2: L3 configuration for SWU1
Figure 30 Step 2: L3 configuration for SWU1
Sub-steps
a) Activate the L3 license in SWU1 according to the accompanying instructions.
b) Configure the /30 transport network between the L2/L3 Site Switch 2 and SWU1.
c) Check the IP connection by pinging SWU1 from the L2/L3 Site Switch 2. (A
console cable is required.)
d) Configure the O&M VLAN interface and the VRRP R2 Virtual Router configuration
(and single-hop BFD if the ESB24-D SWU variant is used) to SWU1 according to
the site connectivity configuration instructions.
Note: Use the same IPv4 addressing that was used for HSRP/VRRP R2 in the
L2/L3 Site Switch 2. If the same IPv4 addressing is not used, there will be
changes in the configuration of the O&M units.
e) Optionally, create port-based VLAN tagging to SWU1 (only if the OLCM VLAN for
Online Call Monitoring is not required) if the existing SWU1 was replaced with a
new L3-capable SWU variant.
Note: If the OLCM VLAN is required, then a VLAN trunk must be used towards the
O&M units.
3 Step 3: Recovering O&M traffic via SWU1
Figure 31 Step 3: Recovering O&M traffic via SWU1
Note: The first planned outage for O&M traffic starts here.
Sub-steps
a)
Make a forced el0 to el1 interface switchover (SWO) for the O&M units (OMU
and STU) by disabling the SWU0 downlink port towards the O&M units. The
O&M units will start sending Gratuitous ARP requests to update the ARP tables.
The O&M units also learn the virtual MAC of the VRRP R2 master router on
SWU1 for the default-gateway IP address.
g Note: At this point, the VRRP R2 master Virtual Router is located in SWU1 and
there are no inter-switch VRRP cables between SWU0 and SWU1.
b)
Shut down the O&M VLAN(s) in L2/L3 Site Switch 1.
g Note: The HSRP R2 active router is now inactive (disabled). The incoming O&M
traffic-related IP packets are now routed to a black-hole by the L2/L3 Site
Switches 1 and 2.
L3 routing loops must not be created.
c) Add static routes for the O&M subnet(s) in L2/L3 Site Switch 2 via SWU1. The
O&M traffic flows should recover via SWU1. Also, Open Shortest Path First
(OSPF) ensures that the O&M subnet is now reachable via the L2/L3 Site
Switch 2 (a sketch follows after these sub-steps).
g Note: The first planned outage for O&M VLAN(s) now ends. The total length of
the outage is dependent upon how quickly steps 3a to 3c are completed.
d) Check the IP connectivity between the O&M units and the network management
hosts by using ping and traceroute.
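A minimal Cisco IOS sketch of sub-step c) on the L2/L3 Site Switch 2 follows; the O&M subnet 10.20.0.0/24 and the SWU1 next hop 192.0.2.2 are hypothetical examples consistent with the /30 sketch above, and the OSPF process number is likewise illustrative:
! Sub-step 3c: static route for the O&M subnet via SWU1
ip route 10.20.0.0 255.255.255.0 192.0.2.2
! Advertise the statically routed O&M subnet into OSPF so that the rest
! of the DCN learns that it is now reachable via Site Switch 2
router ospf 1
 redistribute static subnets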
4
Step 4: L3 connectivity in use, L3 configuration for SWU0
[Figure: Step 4 — L3 connectivity in use via SWU1; SWU0 is taken out of service for its L3 configuration (sub-step marker 4a), while the O&M units (OMU, STU, CMM) reach the (DCN) IPBB via the VRRP R2 master on SWU1.]
Sub-steps
a)
Remove the cross-uplink cable in SWU0 towards the L2/L3 Site Switch 2.
b)
Optionally, remove the uplink cable in SWU0 towards the L2/L3 Site Switch 1.
Remove the SWU0 unit and replace it with a new L3-capable variant (ESB26 or
ESB24-D).
c)
Activate the appropriate L3 license in SWU0 according to the instructions.
g Note: Different licenses are required depending upon the configuration of the
network.
d) Configure the /30 transport network between the L2/L3 Site Switch 1 and SWU0.
e) Ensure the IP connection by pinging SWU0 from the L2/L3 Site Switch 1. (A
console cable is required).
f)
Configure the O&M VLAN interface and the VRRP R2 Virtual Router (also
single-hop BFD if the ESB24-D SWU variant is used) to SWU0 according to the
site connectivity configuration instructions. Use the same IPv4 addressing that
was used for HSRP/VRRP R2 in the L2/L3 Site Switch 1.
g Note: If the same IPv4 addressing is not used, there will be changes in the
configuration of the O&M units.
g)
Optionally, create port-based VLAN tagging to SWU0 (if an OLCM VLAN is not
required) if the existing SWU0 was replaced with a new L3-capable SWU variant.
If an OLCM VLAN is required, a VLAN trunk must be used towards the O&M units.
5
Step 5: Normalizing O&M traffic via SWU0 and SWU1
[Figure: Step 5 — inter-switch VRRP cabling is added between SWU0 and SWU1 (sub-step 5a); the VRRP R2 master moves to SWU0 and floating/backup static routes are added in the L2/L3 Site Switches (sub-step 5f).]
Sub-steps
a)
Optionally, disable/shut down the inter-switch ports on SWU0 and SWU1. Add
the inter-switch VRRP cables according to the cabling instructions in Use of LAN
Ports in MSC Server (MSS) in M-release Site Documentation. Define the VLAN
trunk group between SWU0 and SWU1. Enable the inter-switch ports (if
disabled/shut down) on SWU0 and SWU1 and check the VLAN group status.
g Note: If the VRRP Virtual Router R2 and the VLAN group have been configured
properly, VRRP R2 mastership is moved to SWU0 (due to higher VRRP priority).
The egress O&M traffic is switched by SWU1 towards SWU0. The ingress O&M
traffic from the L2/L3 Site Switch 2 is received by SWU1.
Instead of using a VLAN trunk group between SWU0 and SWU1, Link
Aggregation Group (LAG) using Link Aggregation Control Protocol (LACP) with
passive negotiation can be used. However, the use of LAG with LACP between
SWU0 and SWU1 has not been verified.
b)
Add static routes for the O&M subnet(s) in L2/L3 Site Switch 1 via SWU0. After
this the O&M traffic flows should recover via SWU0. Also, OSPF makes sure that
the O&M subnet is now reachable via the L2/L3 Site Switch 1.
c)
Check the IP connectivity between the O&M units and the network management
hosts by using ping and traceroute.
g Note: The second planned service outage for O&M VLAN starts here.
d) Make an EL1 to EL0 interface switchover (SWO) for the O&M units (OMU and STU).
The O&M units will start sending Gratuitous ARP requests in order to update the
ARP tables. The O&M units also learn the virtual MAC of VRRP R2 master router
on SWU0 for the default-gateway IP address.
g Note: At this point the VRRP R2 master Virtual Router is located in SWU0.
e)
Check the IP connectivity between the O&M units and the network management
hosts by using ping and traceroute.
g Note: The second planned service outage for O&M VLAN ends here.
f)
Add floating/backup static routes in L2/L3 Site Switch 1 for the O&M subnet(s)
via L2/L3 Site Switch 2, and in reverse. Additionally, configure BFD single-hop
tracking in L2/L3 Site Switches 1 and 2 and in SWU0 and SWU1, based on the
site connectivity configuration instructions (a sketch follows below).
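A minimal Cisco IOS sketch of the floating route part of sub-step f) follows; the O&M subnet 10.20.0.0/24 and the inter-site next hops 10.0.0.1 and 10.0.0.2 are hypothetical examples, and the administrative distance of 250 keeps the backup routes inactive while the primary routes are usable:
! On L2/L3 Site Switch 1: floating backup for the O&M subnet via Site Switch 2
ip route 10.20.0.0 255.255.255.0 10.0.0.2 250
! On L2/L3 Site Switch 2: the reverse floating backup via Site Switch 1
ip route 10.20.0.0 255.255.255.0 10.0.0.1 250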
[Figure: Billing LAN starting configuration — the Billing units (CHUs) and the BDCU units are connected via the L2 switching units SWU2 and SWU3 to the L2/L3 (DCN) Site Switches 1 and 2, with HSRP virtual router R3 as the default gateway towards the (DCN) IPBB. AP = Alternate Port, RP = Root Port.]
g Note: If both the Billing VLAN and Short Message Relay Service Element (SMRSE)
VLAN are used, then port-based VLAN tagging for SWU2 and SWU3 is also used.
Additionally, different HSRP virtual routers (for example, R3 and R4) are used in L2/L3
(DCN) Site Switches 1 and 2 for the Billing LAN subnet and the SMRSE subnet.
In new deliveries, the BDCU units are connected to Billing LAN SWU2 and SWU3 by
default.
Special L2 to L3 migration considerations may be needed for CHU0 and CHU1, CHU1-
0 and CHU1-1, CHU2-0 and CHU2-1, and CHU3-0 and CHU3-1 pairs.
Procedure
1
Step 1: L2 connectivity in use, preparations for SWU3
Sub-steps
a)
Check that the HSRP active router R3 (or VRRP master) for egress traffic from
the Billing VLAN is located in the L2/L3 (DCN) Site Switch 1.
b) Shut down the Billing VLAN interface in the L2/L3 (DCN) Site Switch 2.
c) Remove the cross-uplink cable in SWU3 towards the L2/L3 Site Switch 1.
d)
Optionally, remove the uplink cable in SWU3 towards the L2/L3 Site Switch 2.
Remove SWU3 and replace it with a new L3-capable variant (ESB26 or
ESB24-D).
2
Step 2: L3 configuration for SWU3
Sub-steps
a)
Activate the L3 license in SWU3 according to the accompanying instructions.
b) Configure the /30 transport network between the L2/L3 Site Switch 2 and SWU3.
c) Ensure the IP connection by pinging SWU3 from the L2/L3 Site Switch 2. (A
console cable is required).
d)
Configure the Billing VLAN interface and VRRP R3 Virtual Router (also
single-hop BFD if the ESB24-D SWU variant is used) to SWU3 according to the
site connectivity configuration instructions.
g Note: Use the same IPv4 addressing that was used for HSRP/VRRP R3 in the
L2/L3 Site Switch 2. If the same IPv4 addressing is not used, there will be
changes required in the configuration.
e)
Optionally, create port-based VLAN tagging to SWU3 if the existing SWU3 was
replaced with a new L3-capable SWU variant.
3
Step 3: Recovering Billing traffic via SWU3
g Note: The first planned outage for Billing VLAN starts here.
Sub-steps
a)
Make a forced EL0 to EL1 interface switchover (SWO) for the Billing units (CHU
and BDCU) by disabling the SWU2 downlink port towards the Billing units. The
Billing units will start sending Gratuitous ARP requests to update the ARP tables.
The Billing units also learn the virtual MAC of VRRP R3 master router on SWU3
for the default-gateway IP address.
g Note: At this point, the VRRP R3 master Virtual Router is located in SWU3 and
there are no inter-switch cables between SWU2 and SWU3.
b)
Shut down the Billing VLAN(s) in L2/L3 Site Switch 1.
g Note: The HSRP R3 active router is now inactive (disabled). Also, the incoming
Billing traffic-related IP packets are now routed to a black-hole by the L2/L3 Site
Switches 1 and 2.
L3 routing loops must not be created.
c) Add static routes for the Billing subnet(s) in L2/L3 Site Switch 2 via SWU3. After
this, the Billing traffic flows should recover via SWU3. OSPF ensures that the
Billing subnet is now reachable via the L2/L3 Site Switch 2.
d)
Check the IP connectivity between the Billing units and the billing system by
using ping and traceroute.
g Note: The first planned outage for Billing VLAN(s) now ends. The total length of
the outage is dependent upon how quickly steps 3a to 3c are completed.
4
Step 4: L3 connectivity in use, L3 configuration for SWU2
Sub-steps
a) Remove the cross-uplink cable in SWU2 towards the L2/L3 Site Switch 2.
b) Optionally, remove the uplink cable in SWU2 towards the L2/L3 Site Switch 1.
Remove SWU2 and replace it with a new L3-capable variant (ESB26 or ESB24-D).
c)
Activate the appropriate L3 license in SWU2 according to the instructions.
g Note: Different licenses are required depending upon the configuration of the
network.
d)
Configure the /30 transport network between the L2/L3 Site Switch 1 and SWU2.
e)
Check the IP connection by pinging SWU2 from the L2/L3 Site Switch 1. (A
console cable is required).
f)
Configure the Billing VLAN interface and VRRP R3 Virtual Router (and single-hop
BFD if the ESB24-D SWU variant is used) to SWU2 according to the site connectivity
configuration instructions. Use the same IPv4 addressing that was used for
HSRP/VRRP R3 in the L2/L3 Site Switch 1.
g Note: If the same IP addressing is not used, there will be changes in the
configuration of the Billing units.
g) Optionally, create port-based VLAN tagging to SWU2 if the existing SWU2 was
replaced with a new L3-capable SWU variant.
5
Step 5: Normalizing Billing traffic via SWU2 and SWU3
Sub-steps
a)
Optionally, disable/shut down the inter-switch ports on SWU2 and SWU3. Add
the inter-switch VRRP cables according to the cabling instructions in Use of LAN
Ports in MSS. Define the VLAN trunk group between SWU2 and SWU3. Enable
the inter-switch ports on SWU2 and SWU3 (if disabled/shut down) and check the
VLAN trunk group status.
g Note: If the VRRP Virtual Router R3 and the VLAN trunk group have been
configured properly, VRRP R3 mastership is moved to SWU2 (due to higher
VRRP priority), and the egress Billing traffic is switched by SWU3 towards SWU2
while the ingress Billing traffic from the L2/L3 Site Switch 2 is received by SWU3.
Instead of using a VLAN trunk group between SWU2 and SWU3, Link
Aggregation Group (LAG) using Link Aggregation Control Protocol (LACP) with
passive negotiation can be used. However, the use of LAG with LACP between
SWU2 and SWU3 has not been verified.
b) Add static routes for the Billing subnet(s) in L2/L3 Site Switch 1 via SWU2. After
this, the ingress Billing traffic flows should recover via SWU2. OSPF ensures that
the Billing subnet is now reachable via the L2/L3 Site Switch 1.
c) Check the IP connectivity between the Billing units and the billing system by
using ping and traceroute.
g Note: The second planned service outage for the Billing VLAN starts here.
d)
Make an EL1 to EL0 interface switchover (SWO) for the Billing units. The Billing
units will start sending Gratuitous ARP requests to update the ARP tables. The
Billing units also learn the virtual MAC of the VRRP R3 master router on SWU2
for the default-gateway IP address.
g Note: At this point the VRRP R3 master Virtual Router is located in SWU2.
e) Check the IP connectivity between the Billing units and the billing system by
using ping and traceroute.
g Note: The second planned service outage for Billing VLAN ends here.
f) Add floating/backup static routes in L2/L3 Site Switch 1 for the Billing subnet(s)
via L2/L3 Site Switch 2, and in reverse. Additionally, configure BFD single-hop
tracking in L2/L3 Site Switches 1 and 2 and in SWU2 and SWU3, based on the
site connectivity configuration instructions.
[Figure: Billing LAN after migration — the Billing units (CHUs) and the BDCU units reach the (DCN) IPBB via the L3 switching units, with the VRRP virtual router R3 as the default gateway (master/backup roles shown).]
g Note: If L2 LDI is in use, the preparations required may vary, for example because of
Multiple Spanning Tree Protocol (MSTP) regional configurations.
If LDI is in use, the correct topology file must be selected first according to
Use of LAN Ports in MSC Server (MSS) in M-release Site Documentation, in order to be
able to add the new SWU40 and SWU41 L3 switching units under LDI.
Procedure
1
Step 1: L2 connectivity in use, preparations for SWU40 and SWU41
[Figure: Step 1 — preparations for SWU40 and SWU41. External VLAN 1 carries (Media) Control (SCTP primary), External VLAN 2 carries Control (SCTP secondary) and External VLAN 3 carries (Media) Control (TCP/UDP). The signaling units (SUs) connect through the L2 switches SWU6, SWU7, ... towards the L2/L3 Site Switches 1 and 2 (HSRP R1 active/standby), while the VRRP R1 virtual router is prepared on SWU40 and SWU41; sub-step markers 1a to 1g are shown. AP = Alternate Port, DP = Designated Port, RP = Root Port.]
Sub-steps
a)
Configure the L3 uplinks using dedicated /30 transport networks (according to the
updated network plan) for L3 between SWU40 and L2/L3 Site Switch 1, as well
as between SWU41 and L2/L3 Site Switch 2. If multiple L3 uplinks are
configured, a dedicated /30 transport network is required for each L3 uplink.
b)
Check the IP connectivity from the L2/L3 Site Switch 1 to SWU40, and from
L2/L3 Site Switch 2 to SWU41 by pinging.
g Note: SWU40 and SWU41 are not reachable from the Mobile Packet Backbone
Network (MPBN) because routes have not been defined to SWU40 or SWU41.
c)
Configure the downlink interface ports towards the first level L2 SWUs based on
the planned L2 topology. Add the MSTP configuration in SWU40 and SWU41
according to the existing MSTP configuration, both in the first level L2 SWUs and
the whole core site. This means that identical MSTP regional configuration must
also be added to SWU40 and SWU41. Spanning tree is then enabled.
d)
Configure the VRRP R1 Virtual Router used as the default gateway on SWU40 and
SWU41 for the TCPUDP VLAN3. The same IP addresses must be used here as
are defined for HSRP R1 in the L2/L3 Site Switches 1 and 2. Additionally,
single-hop BFD is configured in the ESB24-D SWU variant for L3 uplink tracking
according to the site connectivity configuration instructions. Define the egress
routing (for example, 0.0.0.0/0 default routes via L3 uplinks) in SWU40 and
SWU41. If multiple TCPUDP VLANs are in use for all TCP/UDP and SCTP SH
types of egress traffic, the same number of VRRP Virtual Routers must be
configured in SWU40 and SWU41.
e)
Configure the same SCTP primary and secondary path interface configurations
in SWU40 and SWU41 as exist on the L2/L3 Site Switches 1 and 2 respectively.
Add the egress static routes for SCTP primary and secondary paths towards the
L2/L3 Site Switches 1 and 2. Backup routes are not configured in SWU40 or
SWU41 for SCTP primary and secondary MH paths.
f) Shut down the inter-switch ports in SWU40 and SWU41. Add the inter-switch
VRRP cables according to the cabling instructions in Use of LAN Ports in MSC
Server (MSS) in M-release Site Documentation. Define the VLAN trunk group
between SWU40 and SWU41. Enable the
inter-switch ports if they were disabled/shut down earlier. Check the VLAN group
status and that the CIST root and the MSTI roots are located on SWU40 or
SWU41 as expected (RP is found via the VLAN group link).
g Note: Instead of using VLAN trunking between SWU40 and SWU41, link
aggregation (LAG) using Link Aggregation Control Protocol (LACP) with passive
negotiation can also be used. However, using LAG with LACP between SWU40
and SWU41 has not been verified.
g)
Check that the HSRP active router R1 (or VRRP master) used for egress traffic
from TCPUDP VLAN(s) is located in the L2/L3 Site Switch 1. If multiple TCPUDP
VLANs exist, all active HSRP routers must be located on the L2/L3 Site Switch 1.
2
Step 2: L2 links from odd numbered SWUs to SWU40 and SWU41
[Figure: Step 2 — the L2 links from the odd numbered SWUs are re-cabled to SWU40 and SWU41; the external VLANs for (Media) Control (SCTP primary), Control (SCTP secondary) and (Media) Control (TCP/UDP) are shown with sub-step markers 2a to 2e.]
Sub-steps
a)
Shut down the TCPUDP VLAN(s) and SCTP secondary path VLAN interfaces in
L2/L3 Site Switch 2. Shut down the downlink ports towards the odd numbered
SWUs in the L2/L3 Site Switches 1 and 2.
g Note: The HSRP R1 standby router is now inactive (disabled). Also, the
incoming SCTP secondary path packets are now routed to a black-hole by the
L2/L3 Site Switch 2.
b)
Check that the SCTP secondary path failure alarm is triggered in the MSS/MGW
(and other SCTP MH peers).
g Note: For example, a failure printout for the SCTP secondary path failure for
M3UA would appear like this:
<HIST> MSS69LB6200 CMM-0 SWITCH 2009-11-11
12:31:13.75
* ALARM SIGU-0 1A001-01 S3CMAN
(0026) 3379 SCTP PATH FAILURE
03 02 0008 0000
c) Shut down the front plate uplink ports in the odd numbered SWUs. Re-cable the
odd numbered SWUs according to the cabling instructions (Use of LAN Ports in
MSS) by removing the old cabling and connecting the SWU40 and SWU41
internal cables to the back plate ports of the odd numbered SWUs (in 14AC,
14BC0, 14BC1, 14BC2 and optionally in 14CC cabinets).
g Note: If the MSTP regional configuration was changed in step 1c, corresponding
changes must now be made in each odd numbered SWU.
d)
Enable the odd numbered SWUs uplink ports (on the back plate) of the odd
numbered SWUs towards the SWU40 and SWU41. Check the interface port and
MSTP status on SWU40 and SWU41 towards the odd numbered SWUs.
e)
Check IP connectivity from the signaling units toward the SCTP secondary path
default-gateway IP address on SWU41 using ping.
g Note: When pinging, you must use the "SRC" option for selecting the source IP
address when local IP based default-gateway routing is used for SCTP MH
paths.
3
Step 3: Recovering SCTP secondary paths via SWU40 and SWU41
[Figure: Step 3 — the SCTP secondary paths are recovered via SWU40 and SWU41; a static route is added in the L2/L3 Site Switch 2 (sub-step 3a) and the path alarms are checked (sub-step 3b).]
Sub-steps
a) Add static route(s) via SWU41 for the SCTP secondary path subnet(s) in the
L2/L3 Site Switch 2. The SCTP secondary path(s) should recover via SWU41.
b)
Check that the (H.248) SCTP secondary path alarms triggered during Step 2b (in
MSS/MGW) are now cancelled. If not, check the IP connectivity towards the
signaling units from MGW to MSS, and in reverse, using ping and traceroute.
4
Step 4: Recovering TCPUDP traffic via SWU40
[Figure: Step 4 — a forced EL0 to EL1 switchover is made for the TCPUDP VLAN(s) on the signaling units (SUs); sub-step markers 4a to 4d are shown between the even numbered SWUs, SWU40/SWU41 and the L2/L3 Site Switches.]
g Note: The first planned service outage for TCPUDP VLANs starts here.
Sub-steps
a)
Make a forced EL0 to EL1 interface switchover (SWO) for the TCPUDP VLAN(s)
by shutting down the downlink ports towards the signaling units (the ports
connected to the EL0 interfaces) in the even numbered SWUs. The ETHERNET
INTERFACE FAILURE alarm is triggered and the EL0 to EL1 SWO is also detected.
g Note: The SCTP primary paths are lost at the same time and applications start
using the SCTP secondary paths automatically. For example:
<HIST> MSS69LB6200 SIGU-0 SWITCH 2009-11-12
14:38:49.21
** ALARM SIGU-0 1A001-01 NDVMAN
(0114) 3053 ETHERNET INTERFACE FAILURE
0000001D 02 08
g Note: Software updates can be undertaken for the even numbered SWUs
because the downlink ports towards the CPU units were shut down.
b)
Shut down the TCPUDP VLAN(s) in L2/L3 Site Switch 1.
g Note: The HSRP R1 active router is now inactive (disabled). The incoming
TCPUDP traffic-related IP packets are now routed to a black-hole by the L2/L3
Site Switches 1 and 2. L3 routing loops must not be created. Shut down the
downlink ports in the L2/L3 Site Switches 1 and 2 towards the even numbered
SWUs.
c)
Add a static route in L2/L3 Site Switch 1 for the TCPUDP subnet(s) via SWU40.
After this, the TCPUDP traffic flows should recover via SWU40 (a sketch follows
below).
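A minimal Cisco IOS sketch of sub-step c) on the L2/L3 Site Switch 1 follows; the TCPUDP subnet 10.23.200.0/24 and the SWU40 next hop 192.0.2.10 are hypothetical examples, not values from an actual network plan:
! Static route for the TCPUDP subnet via SWU40
ip route 10.23.200.0 255.255.255.0 192.0.2.10
! Verify that the route is installed and points at SWU40
show ip route 10.23.200.0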
5
Step 5: L2 links from even numbered SWUs to SWU40 and SWU41
[Figure: Step 5 — the SCTP primary path VLAN interface(s) are shut down in the L2/L3 Site Switch 1 and the even numbered SWUs (SWU6, ...) are re-cabled to SWU40 and SWU41; sub-step markers 5c and 5d and the Step 6 markers 6a and 6b are shown towards the IPBB.]
Sub-steps
a)
Shut down the SCTP primary path VLAN interface(s) in the L2/L3 Site Switch 1.
The incoming SCTP primary path packets are now routed to a black-hole by the
L2/L3 Site Switch 1 because a backup/floating route is not defined.
b)
Check that SCTP primary path failure alarm is triggered in MSS/MGW (if not
completed in Step 4a) and the other SCTP peers for the impacted SCTP
associations. After this the SCTP applications start using the SCTP secondary
paths because the SCTP primary paths are broken.
g Note: Refer to Step 4a for instructions on confirming SCTP primary path failure.
c)
Shut down the uplink ports on the front plate of the even numbered SWUs.
Re-cable the even numbered SWUs according to the cabling instructions in Use
of LAN Ports in MSC Server (MSS) in M-release Site Documentation by removing
the old cabling and connecting the SWU40 and SWU41 internal cables to the
back plate ports of the even numbered SWUs in 14AC, 14BC0, 14BC1, 14BC2
and optionally in 14CC cabinets. At this point, the SCTP primary paths appear to
recover, but only in the egress direction, as the ingress route back to SWU40
from the L2/L3 Site Switch 1 does not exist. Recovery is not possible until the
static route is enabled.
d) Enable the uplink ports on the even numbered SWUs towards the SWU40 and
SWU41, and the downlink ports towards the signaling units. Check the interface
port and MSTP status on SWU40 and SWU41 towards the even numbered
SWUs.
e)
Check the IP connectivity from the signaling units towards the primary path
default-gateway IP address on SWU40 using ping.
6
Step 6: Recovering SCTP primary paths
Figure 41 Step 6: Recovering primary SCTP paths via SWU40 and SWU41
Sub-steps
a) Add a static route via SWU40 for the SCTP primary path subnet(s) in the
L2/L3 Site Switch 1. After this the SCTP primary path(s) should recover via
SWU40.
b) Check that the SCTP (H.248) primary path failure alarms (in MSS/MGW and
other SCTP peers) are now cancelled. If not, check (H.248) connectivity to the
signaling units from the MGW, and in reverse, using ping and traceroute.
7
Step 7: Normalizing TCPUDP traffic via SWU40 and SWU41
[Figure: Step 7 — floating/backup static routes are added in the L2/L3 Site Switches (marker 7a) and the EL1 to EL0 switchover is made for the TCPUDP VLAN(s) on the signaling units (marker 7b).]
Sub-steps
a)
Add floating/backup static routes in the L2/L3 Site Switch 1 for the TCPUDP
subnet(s) via L2/L3 Site Switch 2, and in reverse. Additionally, configure BFD
single-hop tracking in L2/L3 Site Switches 1 and 2 and SWU40 and SWU41
based on the site connectivity configuration instructions.
g Note: Single-hop BFD configuration principles are outside the scope of these
guidelines.
g Note: The second planned service outage for TCPUDP VLAN(s) begins here.
b)
If EL0 was prioritized (for TCPUDP VLAN) on the signaling units, check that the
automatic EL1 to EL0 interface SWO was executed by the system as expected
after enabling the downlink ports on the even numbered SWUs. After this, EL0 is
used for the TCP/UDP traffic flows on TCPUDP VLAN(s).
g Note: The second planned service outage for TCPUDP VLAN(s) ends here.
Procedure
1
Migrate to L3 in one of the N+X redundant MSSs.
• Connect it to a test bed MGW to check operability.
• Redundant MSS-specific IP addressing is used.
• An N+X redundant MSS that has not been migrated to L3 can only back up
protected MSSs that have also not been migrated to L3.
2
Migrate to L3 in one live MSS.
3
Migrate the protected MSSs protected by the L3 redundant MSS (RMSS).
5
Migrate the other N+X MSS to L3.
6 In the first live MSS, create one unit pool and vMGW.
7
During a period of low traffic, copy the SW configuration into the RMSS for that SW
package.
8
If successful, migrate the live MSS entirely.
9
Copy configuration into the N+X switch (for that SW package).
10
Either migrate one MSS entirely, or do it gradually as with the first switch.
11
During another period of low traffic, copy MSS changes into the RMSS with that SW
package.
Alternative 2: If just one N+X RMSS is available, then the RMSS can back up either
L3-migrated protected MSSs or L2 MSSs. An RMSS cannot support both L2 and L3
MSSs because of the different cabling required.
For further information, see N+X MSS Redundancy Operating Guidelines.
Procedure
1
Remove L2 VLANs and L2 tagging.
2
Check the interface descriptions to make sure that the descriptions match the
new usage.
3
Take a backup of the configuration.
4 Update network diagrams and IP address lists to match the migrated configuration.
Purpose
The SIP load balancer (LB) reduces the number of externally visible IP addresses. The
adjacent network elements are no longer required to perform load balancing on behalf
of the MSC Server.
The MSS supports SIP load balancing when UDP, TCP or SCTP is used as the transport
protocol.
[Figure: SIP load balancing — the LB (SIP) units in the MSS front the SIGUs towards the adjacent MSSs.]
g Note: For information about the SIP LB and the MSS N+X Redundancy Solution see
N+X MSS Redundancy Operating Guidelines.
• The SIP LB Sales item, L.4241, must be ordered in addition to the standard NVS and
SIP sales items.
• SIP over SCTP requires sales item L.1432 together with sales item L.4241. This
sales item allows SIP-over-SCTP load balancing.
• An Ethernet Message Bus (EMB) is available in the MSS.
• A pair of ESB24-As (in old deliveries) exists in Layer 2 (L2) SWUs in the IPDU unit
racks. For newer deliveries, ESB24-D is also compatible.
• L3 IP connectivity and the associated sales item, L.3397, are required in the MSS.
Another pair of ESB-24A SWUs is required on the L3 layer.
• Virtual CIC of SIP must be deactivated before or during the migration.
The migration is different depending on the use case; for the NVS functionality of the
MSS, several use cases exist.
[Table: control plane VLANs for the MSS — VLAN ID, IPv4 subnet, default gateway and interface address; the internal LAN uses local (default) addressing.]
Activate the licenses.
ZW7M:FEA=1462:ON:;
2 If DNS is used by the adjacent network element(s), change the Time To Live
(TTL) timers to a shorter period. The Fully Qualified Domain Name (FQDN) visible to
the adjacent element may be the MSS public FQDN, the MSS private FQDN, or a
unit level FQDN.
TTL is set on a zone basis, with a granularity of one second. Change the TTL
gradually to a very short period, for instance, one minute (see the sketch below).
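A minimal sketch of such a TTL reduction, assuming a BIND-style zone file for a hypothetical zone example.com; the zone name, serial and addresses are illustrative only, not values from this procedure:
$TTL 60               ; default record TTL lowered from e.g. 86400 to 60 seconds
@   IN  SOA ns1.example.com. admin.example.com. (
        2024010101    ; serial - increment on every change
        3600          ; refresh
        600           ; retry
        604800        ; expire
        60 )          ; negative-caching TTL
mss IN  A   10.23.153.66   ; MSS FQDN that will later point to the VIP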
3
Add all the Internet Protocol Director Unit (IPDU) computer units that are required
into the MSS. If the existing signaling units (SCPUs/CCSUs/SIGU) must be
converted to IPDUs, move some vMGWs to other signaling units, and convert the
signaling unit(s) to IPDU.
Temporarily adding additional signaling units may help.
Physical
ZQRN:SCPU,0::VLAN400:"169.254.0.90",23,P,:::;
ZQRN:SCPU,1::VLAN400:"169.254.0.91",23,P,:::;
ZQRN:SCPU,2::VLAN400:"169.254.0.92",23,P,:::;
SIP access
VIPs
ZQRN:SCPU,0::VLAN400:"10.23.153.67",32,L,V,:::;
ZQRN:SCPU,1::VLAN400:"10.23.153.67",32,L,V,:::;
ZQRN:SCPU,2::VLAN400:"10.23.153.67",32,L,V,:::;
ZQRN:IPDU,0::VLAN1300:"10.23.153.67",26,L,V,:::;
SIP trunk
VIPs
ZQRN:SCPU,0::VLAN400:"10.23.153.68",32,L,V,:::;
ZQRN:SCPU,1::VLAN400:"10.23.153.68",32,L,V,:::;
ZQRN:SCPU,2::VLAN400:"10.23.153.68",32,L,V,:::;
ZQRN:IPDU,0::VLAN1300:"10.23.153.68",26,L,V,:::;
5
Create LBS GROUPs. Add all the units for each load share group together. For each
LBS, configure IP addresses, port numbers and other relevant parameters. Different
SIP use cases may require different LBS groups. For example, one for SIP-I and
another for NVS SIP.
a) LBSSIP: MGCF/SIP-I over UDP/TCP
ZJJE:CREATE:NAME=LBS-
MGCFSIPI,TYPE=SIP:HASHT=120,:PROT=TCP,PORT=5060,PROLE=1,I
P="10.23.153.66":UNIT=SIGU,IND=0;
ZJJE:MODIFY:NAME=LBS-MGCFSIPI,TYPE=SIP:::UNIT=SIGU,IND=1;
ZJJE:MODIFY:NAME=LBS-MGCFSIPI,TYPE=SIP:::UNIT=SIGU,IND=2;
ZJJE:MODIFY:NAME=LBS-
MGCFSIPI,TYPE=SIP::PROT=UDP,PORT=0,PROLE=2,IP="10.23.153.
66":;
ZJJE:MODIFY:NAME=LBS-
MGCFSIPI,TYPE=SIP::PROT=TCP,PORT=0,PROLE=2,IP="10.23.153.
66":;
ZJJE:MODIFY:NAME=LBS-
MGCFSIPI,TYPE=SIP::PROT=TCP,PORT=0,PROLE=1,IP="10.23.153.
66":;
ZJJC:CREATE:IPDU,2:INT=VLAN400,EXT=BOND0; // IPDU2 is spare at
the moment in the example.
ZJJC:CREATE:IPDU,0:INT=VLAN400,EXT=BOND0,NAME=LBS-
MGCFSIPI;
b) LBSSIP: SIP trunk over UDP/TCP
ZJJE:CREATE:NAME=LBS-SIPTRUNK,
TYPE=SIP:HASHT=120,:PROT=TCP,PORT=5060,PROLE=1,IP="10.23.1
53.68":UNIT=SCPU,IND=0:;
ZJJE:MODIFY:NAME=LBS-SIPTRUNK,TYPE=SIP:::UNIT=SCPU,IND=1;
ZJJE:MODIFY:NAME=LBS-SIPTRUNK,TYPE=SIP:::UNIT=SCPU,IND=2;
ZJJE:MODIFY:NAME=LBS-
SIPTRUNK,TYPE=SIP::PROT=UDP,PORT=0,PROLE=2,IP="10.23.153.
68";
ZJJE:MODIFY:NAME=LBS-
SIPTRUNK,TYPE=SIP::PROT=TCP,PORT=5070,PROLE=1,IP="10.23.1
53.68";
ZJJE:MODIFY:NAME=LBS-
SIPTRUNK,TYPE=SIP::PROT=UDP,PORT=5070,PROLE=1,IP="10.23.1
53.68";
ZJJE:MODIFY:NAME=LBS-
SIPTRUNK,TYPE=SIP::PROT=TCP,PORT=0,PROLE=1,IP="10.23.153.
68";
ZJJE:MODIFY:NAME=LBS-
SIPTRUNK,TYPE=SIP::PROT=TCP,PORT=0,PROLE=2,IP="10.23.153.
68";
ZJJC:CREATE:IPDU,2:INT=VLAN400,EXT=BOND0; // unit 2 is spare at the
moment in the example.
ZJJC:CREATE:IPDU,0:INT=VLAN400,EXT=BOND0,NAME=LBS-
SIPTRUNK:;
LBSSIP: SIP ACCESS
ZJJE:CREATE:NAME= LBS-NVS69,
TYPE=SIP:HASHT=120,:PROT=TCP,PORT=5060,PROLE=1,IP="10.23.
153.67":UNIT=SCPU,IND=0:;
ZJJE:MODIFY:NAME= LBS-NVS69,TYPE=SIP:::UNIT=SCPU,IND=1;
ZJJE:MODIFY:NAME= LBS-NVS69,TYPE=SIP:::UNIT=SCPU,IND=2;
ZJJE:MODIFY:NAME= LBS-
NVS69,TYPE=SIP::PROT=TCP,PORT=5060,PROLE=1,IP="10.23.153.
67";
ZJJE:MODIFY:NAME=LBS-
NVS69,TYPE=SIP::PROT=UDP,PORT=5060,PROLE=1,IP="10.23.153.
67";
ZJJE:MODIFY:NAME=LBS-
NVS69,TYPE=SIP::PROT=UDP,PORT=0,PROLE=2,IP="10.23.153.67"
;
ZJJE:MODIFY:NAME=LBS-
NVS69,TYPE=SIP::PROT=TCP,PORT=0,PROLE=1,IP="10.23.153.67"
;
ZJJE:MODIFY:NAME=LBS-
NVS69,TYPE=SIP::PROT=TCP,PORT=0,PROLE=2,IP="10.23.153.67"
;
ZJJE:MODIFY:NAME=LBS-
NVS69,TYPE=SIP::PROT=TCP,PORT=5070,PROLE=1,IP="10.23.153.
67";
ZJJE:MODIFY:NAME=LBS-
NVS69,TYPE=SIP::PROT=UDP,PORT=5070,PROLE=1,IP="10.23.153.
67";
ZJJC:CREATE:IPDU,2:INT=VLAN400,EXT=BOND0; // unit 2 is spare at the
moment in the example.
ZJJC:CREATE:IPDU,0:INT=VLAN400,EXT=BOND0,NAME=LBS-NVS69:;
c) LBSSIP: MGCF/SIP-I over SCTP MH
ZJJE:CREATE:NAME=LBS-
SCTPSIPI,TYPE=SIP:HASHT=120,:PROT=SCTP,PORT=5060,PROLE=1,
IP="10.23.151.66",IP2="10.23.152.66":UNIT=SIGU,IND=0;
ZJJE:MODIFY:NAME=LBS-SCTPSIPI,TYPE=SIP:::UNIT=SIGU,IND=1;
ZJJE:MODIFY:NAME=LBS-SCTPSIPI,TYPE=SIP:::UNIT=SIGU,IND=2;
ZJJE:MODIFY:NAME=LBS-
SCTPSIPI,TYPE=SIP::PROT=SCTP,PORT=0,PROLE=2,IP="10.22.95.
3",IP2="10.22.96.3"::;
ZJJC:CREATE:IPDU,2:INT=VLAN400,EXT=BOND0; // unit 2 is spare at the
moment in the example.
ZJJC:CREATE:IPDU,0:INT=VLAN400,EXT=BOND0,NAME=LBS-
SCTPSIPI;
6
Create source-based IP routing for the VIPs in the SIGU/SCPUs and IPDUs. In DX
200 IPDUs, the GW logical addresses are from the 169.254.0.0 link local address
subnet. In the example, 169.254.0.1 is IPDU-0's GW address for external traffic
(UDP, TCP, SCTP), and 10.22.97.1 is the site router's VRRP/HSRP IP address.
a) MGCF / SIP-I over UDP/TCP
ZQKM:SIGU::"10.23.153.66":"169.254.0.1":LOG:;
ZQKM:IPDU,0::"10.23.153.66":"10.22.97.1":LOG:;
b) SIP access + trunk
ZQKM:SCPU::"10.23.153.67":"169.254.0.1":LOG:;
ZQKM:SCPU::"10.23.153.68":"169.254.0.1":LOG:;
ZQKM:IPDU,0::"10.23.153.67":"10.22.97.1":LOG:;
ZQKM:IPDU,0::"10.23.153.68":"10.22.97.1":LOG:;
c) MGCF / SIP-I over SCTP
ZQKM:SIGU::"10.23.151.66":"169.254.0.1":LOG:;
Static IP routes for IPDU with SCTP multihoming. 10.102.25.80/29 and
10.102.25.88/29 are the remote end's IP subnets.
ZQKM:IPDU,0::"10.102.25.80,29":"10.23.151.1":LOG:;
ZQKM:IPDU,0::"10.102.25.88,29":"10.23.152.1":LOG:;
7
For each SIP signaling unit:
• If DNS is used by adjacent elements, change the DNS so that the MSS FQDN
points to the VIP. In case of SIP-I over SCTP, the FQDN points to the IPDU's
primary SCTP multi-homing VIP only, which is the same as SIGU SCTP VIP
(10.23.151.66 in example configuration).
• Simultaneously, for each SIP signaling unit, change its SIP end IP address in
the MSS to a VIP address. After the license is active, you can create both load
balanced SIP IP addresses and non-load balanced / non-SIP addresses. Non-
balanced / non-SIP addresses may, for instance, be used for Lightweight
Directory Access Protocol (LDAP).
MSS software reads the end IP address at the beginning of the call, and uses
that value throughout the call session. On-going calls are not released and,
thus, continue using the old, unit-specific IP address. After the end addresses
have been changed, new calls use the new IP addresses.
• Non-SIP traffic can use the existing logical IP addresses.
The DNS is changed to point to the VIP.
g Note: Traffic cut-over starts here for incoming (from this MSS point of view) calls. There
are ip_cp_unreachable (920H) reason codes in the adjacent element, if it is a
Nokia MSS. The problem is invisible in the MSS under migration since the MSS is not
listening to the IP address and socket related to it.
8 For each SIP signaling unit, if DNS is not used, configure the adjacent element or
Session Border Controller (SBC) to point to the VIP instead of the unit-based IP
address. The SBC can be set to use fixed IP addresses.
The SBC may use two VIPs as MSC addresses via different IPDUs. This approach
provides greater resiliency.
The SBC is changed accordingly.
9
Flush the DNS cache for all peer elements if possible. If the adjacent element is a
DX200 based MSS, then switch the DNS cache OFF and then ON.
g Note: Traffic cut-over ends when this step is complete. If the adjacent element is a
Nokia MSS, then cl_a_onhook_set_up_phase (30AH) reason codes appear until
this step is completed.
LDAP and DNS IP addresses are from the signaling unit TCPUDP subnet.
ZJDC:SIGU,0:NONSIPIP="10.23.153.66",:;
ZJDC:SIGU,1:NONSIPIP="10.23.153.67",:;
ZJDC:SIGU,2:NONSIPIP="10.23.153.67",:;
ZJDC:SCPU,0:NONSIPIP="10.23.153.86",:;
ZJDC:SCPU,1:NONSIPIP="10.23.153.87",:;
ZJDC:SCPU,2:NONSIPIP="10.23.153.87",:;
10 For each SIP signaling unit, delete the unit-specific IP addresses if they are not used
for other purposes.
g Note: Ongoing calls using the old unit-specific addresses will be dropped.
In this example, the deletion process is not detailed because the existing IP
addresses are used for a non-SIP related use.
ZQRK::NO;
ZQRK::YES;
11
Change the DNS validity times back to normal.
Purpose
The H.248 Load Balancer (LB) reduces the number of SCTP associations between the
MGW and the MSC Server. The H.248 LB also reduces the number of externally visible
IP addresses. After the migration, there will be fewer virtual MGWs in both network
elements but with comparable capacities.
The operator has a number of alternative ways to approach migration. The following
main options can be considered and each is equally reasonable:
1. Migrate on a physical MGW (pMGW) basis. Here the operator migrates the
resources of a pMGW one by one.
2. Migrate on MSC Server basis.
3. Migrate on adjacent element basis (for example, BSC, RNC, fixed switch).
If the operator has the Multipoint A/Iu feature in particular, the approaches can be mixed.
[Figure: H.248 load balancing — the LB (H.248) units in the MSS front the SIGUs towards the MGWs.]
g Note: For information about the H.248 load balancer and the N+X MSS redundancy
solution see N+X MSS Redundancy Operating Guidelines.
• Prior IP planning is a prerequisite. It must be possible to introduce new IPs for the
load balancer. Moving an existing IP address is not described in this document.
• Prior network planning is a prerequisite, not only for the final configuration but for
intermediate steps. Pay attention to vMGW load sharing weights. The weight shall be
in balance with the number of GISUs in the unit pool.
• A number of sales items are required:
– ISU, Interface Signal Unit CCP18/2GB 3GN05H0007 CCP18-A / -C CPU in the
ISU.
– Sales item L.4242 for the MSS.
– Sales item 3GNLIC0056 for the MGW.
• L3 IP connectivity and its associated sales item, L.3397, are required in the MSS. A
pair of ESB24-A (old deliveries) is required on the L3 layer, ESB24-D in new deliveries.
• An Ethernet Message Bus (EMB) is required in the MSS. The Integrated MSS (MSSi)
does not support the LB because of the EMB requirement. The MSSu product has
EMB support: the switching matrix must first be removed from the MSSu and EMBs
installed, after which the LB can be used.
• One free vMGW is available in the MSS and the MGW. (If not, a vMGW needs to be
deleted and its resources moved to another vMGW).
• A free SIGU is needed. The migration is easier if there are extra temporary units
available. The extra units can be removed once the migration is complete.
Table 5 MSS internal LANs and IPv4 subnets in the example (Cont.)
VLAN ID: 400
Control Plane VLAN: Internal (Media) Control VLAN. Link local IP addresses are used.
In DX 200 MSS, this subnet is used for physical IPs and IPDU gateway IP addresses.
IPv4 subnet: 169.254.0.0/27
Default gateway and interface addresses (local):
IPDU-0 (WO): 169.254.0.1 (GW), 169.254.0.3, 169.254.0.4
IPDU-1 (WO): 169.254.0.2 (GW), 169.254.0.5, 169.254.0.6
IPDU-2 (SP): 169.254.0.7, 169.254.0.8
SIGU-0 (WO): 169.254.0.9, 169.254.0.15
SIGU-1 (WO): 169.254.0.10, 169.254.0.16
SIGU-2 (SP): 169.254.0.11
Steps
1
Activate licenses.
MSS:
ZW7M:FEA=1473:ON:;
MGW:
ZW7M:FEA=1424:ON:;
2
An optional step: In the MGW add Slave ISUs (SISUs) to the existing vMGWs. The
operator may observe MGW functionality for one day to ensure that the MGW
feature operates as expected.
3
If possible, add extra SIGUs temporarily. Similarly, add IPDU hardware (HW)
temporarily.
g Note: The N+X redundant switch must have at least the same number of signaling
units as the largest protected switch. The same requirement applies to IPDUs also.
4
Add all the IPDU computer units to the MSS that will be required. If signaling units
(SCPUs/CCSUs/SIGUs) must be converted to IPDUs, move some vMGWs to other
signaling units, and then convert the signaling unit to an IPDU.
In case of changing a signaling unit to an IPDU, move vMGWs out from the source
signaling unit. “Move” in this context means deletion and re-creation.
The target signaling units shall have spare capacity to be able to handle the traffic.
5
For each H.248 unit pool:
• Create new IP addresses and VLANs required for the H.248 LB, if they do not
already exist. Also existing SIGU IP addresses may be used.
• Allocate Virtual IPs (VIPs) for each H.248 unit pool.
• Define the VIPs in each signaling unit that will belong to a H.248 pool.
• In the MSS, create source IP address-based default gateway in each signaling
unit of the H.248 unit pool.
In the example below, 2 IPDUs are created and one of them is used for the H.248
unit pool.
MSS:
ZQRN:IPDU::EL0:::UP;
ZQRN:IPDU::EL1:::UP;
ZQRN:IPDU::BOND0:;
ZQRA:IPDU::VLAN400:400:BOND0:;
ZQRN:IPDU,0::VLAN400:"169.254.0.1",27,L,::;
ZQRN:IPDU,1::VLAN400:"169.254.0.2",27,L,::;
ZQRN:IPDU,0::VLAN400:"169.254.0.3",27,P,::;
ZQRN:IPDU,0::VLAN400:"169.254.0.4",27,P,::;
ZQRN:IPDU,1::VLAN400:"169.254.0.5",27,P,::;
ZQRN:IPDU,1::VLAN400:"169.254.0.6",27,P,::;
ZQRN:IPDU,2::VLAN400:"169.254.0.7",27,P,::;
ZQRN:IPDU,2::VLAN400:"169.254.0.8",27,P,::;
ZQRN:SIGU::BOND0:;
ZQRA:SIGU::VLAN400:400:BOND0;
ZQRN:SIGU,0::VLAN400:"169.254.0.9",27,P,:::;
ZQRN:SIGU,1::VLAN400:"169.254.0.10",27,P,:::;
ZQRN:SIGU,2::VLAN400:"169.254.0.11",27,P,:::;
ZQRA:IPDU::VLAN1100:1100:BOND0;
ZQRA:IPDU::VLAN1200:1200:BOND0;
ZQRN:IPDU,0::VLAN1100:"10.23.151.66",26,L,V,::;
ZQRN:IPDU,0::VLAN1200:"10.23.152.66",26,L,V,::;
ZQRN:IPDU,1::VLAN1100:"10.23.151.67",26,L,V,::;
ZQRN:IPDU,1::VLAN1200:"10.23.152.67",26,L,V,::;
ZQRN:SIGU,0::VLAN400:"10.23.151.66",32,L,V,::;
ZQRN:SIGU,1::VLAN400:"10.23.151.67",32,L,V,::;
ZQRN:SIGU,0::VLAN400:"169.254.0.15",27,L,:::;
ZQRN:SIGU,1::VLAN400:"169.254.0.16",27,L,:::;
Create static IP routes for SIGU and IPDU with SCTP multihoming.
ZQKM:SIGU::"10.23.151.66":"169.254.0.1":LOG:;
ZQKM:SIGU::"10.23.151.67":"169.254.0.2":LOG:;
ZQKM:IPDU,0::"10.23.151.66":"10.23.151.65":LOG:;
ZQKM:IPDU,1::"10.23.151.67":"10.23.151.65":LOG:;
ZQKM:IPDU,0::"10.23.152.66":"10.23.152.65":LOG:;
ZQKM:IPDU,1::"10.23.152.67":"10.23.152.65":LOG:;
6 If no extra units are available for temporary use, move the vMGWs from a signaling
unit to another one.
7
Create H.248 load balancing groups.
MSS:
ZJJE:CREATE:NAME=H248LB0,TYPE=H248;
ZJJE:CREATE:NAME=H248LB1,TYPE=H248;
8 If no other LB exists in the IPDUs, then configure the load balancing units in the
following way:
MSS:
ZJJC:CREATE:IPDU,2:INT=VLAN400,EXT=BOND0; (Spare unit first)
ZJJC:CREATE:IPDU,0:INT=VLAN400,EXT=BOND0;
ZJJC:CREATE:IPDU,1:INT=VLAN400,EXT=BOND0;
9 If any other LB exists in the IPDUs, then configure the load balancing units in the
following way:
MSS:
ZJJE:MODIFY:IPDU,0:INT=VLAN400,EXT=BOND0,NAME=H248LB0:;
ZJJE:MODIFY:IPDU,1:INT=VLAN400,EXT=BOND0,NAME=H248LB1:;
10 Create the H.248 unit pool in a MSS with one signaling unit. Create a new (target)
vMGW in the MSS using the H.248 unit pool.
MSS:
ZJEC:POOLID=0,NAME=H248POOL0,POOLADDR=10.23.151.66,UIDX=0,SECUNSEL=Y;
ZJEA:POOLID=0,UTYPE=SIGU,UIDX=0,IPADDR=169.254.0.15,ROLE=PRI,;
ZJGC:NAME=MGW69LB,MGWTYP=GEN,ADDR="10.102.25.6",PORT=8010:UPOOL=0:AES
A="E-23469",NBR=1,LBCU=1,:USEPARS=0,DEFPARS=4,;
11
Create the inter-vMGW connections for the new vMGW.
You may set “TDM borrowing” (PRFILE 53:13) to true (=1). This may help if all
TDM resources are not yet moved. MGW00 and MGW01 are old existing virtual
MGWs.
MSS:
ZJFT:NMGW=MGW00:NMGW=MGW69LB:BNCC=IPV4;
ZJFT:NMGW=MGW01:NMGW=MGW69LB:BNCC=IPV4;
12
Create a new vMGW in the MGW. If the LB feature was active in the MGW, existing
ISUs would become Master ISU (MISU) for the old vMGWs. Add Slave ISU units
(SISUs) for the new vMGW.
13
Initiate the registration of the new vMGW to the MSS. The registration creates SCTP
associations between the MSS (VIPs) and the MGW (MISU).
MGW: (MGW69)
ZJVC:VMN=MSS69LB,UINX=0,:OIP="10.102.25.6",OPN=8010,:A2T=55,::;
ZJVA:VID=3,::PIP="10.23.151.66",:SIP="10.23.152.66",:;
ZJVN:VID=3,:CNT=0,:NBR=1,;
ZJVN:VID=3,:CNT=0,:::TFO=0,;
ZJVN:VID=3,:CNT=0,:::IPS=1,;
ZJVN:VID=3,:CNT=0,:::DII=1,;
ZJVJ:VID=3,:SISU=0,:LBS=0,:;
ZJVG:ISU=3,:BLP=45,:;
ZJVR:VID=3:REGA=1;
14
If necessary, add signaling units to the existing H.248 unit pool.
This may require moving existing non-pooled vMGWs and related SCTP
associations to other signaling units. M3UA associations can be left unchanged.
IPDU reset is not needed. However, H.248 traffic is cut for that load share
group/H.248 unit pool.
g Note: A vMGW supports only ten CGRs. Creating more vMGWs may be required.
15
For ISUP, PBX TDM trunks:
• In the MSS, shut down the CGR, lock the CGR, and then move the CGR into the
new vMGW.
• In the MGW, set the circuits into Barred by User (BA-US) state.
• In the MGW, remove the CGR from the source vMGW.
• In the MGW, add the CGR to the target vMGW.
• In the MGW, set the circuits to working state.
• In the MSS, unlock the CGR.
• Add the CGR to the route or special route.
• Inter-MGW trunks must also be moved.
MSS (part 1):
ZCEL:NCGR=PSTN,;
ZCEM:NCGR=PSTN:SHUTDOWN:;
ZCEL:NCGR=PSTN,;
ZCEM:NCGR=PSTN:LOCK:;
ZRCN:NCGR=PSTN:MGW=MGW69LB,;
MGW:
ZCIM:CRCT=48-2&&-31:BA;
ZJVF:VMN=MSS69NONLB:CGR=4:;
ZJVE:VMN=MGW69:CGR=4:;
ZCIM:CRCT=48-2&&-31:WO;
MSS (part 2):
ZCEM:NCGR=PSTN:UNLOCK:;
PBX:
MSS (part 1):
ZCEL:NCGR=CISCOPBX,;
ZCEM:NCGR=CISCOPBX:SHUTDOWN:;
ZCEL:NCGR=CISCOPBX,;
ZCEM:NCGR=CISCOPBX:LOCK:;
ZRCN:NCGR=CISCOPBX:MGW=MGW69LB,;
MGW:
ZCIM:CRCT=52-1&&-15&-17&&-31:BA;
ZJVF:VMN=MSS69NONLB:CGR=1052:;
ZJVE:VMN=MGW69:CGR=1052:;
ZCIM:CRCT=52-1&&-15&-17&&-31:WO;
MSS (part 2):
ZCEM:NCGR=CISCOPBX:UNLOCK:;
16
For BSC TDMs:
• The same steps as above must be completed for the TDM trunks. If the BSC is a
multi-homed BSC via two pMGWs or vMGWs then there will be no break in
traffic.
There may be multiple CGRs due to codec pools. If this is the case then handle
all CGRs at one time.
• A typical case has two CGRs in two vMGWs in the same pMGW. After the
migration, one vMGW and one CGR is the most feasible option.
MSS (part 1):
ZCEL:NCGR=BSS692ND,;
ZCEM:NCGR=BSS692ND:SHUTDOWN:;
ZCEL:NCGR=BSS692ND,;
ZCEM:NCGR=BSS692ND:LOCK:;
ZRCN:NCGR=BSS692ND:MGW=MGW69LB,;
MGW:
ZCIM:CRCT=11900-1&&-28:BA;
ZJVF:VMN=MSS69NONLB:CGR=669:;
ZJVE:VMN=MGW69:CGR=669:;
ZCIM:CRCT=11900-1&&-28:WO;
MSS (part 2):
ZCEM:NCGR=BSS692ND:UNLOCK:;
17
For RNC resources:
• Add the new vMGW with zero weight on the RNC UPD as soon as the vMGW
exists (RNC UPD=1 in this example).
• Set the new vMGW weight to a value greater than zero for the UPD.
• Set the old vMGW weight to zero: no calls will be allocated to the old vMGW.
A vMGW may only be allocated for a call if it has TDM resources allocated.
MSS (RNC):
ZJFA:UPD=1:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=1:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=1:NMGW=MGW00:LDSH=0:;
18
For BICC and SIP resources follow the same steps as detailed above for the RNC.
ZJFA:UPD=2:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=2:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=2:NMGW=MGW00:LDSH=0:;
ZJFA:UPD=3:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=3:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=3:NMGW=MGW00:LDSH=0:;
ZJFA:UPD=5:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=5:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=5:NMGW=MGW00:LDSH=0:;
ZJFA:UPD=10:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=10:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=10:NMGW=MGW00:LDSH=0:;
ZJFA:UPD=12:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=12:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=12:NMGW=MGW00:LDSH=0:;
ZJFA:UPD=20:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=20:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=20:NMGW=MGW00:LDSH=0:;
19
Repeat the resource removal steps detailed above (steps 15 to 18) for all the vMGWs
that must be removed.
20
Create another virtual MGW and unit pool.
1. MSS:
ZJEC:POOLID=1,NAME=H248POOL1,POOLADDR=10.23.151.67,UIDX=1,SECUNSEL=Y;
ZJEA:POOLID=1,UTYPE=SIGU,UIDX=1,IPADDR=169.254.0.16,ROLE=PRI,;
ZJGC:NAME=MGW138LB,MGWTYP=GEN,ADDR="10.102.25.7",PORT=8010:UPOOL=0:AE
SA="E-23469",NBR=1,LBCU=1,:USEPARS=0,DEFPARS=4,;
Create the inter-vMGW connections for the new vMGW:
ZJFT:NMGW=MGW00:NMGW=MGW138LB:BNCC=IPV4;
ZJFT:NMGW=MGW01:NMGW=MGW138LB:BNCC=IPV4;
ZJFT:NMGW=MGW69:NMGW=MGW138LB:BNCC=IPV4;
MGW: (MGW138)
ZJVC:VMN=MSS138LB,UINX=1,:OIP="10.102.25.7",OPN=8013,:A2T=55,::;
ZJVA:VID=4,::PIP="10.23.151.67",:SIP="10.23.152.67",:;
ZJVN:VID=4,:CNT=0,:NBR=1,;
ZJVN:VID=4,:CNT=0,:::TFO=0,;
ZJVR:VID=4,:REGA=1,;
2. MSS:
Add vMGW to relevant UPDs. Here are some examples.
ZJFA:UPD=0:NMGW=MGW138LB:LDSH=0;
ZJFW:UPD=0:NMGW=MGW138LB:LDSH=1:;
ZJFW:UPD=0:NMGW=MGW138:LDSH=0:;
ZJFA:UPD=138:NMGW=MGW138LB:LDSH=0;
ZJFW:UPD=138:NMGW=MGW138LB:LDSH=1:;
ZJFW:UPD=138:NMGW=MGW138:LDSH=0:;
22
Delete unused vMGWs.
1. MSS:
ZJGM:MGWID=0::::REGA=N,;
ZJGM:MGWID=1::::REGA=N,;
ZJGD:MGWID=0:;
ZJGD:MGWID=1:;
MGW: (MGW00)
ZJVD:VID=0,;
1. MGW: (MGW01)
ZJVD:VID=1,;
23
Free SIGUs.
The SIGUs/GISUs without vMGW can be moved into some of the H.248 unit pools.
t Tip: IP forwarder changes are IP routing modifications; the number of connections
does not increase their complexity.
Terms
CE Customer Edge (device/router). This is an L2/L3 multilayer
site switch (or a site router).
Downtimes
For the SIGTRAN (SCTP) multihoming traffic, there is no downtime at all, because the
primary and secondary paths are migrated separately. For the UDP/TCP and SCTP
single homing traffic, the traffic is cut at step 16, when the SWU interfaces are shut
down. Traffic is only restored when the site routers have changed their IP routing. For
Cisco routers, this change takes about 5 minutes, and for Juniper about 30 seconds.
With Juniper routers, the configuration can be prepared in advance and then taken into
use with the commit command. The exact traffic cut time depends on the number of
static routes and links. In total, this takes between 10 and 15 minutes.
Before you start
The prerequisites for the following steps are that:
• Two or more working and one spare IPDU computer units are installed and each
IPDU has the site connectivity cables connected to the site routers using the direct
blade connectivity from the IPDU units via RTM modules equipped with the correct
SFP transceivers based on the selected physical media (that is, optical fibre vs.
copper 1GbE): 1000BASE-LX, 1000BASE-SX or 1000BASE-T. If you have more
than two working IPDUs available, use two of them as IP forwarders.
• M16.0 EP1, M16.1 or M16.2 SW is installed on Open MSS.
• The needed IPv4 subnets used for the VLAN routing interfaces on IPDU units and
CE devices have been allocated from the free IPv4 subnet range of the customer.
• Site routers have free 1GbE ports for the IPDU connections via the RTM back plane
(SFP) ports, one 1GbE port per IPDU (including spare IPDUs) on both CE devices.
Otherwise, returning to the old configuration is not possible in case of problems. In
addition, the IPDU does not support 10GbE interfaces, which may exist in AHUBs.
• A physical topology plan exists for cable locations and Ethernet port usage.
• A preliminary study has been performed based on the following site connectivity
documents.
1. MSS System documents:
– Site Connectivity Guidelines for MSS IMS System (ATCA platform specific
info)
Summary
IP forwarder migration has several possible cases.
The migration is executed so that the SCTP secondary path is migrated first, then the
SCTP primary path plus the TCP/UDP connections. There are different possibilities for
the ingress/egress routing policy when multiple active IPDU units are used as IP
forwarders. One option is that all TCP traffic flows use the secondary SCTP path IPDU,
and another option is that the primary IPDU is used. Which alternative is selected
depends on the operator's preference. In this example, the primary path's IPDU (that
is, IPDU-0) is used for TCP/UDP traffic, and the instructions are described for 2+1
IPDU units. The VLR backup configuration is not included in the example but can be
performed according to the site configuration guidelines. This example does not use
floating backup routing on CEs towards the TCPUDP subnet(s).
g Note: Other subnets not presented in this document may also have to be migrated.
The following figure illustrates the starting situation (for active traffic flows; this illustration
does not show any alternate or backup paths for the different traffic flows) where control
plane traffic is routed via the L3 point-to-point links between the SWU-0/1 and CE
devices:
[Figure: starting situation — the GISUs reach CE-1 and CE-2 via the AHUB3-A/SWU-0 and SWU-1 L3 point-to-point links (ports 3/1 and 3/4, gateways 10.23.152.1 and 10.23.152.65); the TCP/UDP flows and SCTP associations 1 and 2 are shown.]
The following figure illustrates IP connectivity after the migration (similarly, this only
illustrates the ingress/egress packet handling for the active traffic flows) when control
plane traffic is routed via the VLAN routing interfaces on IPDU units and CE devices:
[Figure: IP connectivity after the migration — control plane traffic is routed via the VLAN routing interfaces on the IPDU units and CE devices. IPDU-0 (working/IP forwarder unit) carries the SCTP primary path (SCTPPRI 10.23.151.2, VLAN1100 10.22.95.2) and the TCP/UDP flows (VLAN1300 10.22.97.4); IPDU-1 carries the SCTP secondary path (SCTPSEC 10.23.152.2, VLAN1300 10.22.97.20). The SIP VIPs are TCPUDP 10.23.153.66, SCTP PRI SIP 10.23.151.66 and SCTP SEC SIP 10.23.152.66; CE-1 is the HSRP active router for 10.22.97.1 and 10.22.97.17. Legend: L = Logical, LI = Logical Internal, P = Physical, PI = Physical Internal, LV = Logical Virtual.]
VLAN1100, VLAN1200, VLAN1300 and VLAN1301 are created during the migration,
whereas VLANs 100, 200, 300, 310, 320, 330 and 340 exist prior to the migration.
VLAN1301 is just a different name for VLAN1300, since the MML does not accept the
same name for both IPDUs' external interfaces. New IPv4 subnets (see below) are
introduced for the IPDU direct connections on external interfaces because this makes
the service interruption shorter. In the opposite scenario, where the existing AHUB
subnets are reused, undesired results would occur.
Procedure
2 Connect cables from site routers to IPDU-0, IPDU-1 and IPDU-2 units using RTM
(SFP) ports according to the instructions found in Open MSS Hardware Installation
Quick Guide in M-Release product documentation. Note that SFP1 port corresponds
to EL5 physical interface and SFP2 port to EL4 physical interface on IPDU RTM
modules.
3 In the event of a double link failure in EL4 and EL5, a forced IPDU switchover
happens. To prevent this during the migration, you need to set the weight in the 3053
ETHERNET INTERFACE FAILURE alarm. When a total weight of 64 is reached, a
switchover occurs. Weight 1 practically means that a switchover never occurs due to
this alarm. You can perform this by entering the following command:
ZARA:C,S:3053,IPDU,::TYPE=OTYPE,INDEX=OINDEX,::1:N:;
5
Set IP forwarder ON in all IPDUs, including the non-forwarding ones.
ZQRT:IPDU,0::IPF=YES,;
ZQRT:IPDU,1::IPF=YES,;
ZQRT:IPDU,2::IPF=YES,;
If you happen to have more IPDUs or if you add them later, remember to activate IP
forwarder also in those units.
ZQRT:IPDU,3::IPF=YES,;
ZQRT:IPDU,4::IPF=YES,;
6
Configure IP network interfaces to IPDU units for the TCP/UDP, SCTP MH primary
and secondary paths. Note that all VLANs need to be created for all IPDUs since
VLANs are not passed along in an IPDU unit switchover.
• VLAN configuration, internal interfaces on FI LAN.
a) ZQRA:IPDU,0::EL0::UP:;
b) ZQRA:IPDU,1::EL0::UP:;
c) ZQRA:IPDU,2::EL0::UP:;
d) ZQRA:IPDU,0::EL1::UP:;
e) ZQRA:IPDU,1::EL1::UP:;
f) ZQRA:IPDU,2::EL1::UP:;
g) ZQRA:IPDU,0::BOND0,EL0,EL1,:;
h) ZQRA:IPDU,1::BOND0,EL0,EL1,:;
i) ZQRA:IPDU,2::BOND0,EL0,EL1,:;
j) ZQRA:IPDU,0::VLAN100,100,BOND0,,::UP:;
k) ZQRA:IPDU,0::VLAN200,200,BOND0,,::UP:;
l) ZQRA:IPDU,0::VLAN300,300,BOND0,,::UP:;
m) ZQRA:IPDU,0::VLAN310,310,BOND0,,::UP:; // if used
n) ZQRA:IPDU,0::VLAN320,320,BOND0,,::UP:; // if used
o) ZQRA:IPDU,0::VLAN330,330,BOND0,,::UP:; // if used
p) ZQRA:IPDU,0::VLAN340,340,BOND0,,::UP:;
q) ZQRA:IPDU,1::VLAN100,100,BOND0,,::UP:;
r) ZQRA:IPDU,1::VLAN200,200,BOND0,,::UP:;
s) ZQRA:IPDU,1::VLAN300,300,BOND0,,::UP:;
t) ZQRA:IPDU,1::VLAN310,310,BOND0,,::UP:;
u) ZQRA:IPDU,1::VLAN320,320,BOND0,,::UP:;
v) ZQRA:IPDU,1::VLAN330,330,BOND0,,::UP:;
w) ZQRA:IPDU,1::VLAN340,340,BOND0,,::UP:;
x) ZQRA:IPDU,2::VLAN100,100,BOND0,,::UP:;
y) ZQRA:IPDU,2::VLAN200,200,BOND0,,::UP:;
z) ZQRA:IPDU,2::VLAN300,300,BOND0,,::UP:;
aa) ZQRA:IPDU,2::VLAN310,310,BOND0,,::UP:;
ab) ZQRA:IPDU,2::VLAN320,320,BOND0,,::UP:;
ac) ZQRA:IPDU,2::VLAN330,330,BOND0,,::UP:;
ad) ZQRA:IPDU,2::VLAN340,340,BOND0,,::UP:;
w NOTICE: Once created, do not disable the EL4 or EL5 physical interfaces on IPDU
units, or any logical VLAN interfaces configured on top of them which have Virtual IP
(VIP) addresses assigned in both IPDU and GISU units; that is, do not set such
interfaces to the Down state using the QRA MML command. Doing so has an
adverse effect on all of the interfaces and IP addresses configured on top of them
and may result in those interfaces and IP addresses being rendered unreachable by
the system kernel routing process.
If IPDU-based SIP and/or H.248 load balancing is used with SCTP multihoming (MH)
transport support, then disabling the SCTP MH primary path interface on an IPDU unit
will cause packet drops among internal backend traffic sent between IPDU and GISU
units over VLAN400.
If IPDUs are used for IP forwarding purposes rather than load balancing, and SRCNET
policy routes are configured, then disabling the active external interface (used for
reaching the next hop address) will cause the cached route to be disabled.
7
In the IPDU for the SCTP MH primary paths (here, IPDU-0) and in the IPDU for the
TCP/UDP traffic (here, IPDU-0, and optionally IPDU-1 if multiple local IPv4
subnets are in use and load is shared over multiple IPDU units), configure the
external VLAN routing interfaces and create the destination based static routes (if
needed). A default route works in most cases. However, when there are multiple
next-hop gateways on CE devices, destination based static routes must be used in
addition to the default route (until local subnet based policy routing is supported in
later releases), as only a single default route can be added to each unit. The actual
routing policy (for example, if default route is used for SCTP MH or TCP/UDP traffic
flows) needs to be defined based on the number of peering destination networks that
the Open MSS has control plane signaling links to, so that the number of destination
based static routes can be minimized.
In this example, 10.22.95.1/28 is used in the default route as the next-hop gateway
for the SCTP MH primary path egress traffic, and is assigned to site router CE-1 on
the related VLAN routing interface (VLAN1100). Furthermore, 10.22.95.2/28 on
IPDU-0 is used as the next-hop gateway for ingress routing purposes. Site router
CE-1 uses this as the next-hop gateway towards the Open MSS for the SCTP MH
primary path traffic flows.
In this example, 10.22.97.1/28 is used in the destination based static routes as the
next-hop gateway for the TCP/UDP egress traffic and is assigned to the site router
CE-1 on the related VLAN routing interface (VLAN1300) configuration for the VIP of
the target HSRP/VRRP group where 10.22.97.2/28 and 10.22.97.3/28 are used as
the secondary IP addresses on CE-1 and CE-2 for this VLAN routing interface.
Furthermore, 10.22.97.4/28 and optionally 10.22.97.5/28 are used as the next-hop
gateways on IPDU-0 and IPDU-1, respectively, for ingress routing purposes. Site
routers CE-1 and CE-2 use this or these as the next-hop gateway(s) towards the
Open MSS for the TCP/UDP traffic flows.
• The VLAN routing interface configuration for SCTP MH primary path traffic flows
(that is, using default route):
a) ZQRN:IPDU,0::VLAN1100,:"10.22.95.2",28,L,;
b) ZQKC:IPDU,0:::"10.22.95.1":LOG::;
c) Wait until IPDU-0 comes to WO-EX state (working unit).
• The VLAN routing interface configuration for TCP/UDP traffic flows (that is, using
multiple destination based static routes to each "<dest-network>/<subnet-mask>"
destination network):
a) ZQRN:IPDU,0::VLAN1300,VLAN1301:"10.22.97.4",28,L,;
b) ZQRN:IPDU,1::VLAN1301,VLAN1300:"10.22.97.5",28,L,;
c) ZQKC:IPDU,0::"<dest-network>",<subnet-mask>:"10.22.97.1":LOG::;
d) ZQKC:IPDU,1::"<dest-network>",<subnet-mask>:"10.22.97.1":LOG::;
e) Wait until IPDU-0 comes to WO-EX state (working unit).
g Note: Backup routes are not configured in IPDU units for SCTP primary and secondary
MH paths.
8
In the IPDU for the SCTP MH secondary paths (here: IPDU-1), configure the external
VLAN routing interface(s) and create the destination based static routes (if needed).
A default route works in most cases. However, when there are multiple next-hop
gateways on CE devices, destination based static routes in addition to the
default route must be used (until local subnet based policy routing is supported in
later releases), as only a single default route can be added to each unit. The actual
routing policy (for example, if default route is used for SCTP MH or TCP/UDP traffic
flows) needs to be defined based on the number of peering destination networks that
the Open MSS has control plane signaling links to, so that the number of destination
based static routes can be minimized.
In this example, 10.22.96.1/28 is used in the default route as the next-hop gateway for the
SCTP MH secondary path egress traffic, and is assigned to the site router CE-2 on
the related VLAN routing interface (VLAN1200). Furthermore, 10.22.96.2/28 on
IPDU-1 is used as the next-hop gateway for ingress routing purposes. Site router
CE-2 uses this as the next-hop gateway towards the Open MSS for the SCTP MH
secondary path traffic flows.
• The VLAN routing interface configuration for SCTP secondary path flows (that is,
using default route):
a) ZQRN:IPDU,1::VLAN1200,:"10.22.96.2",28,L,;
b) ZQKC:IPDU,1:::"10.22.96.1":LOG::;
g Note: Backup routes are not configured in IPDU units for SCTP primary and secondary
MH paths.
9
Create the VLAN routing interfaces on CE devices for SCTP MH primary and
secondary path, and TCP/UDP traffic flows. Additionally, create the HSRP/VRRP
group(s) for the TCP/UDP traffic related VLAN routing interface.
10.22.95.1 is the Cisco port IP address of site router CE-1, and 10.22.96.1 is the
Cisco port IP address of site router CE-2. Note that the interface identifiers are just
examples. Juniper commands are in ANNEX A: Juniper router commands. The end
result will look like the printout below with Cisco OSR:
***site router CE-1
vlan 1100
name SCTP_MH1
interface Vlan1100
description SCTP_MH1
ip address 10.22.95.1 255.255.255.240
end
vlan 1300
name TCPUDP
interface Vlan1300
description TCPUDP
ip address 10.22.97.2 255.255.255.240
ip address 10.22.97.18 255.255.255.240 secondary ** SCTP_SH
standby 93 ip 10.22.97.1 ** HSRP
standby 93 ip 10.22.97.17 secondary ** HSRP SCTP_SH
end
***site router CE-2
vlan 1200
name SCTP_MH2
interface Vlan1200
description SCTP_MH2
ip address 10.22.96.1 255.255.255.240
end
vlan 1300
name TCPUDP
interface Vlan1300
description TCPUDP
ip address 10.22.97.3 255.255.255.240
ip address 10.22.97.19 255.255.255.240 secondary ** SCTP_SH
standby 93 ip 10.22.97.1 ** HSRP
standby 93 ip 10.22.97.17 secondary ** HSRP SCTP_SH
end
10 Print out the summary of the IP interface and routing configurations on IPDU units.
Also, check the active alarms from OMU.
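For example, the active alarms can be listed with the ZAHO command that is also used later in this document:
ZAHO:;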
11 Test host reachability of the IPDU external interfaces from the CE devices, and from
the IPDU units towards some specific destinations using the SRC option for local IP
address selection. If host reachability does not work as expected, then the next steps
must not be performed until the faults have been fixed.
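For illustration, a reachability test from IPDU-0 towards CE-1 using the ZQRX PING form shown later in this document; the addresses are the examples of this section, so substitute your own:
ZQRX:IPDU,0::PING:IP="10.22.95.1",SRC="10.22.95.2",;
(CE-1 VLAN1100 address, pinged from the IPDU-0 external interface address)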
Secondary SCTP path:
12
Ping the secondary path addresses. This step verifies that the secondary path really
works prior to the migration. Perform this step for every GISU and for every secondary
path IP address. For example:
ZQRX:GISU,0::IP="<peer IP>","10.23.152.2";
ZQRX:GISU,0::IP="<peer IP>","10.23.152.66";
13 Move the SCTP MH secondary path default GW IP address from SWU-1 (here,
interface 5/2) to the SCTP MH secondary path's IPDU (here, IPDU-1) and change
the static routing configuration on CE-2 accordingly:
• Disable SWU-1 VLAN routing interface 5/2 that has the SCTP MH secondary
path GW IP.
a) interface 5/2
b) shutdown
c) exit
• Check that the VRRP Master router(s) associated with this VLAN routing interface
are active on SWU-0, and then disable SWU-1 VLAN routing interface 5/3, which
has the TCP/UDP GW IP. This is an optional task, but it is recommended so that
SWU-1 will not suddenly become the VRRP Master router until the
single-hop BFD based uplink tracking prevents this from happening.
a) interface 5/3
b) shutdown
c) exit
• Disable SWU-1 uplink / north-bound port based routing interface 3/1 in AHUB.
The single-hop BFD session associated to this port based routing interface
should now be down on both SWU-1 and CE-2.
a) interface 3/1
b) shutdown
c) exit
• Disable optional SWU-1 uplink / north-bound port based routing interface 3/4 in
AHUB (if used). The single-hop BFD session associated to this port based
routing interface should now be down on both SWU-1 and CE-2.
a) interface 3/4
b) shutdown
c) exit
• Shut down the routing interface ports on CE-2 towards the port based routing
interfaces 3/1 and 3/4 (if used) on SWU-1.
• Check that the SCTP MH secondary path alarms are raised by the system. Also,
alarm 0x3292 "INTERNAL LAN LINK BROKEN" may appear, as the SWU-1 uplink
was previously shut down. Also check that the VRRP Master router(s) remain
active on SWU-0.
• Create the default secondary path GW IP address to IPDU-1 (here, 10.23.152.1
for non-SIP and 10.23.152.65 for SIP). The IPDU unit should go to WO-EX state if not
already (due to the previous steps where the external VLAN interfaces were
created).
a) ZQRN:IPDU,1::VLAN200,:"10.23.152.1",26,L,;
b) ZQRN:IPDU,1::VLAN200,:"10.23.152.65",26,L,;
• Enable ingress routing towards the local SCTP MH secondary path subnet via
the selected IPDU unit (here, IPDU-1) by changing the static routing
configuration in site router CE-2. For SCTP multi-homing, this step is performed
only in site router CE-2. Here, 10.22.96.2/28 is the IP address in IPDU-1 for
ingress routing purposes and used as the next-hop gateway on CE-2 towards the
local 10.23.152.0/26 subnet for the SCTP MH secondary paths (non-SIP), and
similarly 10.23.152.64/26 for SIP over SCTP. Remember the other SCTP
secondary path subnets as well, if you have them in use.
Cisco OSR:
ip route 10.23.152.0 255.255.255.192 10.22.96.2 name
MSS_SCTP_SEC_IPDU-1_EL5 (in site router CE-2 only)
ip route 10.23.152.64 255.255.255.192 10.22.96.2 name
MSS_SCTP_SEC_IPDU-1_EL5 (in site router CE-2 only)
Juniper:
set routing-options static route 10.23.152.0/26 next-hop
10.22.96.2
set routing-options static route 10.23.152.64/26 next-hop
10.22.96.2
14
Check that the SCTP MH secondary path flows converge via IPDU-1 and the related
active alarms are canceled by the system.
Primary SCTP path + UDP/TCP:
15 Move the SCTP MH primary path and TCP/UDP default GW IP addresses from
SWU-0 (here, interfaces 5/1 and 5/3) to the selected IPDU(s) (here, IPDU-0) and
change the static routing configuration on CE-1 and CE-2 accordingly:
• Disable SWU-0 VLAN routing interface 5/3 that has the TCP/UDP GW IP. At the
same time, the VRRP group(s) associated to this VLAN routing interface 5/3 is
disabled.
a) interface 5/3
b) shutdown
c) exit
• Disable SWU-0 VLAN routing interface 5/1 that has the SCTP MH primary path
GW IP.
a) interface 5/1
b) shutdown
c) exit
• Disable SWU-0 uplink / north-bound port based routing interface 3/1 in AHUB.
The single-hop BFD session associated to this port based routing interface
should now be down on both SWU-0 and CE-1.
a) interface 3/1
b) shutdown
c) exit
• Disable optional SWU-0 uplink / north-bound port based routing interface 3/4 in
AHUB (if used). The single-hop BFD session associated to this port based
routing interface should now be down on both SWU-0 and CE-1.
a) interface 3/4
b) shutdown
c) exit
• Shut down the routing interface ports on CE-1 towards the port based routing
interfaces 3/1 and 3/4 (if used) on SWU-0.
• Check that the SCTP MH primary path alarms are raised by the system. Also,
alarm 0x3292 "INTERNAL LAN LINK BROKEN" may appear, as the SWU-0 uplink
was previously shut down.
• Create the default GW IP address to the IPDU(s) (here, IPDU-0). IPDU should go
to WO-EX state after this step if not already (due to the previous steps where the
external VLAN interfaces were created).
a) ZQRN:IPDU,0::VLAN100,:"10.23.151.1",26,L,;
b) ZQRN:IPDU,0::VLAN100,:"10.23.151.65",26,L,;
c) ZQRN:IPDU,0::VLAN300,:"10.23.153.1",26,L,;
d) ZQRN:IPDU,0::VLAN300,:"10.23.153.65",26,L,; // IPDU-1 also possible
• Enable ingress routing towards the local SCTP MH primary path and TCP/UDP
subnets via the selected IPDU unit (here, IPDU-0) by changing the static routing
configuration in site router CE-1 and CE-2. For SCTP multi-homing, this step is
performed only in site router CE-1. Here, 10.22.95.2/28 is the IP address in
IPDU-0 for ingress routing purposes and used as the next-hop gateway on CE-1
towards the local 10.23.151.0/26 and 10.23.151.64/26 subnets for the SCTP MH
primary paths. Furthermore, here 10.22.97.4/28 is the IP address in IPDU-0 for
ingress routing purposes and used as the next-hop gateway on CE-1 and CE-2
towards the local 10.23.153.0/26 (non-SIP) and 10.23.153.64/26 (SIP) for the
TCP/UDP applications. Optionally, 10.22.97.5/28 is the IP address in IPDU-1 for
ingress routing purposes and used as the next-hop gateway on CE-1 and CE-2
towards the local IPv4 subnets for the TCP/UDP applications that are reached
via IPDU-1. However, in this example no static routes via 10.22.97.5/28 are
shown.
Cisco OSR:
a) ip route 10.23.151.0 255.255.255.192 10.22.95.2 name
MSS_SCTP_PRI_IPDU-0_EL4 (in site router CE-1 only)
b) ip route 10.23.151.64 255.255.255.192 10.22.95.2 name
MSS_SCTP_PRI_IPDU-0_EL4 (in site router CE-1)
c) ip route 10.23.153.0 255.255.255.192 10.22.97.4 name
MSS_TCPUDP_IPDU-0_EL4 (in site router CE-1)
d) ip route 10.23.153.0 255.255.255.192 10.22.97.4 name
MSS_TCPUDP_IPDU-0_EL5 (in site router CE-2)
e) ip route 10.23.153.64 255.255.255.192 10.22.97.4 name
MSS_TCPUDP_IPDU-0_EL4 (in site router CE-1)
f) ip route 10.23.153.64 255.255.255.192 10.22.97.4 name
MSS_TCPUDP_IPDU-0_EL5 (in site router CE-2)
Juniper:
set routing-options static route 10.23.151.0/26 next-hop
10.22.95.2 // CE-1
set routing-options static route 10.23.151.64/26 next-hop
10.22.95.2 // CE-1
set routing-options static route 10.23.153.0/26 next-hop 10.22.97.4 // CE-1
set routing-options static route 10.23.153.0/26 next-hop 10.22.97.4 // CE-2
set routing-options static route 10.23.153.64/26 next-hop
10.22.97.4 // CE-1
set routing-options static route 10.23.153.64/26 next-hop
10.22.97.4 // CE-2
• Delete floating backup routes in the site routers to AHUBs if any existed prior to
the migration.
16 Check that the SCTP MH primary path flows converge via IPDU-0 and the related
active alarms are canceled by the system.
17
Remove the routing configurations from SWU-0 and SWU-1:
• remove VRRP (TCP) and single-hop BFD (TCP)
• remove VLAN routing interfaces
• remove VLAN subnet mapping / association
• remove port based routing interface(s)
• remove static routing configurations
18
Clean up the single-hop BFD and routing configurations from the CE devices and
remove the cabling to SWU-0 and SWU-1 units in the Open MSS and the CE
devices.
• Remove cables from the AHUBs.
• Remove site router CE-1's routing interface and single-hop BFD configurations
towards the SWU-0 port based routing interfaces 3/1 and 3/4 (if used).
• Remove site router CE-2's routing interface and single-hop BFD configurations
towards the SWU-1 port based routing interfaces 3/1 and 3/4 (if used).
19
Re-enable IPDU switchover on alarm 3053 by canceling the rule created earlier
with the following command:
ZARD:C,S:3053,IPDU,::TYPE=OTYPE,INDEX=OINDEX,:;
20 Cancel alarm 0x3292 "INTERNAL LAN LINK BROKEN" with the ZASA MML command if
it is active.
ANNEX A: Juniper router commands
set interfaces vlan unit 1300 family inet address 10.22.97.18/28 vrrp-group
130 virtual-address 10.22.97.17
set interfaces vlan unit 1300 family inet address 10.22.97.18/28 vrrp-group
130 priority 80
set interfaces vlan unit 1300 family inet address 10.22.97.18/28 vrrp-group
130 fast-interval 200
set interfaces vlan unit 1300 family inet address 10.22.97.18/28 vrrp-group
130 accept-data
set interfaces vlan unit 1300 family inet address 10.22.97.3/28 vrrp-group
130 virtual-address 10.22.97.1
set interfaces vlan unit 1300 family inet address 10.22.97.3/28 vrrp-group
130 priority 100
set interfaces vlan unit 1300 family inet address 10.22.97.3/28 vrrp-group
130 fast-interval 200
set interfaces vlan unit 1300 family inet address 10.22.97.3/28 vrrp-group
130 preempt hold-time 30
set interfaces vlan unit 1300 family inet address 10.22.97.3/28 vrrp-group
130 accept-data
• Migrate on a physical MGW (pMGW) basis. Here the operator migrates resources of
a pMGW resource by resource.
• Migrate on MSS basis.
• Migrate on peer element basis (for example, BSC, RNC, fixed switch).
g Note: If the operator has the Multipoint A/Iu feature in particular, the approaches can
be mixed.
Downtimes
There is no total interruption of traffic. The interruptions are on resource basis.
• PBX and ISUP TDM: A circuit group is barred in the MSS and the MGW and then
moved from a vMGW to another one. The break is a few minutes per CGR. There is
no traffic impact if an alternative CGR is available.
• BSC TDM: The same impact as for ISUP TDM. For multi-homed BSCs, there is no
impact (TDMs connected to 2 or more pMGWs).
• RNC: No impact on traffic.
• SIP and BICC: No impact on traffic.
Background information
In this example, two new IPDUs are introduced for load balancer use. Also IP forwarder
units can be used as H.248 LB units. In this example, one H.248 LB instance (also called
load share group) is created into IPDU-0 (IP forwarder) and another to IPDU-3. The pre-
conditions to use an IP forwarder unit as an H.248 LB unit are:
1. The host IP address(es) of H.248 must be from different subnet(s) from the ones
used for IP forwarding.
2. IP forwarder units have enough capacity and bandwidth.
In the example, two H.248 unit pools are created. In order to decide the proper number,
the following aspects need to be considered:
1. Each H.248 LBS instance can have multiple IP address pairs (for SCTP MH).
2. Each H.248 LBS instance consists of a signaling unit (GISU) pool.
3. Each GISU can belong to one unit pool only.
4. A particular GISU cannot belong to a unit pool and control a vMGW directly at the
same time.
5. To obtain resilience, it is recommended to use at least two IPDUs for the H.248
traffic.
1. The GISU-specific IP addresses are often used for non-H.248 traffic as well, for
example, M3UA/SIGTRAN. Keeping the same address would require parallel
migration of M3UA LB. It is easier to change H.248 IP addressing rather than M3UA
IP addressing since the vMGW is under its own control, whereas SIGTRAN links might
have a peer element in another operator's premises. If M3UA LB is migrated first, it
requires two new temporary IP subnets for the GISU IP addresses. These addresses
are used until H.248 is migrated to IPDUs.
2. The number of externally visible IP addresses is not reduced if we keep GISU-
specific IP addresses.
3. If there is no room for new IPDU units, an IP forwarder unit must be used. However,
an IP forwarding unit cannot listen to one IP address and simultaneously forward
packets of the same subnet. Therefore, in this example, new IP subnets are created
for H.248 LBs.
4. If it is absolutely necessary to keep certain IP addresses at our end, some addresses
can be moved to LB IPDUs. This requires host-based IP routes, since the IP
forwarding unit is handling the rest of the subnet.
In these cases, the procedure is more complex and the GISU-specific address might
have to be changed in the GISU as well. These instructions do not cover this
particular case.
Before you start
The following prerequisites must be met:
1. IP planning has been performed. It must be possible to introduce new IPs for the
load balancer. Moving an existing IP address is not described here.
2. Network planning has been performed, not only for the final configuration but also for
intermediate steps.
g Note: Pay attention to vMGW load sharing weights. The weight must be in balance with
the number of GISUs in the unit pool.
3. IP forwarder is in use and the IP forwarder uses two working IPDUs. The IP
forwarder related configuration changes are described in section IP forwarder
migration (Open MSS).
4. If signaling units use the same IP addresses for H.248 and M3UA/SIGTRAN, distinct
IP addresses for H.248 have been assigned. This is due to the migration phase
setup – two units cannot own the same IP address.
5. IP addresses must have the same scope. For any vMGW, both ends must have
either public or private IP addresses. Another vMGW can have a different address
scope. For more information about Classification of IPv4 addresses and IPv4
address scoping policy considerations for SCTP, see section IP addressing and
routing principles for MSS System network elements in Site Connectivity Guidelines
for MSS System. Currently, in the MSS System products, IP address scoping policy
is always enabled by default and cannot be disabled by configuration.
6. VLANs 100, 200, 300, 330, 1100, 1200 and 1300 exist. VLAN400 is created as part
of the migration.
7. Sales item L.4242 for the MSS must be available.
8. Sales item 3GNLIC0056 for IPA MGW must be available. Open MGW does not need
any sales items.
9. One free vMGW is available in the MSS and MGW. (If not, a vMGW needs to be
deleted and its resources must be moved to another vMGW).
10. A free GISU is needed. The migration is easier if there are extra temporary units
available. The extra units can be removed once the migration is complete.
11. There must be enough spare GISU units.
Different scenarios
Either of the above cases can be implemented. This document describes the latter
case, that is, the IP change. Any combination of the above scenarios can co-exist.
In the sequence below, two new IPDUs are created. One H.248 LB instance is created in
a new unit, another one in the IP forwarder unit, and then one SCTP association set is
migrated to a load balancer. Both are possible scenarios (IP forwarder unit and new
unit). The use of the IP forwarder unit requires distinct VLANs.
In the end, there are clean-up commands to remove the old configuration. IPDU-4 is not
used for anything in this example, but its creation is shown since it is a common scenario
to create two more units.
IP addresses in H.248 load balancer migration
Figure 48 IP addresses used in H.248 migration
[Figure: site routers CE-1 and CE-2, MGW0/MGW69/MGW70, IPDU-0 (IP forwarder),
IPDU-1, IPDU-2 (spare), IPDU-3/IPDU-4 (H.248 LB units), and GISU units; VLAN400
back-end gateways 169.254.0.1 to 169.254.0.5, SCTP primary/secondary paths on
VLAN100/VLAN200, and external VLANs 1100/1150/1200/1250.]
Table 11 Internal LANs and IPv4 subnets in the example (H.248 LB)
VLAN ID: 100 = Internal (Media) Control VLAN (SCTP primary); 200 = Internal (Media)
Control VLAN (SCTP secondary); 300 = Internal (Media) Control VLAN (TCP/UDP);
400 = Internal LBS back-end VLAN
IPDU-3: VLAN400 addresses 169.254.0.4 LI, 169.254.0.13 PI, 169.254.0.23 PI,
169.254.0.33 PI, 169.254.0.43 PI
IPDU-4: VLAN400 addresses 169.254.0.5 LI, 169.254.0.14 PI, 169.254.0.24 PI,
169.254.0.34 PI, 169.254.0.44 PI
L Logical
LI Logical Internal
P Physical
PI Physical Internal
LV Logical Virtual
courier font The IP address might have existed before the migration.
H.248 needs one logical IP in each H.248 LB IPDU. The
SIP LB needs one physical IP per WO-EX
IPDU. GISUs need a physical address for SIP LB and a logical
address for H.248 LB. Thus, 4+1 IPDUs means that 5
IPDUs (including spare) need 4 IP addresses each. In
addition, logical addresses (L) are needed in the IPDU for GW
addresses. The GW address cannot be moved if it is used for
non-M3UA SCTP traffic, for example, H.248 or IUA. The
spare unit must not be allocated with any logical IP
address.
g Note: Even though the SIP LB migration example might have the same IP addresses
as H.248 LB example, the same IP addresses cannot be shared by these two load
balancers.
H.248 LB does not need TCP/UDP VLANs. However, these VLANs must exist in all
IPDUs since an IPDU switchover might happen and VLANs do not move along in a unit
switchover.
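As an illustration, these VLANs can be created in all IPDU units with the same ZQRA pattern used in step 7 below; the VLAN IDs and the 0&&4 unit range are the examples of this chapter, so adapt them to your configuration:
ZQRA:IPDU,0&&4::VLAN300,300,BOND0,,::UP:;
ZQRA:IPDU,0&&4::VLAN330,330,BOND0,,::UP:;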
Load balanced traffic that uses the internal LAN instead of EMB uses VLAN400.
Procedure
1
Create VLAN400 in all AHUBs.
g Note: It is important to bring the new IPDU units to WO-EX state below rather than
bringing the old spare unit to WO-EX. This is due to SIP LB implementation, and how
SIP LB uses the spare unit.
3 When you add more IPDUs, set IP forwarding ON also in the new units.
ZQRT:IPDU,3::IPF=YES,;
ZQRT:IPDU,4::IPF=YES,;
Add all the IPDU computer units that are required to the MSS. Check that all IPDU
units have IP forwarder activated, since in unit switchover scenarios the IP forwarder
functionality might move to another unit.
4
Activate the license.
ZW7M:FEA=1473:ON:;
5 Set overload control if desired. H.248 load balancer overload control is a generic
feature on top of the H.248 load balancer in MSS feature. If the H.248 load balancer
in MSS license is activated (“ON” state), the H.248 overload control functionality can
be controlled with the H248LB_OLC_CONTROL (012:0125) PRFILE parameter.
The H248LB_OLC_CONTROL PRFILE parameter controls whether the ticketing
service is used to protect IPDU or not. The default value of this parameter is “TRUE”.
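As an illustration only, the parameter could be changed with the ZWOC pattern used elsewhere in this document; the class and number arguments are taken from the (012:0125) identifier above, and the FF/00 value encoding is an assumption, so verify both against the PRFILE documentation before use:
ZWOC:12,125,00;
(hypothetical: sets H248LB_OLC_CONTROL to FALSE)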
6 Prevent IPDU switchover due to temporary interface problems during the migration
procedure.
In the event of a double link failure in EL4 and EL5, a forced IPDU switchover would
happen. To prevent this during migration, you need to set a weight on the 3053
ETHERNET CONNECTION FAILURE alarm. When a total weight of 64 is reached, a
switchover occurs. Weight 1 practically means that the switchover never occurs due
to this alarm, since a single active alarm then contributes only 1 of the required 64.
It can be set by entering the following command:
ZARA:C,S:3053,IPDU,::TYPE=OTYPE,INDEX=OINDEX,::1:N:;
7
Create the general IP configuration in the IPDUs, if not already created. In the example
below, two new IPDUs (IPDU-3 and IPDU-4) are created for the H.248 LB.
Sub-steps
a)
Internal interfaces for general setup:
1. ZQRN:IPDU,3::EL0::UP;
2. ZQRN:IPDU,3::EL1::UP;
3. ZQRN:IPDU,4::EL0::UP;
4. ZQRN:IPDU,4::EL1::UP;
5. ZQRA:IPDU,3::BOND0,EL0,EL1:;
6. ZQRA:IPDU,4::BOND0,EL0,EL1:;
7. ZQRA:IPDU,3&4::VLAN400:400:BOND0::UP;
8. ZQRA:IPDU,0&&2::VLAN400:400:BOND0::UP;
(This step is applicable only if the configuration has not been created earlier.)
9. ZQRA:GISU,x::BOND0,EL0,EL1:;
(If not yet created, applicable for every GISU.)
10. ZQRA:GISU,x::VLAN400:400:BOND0;
(If not yet created, applicable for every GISU.)
11. ZQRA:IPDU,3&4::VLAN100,100,BOND0,,::UP:;
(These are existing VLANs, but new units here.)
12. ZQRA:IPDU,3&4::VLAN200,200,BOND0,,::UP:;
13. ZQRA:IPDU,3&4::VLAN300,300,BOND0,,::UP:;
14. ZQRA:IPDU,3&4::VLAN330,330,BOND0,,::UP:;
15. Logical IPDU IP addresses
If the GW is not yet created:
ZQRN:IPDU,0&&1::VLAN400,:"169.254.0.1",23,L,I::;
If IPDUs 0-2 already have a GW IP:
ZQRN:IPDU,3&&4::VLAN400,:"169.254.0.4",23,L,I::;
Wait until IPDU-3 and 4 come to WO-EX state.
16. 1st IPDU-LBMgr IPDU physical IP address
This step is applicable if the configuration has not been created earlier:
ZQRN:IPDU,0&&4::VLAN400,:"169.254.0.10",23,P,I::;
If IPDUs 0-2 already have a GW IP:
ZQRN:IPDU,3&4::VLAN400,:"169.254.0.12",23,P,I::;
17. 2nd IPDU-LBMgr IPDU physical IP addresses
This step is applicable if the configuration has not been created earlier:
ZQRN:IPDU,0&&4::VLAN400,:"169.254.0.20",23,P,I::;
or
ZQRN:IPDU,3&4::VLAN400,:"169.254.0.22",23,P,I::;
18. 3rd IPDU-LBMgr IPDU physical IP addresses
This step is applicable if the configuration has not been created earlier:
ZQRN:IPDU,0&&4::VLAN400,:"169.254.0.30",23,P,I::;
or
ZQRN:IPDU,3&4::VLAN400,:"169.254.0.32",23,P,I::;
19. 4th IPDU-LBMgr IPDU physical IP addresses
This step is applicable if the configuration has not been created earlier:
ZQRN:IPDU,0&&4::VLAN400,:"169.254.0.40",23,P,I::;
or
ZQRN:IPDU,3&4::VLAN400,:"169.254.0.42",23,P,I::;
20. ZQRN:GISU,0&&114::VLAN400,:"169.254.0.100",23,L,I::;
g Note: Exclude spare GISUs from this command, otherwise they become WO-EX
units.
21. ZQRN:GISU,0::VLAN400,:”10.22.95.3”,32,L,V::;
(pool0)
22. ZQRN:GISU,1::VLAN400,:”10.22.95.3”,32,L,V::;
(pool0)
23. ZQRN:GISU,2::VLAN400,:”10.22.95.34”,32,L,V::;
(pool1)
Add IP address to all GISUs that belong to the pool.
24. ZQRN:GISU,3::VLAN400,:”10.22.95.34”,32,L,V::;
(pool1)
w NOTICE: Once created, do not disable the EL4 or EL5 physical interfaces on
IPDU units, or any logical VLAN interfaces configured on top of them which have
Virtual IP (VIP) addresses assigned in both IPDU and GISU units—that is, do not
set such interfaces to Down state using the QRA MML command. If you do, this
will have an adverse effect on all of the interfaces and IP addresses configured
on top of them and may result in those interfaces and IP addresses being
rendered unreachable by the system kernel routing process.
If IPDU-based SIP and/or H.248 load balancing is used with SCTP multihoming
(MH) transport support, then disabling the SCTP MH primary path interface on an
IPDU unit will cause packet drops among internal backend traffic sent between
IPDU and GISU units over VLAN400.
If IPDUs are used for IP forwarding purposes rather than load balancing, and
SRCNET policy routes are configured, then disabling the active external interface
(used for reaching the next hop address) will cause the cached route to be
disabled.
8 Ping the IP addresses internally in the MSS to ensure that VLAN400 is working:
ZQRX:GISU,0::PING:IP="169.254.0.4",SRC="169.254.0.100",;
ZQRX:GISU,1::PING:IP="169.254.0.4",SRC="169.254.0.101",;
ZQRX:IPDU,3::PING:IP="169.254.0.100",SRC="169.254.0.4",;
ZQRX:IPDU,3::PING:IP="169.254.0.101",SRC="169.254.0.4",;
ZQRX:GISU,3::PING:IP="169.254.0.1",SRC="169.254.0.102",;
ZQRX:GISU,4::PING:IP="169.254.0.1",SRC="169.254.0.103",;
ZQRX:IPDU,0::PING:IP="169.254.0.102",SRC="169.254.0.4",;
ZQRX:IPDU,0::PING:IP="169.254.0.103",SRC="169.254.0.4",;
Ping all GISUs.
9 SCTP MH: Create SCTP VLANs in the site routers. 10.22.95.1 is the Cisco port IP
address of site router CE-1. 10.22.96.1 is the Cisco port IP address of site router CE-
2. Juniper commands are in ANNEX B: Juniper commands. The interface identifiers
are just examples; they depend on the configuration. Changes compared to the IP
forwarder configuration are highlighted with bold font.
You will get an end result similar to this with Cisco OSR:
***site router 1
vlan 1100
name SCTP_MH1
interface Vlan1100
description SCTP_MH1
ip address 10.22.95.1 255.255.255.240
end
interface GigabitEthernet3/10
description IPDU-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1100
switchport mode trunk
switchport nonegotiate
interface GigabitEthernet3/11
description IPDU-4
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1100
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
vlan 1150
name SCTP_MH1150
interface Vlan1150
description SCTP_MH1150
ip address 10.22.95.33 255.255.255.240
end
interface GigabitEthernet3/10
description IPDU-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1150
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
interface GigabitEthernet3/11
description IPDU-4
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1150
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
***site router 2
vlan 1200
name SCTP_MH2
interface Vlan1200
description SCTP_MH2
ip address 10.22.96.1 255.255.255.240
end
interface GigabitEthernet3/10
description IPDU-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1200
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
interface GigabitEthernet3/11
description IPDU-4
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1200
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
vlan 1250
name SCTP_MH1250
interface Vlan1250
description SCTP_MH1250
ip address 10.22.96.33 255.255.255.240
end
interface GigabitEthernet3/10
description IPDU-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1250
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
interface GigabitEthernet3/11
description IPDU-4
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1250
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
10
Create the GISU-specific IP configuration in the IPDU into which the SCTP association is
moved.
Here, a configuration that relates to this particular GISU is created in IPDU-3. The
next GISU's H.248 associations might be moved to IPDU-4 together with its IP
address.
Sub-steps
a)
IPDU-3 specific configuration changes for SCTP MH:
1. ZQKM:GISU,x::"10.22.95.3":"169.254.0.4":LOG:;
(IPDU-3, pool0)
2. ZQRN:IPDU,3::VLAN1100:”10.22.95.3”,28,L,V::;
3. ZQRN:IPDU,3::VLAN1200:”10.22.96.3”,28,L,V::;
4. ZQKM:IPDU,3::"10.22.95.3":"10.22.95.1":LOG:;
(Site router CE-1 is set as default GW.)
5. ZQKM:IPDU,3::"10.22.96.3":"10.22.96.1":LOG:;
(Site router CE-2 is set as default GW.)
1. ZQKM:GISU::"10.22.95.34":"169.254.0.1":LOG:;
(IPDU-0, pool1)
2. ZQRN:IPDU,0::VLAN1150:”10.22.95.34”,28,L,V::;
3. ZQRN:IPDU,0::VLAN1250:”10.22.96.34”,28,L,V::;
4. ZQKM:IPDU,0::"10.22.95.34":"10.22.95.33":LOG:;
(Site router CE-1 is set as default GW.)
5. ZQKM:IPDU,0::"10.22.96.34":"10.22.96.33":LOG:;
(Site router CE-2 is set as default GW.)
11
Ping the IP addresses externally to verify the external IP connectivity.
ZQRX:IPDU,3::PING:IP="10.102.25.6",SRC="10.22.95.3",;
(primary path)
ZQRX:IPDU,3::PING:IP="10.102.25.86",SRC="10.22.96.3",;
(secondary path)
ZQRX:IPDU,3::PING:IP="10.102.25.7",SRC="10.22.95.3",;
(primary path)
ZQRX:IPDU,3::PING:IP="10.102.25.87",SRC="10.22.96.3",;
(secondary path)
ZQRX:IPDU,0::PING:IP="10.102.25.6",SRC="10.22.95.34",;
(primary path)
ZQRX:IPDU,0::PING:IP="10.102.25.86",SRC="10.22.96.34",;
(secondary path)
ZQRX:IPDU,0::PING:IP="10.102.25.7",SRC="10.22.95.34",;
(primary path)
ZQRX:IPDU,0::PING:IP="10.102.25.87",SRC="10.22.96.34",;
(secondary path)
12
If no extra units are available for temporary use, move the vMGWs from a signaling
unit to another one.
13 Create the two H.248 LB instances (load share groups):
Sub-steps
a)
ZJJE:CREATE:NAME=H248LB0,TYPE=H248;
b)
ZJJE:CREATE:NAME=H248LB1,TYPE=H248;
14
Configure a load balancing unit when no other LB exists in the IPDU prior to the
H.248 LB:
Sub-steps
a)
Define the external and internal interfaces to the spare unit:
ZJJC:CREATE:IPDU,2;
b) Create an H.248 load balancer function in the working IPDUs which have H.248 LB:
15
Configure the load balancing unit when another LB already exists in the IPDU, for
example, SIP LB:
Add the load balancer to the IPDUs:
ZJJC:MODIFY:IPDU,3::NAME=H248LB0;
ZJJC:MODIFY:IPDU,4::NAME=H248LB1;
The VLAN400 must already exist in this case.
16 Create an H.248 unit pool in an MSS with one signaling unit. Create a new (target)
vMGW in the MSS using the H.248 unit pool.
MSS: pool0
ZJEC:POOLID=0,NAME=H248POOL0,POOLADDR=10.22.95.3,UIDX=3,SECUNSEL=Y;
ZJEA:POOLID=0,UTYPE=GISU,UIDX=0,IPADDR=169.254.0.100,ROLE=PRI,;
17
Verify the created load balancer configuration with the JJI command:
JJI:TYPE=UNIT:UTYPE=IPDU,;
UNIT INDEX INT INTERFACE LBS NAME
---- ----- ------------- --------
IPDU 0     VLAN400       H248LB0
                         LBS-MGCFSIPI
                         M3UALBS0
IPDU 1     VLAN400       H248LB1
IPDU 3     VLAN400       M3UALBS1
IPDU 2     VLAN400
COMMAND EXECUTED
18
Check the alarms. There must not be any new alarms about the SCTP association or
signaling link.
ZAHO:[[<unit type>|<all> def],[[<stage>|<pair>]|<all> def],[<unit index>|<all> def]]:[[CLS=<alarm class>...|NR=<alarm number>...]|<all> def];
ZAHO:;
19
Create the inter-vMGW connections for the new vMGW. For IPA2800 MGW internal
connections (vMGWs in the same physical MGW), use ATM; otherwise, use IPv4.
MSS:
ZJFT:NMGW=MGW00:NMGW=MGW69LB:BNCC=IPV4;
ZJFT:NMGW=MGW01:NMGW=MGW69LB:BNCC=IPV4;
...and so on.
20
Create a new vMGW in the physical MGW. If the LB feature was active in the MGW,
the existing ISUs would become Master ISU (MISU) for the old vMGWs. Add Slave
ISU units (SISUs) for the new vMGW. In Open MGW, use HCLB. Initiate the
registration of the new vMGW to the MSS. The registration creates SCTP
associations between the MSS (VIPs) and the MGW.
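For illustration, the registration can be initiated with the commands already used in this chapter; the VID and REGA values are the examples of this section:
ZJVR:VID=3,:REGA=1,; (IPA2800 MGW)
set vmgw registration rega 0 vid 0 (Open MGW)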
21
For ISUP, PBX TDM trunks:
Sub-steps
a)
In the MSS, shut down the CGR, lock the CGR, and then move the CGR into the
new vMGW.
b)
In the MGW, set the circuits into Barred by User (BA-US) state.
The same steps as above must be completed for the TDM trunks. If the BSC is a
multi-homed BSC through two pMGWs or vMGWs, there will be no break in traffic.
There might be multiple CGRs due to the codec pools. If this is the case, then handle
all the CGRs at one time.
A typical case has two CGRs in two vMGWs in the same pMGW. After the migration,
one vMGW and one CGR is the most feasible option.
MSS (part 1):
ZCEL:NCGR=BSS692ND,;
ZCEM:NCGR=BSS692ND:SHUTDOWN:;
ZCEL:NCGR=BSS692ND,;
ZCEM:NCGR=BSS692ND:LOCK:;
ZRCN:NCGR=BSS692ND:MGW=MGW69LB,;
IPA2800 MGW:
ZCIM:CRCT=11900-1&&-28:BA;
ZJVF:VMN=MSS69NONLB:CGR=669:;
ZJVE:VMN=MGW69:CGR=669:;
ZCIM:CRCT=11900-1&&-28:WO;
Open MGW:
set vmgw registration rega 0 vid 0
23
For RNC resources:
Sub-steps
a)
Add the new vMGW with zero weight on the RNC UPD as soon as the vMGW exists.
RNC-UPD = 1 in the example.
b) Set the new vMGW weight to a value higher than zero for the UPD.
c)
Set the old vMGW weight to zero: no calls are allocated to the old vMGW.
A vMGW with zero weight can only be allocated for a call if it has TDM resources
allocated (weight definition has no impact on TDM resources).
MSS configuration for the RNC:
ZJFA:UPD=1:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=1:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=1:NMGW=MGW00:LDSH=0:;
g Note: It is recommended to use the old GISU unit until the old non-LB vMGW is
deleted. For more information on network planning, see Step 2 in section
Before you start.
24 For BICC and SIP resources, follow the same steps as detailed above for the RNC.
ZJFA:UPD=2:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=2:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=2:NMGW=MGW00:LDSH=0:;
ZJFA:UPD=3:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=3:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=3:NMGW=MGW00:LDSH=0:;
ZJFA:UPD=5:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=5:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=5:NMGW=MGW00:LDSH=0:;
ZJFA:UPD=10:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=10:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=10:NMGW=MGW00:LDSH=0:;
ZJFA:UPD=12:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=12:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=12:NMGW=MGW00:LDSH=0:;
ZJFA:UPD=20:NMGW=MGW69LB:LDSH=0;
ZJFW:UPD=20:NMGW=MGW69LB:LDSH=1:;
ZJFW:UPD=20:NMGW=MGW00:LDSH=0:;
g Note: It is recommended to use the old GISU unit until the old non-LB vMGW is
deleted. For more information on network planning, see Step 2 in section
Before you start.
25 Repeat the resource removal steps detailed above (Steps 21, 22, 23, and 24) for all the
vMGWs that must be removed.
MSS: pool1
ZJEC:POOLID=1,NAME=H248POOL1,POOLADDR=10.22.95.34,UIDX=1,SECUNSEL=Y;
ZJEA:POOLID=1,UTYPE=GISU,UIDX=2,IPADDR=169.254.0.102,ROLE=PRI,;
ZJGC:NAME=MGW70LB,MGWTYP=GEN,ADDR="10.102.25.7",PORT=8010:UPOOL=1:AESA="E-23469",NBR=1,LBCU=1,:USEPARS=0,DEFPARS=4,;
Create the inter-vMGW connections for the new vMGW:
ZJFT:NMGW=MGW00:NMGW=MGW70LB:BNCC=IPV4;
ZJFT:NMGW=MGW01:NMGW=MGW70LB:BNCC=IPV4;
ZJFT:NMGW=MGW69:NMGW=MGW70LB:BNCC=IPV4;
IPA2800 MGW: (MGW70)
ZJVC:VMN=MSS70LB,UINX=1,:OIP="10.102.25.7",OPN=8010,:A2T=55,::;
ZJVA:VID=3,::PIP="10.22.95.34",:SIP="10.22.96.34",:;
ZJVN:VID=3,:CNT=0,:NBR=1,;
ZJVN:VID=3,:CNT=0,:::TFO=0,;
ZJVN:VID=3,:CNT=0,:::IPS=1,;
ZJVN:VID=3,:CNT=0,:::DII=1,;
ZJVJ:VID=3,:SISU=0,:LBS=0,:;
ZJVG:ISU=3,:BLP=45,:;
ZJVR:VID=3,:REGA=1,;
Open MGW: (MGW70)
add vmgw mgw vmn amss4vmgw70lb opip 10.102.25.7 osip 10.102.25.95 rg 0 opn 8010
add vmgw controller vid 3 pip 10.22.95.34 sip 10.22.96.34
set vmgw controller vid 3 hve 2
set vmgw controller vid 3 hea 3000
MSS: Add vMGW to relevant UPDs. These are some examples:
ZJFA:UPD=0:NMGW=MGW70LB:LDSH=0;
ZJFW:UPD=0:NMGW=MGW70LB:LDSH=1:;
ZJFW:UPD=0:NMGW=MGW70:LDSH=0:;
ZJFA:UPD=138:NMGW=MGW70LB:LDSH=0;
ZJFW:UPD=138:NMGW=MGW70LB:LDSH=1:;
ZJFW:UPD=138:NMGW=MGW70:LDSH=0:;
27 If necessary, add more signaling units to the existing H.248 unit pool. This might
require moving existing non-pooled vMGWs and related SCTP associations to other
signaling units. IPDU reset is not needed. However, H.248 traffic is cut for a while for
that load share group/H.248 unit pool.
ZJEA:POOLID=0,UTYPE=GISU,UIDX=1,IPADDR=169.254.0.101,ROLE=ORD,;
ZJEA:POOLID=1,UTYPE=GISU,UIDX=3,IPADDR=169.254.0.103,ROLE=ORD,;
ZJGI;
29 When everything is working fine and there are no other users for these IP forwarder
VLANs (for example, H.248), delete the GW addresses for the SCTP primary and
secondary paths used by the IP forwarder, as well as the GISU-specific IP addresses.
However, if the M3UA LB is migrated after H.248, skip this step.
SCTP MH:
ZQRG:IPDU,0::VLAN100:"10.23.151.1";
(IP forwarder GW address for the primary path)
ZQRG:GISU,0::VLAN100:10.23.151.2;
ZQRG:GISU,1::VLAN100:10.23.151.3;
ZQRG:GISU,2::VLAN100:10.23.151.4;
and so on.
ZQRG:IPDU,1::VLAN200:"10.23.152.1";
(IP forwarder GW address for the secondary path)
ZQRG:GISU,0::VLAN200:10.23.152.2;
ZQRG:GISU,1::VLAN200:10.23.152.3;
ZQRG:GISU,2::VLAN200:10.23.152.4;
and so on.
30
Delete the GISU-specific IP addresses (but not the VLAN400 address) unless they
are used for M3UA links.
31
Delete unused vMGWs.
MSS:
ZJGM:MGWID=0::::REGA=N,;
ZJGM:MGWID=1::::REGA=N,;
ZJGD:MGWID=0:;
ZJGD:MGWID=1:;
IPA2800 MGW: (MGW0)
ZJVD:VID=0,;
IPA2800 MGW: (MGW1)
ZJVD:VID=1,;
Open MGW: (MGW0 and MGW1)
delete vmgw mgw vid 0
delete vmgw mgw vid 1
32
The GISUs without direct vMGW control can be moved into some of the H.248 unit
pools.
Sub-steps
a)
1. Clean up the IP forwarder. When everything is working fine, and there are no
other users for these IP forwarder VLANs (for example, H.248), delete the
VLAN from the GISUs and IPDUs.
The following command also deletes the IP addresses from this VLAN if any
exist.
SCTP MH:
ZQRG:IPDU,0::VLAN100::;
ZQRG:IPDU,1::VLAN100::;
ZQRG:IPDU,2::VLAN100::;
ZQRG:IPDU,3::VLAN100::;
ZQRG:IPDU,4::VLAN100::;
ZQRG:IPDU,5::VLAN100::;
ZQRG:IPDU,0::VLAN200::;
ZQRG:IPDU,1::VLAN200::;
ZQRG:IPDU,2::VLAN200::;
ZQRG:IPDU,3::VLAN200::;
ZQRG:IPDU,4::VLAN200::;
ZQRG:IPDU,5::VLAN200::;
ZQRG:GISU,0::VLAN100::;
ZQRG:GISU,1::VLAN100::;
ZQRG:GISU,2::VLAN100::;
and so on.
ZQRG:GISU,0::VLAN200::;
ZQRG:GISU,1::VLAN200::;
ZQRG:GISU,2::VLAN200::;
and so on.
33 Re-enable IPDU switchover on alarm 3053 by canceling the rule created earlier.
You can do this by entering the following command:
ZARD:C,S:3053,IPDU,::TYPE=OTYPE,INDEX=OINDEX,:;
13.1 Purpose
This chapter describes the migration process to move existing TDM circuits from MSS
control into MGW control. The procedure applies to the multipoint A interface case only.
After the migration is complete, the same circuits can be used from all MSC servers of
the MSS pool. This leads to simpler configuration and more efficient use of TDM
resources.
Terms
ACC Automatic Congestion Control
Source CGR The MSS controlled circuit group which is moved under the
MGW control.
Target CGR The MGW controlled circuit group which was initially under
one MSS control.
13.2 Downtimes
The procedure moves resources on PCM basis. Thus a service discontinuity affects one
PCM at worst. If there are multiple PCMs on the CGR or there are redundant CGRs then
there is no service interruption at all.
Despite the gradual move, it is not recommended to perform the migration at a time of
peak traffic. This is because:
• If the hunting order on the route is sequential (CGM=S) then a small CGR as first
one causes unnecessary resource booking attempts between MSS and MGW.
• If the hunting order is normal (default, CGM=N) then there may be additional load if
the first CGRs of the route are busy.
• The hunting order type most free (CGM=M) cannot be used with this feature.
1. Multipoint A is in use.
[Figure: MGW1 and MGW2, each connected to BSC1 and BSC2; MGW CGR=88 on
PCM=100 and MGW CGR=99 on PCM=200.]
13.4 Steps
The commands below are on circuit group (CGR) basis. Repeat the steps for all relevant
CGRs. Within a CGR, move PCM by PCM.
In the example, MSS1 has the CGR under move. In MSS1, the source circuit group number
is 300 before the migration; the target number is 351 after the migration in all MSC servers. In
MGW1, the source CGR is 88 and the target number is 99. In the MML commands, the circuit
group name could also be used.
1
Feature activation:
Sub-steps
a)
In the MGW, activate the MGW TDM A interface control pool MSS license.
ZW7M:FEA=3478:ON:;
b)
In all MSSes (MSS1-MSS5) of the pool, activate the feature to allow MGW
controlled circuits.
ZWOC:53,77,FF;
2 Create the target circuit groups:
Sub-steps
a)
In the MGW, create a new target CGR for the MGW controlled circuits.
ZRCC:TYPE=CCS,NCGR=BSC01,CGR=99:NET=NA0,SPC=300,LSI=AIF01:;
g Note: The name of the CGR must be the same in the MGW and in all MSSes of
the pool.
b) In MSS1 with the source CGR, create a new target MGW controlled CGR.
ZRCC:NCGR=BSC01,CGR=351,TYPE=ECCS,CONE=MGW:DIR=OUT,LSI=AIF04:MGW=MGW1;
g Note: ACC and SCR features are not allowed in an MGW controlled CGR.
c)
In the MSSes that did not have the source circuit group (MSS2-MSS5), create
the target MGW controlled CGR as well.
ZRCC:NCGR=BSC01,CGR=351,TYPE=ECCS,CONE=MGW:DIR=OUT,LSI=AIF04:MGW=MGW1;
g Note: The name of the CGR must be the same in the MGW and in all MSSes of
the pool.
d)
In every MSS of the pool, modify the target CGR state to working state (“WO”).
ZCRM:CGR=351:WO;
e) In MSS1 with the source CGR, add the new CGR to the old BSC route.
ZRRA::ROU=194,NCGR=BSC01;
3
Move one PCM. Repeat these steps on PCM basis within a CGR:
Sub-steps
a) In the MSS (MSS1 in the figure) with the source (MSS controlled) CGR, bar one
PCM.
ZCEC:MGW=MGW1,TERMID=100-1&&-31:BA;
The PCM wide service break starts here: no new calls are accepted, existing
calls continue.
b)
Optional step: Wait some time for calls to be released.
c)
In the MSS (MSS1 in the figure) with the source (MSS controlled) CGR, set one
PCM to not used state.
ZCEC:MGW=MGW1,TERMID=100-1&&-31:NU;
Calls of the PCM will be released here.
d) In the MSS1 with the source CGR, remove the MSS controlled circuits from the
source circuit group.
ZRCR:CCGR=300:TERMID=100-1&&-31:;
e)
In the MGW, bar the source MSS controlled PCM.
ZCIM:CRCT=100-1&&-31:BA;
f)
MGW: Delete the circuits under move from the (source) MSS controlled circuit
group.
ZRCR:CCGR=88:CRCT=200-1&&-31:;
g) In the MGW, add the MGW controlled circuits to the new target CGR
ZRCA:CGR=99:CRCT=200-1&&-31,:CCSPCM=1;
h)
In the MGW, bar the PCM circuits.
ZCEC:CRCT=200-1&&-31:BA;
i)
In the MGW, unbar the PCM circuits.
ZCEC:CRCT=200-1&&-31:WO;
The PCM wide service break ends here.
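For illustration, the state of the moved circuit group can then be interrogated with the same ZCEL command used earlier in this chapter; BSC01 is the example CGR name of this section:
ZCEL:NCGR=BSC01,;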
4
Cleanup in the end:
Sub-steps
a)
In the MSS1 with the source CGR, delete the old CGR.
ZRCD:CGR=300;
b)
MGW: Delete the source MSS controlled circuit group.
ZRCD:CGR=88;
Procedure
2 In all MSSes (MSS1-MSS5) of the pool, deactivate the feature to allow MGW
controlled circuits.
ZWOC:53,77,00;
3
Do the above steps in the reverse direction. You should initially migrate just one CGR and
observe the situation for a while. If everything is fine, then move the next one.
Migrating all CGRs back to the original configuration would be laborious.
1. The host IP address(es) must be from different subnet(s) from the one(s) used for IP
forwarding.
2. IP forwarder units have enough capacity and bandwidth.
The migration procedure is different if peer elements do not use FQDN / DNS. This set of
instructions concentrates on the DNS case.
g Note: When DNS is used to resolve IP addresses, using the default SIP port 5060 with
all transports is required.
In the example, four SIP load balancing instances are created. In order to decide the
proper number, the following aspects need to be considered:
1. One IPDU can have multiple SIP LB instances (also called LBS groups).
2. One SIP LB instance can reside in one IPDU only.
3. Each SIP LB instance can have multiple IP addresses (or address pairs for SCTP
MH).
4. Different SIP types (access, trunk, and so on) do not require separate LB instances
as such. However, access/trunk/other types may require their own SIP LBS instances
in case different backend GISU units need to be defined, due to the limitation on the
maximum number of VIPs per VLAN400 on each GISU.
5. Each SIP LB instance is a single-threaded Linux process. Thus, multiple instances
utilize multi-core processor better.
g Note: Other load balancer types (H.248, M3UA) also introduce LB instances and they
use different processor cores.
6. Each SIP LB instance needs some configuration. Thus, balance between capacity
and maintainability must be achieved.
7. In general, if multiple SIP LB instances are used, the IP realm feature might be
needed. There are the following sub-cases:
a) Each GISU belongs to more than one realm. For a GISU, you must configure SIP
listening points with the JIM MML command. If one GISU uses more than one IP
address of the same version with the same transport, the IP realm feature is needed.
For example, two IPv4 IP addresses for SIP over TCP in one GISU require the IP
realm feature.
b) The same IP realm can be used if the IP address version (IPv4 versus IPv6) is
different or when the transport is different (TCP/UDP versus SCTP) in one GISU.
In case of different transports, the IPDU can be different and even the LB
instance as well. This is due to the fact that the IPDU is not affected by the
realms.
c) GISUs can be split for each LBS instance. Even in this case, you may need IP
realm so that the correct GISU is selected for an outgoing call. This approach is
not recommended since it can lead to unbalanced GISU loads.
However, it is possible to manage this case without IP realm too.
In the example below, IP realm is needed since 3 IP realms were used for 4 SIP LB
instances in 2 IPDUs. Without IP realm, you can have at maximum (all of the below):
• one IPv4 address for SIP over UDP or TCP
• one IPv4 address for SIP over SCTP
• one IPv6 address for SIP over UDP or TCP
• one IPv6 address for SIP over SCTP
• The GISU-specific IP addresses are often used for non-SIP traffic as well.
• The number of externally visible IP addresses is not reduced otherwise.
• If there is no room for new IPDU units, an IP forwarder unit must be used. However,
an IP forwarding unit cannot listen to one IP address and simultaneously forward
packets of the same subnet. Therefore, in this example, a new subnet is created for
SIP trunk over TCP.
• Only one IP per subscriber per IPDU for TCP/UDP is supported by the platform.
• If it is absolutely necessary to keep a certain IP address of the MSS, the following
actions can be taken:
– Some addresses can be moved to LB IPDUs. This requires host-based IP routes,
since the IP forwarding unit is handling the rest of the subnet.
– In the case of SIP over UDP/TCP, the IP address can be moved to the other IP
forwarder unit, which is not forwarding this TCP subnet, like IPDU-1 in the
example below.
In these cases, the procedure is more complex and the GISU-specific address might
have to be changed in the GISU as well.
Virtual CIC of SIP inter-working: Dedicating units for certain SIP CGRs has no
relevance when the LB is in use. However, the Service Level Agreement (SLA) part of
the Virtual Circuit feature can still be used to limit, for example, the number of
simultaneous calls towards specific directions. In this case, the virtual CIC configuration
and SIP LB configuration must be aligned (that is, the virtual CIC units must belong to
the SIP LBS unit group).
Before you start
1. IP planning has been done. It must be possible to introduce new IPs for the load
balancer. Moving an existing IP address is not described here.
2. IP forwarder is in use. The IP forwarder related configuration changes are described
in section IP forwarder migration (Open MSS).
3. SIP is configured into signaling units. This configuration includes, for example, DNS,
SIP general parameters and other configurations. If this precondition is not met, the
corresponding feature activation manual must be applied.
4. VLANs 100, 200, 300, 330, 1100, 1200 and 1300 exist. VLAN400 is created as part
of the migration.
5. The SIP LB Sales item, L.4241, must be installed (no need to order) in addition to the
standard NVS and SIP sales items.
6. SIP over SCTP requires sales item L.1432 together with sales item L.4241. This
sales item allows SIP-over-SCTP load balancing.
7. IP addresses must have the same scope on peer connection basis. For more
information about Classification of IPv4 addresses and IPv4 address scoping policy
considerations for SCTP, see section IP addressing and routing principles for MSS
System network elements in Site Connectivity Guidelines for MSS System. Currently,
in the MSS System products, IP address scoping policy is always enabled by default
and cannot be disabled by configuration.
8. There must be enough spare GISU units.
9. SIP over SCTP may require some preparation in the peer element as indicated
below.
Migration is different depending on the use case. For MSS, the following use cases exist:
g Note: Here SIP is moved to the IP forwarder unit and also to a new IPDU unit. Both are
possible scenarios. The use of IP forwarder unit requires distinct VLANs.
[Figure: IP addresses in SIP over TCP/UDP. Site routers CE-1 and CE-2, peer network
elements, IPDU-0 (IP forwarder), IPDU-1, IPDU-2 (spare), IPDU-3/IPDU-4 (SIP LB
units), and GISU units; external VLAN1300/VLAN1350 with HSRP GW addresses
10.22.97.1 and 10.22.97.17, VLAN400 back-end gateways 169.254.0.x; flows shown:
SIP-I TCP/UDP, SIP access, SIP trunk.]
IP addresses in SIP over SCTP
[Figure: site routers CE-1 and CE-2, peer network element, IPDU-0 through IPDU-4,
and GISU units; SCTP primary path 10.22.95.3 (VLAN1100) and secondary path
10.22.96.3 (VLAN1200), VLAN400 back-end gateways 169.254.0.x; flow shown:
SIP-I over SCTP.]
Control Plane VLANs for the Open MSS (Internal FI LAN): Internal (Media) Control
VLAN (SCTP primary); Internal (Media) Control VLAN (SCTP secondary); Internal
(Media) Control VLAN (TCP/UDP); Internal LBS back-end VLAN
IPDU-3: VLAN400 addresses 169.254.0.4 LI, 169.254.0.13 PI, 169.254.0.23 PI,
169.254.0.33 PI, 169.254.0.43 PI
IPDU-4: VLAN400 addresses 169.254.0.5 LI, 169.254.0.14 PI, 169.254.0.24 PI,
169.254.0.34 PI, 169.254.0.44 PI
courier font The IP address might have existed before the migration.
SIP LB needs one physical IP per WO-EX
IPDU. GISUs need a physical address for SIP LB (also for
the spare unit) and a logical address for H.248 LB. Thus, 4+1
IPDUs means that 5 IPDUs (including spare) need 4 IP
addresses each. In addition, logical addresses (LI) are
needed in the IPDU for GW addresses. The spare unit must
not be allocated with a logical IP address.
italics font address used for SIP prior to the migration, after migration
only for non-SIP traffic (DNS, and so on)
L Logical
LI Logical Internal
P Physical
PI Physical Internal
LV Logical Virtual
Procedure
1
Check if VLAN400 and MSTP instances exist in all AHUBs. The addition of a new
VLAN introduces a service break. The procedure of how to add VLANs is described
in the Site Configuration Guidelines document.
2
Bring the new IPDUs to spare (SP) operating state.
ZUSC:IPDU,3:SE;
ZUSC:IPDU,3:TE;
ZUSC:IPDU,3:SP;
ZUSC:IPDU,4:SE;
ZUSC:IPDU,4:TE;
ZUSC:IPDU,4:SP;
g Note: It is important to bring the new IPDU units to WO-EX state below rather than
bringing the old spare unit to WO-EX. This is due to SIP LB implementation, and how
SIP LB uses the spare unit.
3
When you add more IPDUs, set IP forwarding ON also in the new units.
ZQRT:IPDU,3::IPF=YES,;
ZQRT:IPDU,4::IPF=YES,;
Add all the Internet Protocol Director Unit (IPDU) computer units that are required to
the MSS. Check that all IPDU units have IP forwarder activated, since in unit
switchover scenarios the IP forwarder functionality might move to another unit.
5 Activate and configure the IP realm feature if you use multiple SIP LBS instances.
For the detailed instructions, see Feature 1903: MSC Server - IP Realm, Feature
Activation Manual in M-release Feature Documentation. In the examples below,
REALM1, REALM2 and REALM3 are used.
6
Prevent IPDU switchover due to temporary interface problems during the migration
procedure.
In the event of a double link failure in EL4 and EL5, a forced IPDU switchover would
occur. To prevent this during migration, you need to set a weight on the 3053
ETHERNET CONNECTION FAILURE alarm. When a total weight of 64 is reached, a
switchover occurs. Weight 1 practically means that the switchover never occurs due
to this alarm.
It can be set by entering the following command:
ZARA:C,S:3053,IPDU,::TYPE=OTYPE,INDEX=OINDEX,::1:N:;
7
If DNS is used by the adjacent network element(s), change the Time To Live (TTL)
timers to a shorter period. The Fully Qualified Domain Name (FQDN) visible to the
adjacent element might be the MSS public FQDN, the MSS's private FQDN, or a unit
level FQDN.
TTL is on a zone basis, having a granularity of a second. Change the TTL gradually
to a very short period, for instance, one minute.
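As an illustration only, a generic BIND-style zone fragment; the zone name and record are hypothetical placeholders, and the exact procedure depends on the DNS server in use:
$TTL 60 ; zone default TTL reduced to one minute for the migration
mss.example.com. 60 IN A 10.22.97.6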
8
Create general IP configuration to the IPDUs, if not already created. In the example
below, two new IPDUs (IPDU-3 and IPDU-4) are created for the SIP LB.
Sub-steps
a)
Internal interfaces for general setup:
1. ZQRN:IPDU,3::EL0::UP;
2. ZQRN:IPDU,3::EL1::UP;
3. ZQRN:IPDU,4::EL0::UP;
4. ZQRN:IPDU,4::EL1::UP;
5. ZQRA:IPDU,3::BOND0,EL0,EL1:;
6. ZQRA:IPDU,4::BOND0,EL0,EL1:;
7. ZQRA:IPDU,3&4::VLAN400:400:BOND0::UP;
8. ZQRA:IPDU,0&&2::VLAN400:400:BOND0::UP;
(This step is applicable only if the configuration has not been created earlier.)
9. ZQRA:GISU,x::BOND0,EL0,EL1:;
(If not yet created, applicable for every GISU.)
10. ZQRA:GISU,x::VLAN400:400:BOND0;
(If not yet created, applicable for every GISU.)
11. ZQRA:IPDU,3&4::VLAN100,100,BOND0,,::UP:;
(These are existing VLANs, but new units here.)
12. ZQRA:IPDU,3&4::VLAN200,200,BOND0,,::UP:;
13. ZQRA:IPDU,3&4::VLAN300,300,BOND0,,::UP:;
14. ZQRA:IPDU,3&4::VLAN330,330,BOND0,,::UP:;
15. Logical IPDU IP addresses
If GW is not yet created:
ZQRN:IPDU,0&&1::VLAN400,:"169.254.0.1",23,L,I::;
If IPDUs 0-2 already have a GW IP:
ZQRN:IPDU,3&&4::VLAN400,:"169.254.0.4",23,L,I::;
Wait until IPDU-3 and 4 come to WO-EX state.
16. 1st IPDU-LBMgr IPDU physical IPs
This step is applicable if the configuration has not been created earlier:
ZQRN:IPDU,0&&4::VLAN400,:"169.254.0.10",23,P,I::;
If IPDUs 0-2 already have a GW IP:
ZQRN:IPDU,3&4::VLAN400,:"169.254.0.12",23,P,I::;
17. 2nd IPDU-LBMgr IPDU physical IPs
This step is applicable if the configuration has not been created earlier:
ZQRN:IPDU,0&&4::VLAN400,:"169.254.0.20",23,P,I::;
or
ZQRN:IPDU,3&4::VLAN400,:"169.254.0.22",23,P,I::;
18. 3rd IPDU-LBMgr IPDU physical IPs
This step is applicable if the configuration has not been created earlier:
ZQRN:IPDU,0&&4::VLAN400,:"169.254.0.30",23,P,I::;
or
ZQRN:IPDU,3&4::VLAN400,:"169.254.0.32",23,P,I::;
19. 4th IPDU-LBMgr IPDU physical IPs
This step is applicable if the configuration has not been created earlier:
ZQRN:IPDU,0&&4::VLAN400,:"169.254.0.40",23,P,I::;
or
ZQRN:IPDU,3&4::VLAN400,:"169.254.0.42",23,P,I::;
20. This step is applicable if the configuration has not been created earlier:
ZQRN:GISU,0&&114::VLAN400,:"169.254.1.1",23,P,I::;
b)
External interfaces for general setup:
1. ZQRA:IPDU,3::EL4::UP:;
2. ZQRA:IPDU,4::EL4::UP:;
3. ZQRA:IPDU,3::EL5::UP:;
4. ZQRA:IPDU,4::EL5::UP:;
5. ZQRA:IPDU,3&4::VLAN1100,1100,EL4,,::UP:;
6. ZQRA:IPDU,3&4::VLAN1200,1200,EL5,,::UP:;
7. ZQRA:IPDU,3&4::VLAN1300,1300,EL4,,::UP:;
8. ZQRA:IPDU,3&4::VLAN1301,1300,EL5,,::UP:;
9. ZQRA:IPDU,0&&4::VLAN1350,1350,EL4,,::UP:;
10. ZQRA:IPDU,0&&4::VLAN1351,1350,EL5,,::UP:;
c)
MGCF/SIP-I over UDP/TCP
1. ZQRN:GISU,0::VLAN400,:"10.22.97.6",32,L,V::;
2. ZQRN:GISU,1::VLAN400,:"10.22.97.6",32,L,V::;
3. ZQRN:GISU,2::VLAN400,:"10.22.97.6",32,L,V::;
(It must be repeated in all WO-EX GISUs.)
4. ZQRN:IPDU,3::VLAN1300,:"10.22.97.6",28,L,V::;
5. Wait until IPDU-3 comes to the WO-EX state.
d)
MGCF/SIP-I over SCTP
SCTP multihoming VLANs for IPDU
1. ZQRN:GISU,0::VLAN400,:"10.22.95.3",32,L,V::;
2. ZQRN:GISU,1::VLAN400,:"10.22.95.3",32,L,V::;
3. ZQRN:GISU,2::VLAN400,:"10.22.95.3",32,L,V::;
(In case of multihoming VIPs for the IPDU)
1. ZQRN:IPDU,3::VLAN1100,:"10.22.95.3",28,L,V::;
2. ZQRN:IPDU,3::VLAN1200,:"10.22.96.3",28,L,V::;
e)
SIP access
VIPs
1. ZQRN:GISU,0::VLAN400,:"10.22.97.7",32,L,V::;
2. ZQRN:GISU,1::VLAN400,:"10.22.97.7",32,L,V::;
3. ZQRN:GISU,2::VLAN400,:"10.22.97.7",32,L,V::;
4. ZQRN:IPDU,4::VLAN1300,:"10.22.97.7",28,L,V::;
f)
SIP trunk
VIPs
1. ZQRN:GISU,0::VLAN400,:"10.22.97.20",32,L,V:::;
2. ZQRN:GISU,1::VLAN400,:"10.22.97.20",32,L,V:::;
3. ZQRN:GISU,2::VLAN400,:"10.22.97.20",32,L,V:::;
4. ZQRN:IPDU,0::VLAN1350,:"10.22.97.20",28,L,V:::;
w NOTICE: Once created, do not disable the EL4 or EL5 physical interfaces on
IPDU units, or any logical VLAN interfaces configured on top of them which have
Virtual IP (VIP) addresses assigned in both IPDU and GISU units—that is, do not
set such interfaces to Down state using the QRA MML command. If you do, this
will have an adverse effect on all of the interfaces and IP addresses configured
on top of them and may result in those interfaces and IP addresses being
rendered unreachable by the system kernel routing process.
If IPDU-based SIP and/or H.248 load balancing is used with SCTP multihoming
(MH) transport support, then disabling the SCTP MH primary path interface on an
IPDU unit will cause packet drops among internal backend traffic sent between
IPDU and GISU units over VLAN400.
If IPDUs are used for IP forwarding purposes rather than load balancing, and
SRCNET policy routes are configured, then disabling the active external interface
(used for reaching the next hop address) will cause the cached route to be
disabled.
9
Create the spare unit definition, if it was not already created for another LB.
ZJJC:CREATE:IPDU,2:;
In this example, unit 2 is currently the spare unit.
10
Create the LBS groups. Add all the units of each load share group together. For each LBS, configure the IP addresses, port numbers and other relevant parameters. Different SIP use cases might require different LBS groups, for example, one for SIP-I and another for NVS SIP.
Sub-steps
a)
LBSSIP: MGCF/SIP-I over UDP/TCP
ZJJE:CREATE:NAME=LBS-MGCFSIPI,TYPE=SIP:HASHT=120,:PROT=TCP,PORT=5060,PROLE=1,IP="10.22.97.6":UNIT=GISU,IND=0;
ZJJE:MODIFY:NAME=LBS-MGCFSIPI,TYPE=SIP:::UNIT=GISU,IND=1;
ZJJE:MODIFY:NAME=LBS-MGCFSIPI,TYPE=SIP:::UNIT=GISU,IND=2;
ZJJE:MODIFY:NAME=LBS-MGCFSIPI,TYPE=SIP::PROT=UDP,PORT=0,PROLE=2,IP="10.22.97.6":;
ZJJE:MODIFY:NAME=LBS-MGCFSIPI,TYPE=SIP::PROT=TCP,PORT=0,PROLE=2,IP="10.22.97.6":;
ZJJE:MODIFY:NAME=LBS-MGCFSIPI,TYPE=SIP::PROT=TCP,PORT=0,PROLE=1,IP="10.22.97.6":;
Attach the LB to the correct IPDU unit. The command is different depending on
whether another LB is attached to the IPDU already.
ZJJC:<operation=CREATE|MODIFY>:<unit type>,<unit index>:INT=<internal interface>,EXT=<external interface>:NAME=<lbs name>;
ZJJC:CREATE:IPDU,3::NAME=LBS-MGCFSIPI;
b)
LBSSIP: SIP trunk over UDP/TCP
ZJJE:CREATE:NAME=LBS-SIPTRUNK,TYPE=SIP:HASHT=120,:PROT=TCP,PORT=5060,PROLE=1,IP=10.22.97.20:UNIT=GISU,IND=0:;
ZJJE:MODIFY:NAME=LBS-SIPTRUNK,TYPE=SIP:::UNIT=GISU,IND=1;
ZJJE:MODIFY:NAME=LBS-SIPTRUNK,TYPE=SIP:::UNIT=GISU,IND=2;
ZJJE:MODIFY:NAME=LBS-SIPTRUNK,TYPE=SIP::PROT=UDP,PORT=0,PROLE=2,IP=10.22.97.20;
ZJJE:MODIFY:NAME=LBS-SIPTRUNK,TYPE=SIP::PROT=TCP,PORT=5070,PROLE=1,IP=10.22.97.20;
ZJJE:MODIFY:NAME=LBS-SIPTRUNK,TYPE=SIP::PROT=UDP,PORT=5070,PROLE=1,IP=10.22.97.20;
ZJJE:MODIFY:NAME=LBS-SIPTRUNK,TYPE=SIP::PROT=TCP,PORT=0,PROLE=1,IP=10.22.97.20;
ZJJE:MODIFY:NAME=LBS-SIPTRUNK,TYPE=SIP::PROT=TCP,PORT=0,PROLE=2,IP=10.22.97.20;
Attach the LB to the correct IPDU unit. The command is different depending on
whether another LB is attached to the IPDU already.
ZJJC:<operation=CREATE|MODIFY>:<unit type>,<unit index>:INT=<internal interface>,EXT=<external interface>:NAME=<lbs name>;
ZJJC:CREATE:IPDU,0::NAME=LBS-SIPTRUNK:;
c)
LBSSIP: SIP access
ZJJE:CREATE:NAME=LBS-NVS69,TYPE=SIP:HASHT=120,:PROT=TCP,PORT=5060,PROLE=1,IP=10.22.97.7:UNIT=GISU,IND=0:;
ZJJE:MODIFY:NAME=LBS-NVS69,TYPE=SIP:::UNIT=GISU,IND=1;
ZJJE:MODIFY:NAME=LBS-NVS69,TYPE=SIP:::UNIT=GISU,IND=2;
ZJJE:MODIFY:NAME=LBS-NVS69,TYPE=SIP::PROT=TCP,PORT=5060,PROLE=1,IP=10.22.97.7;
ZJJE:MODIFY:NAME=LBS-NVS69,TYPE=SIP::PROT=UDP,PORT=5060,PROLE=1,IP=10.22.97.7;
ZJJE:MODIFY:NAME=LBS-NVS69,TYPE=SIP::PROT=UDP,PORT=0,PROLE=2,IP=10.22.97.7;
ZJJE:MODIFY:NAME=LBS-NVS69,TYPE=SIP::PROT=TCP,PORT=0,PROLE=1,IP=10.22.97.7;
ZJJE:MODIFY:NAME=LBS-NVS69,TYPE=SIP::PROT=TCP,PORT=0,PROLE=2,IP=10.22.97.7;
ZJJE:MODIFY:NAME=LBS-NVS69,TYPE=SIP::PROT=TCP,PORT=5070,PROLE=1,IP=10.22.97.7;
ZJJE:MODIFY:NAME=LBS-NVS69,TYPE=SIP::PROT=UDP,PORT=5070,PROLE=1,IP=10.22.97.7;
Attach the LB to the correct IPDU unit. The command is different depending on
whether another LB is attached to the IPDU already.
ZJJC:<operation=CREATE|MODIFY>:<unit type>,<unit index>:INT=<internal interface>,EXT=<external interface>:NAME=<lbs name>;
ZJJC:CREATE:IPDU,4::NAME=LBS-NVS69:;
d)
LBSSIP: MGCF/SIP-I over SCTP
ZJJE:CREATE:NAME=LBS-SCTPSIPI,TYPE=SIP:HASHT=120,:PROT=SCTP,PORT=5060,PROLE=1,IP=10.22.95.3,IP2=10.22.96.3:UNIT=GISU,IND=0;
ZJJE:MODIFY:NAME=LBS-SCTPSIPI,TYPE=SIP:::UNIT=GISU,IND=1;
ZJJE:MODIFY:NAME=LBS-SCTPSIPI,TYPE=SIP:::UNIT=GISU,IND=2;
ZJJE:MODIFY:NAME=LBS-SCTPSIPI,TYPE=SIP::PROT=SCTP,PORT=0,PROLE=2,IP=10.22.95.3,IP2=10.22.96.3::;
Attach the LB to the correct IPDU unit. The command is different depending on
whether another LB is attached to the IPDU already.
ZJJC:<operation=CREATE|MODIFY>:<unit type>,<unit index>:INT=<internal interface>,EXT=<external interface>:NAME=<lbs name>;
ZJJC:CREATE:IPDU,3::NAME=LBS-SCTPSIPI;
11 Create the VLAN routing interfaces on CE devices for SCTP MH primary and
secondary path, and TCP/UDP traffic flows. Additionally, create the HSRP/VRRP
group(s) for the TCP/UDP traffic related VLAN routing interface.
The end result with Cisco OSR is similar to the following (Juniper commands are in ANNEX C: Juniper commands). The interface IDs are only examples and vary by configuration. Changes compared to the IP forwarder configuration are highlighted in bold font.
***site router CE-1
vlan 1100
name SCTP_MH1
interface Vlan1100
description SCTP_MH1
ip address 10.22.95.1 255.255.255.240
end
vlan 1300
name TCPUDP
interface Vlan1300
description TCPUDP
ip address 10.22.97.2 255.255.255.240
standby 93 ip 10.22.97.1 ** HSRP
end
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1300
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
vlan 1350
name TCPUDP2
interface Vlan1350
description TCPUDP2
ip address 10.22.97.18 255.255.255.240 ** SIP trunk over TCP
standby 93 ip 10.22.97.17 ** HSRP SIP trunk over TCP
end
vlan 1200
name SCTP_MH2
interface Vlan1200
description SCTP_MH2
ip address 10.22.96.1 255.255.255.240
end
vlan 1300
name TCPUDP
interface Vlan1300
description TCPUDP
ip address 10.22.97.3 255.255.255.240
standby 93 ip 10.22.97.1 ** HSRP
end
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1300
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
vlan 1350
name TCPUDP2
interface Vlan1350
description TCPUDP2
ip address 10.22.97.19 255.255.255.240 ** SIP trunk over TCP
standby 93 ip 10.22.97.17 ** HSRP SIP trunk over TCP
end
12
Configure the DNS Resolver Cache Service.
You must configure destination-based static routes to the DNS core servers in those signaling units used by the DNS Cache Service, so that the system can select a local IP address from the correct local VLAN interface. If you do not create static routes to these servers, the system may select one of the VIP addresses on VLAN400, with the result that the DNS core servers become unreachable.
Configure static routes toward the primary, secondary and tertiary DNS servers using the QRK MML command, and after that configure the static routes in the GISU units (via an active IPDU unit) toward the DNS name servers using the QKC MML command.
Example:
ZQRK:"10.21.1.1","10.2.2.2","10.2.2.3","OPERATOR.COM"::;
ZQKC:GISU,0::"10.21.1.0",26:"10.23.153.1":LOG;
ZQKC:GISU,0::"10.2.2.0",26:"10.23.153.1":LOG;
Once the IPv4 network interface has been configured, test the connectivity to the
signaling unit by pinging its IP address from an external device.
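For illustration, the check could be made from an external Linux host (10.23.153.66 is the example GISU TCP/UDP address used in this document; substitute the configured address):
ping -c 5 10.23.153.66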
13 Create source-based IP routing for the VIPs in the GISUs and IPDUs.
The ATCA IPDU GW logical addresses are taken from the 169.254.0.0 link-local address subnet. For example, 169.254.0.4 is IPDU-3's GW address for external traffic (UDP, TCP, SCTP), and 10.22.97.1 is the HSRP address for UDP/TCP.
Sub-steps
a)
MGCF / SIP-I over UDP/TCP
ZQKM:GISU,x::"10.22.97.6":"169.254.0.4":LOG:;
(through IPDU-3, all GISUs)
ZQKM:IPDU,3::"10.22.97.6":"10.22.97.1":LOG:;
b)
SIP trunk
ZQKM:GISU,x::"10.22.97.20":"169.254.0.1":LOG:;
(IPDU-0)
ZQKM:IPDU,0::"10.22.97.20":"10.22.97.17":LOG:;
c)
SIP access
ZQKM:GISU,x::"10.22.97.7":"169.254.0.5":LOG:;
(IPDU-4)
ZQKM:IPDU,4::"10.22.97.7":"10.22.97.1":LOG:;
d)
MGCF / SIP-I over SCTP
ZQKM:GISU,x::"10.22.95.3":"169.254.0.4":LOG:;
(IPDU-3)
Static IP routes for the IPDU with SCTP multihoming: 10.22.95.1 is the CE-1 IP address for the primary path, and 10.22.96.1 is the CE-2 IP address for the secondary path.
ZQKM:IPDU,3::"10.22.95.3":"10.22.95.1":LOG:;
ZQKM:IPDU,3::"10.22.96.3":"10.22.96.1":LOG:;
14
Check the IP configuration in all IPDUs. Check that the network interface, IP address and IP routing configurations are in accordance with the network plan in all IPDU units.
Sub-steps
a)
Network interface and IP address configuration (in all IPDU units)
ZQRI:IPDU;
b)
Destination based static route configuration (in all IPDU units)
ZQKB:IPDU;
c) Local IP address based default gateway policy route configuration (in all IPDU
units)
ZQKO:IPDU;
d)
Local IP subnet based default gateway policy route configuration (in all IPDU
units)
ZQKQ:IPDU;
e)
Full routing information base (RIB) contents (in each IPDU unit)
ZQRS:IPDU,0::ROU;
ZQRS:IPDU,1::ROU;
ZQRS:IPDU,2::ROU;
ZQRS:IPDU,3::ROU;
ZQRS:IPDU,4::ROU;
15
Sub-steps
a)
If DNS is used by peer elements, change the DNS so that the MSS FQDN points
to the VIP. In the case of SIP-I over SCTP, the FQDN points to the IPDU's
primary SCTP multihoming VIP only, which is the same as GISU SCTP VIP
(10.22.95.3 in the example configuration).
b) Simultaneously, for each SIP signaling unit, change its own-end SIP IP address in the MSS to a VIP address (see Step 18 below). Non-balanced / non-SIP addresses might, for instance, be used for Lightweight Directory Access Protocol (LDAP).
c) The MSS software reads the end IP address at the beginning of the call, and
uses that value throughout the call session. Ongoing calls are not released and
will continue to use the old unit-specific IP address. After the end addresses have
been changed, new calls will use the new IP addresses.
d)
Non-SIP traffic can continue to use the existing logical IP addresses; DNS is changed to point to the VIP.
g Note: Traffic cut-over starts here for incoming calls (from this MSS point of view). There are ip_cp_unreachable (920H) reason codes in the adjacent element if it is a Nokia MSS. The problem is invisible in the MSS under migration, since that MSS is no longer listening on the IP address and the related socket.
16 For each SIP signaling unit, if DNS is not used, configure the adjacent element or
Session Border Controller (SBC) to point to the VIP instead of the unit-based IP
address.
The SBC can be set to use fixed IP addresses.
The SBC might use two VIPs as MSC addresses, through different IPDUs. This
approach provides greater resiliency. The SBC is changed accordingly.
17 Flush the DNS cache in all peer elements if possible. If the adjacent element is a Nokia MSS, switch the DNS cache OFF and then ON.
18
For each SIP signaling unit, change its own-end SIP IP address to a VIP address in the MSS. After the license is active, you can create both load-balanced SIP IP addresses and non-load-balanced / non-SIP addresses. The REALM parameter is not needed if there is only one SIP LBS instance altogether, or if a different transport (SCTP) is used (see the background information for details). Traffic cut-over ends when this step is complete. If the adjacent element is a Nokia MSS, then cl_a_onhook_set_up_phase (30AH) reason codes appear until this step is complete.
Sub-steps
a)
MGCF / SIP-I over UDP/TCP
ZJIM:GISU,0:REALM=REALM1,INTTYPE=NNI,TRANSPORT=UDP/TCP,:IP="10.22.97.6"::;
ZJIM:GISU,1:REALM=REALM1,INTTYPE=NNI,TRANSPORT=UDP/TCP,:IP="10.22.97.6"::;
ZJIM:GISU,2:REALM=REALM1,INTTYPE=NNI,TRANSPORT=UDP/TCP,:IP="10.22.97.6"::;
b) SIP trunk
ZJIM:GISU,0:REALM=REALM2,INTTYPE=NNI,TRANSPORT=UDP/TCP,PORT=5070,:IP="10.22.97.20"::;
ZJIM:GISU,1:REALM=REALM2,INTTYPE=NNI,TRANSPORT=UDP/TCP,PORT=5070,:IP="10.22.97.20"::;
ZJIM:GISU,2:REALM=REALM2,INTTYPE=NNI,TRANSPORT=UDP/TCP,PORT=5070,:IP="10.22.97.20"::;
c)
SIP access
ZJIM:GISU,0:REALM=REALM3,INTTYPE=UNI,TRANSPORT=UDP/TCP,PORT=5060,:IP="10.22.97.7"::;
ZJIM:GISU,1:REALM=REALM3,INTTYPE=UNI,TRANSPORT=UDP/TCP,PORT=5060,:IP="10.22.97.7"::;
ZJIM:GISU,2:REALM=REALM3,INTTYPE=UNI,TRANSPORT=UDP/TCP,PORT=5060,:IP="10.22.97.7"::;
g Note: If the Open TAS is operating in application server mode, the INTTYPE
parameter value must be set to ISCNNI.
e) For LDAP and DNS. The LDAP and DNS IP addresses are taken from the signaling unit TCP/UDP subnet.
ZJDC:GISU,0:NONSIPIP="10.23.153.2",:;
(if not already created)
ZJDC:GISU,1:NONSIPIP="10.23.153.3",:;
ZJDC:GISU,2:NONSIPIP="10.23.153.4",:;
19
For each SIP signaling unit, delete the unit-specific SIP IP addresses if they are not used for other purposes. Delete the SIP subnet's GW addresses from the IPDU, too.
g Note: Ongoing calls using the old unit-specific addresses will be dropped.
ZQRG:IPDU,0::VLAN300:10.23.153.65;
IP forwarder GW address for this subnet:
ZQRG:GISU,0::VLAN300:10.23.153.66;
ZQRG:GISU,1::VLAN300:10.23.153.67;
ZQRG:GISU,2::VLAN300:10.23.153.68;
etc.
ZQRG:IPDU,0::VLAN100:10.23.151.65;
IP forwarder GW address for this subnet, SCTP primary
ZQRG:GISU,0::VLAN100:10.23.151.66;
ZQRG:IPDU,1::VLAN200:10.23.152.65;
IP forwarder GW address for this subnet, SCTP secondary
ZQRG:GISU,2::VLAN200:10.23.152.66;
21
Change the DNS validity times back to normal.
22 Clean up the IP forwarder. If everything is working fine and there are no other users for these IP forwarder VLANs (for example, H.248), delete the VLAN interfaces from the IPDUs and GISUs:
Sub-steps
a)
SCTP MH:
ZQRG:IPDU,0::VLAN100::;
ZQRG:IPDU,1::VLAN100::;
ZQRG:IPDU,2::VLAN100::;
ZQRG:IPDU,3::VLAN100::;
ZQRG:IPDU,4::VLAN100::;
ZQRG:IPDU,5::VLAN100::;
ZQRG:IPDU,0::VLAN200::;
ZQRG:IPDU,1::VLAN200::;
ZQRG:IPDU,2::VLAN200::;
ZQRG:IPDU,3::VLAN200::;
ZQRG:IPDU,4::VLAN200::;
ZQRG:IPDU,5::VLAN200::;
ZQRG:GISU,0::VLAN100::;
ZQRG:GISU,0::VLAN200::;
ZQRG:GISU,1::VLAN100::;
ZQRG:GISU,1::VLAN200::;
ZQRG:GISU,2::VLAN100::;
ZQRG:GISU,2::VLAN200::;
and so on.
ZQRG:IPDU,0::VLAN300::;
ZQRG:IPDU,1::VLAN300::;
ZQRG:IPDU,2::VLAN300::;
ZQRG:IPDU,3::VLAN300::;
ZQRG:IPDU,4::VLAN300::;
ZQRG:IPDU,5::VLAN300::;
ZQRG:GISU,0::VLAN300::;
ZQRG:GISU,1::VLAN300::;
ZQRG:GISU,2::VLAN300::;
and so on.
23 Re-enable IPDU switchover on the 3053 alarm by cancelling the rule created earlier. You can do this by entering the following command:
ZARD:C,S:3053,IPDU,::TYPE=OTYPE,INDEX=OINDEX,:;
ANNEX C: Juniper commands
set interfaces vlan unit 1350 family inet address 10.22.97.18/28 vrrp-group 130 virtual-address 10.22.97.17
set interfaces vlan unit 1350 family inet address 10.22.97.18/28 vrrp-group 130 priority 80
set interfaces vlan unit 1350 family inet address 10.22.97.18/28 vrrp-group 130 fast-interval 200
set interfaces vlan unit 1350 family inet address 10.22.97.18/28 vrrp-group 130 accept-data
set interfaces vlan unit 1350 family inet address 10.22.97.19/28 vrrp-group 130 virtual-address 10.22.97.17
set interfaces vlan unit 1350 family inet address 10.22.97.19/28 vrrp-group 130 priority 100
set interfaces vlan unit 1350 family inet address 10.22.97.19/28 vrrp-group 130 fast-interval 200
set interfaces vlan unit 1350 family inet address 10.22.97.19/28 vrrp-group 130 preempt hold-time 30
set interfaces vlan unit 1350 family inet address 10.22.97.19/28 vrrp-group 130 accept-data
1. The host IP address(es) must be from different subnet(s) than the ones used for IP
forwarding.
2. IP forwarder units have enough capacity and bandwidth.
Before you start
The following prerequisites must be met:
1. An Open MSS is necessary for the migration. The IPDU-based M3UA load balancer
feature is not supported in MSS based on DX200.
2. IP forwarder is in use and the IP forwarder uses two working IPDUs. The IP
forwarder related configuration changes are described in section IP forwarder
migration (Open MSS).
3. A minimum of 4+1 IPDU units is required (four working and one spare: at least two active IPDUs are recommended for the M3UA LB to obtain resiliency, and two for the IP forwarder). The IPDU computer units are installed and working.
4. There is an existing and functioning M3UA signaling connection network which will be migrated to the IPDU load balancer. The creation of a completely new connection is described in Feature 1949: M3UA Load Balancer, Feature Activation Manual in M-Release Feature Documentation.
5. The network planning has been performed – the new SCTP associations,
association sets and IP addressing are designed. If the I-HSPA flat architecture is
used, the SCTP association set’s activation delay parameter is designed as well. For
details, see Feature 1952: I-HSPA support in MSS, Feature Description and Feature
1952: I-HSPA support in MSS, Feature Activation Manual in M-Release Feature
Documentation.
6. Some SCTP associations might require a change. This is the case if both/all
associations of an association set are using the same IPDU.
7. If signaling units use the same IP addresses for H.248 and M3UA/SIGTRAN, distinct
IP addresses for H.248 have been assigned. This is due to the migration phase
setup – two units cannot own the same IP address.
8. IP addresses must have the same scope on SCTP association basis. For more
information about the classification of IPv4 addresses and IPv4 address scoping
policy considerations for SCTP, see section IP addressing and routing principles for
MSS System network elements in Site Connectivity Guidelines for MSS System.
Currently, in MSS System products, IP address scoping policy is always enabled by
default and cannot be disabled by configuration.
9. MSS sales item L.1637 is available for M3UA LB.
10. There must be enough spare GISU units.
Different scenarios
The migration can be done in either of the following cases:
• The SCTP associations are in GISU units, and GISU-based load balancer is not in
use.
• The SCTP associations are in load balancing GISU units.
This document describes the former case, that is, the IP address is kept. Migrating to a new IP address is easier, but it requires a change in the peer network element.
g Note: M3UA LB supports 100 listening server sockets per IPDU. The platform supports 10 IP addresses per VLAN interface. Thus, if the GISU-based M3UA LB was not used, it might not be possible to migrate all old IP addresses into the load balancer; it depends on how many IPDUs are available. One option is to use a dedicated VLAN for the old M3UA IP addresses, which makes all 10 addresses per subnet available per LB IPDU (for example, with two LB IPDUs and one such VLAN each, up to 20 old addresses can be migrated).
g Note: In the sequence below, two IPDUs are created with one M3UA load balancer instance each, and then one SCTP association set is migrated to a load balancer. Both are possible scenarios. Use of an IP forwarder unit requires distinct VLANs and also a change of IP addresses at the MSS end.
g Note: It is not allowed to use the same IP address as the primary path IP address in one SCTP association and as the secondary path IP address in another SCTP association.
IP addresses in SCTP MH
Figure 55 IP addresses used in SCTP multi-homing
[Diagram: MSS with IP forwarder units IPDU-0 (VLAN1100: 10.22.95.2, GW 10.23.151.1) and IPDU-1 (VLAN1200: 10.22.96.2, GW 10.23.152.1), spare unit IPDU-2, and GISUs (SCTP PRI 10.23.151.2, SCTP SEC 10.23.152.2, TCP/UDP 10.23.153.66), connected through CE-1 (10.22.95.1) and CE-2 (10.22.96.1) toward RNC2.]
[Diagram: the single-homing counterpart - IP forwarder units IPDU-0/IPDU-1 (VLAN1300: 10.22.97.20 and 10.22.97.21, GWs 10.23.160.1 and 10.23.160.65), M3UA LB units IPDU-3/IPDU-4 (VLAN1340 SCTP: 10.23.160.2 and 10.23.160.67), HSRP addresses 10.22.97.17/10.22.97.25 (VLAN1300) and 10.23.160.60/10.23.160.124 (VLAN1340), connected through CE-1/CE-2 toward I-BTS08 (10.70.51.44/24, 10.70.52.44/24) with SCTP associations 1 and 2.]
Each existing association is replaced by a new one, the same amount of IPDUs
Figure 57 Sub case 1: Each existing association is replaced by a new one, same
amount of IPDUs
[Diagram: SCTP association set 1 between the peer network element and the MSS - SCTP association 1 (primary and secondary paths) terminates in M3UA LB unit IPDU-3 and SCTP association 2 (both paths) in M3UA LB unit IPDU-4; IPDU-0/IPDU-1 remain IP forwarder units, IPDU-2 is the spare unit, and the associations serve GISU-0 ... GISU-n.]
Each existing association is replaced by a new one, fewer IPDUs
Figure 58 Sub case 2: Each existing association is replaced by a new one, fewer
IPDUs than associations
[Diagram: SCTP association set 1 migrated so that the replacement associations share fewer M3UA LB IPDUs than the original number of associations (both associations 1 and 2 terminate in IPDU-4 in the figure).]
Number of SCTP associations is reduced
Figure 59 Sub case 3: Number of SCTP associations is reduced
[Diagram: SCTP association set 1 with some SCTP associations removed (marked X), leaving SCTP association 2 toward the M3UA LB unit IPDU-4.]
Table 16 Internal LANs and IPv4 subnets in the example - multi-homing (Cont.)
IPv4 subnet (local)   10.23.151.0/26   10.23.152.0/26   169.254.0.0/23 **
IPDU-2 (SP)           -                -                169.254.0.12 PI, 169.254.0.22 PI, 169.254.0.32 PI, 169.254.0.42 PI **
IPDU-3                -                -                169.254.0.4 LI, 169.254.0.13 PI, 169.254.0.23 PI, 169.254.0.33 PI, 169.254.0.43 PI **
IPDU-4                -                -                169.254.0.5 LI, 169.254.0.14 PI, 169.254.0.24 PI, 169.254.0.34 PI, 169.254.0.44 PI **
L = Logical, LI = Logical Internal, P = Physical, PI = Physical Internal, LV = Logical Virtual
IPDU-1                -                10.23.160.65 L
IPDU-2                -
IPDU-3
IPDU-4
M3UA LB does not need TCP/UDP VLANs. However, these VLANs must exist in all IPDUs, since an IPDU switchover might happen and VLANs do not move along in a unit switchover.
Load-balanced traffic which uses the LAN internally uses VLAN400. M3UA LB does not need VLAN400, but it is configured here since the SIP and H.248 LBs might need it.
VLAN routing interface (SVI) and the associated IP addresses (in IPDU units and DCN/MPBN site routers CE-1/CE-2):
IPDU-0 (WO)   10.22.95.2 L   -
IPDU-1 (WO)   -              10.22.96.2 L
IPDU-2 (SP)   -              -              -
IPDU-3 (WO)   -              -              10.23.160.2 L/LV * **
IPDU-4 (WO)   -              -              10.23.160.67 L/LV * **
CE-1          10.22.97.1     10.22.97.17 (10.22.97.44)   10.23.160.60 (10.23.160.124)
* After migration
Procedure
1 Check that VLAN400 and the MSTP instances exist in all AHUBs. The addition of a new VLAN introduces a service break. The Site Configuration Guidelines document provides instructions on how to add VLANs.
g Note: It is important to bring the new IPDU units to the WO-EX state below, rather than bringing the old spare unit to WO-EX. This is due to the SIP LB implementation and how the SIP LB uses the spare unit.
3 When you add more IPDUs, set IP forwarding ON also in the new units.
ZQRT:IPDU,3::IPF=YES,;
ZQRT:IPDU,4::IPF=YES,;
Add to the MSS all the Internet Protocol Director Unit (IPDU) computer units that are required. Check that all IPDU units have the IP forwarder activated, since in unit switchover scenarios the IP forwarder functionality might move between units.
5
Prevent IPDU switchover due to temporary interface problems during the migration procedure.
In the event of a double link failure on EL4 and EL5, a forced IPDU switchover would be performed. To prevent this during the migration, set a weight on the 3053 ETHERNET CONNECTION FAILURE alarm. A switchover occurs when a total weight of 64 is reached, so a weight of 1 practically means that this alarm alone never triggers a switchover.
Set the weight by entering the following command:
ZARA:C,S:3053,IPDU,::TYPE=OTYPE,INDEX=OINDEX,::1:N:;
6
Create the general IP configuration in the IPDUs, if it has not been created already. In the example below, two new IPDUs (IPDU-3 and IPDU-4) are created for the M3UA unit pool.
Sub-steps
a)
Internal interfaces for general setup:
1. ZQRN:IPDU,3::EL0::UP;
2. ZQRN:IPDU,3::EL1::UP;
3. ZQRN:IPDU,4::EL0::UP;
4. ZQRN:IPDU,4::EL1::UP;
5. ZQRA:IPDU,3::BOND0,EL0,EL1:;
6. ZQRA:IPDU,4::BOND0,EL0,EL1:;
7. ZQRA:IPDU,3&4::VLAN400:400:BOND0::UP;
8. ZQRA:IPDU,0&&2::VLAN400:400:BOND0::UP;
This step is applicable only if the configuration has not been created earlier.
9. ZQRN:IPDU,3::VLAN400:"169.254.0.4",23,L,::;
The physical IP addresses are needed only if not created earlier. A minimum of one IP address per IPDU is needed; however, the SIP load balancer needs more. Different load balancers might share the same physical IP address. Also create the other additional physical IP addresses as in Table 16: Internal LANs and IPv4 subnets in the example - multi-homing.
10. ZQRN:IPDU,4::VLAN400:"169.254.0.5",23,L,::;
11. ZQRA:IPDU,3&4::VLAN100,100,BOND0,,::UP:;
12. ZQRA:IPDU,3&4::VLAN200,200,BOND0,,::UP:;
These are existing VLANs, but new units here.
13. ZQRA:IPDU,3&4::VLAN300,300,BOND0,,::UP:;
b)
External interfaces for general setup:
1. ZQRA:IPDU,3::EL4::UP:;
2. ZQRA:IPDU,4::EL4::UP:;
3. ZQRA:IPDU,3::EL5::UP:;
4. ZQRA:IPDU,4::EL5::UP:;
5. ZQRA:IPDU,3&4::VLAN1100,1100,EL4,,::UP:;
(Existing IPDUs are expected to have these VLANs.)
6. ZQRA:IPDU,3&4::VLAN1200,1200,EL5,,::UP:;
7. ZQRA:IPDU,0&4::VLAN1140,1140,EL4,,::UP:;
8. ZQRA:IPDU,0&4::VLAN1240,1240,EL5,,::UP:;
9. ZQRA:IPDU,3&4::VLAN1300,1300,EL4,,::UP:;
(SH)
10. ZQRA:IPDU,3&4::VLAN1301,1300,EL5,,::UP:;
(SH)
11. ZQRA:IPDU,3&4::VLAN1340,1340,EL4,,::UP:;
(SH)
12. ZQRA:IPDU,3&4::VLAN1341,1340,EL5,,::UP:;
(SH)
w NOTICE: Once created, do not disable the EL4 or EL5 physical interfaces on IPDU
units, or any logical VLAN interfaces configured on top of them which have Virtual IP
(VIP) addresses assigned in both IPDU and GISU units—that is, do not set such
interfaces to Down state using the QRA MML command. If you do, this will have an
adverse effect on all of the interfaces and IP addresses configured on top of them and
may result in those interfaces and IP addresses being rendered unreachable by the
system kernel routing process.
If IPDU-based SIP and/or H.248 load balancing is used with SCTP multihoming (MH)
transport support, then disabling the SCTP MH primary path interface on an IPDU unit
will cause packet drops among internal backend traffic sent between IPDU and GISU
units over VLAN400.
If IPDUs are used for IP forwarding purposes rather than load balancing, and SRCNET
policy routes are configured, then disabling the active external interface (used for
reaching the next hop address) will cause the cached route to be disabled.
7 Verify the SS7 configuration according to Feature 1949: M3UA Load Balancer,
Feature Activation Manual in M-Release Feature Documentation using the NMM
command.
9 Configure the load balancing unit when no other LB exists in the IPDU prior to the
M3UA LB:
Sub-steps
a)
Define the external and internal interfaces to the spare unit:
ZJJC:CREATE:IPDU,2:;
b) Create an M3UA load balancer function in every IPDU that has M3UA LB:
ZJJC:<operation=CREATE|MODIFY>:<unit type>,<unit index>:INT=<internal interface>,EXT=<external interface>:NAME=<lbs name>;
ZJJC:CREATE:IPDU,3:NAME=M3UALBS0;
ZJJC:CREATE:IPDU,4:NAME=M3UALBS1;
g Note: The M3UA load balancer does not use the INT parameter, but "INT=VLAN400" must be given for the operability of the other load balancers. This parameter is shared between load balancers.
g Note: "EXT=EL4" shall be given in this command. However, the EL5 Ethernet port of the IPDU is also utilized by the M3UA LB in case of SCTP multi-homing.
10
Configure the load balancing unit when another LB already exists in the IPDU, for example, SIP LB:
Add the load balancer to the IPDUs:
ZJJC:MODIFY:IPDU,3:NAME=M3UALBS0;
ZJJC:MODIFY:IPDU,4:NAME=M3UALBS1;
See also the notes in the previous step.
11
Verify the created load balancer configuration with the JJI command:
ZJJI:TYPE=UNIT:UTYPE=IPDU,;
INT
UNIT INDEX INTERFACE LBS NAME
---- ----- --------- --------
IPDU 0 VLAN400 H248LB0
LBS-MGCFSIPI
M3UALBS0
IPDU 1 VLAN400 H248LB1
IPDU 3 VLAN400 M3UALBS1
IPDU 2 VLAN400
COMMAND EXECUTED
12
SCTP MH: Create host-based static IP routes in the site routers for the individual IP addresses that use the IP forwarder prior to the M3UA LB migration. The site routers route the SCTP/M3UA traffic to the IP forwarding units until a particular GISU is migrated.
In this example, 10.22.95.2 is the IPDU-0 (IP forwarder) uplink IP address to router
CE-1. 10.22.96.2 is the IPDU-1 (IP forwarder) uplink IP address to router CE-2.
Also non-M3UA IP addresses must be routed this way if they are in the same subnet.
Sub-steps
a)
Cisco OSR:
CE-1:
ip route 10.23.151.2 255.255.255.255 10.22.95.2 name MSS63_IPDU0
…
ip route 10.23.151.52 255.255.255.255 10.22.95.2 name MSS63_IPDU0
CE-2:
ip route 10.23.152.2 255.255.255.255 10.22.96.2 name MSS63_IPDU1
…
ip route 10.23.152.52 255.255.255.255 10.22.96.2 name MSS63_IPDU1
b)
Juniper:
CE-1:
set routing-options static route 10.23.151.2/32 next-hop 10.22.95.2
…
set routing-options static route 10.23.151.52/32 next-hop 10.22.95.2
CE-2:
set routing-options static route 10.23.152.2/32 next-hop 10.22.96.2
…
set routing-options static route 10.23.152.52/32 next-hop 10.22.96.2
13
SCTP SH: Create host-based static IP routes in the site routers for the individual IP addresses that use the IP forwarder prior to the M3UA LB migration. The site routers route the SCTP/M3UA traffic to the IP forwarding units until a particular GISU is migrated.
In this example, 10.22.97.20 is the IPDU-0 (IP forwarder) uplink IP address to router
CE-1. 10.22.97.33 is the IPDU-1 (IP forwarder) uplink IP address to router CE-2.
Sub-steps
a)
Cisco OSR:
CE-1:
ip route 10.23.160.2 255.255.255.255 10.22.97.20 name MSS63_IPDU0_SH1
….
ip route 10.23.160.52 255.255.255.255 10.22.97.20 name MSS63_IPDU0_SH1 <until last unit>
CE-2:
ip route 10.23.160.65 255.255.255.255 10.22.97.33 name MSS63_IPDU1_SH2
….
ip route 10.23.160.115 255.255.255.255 10.22.97.33 name MSS63_IPDU1_SH2 <until last unit>
b)
Juniper:
CE-1:
set routing-options static route 10.23.160.2/32 next-hop 10.22.97.20
….
set routing-options static route 10.23.160.52/32 next-hop 10.22.97.20 <until last unit>
CE-2:
set routing-options static route 10.23.160.65/32 next-hop 10.22.97.33
….
set routing-options static route 10.23.160.115/32 next-hop 10.22.97.33 <until last unit>
14
SCTP MH: Create the SCTP VLANs for M3UA in the site routers, especially for the new units. 10.22.95.1 is the Cisco port IP address of site router CE-1, and 10.22.96.1 is the Cisco port IP address of site router CE-2. Changes compared to the IP forwarder configuration are highlighted in bold font. The end result on Cisco OSR is similar to the following (Juniper commands are in ANNEX D: Juniper commands for SCTP MH).
***site router 1
vlan 1100
name SCTP_MH1
interface Vlan1100
description SCTP_MH1
ip address 10.22.95.1 255.255.255.240
end
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1100
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
interface GigabitEthernet3/10
description IPDU-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1100
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
interface GigabitEthernet3/11
description IPDU-4
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1100
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
vlan 1140
name SCTP_MH1-1140
interface Vlan1140
description SCTP_MH1-1140
ip address 10.23.151.60 255.255.255.192
end
interface GigabitEthernet3/10
description IPDU-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1140
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
interface GigabitEthernet3/11
description IPDU-4
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1140
switchport mode trunk
switchport nonegotiate
***site router 2
vlan 1200
name SCTP_MH2
interface Vlan1200
description SCTP_MH2
ip address 10.22.96.1 255.255.255.240
end
interface GigabitEthernet3/10
description IPDU-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1200
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
interface GigabitEthernet3/11
description IPDU-4
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1200
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
vlan 1240
name SCTP_MH2-1240
interface Vlan1240
description SCTP_MH2-1240
ip address 10.23.152.60 255.255.255.192
end
interface GigabitEthernet3/10
description IPDU-3
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1240
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
interface GigabitEthernet3/11
description IPDU-4
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 1240
switchport mode trunk
switchport nonegotiate
spanning-tree portfast trunk
no cdp enable
15
SCTP SH: Create the SCTP VLANs for M3UA in the site routers. The commands below are for the Cisco 7600 series OSR; the Juniper commands are in ANNEX E: Juniper commands for SCTP SH. VLAN1300 is the same as in the IP forwarder migration part, except that interfaces for the new IPDUs must be added.
### CE1 SCTP SH
vlan 1340
name SCTP_SH1
interface Vlan1340
description SCTP_SH1
ip address 10.23.160.125 255.255.255.192 secondary
ip address 10.23.160.61 255.255.255.192 **
standby version 2
standby 100 ip 10.23.160.60 ** HSRP
standby 100 timers 1 3
standby 100 priority 110
standby 100 preempt
standby 100 name SCTP_SH_1_1
### CE2 SCTP SH
vlan 1340
name SCTP_SH1
interface Vlan1340
description SCTP_SH1
ip address 10.23.160.126 255.255.255.192 secondary
ip address 10.23.160.62 255.255.255.192 **
standby version 2
standby 100 ip 10.23.160.60 ** HSRP
standby 100 timers 1 3
standby 100 priority 95
standby 100 preempt
standby 100 name SCTP_SH_1_1
16 Delete all the SCTP associations of the particular GISU, a different GISU in each round.
g Note: In the commands below, the RNC is used as an example of a case where the MSS is the SCTP server. HLR5 is used as an example where the MSS is the multi-homed SCTP client. I-BTS is the single-homed SCTP client with two associations.
Sub-steps
a)
SCTP MH:
ZOYS:-M3UA:HLR5,31:DOWN;
ZOYR:-HLR5:31;
or
ZOYS:-M3UA:RNC2,0:DOWN;
ZOYR:-RNC2:0;
b)
SCTP SH GISU-0: // To delete GISU-0 associations.
ZOYS:-M3UA:IBTS08,0:DOWN;
ZOYR:-IBTS08:0;
c)
SCTP SH GISU-2: // This is at different round of the loop than GISU-0.
ZOYS:-M3UA:IBTS08,1:DOWN;
ZOYR:-IBTS08:1;
17 Delete those associations which will be removed completely in the peer element.
This applies to the case when the number of SCTP associations is decreased.
18
Delete the SCTP-related IP routes from the GISU and remove the SCTP source
addresses:
Sub-steps
a)
Interrogate first all the default logical gateways:
ZQKO:;
b)
Then delete the logical route from the particular GISU:
ZQKP:<route number>:<unit type>,[<unit index...>|[<unit group>,<unit index...>]]:<plug-in unit type>,<plug-in unit index>;
ZQKP:8:GISU,0;
c)
Remove the SCTP source addresses from the GISU under migration.
ZOYG:GISU,0:IPv4:;
20
Remove the SCTP IP addresses from the particular GISU (GISUs are moved one by
one).
g Note: Each GISU must have either a minimum of one logical IP address, or a TDM
resource. Otherwise, a GISU would not stay in WO-EX state.
Sub-steps
a)
SCTP MH:
ZQRG:GISU,0::VLAN100:"10.23.151.2";
(primary path)
ZQRG:GISU,0::VLAN200:"10.23.152.2";
(secondary path)
b)
SCTP SH, GISU-0:
ZQRG:GISU,0::VLAN300:"10.23.160.2";
(SCTP association 1)
c)
SCTP SH, GISU-2:
ZQRG:GISU,2::VLAN300:"10.23.160.67";
(SCTP association 2)
21
Cisco OSR routers: Remove the host-based static IP routes for the IP addresses of this GISU.
Sub-steps
a)
SCTP MH:
CE-1:
no ip route 10.23.151.2 255.255.255.255 10.22.95.2 name MSS63_IPDU0
CE-2:
no ip route 10.23.152.2 255.255.255.255 10.22.96.2 name MSS63_IPDU1
b)
SCTP SH, GISU-0 association 1:
CE-1:
no ip route 10.23.160.2 255.255.255.255 10.22.97.20 name MSS63_IPDU0_SH1
c)
SCTP SH, GISU-2 association 2:
CE-2:
no ip route 10.23.160.67 255.255.255.255 10.22.97.33 name MSS63_IPDU1_SH2
22
Create the GISU-specific IP configuration in the IPDU into which the SCTP association is moved. In IPDU-3, a configuration is created that relates to this particular GISU. The next GISU's M3UA associations might be moved to IPDU-4 and its IP address as well.
Sub-steps
a)
IPDU-3 specific configuration changes for SCTP MH:
1. ZQRN:IPDU,3::VLAN1140:"10.23.151.2",26,L,::; // If H.248 LB uses the same address, then L,V
2. ZQRN:IPDU,3::VLAN1240:"10.23.152.2",26,L,::; // If H.248 LB uses the same address, then L,V
3. ZQKM:IPDU,3::"10.23.151.2":"10.22.95.1":LOG:;
Site router CE-1 is set as the default GW.
4. ZQKM:IPDU,3::"10.23.152.2":"10.22.96.1":LOG:;
Site router CE-2 is set as the default GW.
5. Wait until IPDU-3 comes to the WO-EX state.
b)
IPDU-3 specific configuration changes for SCTP SH (first SCTP association of GISU-0):
1. ZQRN:IPDU,3::VLAN1340,VLAN1341:"10.23.160.2",26,L,:; // If H.248 LB uses the same address, then L,V
2. ZQRP:IPDU,3::VLAN1340:"10.23.160.2":PRI:;
(Prioritized: el4 ***)
3. ZQKM:IPDU,3::"10.23.160.2":"10.22.97.17":LOG:;
(HSRP1)
c)
IPDU-4 specific configuration changes for SCTP SH (second SCTP association of GISU-2 when that GISU is under processing):
1. ZQRN:IPDU,4::VLAN1341,VLAN1340:"10.23.160.67",28,L,:;
2. ZQRP:IPDU,4::VLAN1341:"10.23.160.67":PRI:;
(Prioritized: el5 ***)
3. ZQKM:IPDU,4::"10.23.160.67":"10.22.97.44":LOG:;
(HSRP2)
4. Wait until IPDU-4 comes to the WO-EX state.
24
Re-create the SCTP association using the load balancer. It is important to use the existing parameter set. Using the default stream count is recommended. This command creates a new association index within the association set; use that index in the next step.
g Note: If the original indices were 0 and 1, after the sequence they become 0 and 2. This has no practical impact, though.
g Note: If the association set already has 32 associations, one of the migrated associations must be deleted first.
Sub-steps
a)
SCTP MH:
ZOYA:RNC2:IPDU,3,RANAPS0::;
or
ZOYA:HLR5:IPDU,3,MAPS0::;
b) SCTP SH GISU-0:
ZOYA:IBTS08:IPDU,3,RANAPS0::;
c) SCTP SH GISU-2:
ZOYA:IBTS08:IPDU,4,RANAPS0::;
25
Set the IP addresses and port numbers for the newly created SCTP association.
ZOYP:<SCTP user>:<association set name>,<association index>:<source address 1>,[source address 2],[source port]:<primary destination address>,[netmask/prefix],[secondary destination address],[netmask/prefix]:[destination port];
Sub-steps
a)
SCTP MH:
ZOYP:M3UA:RNC2,2:"10.23.151.2","10.23.152.2",2905:"10.50.51.52","10.50.51.0/24","10.50.100.52","10.50.100.0/24":2905;
or
ZOYP:M3UA:HLR5,2:"10.23.151.2","10.23.152.2",2905:"10.60.51.52","10.60.51.0/24","10.60.100.88","10.60.100.0/24":2905;
b)
SCTP SH (GISU-0):
ZOYP:M3UA:IBTS08,2:"10.23.160.2",,2905:"10.70.51.44","10.70.51.0/24",,:2905;
c) SCTP SH (GISU-2):
ZOYP:M3UA:IBTS08,0:"10.23.160.67",,2905:"10.70.52.44","10.70.52.0/24",,:2905;
26 SCTP SH: Set the activation delay parameter in the association set if relevant. This step is relevant for the I-HSPA flat architecture's overload control. The delay must be set for the lower-priority links, such as I-BTS connections, while, for example, the HLR links must be established as soon as possible without delay. The parameter helps establish vital connections in case of MSS overload.
The setting must be performed when the SCTP association is down.
ZOYM:<association set name>:DELAY=<delay>;
ZOYM:IBTS08:DELAY=5;
27 SCTP SH: Set the following parameter for the flat architecture. The command must be given for each peer element's point code. When it is set, the MSS does not try to send any network management messages (DUNA, DAVA, and so on) to that point code, or send any network management messages concerning it to other signaling points. This helps the recovery speed significantly at system restart. This command can be given for directly connected elements only; there must not be any signaling transfer point (STP) in between.
ZNRB:IN0,D'10000:REST=R;
or, for a range:
ZNRB:IN0,D'10000&&D'16000:REST=R;
28
Activate the new SCTP associations.
Sub-steps
a)
SCTP MH:
ZOYS:-M3UA:-RNC2,2:ACT;
ZOYS:-M3UA:-HLR5,2:ACT;
b)
SCTP SH:
ZOYS:-M3UA:-IBTS08,2:ACT;
ZOYS:-M3UA:-IBTS08,0:ACT;
29
Check the state of the new SCTP association.
ZOYI:[[NAME=<association set name>|NBR=<association set number>...]|<all> def]:[A|H|H def];
Sub-steps
a)
SCTP MH:
ZOYI:NAME=RNC2::;
ZOYI:NAME=HLR5::;
b) SCTP SH:
ZOYI:NAME=IBTS08::;
30
Check the alarms. There must not be any new alarms about the SCTP association or the signaling link.
ZAHO:[[<unit type>|<all> def],[[<stage>|<pair>]|<all> def],[<unit index>|<all> def]]:[[CLS=<alarm class>...|NR=<alarm number>...]|<all> def];
ZAHO:;
31
Modify the SCTP association in the remote element. This step is not needed if the
own end IP address and port number are unchanged, and the number of SCTP
associations is not reduced.
32
Repeat Steps 7 - 30 for the next migrated GISU. Nokia recommends that the SCTP associations of each association set be spread across different IPDUs and M3UA LBs in order to achieve resiliency.
33
This step is relevant if the GISU-based M3UA LB is in use:
Once all the SCTP associations have been removed from a GISU unit, change the GISU's C-AMTA definition to S-AMTA. This enables the unit to be utilized for call processing.
ZUST:GISU,0:S-AMTA;
34
After all the M3UA associations have been removed from all GISU units, deactivate the old GISU-based M3UA LB feature and ensure that S-AMTA is also cleared from all GISU units. This takes AMTA out of use completely.
ZUST:GISU,:C-AMTA;
35
When everything works fine and there are no other users for these IP forwarder VLANs (for example, H.248), delete the M3UA IP addresses from the GISUs and the GW addresses from the IPDUs:
SCTP MH:
ZQRG:IPDU,0::VLAN100:"10.23.151.1";
(IP forwarder GW address for the primary path)
ZQRG:GISU,0::VLAN100:10.23.151.2;
ZQRG:GISU,1::VLAN100:10.23.151.3;
ZQRG:GISU,2::VLAN100:10.23.151.4;
and so on.
ZQRG:IPDU,1::VLAN200:"10.23.152.1";
(IP forwarder GW address for the secondary path)
ZQRG:GISU,0::VLAN200:10.23.152.2;
ZQRG:GISU,1::VLAN200:10.23.152.3;
ZQRG:GISU,2::VLAN200:10.23.152.4;
and so on.
SCTP SH:
ZQRG:IPDU,0::VLAN300:"10.23.160.1";
ZQRG:IPDU,1::VLAN300:"10.23.160.65";
ZQRG:GISU,2::VLAN300:"10.23.160.67";
36
This step is relevant if the GISU-based M3UA LB is in use:
Add the former C-AMTA units to the SIP load balancing group(s) and to the applicable H.248 unit pools.
37
Clean up the IP forwarder. If there are no other users for these IP forwarder VLANs (for example, H.248), delete the VLAN interfaces:
Sub-steps
a)
SCTP MH:
ZQRG:IPDU,0::VLAN100::;
ZQRG:IPDU,1::VLAN100::;
ZQRG:IPDU,2::VLAN100::;
ZQRG:IPDU,3::VLAN100::;
ZQRG:IPDU,4::VLAN100::;
ZQRG:IPDU,5::VLAN100::;
ZQRG:IPDU,0::VLAN200::;
ZQRG:IPDU,1::VLAN200::;
ZQRG:IPDU,2::VLAN200::;
ZQRG:IPDU,3::VLAN200::;
ZQRG:IPDU,4::VLAN200::;
ZQRG:IPDU,5::VLAN200::;
ZQRG:GISU,0::VLAN100::;
Repeat this step for all GISUs.
ZQRG:GISU,0::VLAN200::;
Repeat this step for all GISUs.
38
Re-enable IPDU switchover on the 3053 alarm by cancelling the rule created earlier.
You can do this by entering the following command:
ZARD:C,S:3053,IPDU,::TYPE=OTYPE,INDEX=OINDEX,:;
### CE-1 IP_FORWARDER
interface BVI1100
description SCTP_PRI
ipv4 address 10.22.95.1 255.255.255.240
l2vpn
bridge group ce1ce2
bridge-domain vlan1100
interface GigabitEthernet0/2/0/6.1100
interface GigabitEthernet0/2/0/7.1100
interface GigabitEthernet0/2/0/8.1100
interface GigabitEthernet0/2/0/9.1100
interface GigabitEthernet0/2/0/10.1100
interface GigabitEthernet0/2/0/6.1140 l2transport
description MSS63_SH_IPDU0_EL4
encapsulation dot1q 1140
rewrite ingress tag pop 1 symmetric
interface GigabitEthernet0/2/0/7.1140 l2transport
description MSS63_SH_IPDU1_EL4
encapsulation dot1q 1140
rewrite ingress tag pop 1 symmetric
interface GigabitEthernet0/2/0/8.1140 l2transport
description MSS63_SH_IPDU2_EL4
encapsulation dot1q 1140
rewrite ingress tag pop 1 symmetric
interface GigabitEthernet0/2/0/9.1140 l2transport
description MSS63_SH_IPDU3_EL4
encapsulation dot1q 1140
rewrite ingress tag pop 1 symmetric
interface GigabitEthernet0/2/0/10.1140 l2transport
description MSS63_SH_IPDU4_EL4
encapsulation dot1q 1140
rewrite ingress tag pop 1 symmetric
interface BVI1140
description SCTP_PRI
ipv4 address 10.23.151.60 255.255.255.192
l2vpn
bridge group ce1ce2
bridge-domain vlan1140
interface GigabitEthernet0/2/0/6.1140
interface GigabitEthernet0/2/0/7.1140
interface GigabitEthernet0/2/0/8.1140
interface GigabitEthernet0/2/0/9.1140
interface GigabitEthernet0/2/0/10.1140
routed interface BVI1140
!
!!
### CE-2 IP_FORWARDER
interface GigabitEthernet0/2/0/6.1200 l2transport
description MSS63_IPDU0_EL5
encapsulation dot1q 1200
rewrite ingress tag pop 1 symmetric
interface GigabitEthernet0/2/0/7.1200 l2transport
description MSS63_IPDU1_EL5
encapsulation dot1q 1200
rewrite ingress tag pop 1 symmetric
interface GigabitEthernet0/2/0/8.1200 l2transport
description MSS63_IPDU2_EL5
encapsulation dot1q 1200
rewrite ingress tag pop 1 symmetric
interface GigabitEthernet0/2/0/10.1200 l2transport
description MSS63_IPDU4_EL5
encapsulation dot1q 1200
rewrite ingress tag pop 1 symmetric
interface BVI1200
description SCTP_SEC
ipv4 address 10.22.96.1 255.255.255.240
l2vpn
bridge group ce1ce2
bridge-domain vlan1200
interface GigabitEthernet0/2/0/6.1200
interface GigabitEthernet0/2/0/7.1200
interface GigabitEthernet0/2/0/8.1200
interface GigabitEthernet0/2/0/9.1200
interface GigabitEthernet0/2/0/10.1200
routed interface BVI1200
!
!!
interface GigabitEthernet0/2/0/9.1240 l2transport
description MSS63_SH_IPDU3_EL5
encapsulation dot1q 1240
rewrite ingress tag pop 1 symmetric
interface GigabitEthernet0/2/0/10.1240 l2transport
description MSS63_SH_IPDU4_EL5
encapsulation dot1q 1240
rewrite ingress tag pop 1 symmetric
interface BVI1240
description SCTP_SEC
ipv4 address 10.23.152.60 255.255.255.192
l2vpn
bridge group ce1ce2
bridge-domain vlan1240
interface GigabitEthernet0/2/0/9.1240
interface GigabitEthernet0/2/0/10.1240
routed interface BVI1240
!
!!
### The following may have been performed already in IP forwarder migration:
set interfaces vlan unit 1300 family inet address 10.22.97.45/29 vrrp-group 130 virtual-address 10.22.97.44
set interfaces vlan unit 1300 family inet address 10.22.97.45/29 vrrp-group 130 priority 80
set interfaces vlan unit 1300 family inet address 10.22.97.45/29 vrrp-group 130 fast-interval 200
set interfaces vlan unit 1300 family inet address 10.22.97.45/29 vrrp-group 130 accept-data
set interfaces vlan unit 1340 family inet address 10.23.160.125/29 vrrp-group 130 virtual-address 10.23.160.124
set interfaces vlan unit 1340 family inet address 10.23.160.125/29 vrrp-group 130 priority 80
set interfaces vlan unit 1340 family inet address 10.23.160.125/29 vrrp-group 130 fast-interval 200
set interfaces vlan unit 1340 family inet address 10.23.160.125/29 vrrp-group 130 accept-data
set interfaces vlan unit 1300 family inet address 10.22.97.46/29 vrrp-group 130 virtual-address 10.22.97.44
set interfaces vlan unit 1300 family inet address 10.22.97.46/29 vrrp-group 130 priority 100
set interfaces vlan unit 1300 family inet address 10.22.97.46/29 vrrp-group 130 fast-interval 200
set interfaces vlan unit 1300 family inet address 10.22.97.46/29 vrrp-group 130 preempt hold-time 30
set interfaces vlan unit 1300 family inet address 10.22.97.46/29 vrrp-group 130 accept-data
set interfaces vlan unit 1340 family inet address 10.23.160.126/29 vrrp-group 130 virtual-address 10.23.160.124
set interfaces vlan unit 1340 family inet address 10.23.160.126/29 vrrp-group 130 priority 100
set interfaces vlan unit 1340 family inet address 10.23.160.126/29 vrrp-group 130 fast-interval 200
set interfaces vlan unit 1340 family inet address 10.23.160.126/29 vrrp-group 130 accept-data
16.1 Purpose
The CS fallback (CSFB) overlay concept is introduced in the initial phase of LTE/4G, when the number of LTE/4G access subscribers is likely to be moderate compared to the total number of subscribers in the network and, therefore, not all MSC servers may support CSFB. However, since the number of LTE/4G subscribers is expected to increase rapidly, it is vital to provide the MSS pool benefits for the LTE/4G access as well.
Terms
NNSF: NAS Node Selection Function
Parallel MSS: MSSs that belong to the same pool are referred to as "parallel MSS".
Secondary MSS: The radio configuration for the BTS, RNC/BSC, LAC and Net-LAC in the primary MSS is available to be exported to all the other MSSs. The MSSs that import this configuration are referred to as "secondary MSS".
• Multipoint A and Iu configuration has been finalized. All LACs are part of the required
pool, that is, in which they are planned to be included.
• Free TMSI NRI values are available for the overlay MSS, or some of the existing NRI
values can be moved to the overlay MSS. Change of NRI length should be avoided.
• CSFB capable MSSs
– MSS based on DX200: M15.1 software level is required in the overlay MSS. In
the non-overlay MSS M15.0 software level is the absolute minimum. However,
M15.1 is recommended.
– Open MSS: ATCA-based Open MSS shall have at least M16.1 software level
installed.
– Feature codes 1691 (basic CSFB), 1935 (full CSFB) and feature code 3602
(overlay CSFB) are needed in the overlay MSS. However, feature code 3602 is
not needed in non-overlay MSSs.
– In addition, CS Fallback Rel9 Enhancements - Phase 1 (feature code 3440) and
PSI support for CSFB (feature code 3459), may be used in the overlay MSS.
• All MSSs of the pooling area shall support Multipoint A and/or Iu, depending on which access is using the pool. Feature codes 1694 (multipoint advanced kit, optional but highly recommended), 1693 (Multipoint A), and 1692 (Multipoint Iu) are needed. See
Feature 1449: Multipoint Iu in MSS Concept, Feature Activation Manual and Feature
1564: Multipoint A Interface, Feature Activation Manual both in M-release product
documentation.
• MME shall support pooling in MSS.
• MME shall have all the necessary LTE/4G LACs pre-configured to support the final
phase.
• Backup of configuration files has been performed in all related network elements.
[Figure: MSS pool areas A and B - MSSs with NRI values 1, 2 and 3, connected to the MME, SGSN, UE, LTE access, RNC and BSC.]
Option 2
The overlay MSS is inside the 2G/3G MSS pool area but outside the 4G pool; thus, it is to be migrated to the MSS 4G pool (MSS pool area B in the figure).
[Figure: Option 2 - the overlay MSS inside the 2G/3G MSS pool area A (MSS1 NRI=1, MSS2 NRI=2, MSS4 NRI=4) is migrated to the MSS 4G pool area B (MSS3 NRI=3), with the MME, SGSN, UE, LTE access, RNC and BSC.]
1
License activation
Sub-steps
a)
Overlay MSS (the CSFB features are supposed to be active):
ZW7M:FEA=1692:ON:; // Multipoint Iu
ZW7M:FEA=1693:ON:; // Multipoint A
ZW7M:FEA=1694:ON:; // Multipoint advanced kit
ZWOA:2,1651,A; // NNSF_IN_MGW - if RAN independent multipoint is in use
b)
Non-overlay MSS, mandatory features:
ZW7M:FEA=1691:ON:; // Basic CSFB
ZW7M:FEA=1935:ON:; // Full CSFB
c)
Non-overlay MSS, recommended features:
ZW7M:FEA=3440:ON; // CS Fallback Rel9 Enhancements - Phase 1
ZW7M:FEA=3459:ON; // PSI support for CSFB
ZWOC:2,1975,FF; // PSI support for CSFB activation
2
In the overlay MSS: Define the MSS Global Core Network ID for Multipoint Iu.
ZWVS:CNID=<GlobalCNid>;
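A hypothetical example (the Global CN ID value is illustrative, not a recommended value):
ZWVS:CNID=100; // 100 is an illustrative value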
3
In the overlay MSS: Configure the pool area name and the NRI length.
ZE3M:POOLNAME=<poolname>,NRILEN=<nrilen>;
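For illustration, with a hypothetical pool name and NRI length (replace with the planned values):
ZE3M:POOLNAME=POOLB,NRILEN=4; // POOLB and 4 are illustrative values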
4
In the overlay MSS: Add the MSS to the pool area and configure the NRI value. The CONFSEL parameter defines whether this is the primary MSS of the pool; the overlay MSS is unlikely to be the primary MSS.
ZE3A:TYPE=OWN,MSSNAME=<mssname>:CONFSEL=<confsel>,NRI=<nrivalue>:;
If RAN independent Multipoint is used, see also Feature 1778: RAN Independent Multipoint A/Iu Support in MGW, Feature Activation Manual in M-release product documentation.
5 In the overlay MSS: Set the TMSI reallocation to "ON" in the location update with new visitor, IMSI attach, location update, and periodic location update.
ZMXN:NAME=<PLMN_name>::TNEW=Y,TIMSI=Y,TLOC=<locup_itv>,TPER=<per_itv>;
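For illustration, with a hypothetical PLMN name and interval values (replace with the planned values):
ZMXN:NAME=PLMN1::TNEW=Y,TIMSI=Y,TLOC=240,TPER=240; // PLMN1 and the 240 intervals are illustrative values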
7
In the overlay MSS: Add information about the parallel MSSs, one set of commands per pool MSS.
ZE3E:MSSNAME=<MSS_name>:NBLAC=<maintlac>,NBMCC=<maintmcc>,NBMNC=<maintmnc>;
ZE3A:TYPE=PAR,MSSNAME=<MSS_name>:NRI=<nrivalue>:VDIG=<vlr_gt>,VNI=<vlr_network_indicator>,VSPC=<vlr_spc>:MDIG=<mss_gt>,MNI=<mss_network_indicator>,MSPC=<mss_spc>;
8
In the overlay MSS: Create neighboring pool areas:
ZE3C:POOLNAME=<poolname>,NRILEN=<nrilength>;
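For illustration, with a hypothetical neighboring pool name and NRI length (replace with the planned values):
ZE3C:POOLNAME=POOLC,NRILEN=4; // POOLC and 4 are illustrative values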
9
In the overlay MSS: Add MSSs to the neighboring pool area:
ZE3L:POOLNAME=<poolname>:MSSNAME=<mssname>,NRI=<nrivalue>:VDIG=<vlr_gt>:MDIG=<mss_gt>:;
10
In the overlay MSS: Check that authentication and ciphering are activated. See
Feature 1449: Multipoint Iu in MSS Concept, Feature Activation Manual and Feature
1564: Multipoint A Interface, Feature Activation Manual both in M-release product
documentation.
11 Bring in radio network configuration from the pool MSSs (non-overlay) to the overlay
MSS. Pool area configurations can be modified either manually, or by using the
export and import procedures. The method used depends on the number of changes
and the already existing pool configuration. Since the entire pool configuration needs
to be copied, the import/export procedure is recommended for the 2G/3G
configuration. See Feature 1449: Multipoint Iu in MSS Concept, Feature Activation
Manual and Feature 1564: Multipoint A Interface, Feature Activation Manual both in
M-release product documentation.
Sub-steps
a)
In the primary MSS of the pool: Verify that the pool configuration is correct. The LTE/4G LAC shall not be part of the pool configuration in this phase. (Use the ZELL and ZELO MML commands.)
b) In the overlay MSS: Check that the LTE/4G LAC is not part of the pool configuration (use the ZELO MML command to check). If, by chance, it is marked as part of a pool, exclude it with the ZELT MML command.
c)
In the primary MSS and overlay MSS: Export & import radio configuration of the
pool as described in Feature 1449: Multipoint Iu in MSS Concept, Feature
Activation Manual and Feature 1564: Multipoint A Interface, Feature Activation
Manual. Alternatively, the “NetAct Multipoint Configuration Assistant” tool can be
used.
• Export radio network configuration from the pooled non-overlay primary MSS
with the ZE3X MML command.
• Import radio network configuration to the overlay MSS (secondary MSS) with
the ZE3Y MML command.
• Activate the new radio network configuration in the overlay MSS with the ZE3V MML command.
d)
In the overlay MSS (which is not assumed to be the primary MSS because the
pool already exists): Configure user plane routing, codec information, control
plane routing, and so on towards the MSSs in the pool as designed in network
planning. The export / import procedure only copies the radio configuration.
NetAct “Core Configurator” and “CM Reference” tools can be utilized to create
the needed configuration.
e)
In the non-overlay MSSs: Configure user plane routing, codec information,
control plane routing, and so on towards the overlay MSS as designed in network
planning. Some specific user plane configuration may be inherited from the
overlay MSS. NetAct “Core Configurator” and “CM Reference” tools can be
utilized to create the needed configuration.
12
In the non-overlay MSSs: Add the overlay MSS as a parallel MSS to the pool (pool A in the figure):
ZE3E:MSSNAME=<MSS_name>:NBLAC=<maintlac>,NBMCC=<maintmcc>,NBMNC=<maintmnc>;
ZE3A:TYPE=PAR,MSSNAME=<MSS_name>:NRI=<nrivalue>:VDIG=<vlr_gt>,VNI=<vlr_network_indicator>,VSPC=<vlr_spc>:MDIG=<mss_gt>,MNI=<mss_network_indicator>,MSPC=<mss_spc>;
Step example
Option 2 (CSFB overlay MSS inside 2G/3G pool, but not in 4G pool) starts here:
Sub-steps
a) In the non-overlay MSS: Remove LTE/4G LACs from the neighboring location area.
ZEIR:<lac>:[MCC=<mcc>,MNC=<mnc>];
b) Create LTE/4G LAC(s). A separate set of LAIs is recommended for LTE/4G.
ZELC:NAME=<lac name>,LAC=<lac>:RNAME=<reference location area>;
ZELT:TYPE=LA,NAME=<laname>:INC=Y;
When determining whether a LAC should be inside or outside of a pool, follow the MME configuration. That is, if a LAC is served by multiple MSSs, then the LAC should be in the pool in those MSSs as well.
d) Create MME connections. See section Creating a cellular radio network in Cellular Radio Network Management.
14 In the overlay MSS: Define 4G LACs to be inside of pools.
ZELT:TYPE=LA,NAME=<laname>:INC=Y;
This should follow the MME configuration. That is, if a LAC is served by multiple MSSs, then the LAC should be in the pool. See also the related note in the previous step.
15 For setting up a cellular radio network, see section Creating a cellular radio network in Cellular Radio Network Management. You can use the following NetAct tools:
• Multipoint Configuration Assistant
• Core Configurator
• CM Reference
Sub-steps
a) Create user plane destinations (see User plane routing in M-release Product Documentation). Connect the overlay MSS to the same set of pMGWs that the pool MSSs were using. Also, the pool MSSs shall have access to the pMGWs that the overlay MSS had.
c) Create and activate routing connections for each BSC using the RCC (for more information, see Creating circuit groups and routes in M-release Product Documentation), and for the RNCs behind each MGW (for more information, see User plane routing in M-release Product Documentation).
d) Create SGSN connections.
e) Arrange LAs into zone codes.
f) Change the administrative state of BSCs, RNCs and BTSs per service area.
16 In the MME (detailed instructions should be obtained from the MME vendor):
Sub-steps
a) Create site connectivity.
b) Enable MSS multipoint.
c) Set the SGs addresses of the MSSs.
d) Define the TAI-LAI-MSC mapping.
17 In the non-overlay MSSs: Configure Feature 1914: CS Fallback in EPS for MSS, including the configuration of SGs interfaces. The steps below are further described in Feature 1914: CS Fallback in EPS for MSS, Feature Activation Manual in M-release Product Documentation.
Sub-steps
a) For MSS/VLR change support during a location query (PSI), add the SCP subsystem to the local signaling point of the MSS itself. If CSFB to a different MSS happens due to PSI, the MSS sends ATI to the HLR as an "SCP":
ZNFB:NA0,<own_spc>:93,SCP,0;
b) Change the SCP subsystem state to AV-EX:
ZNHC:NA0,<own_spc>:93:ACT;
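To verify the resulting subsystem state, an interrogation command of the same NH command group can be used. A hedged example, assuming ZNHI is the interrogation counterpart of ZNHC:
ZNHI:NA0,<own_spc>:93;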
c) Configure the SGs interface and create the SCTP association:
ZOYX:MME1:SGSAP:S:GISU,0:SGS:16;
d) Configure the MME into the MSS:
ZEJB:NAME=MME1:ADDR="10.102.178.219",FQDN="mmec11.mmegi22.mme.epc.mnc33.mcc444.3gppnetwork.org",PIND=0:;
ZOYS:SGSAP:MME1:ACT:;
g) (Optional step) Configure the charging-CDR generation parameter for Location Area Update (LAU) over the SGs interface:
ZGTM:LOCA:12,ON:;
h) (Optional step) Enable location update CDR generation with the LUCDR parameter on VLR level, if so desired:
ZMXM:LUCDR=Y;
j) (Optional step) Configure the CSMT guarding timer. The default is 5 seconds. For details, see Feature 1914: CS Fallback in EPS for MSS, Feature Activation Manual in M-release Product Documentation. A generic hedged command sketch for these optional parameters is given after sub-step t).
k) (Optional step) Configure the SGs cause list for A/Iu re-paging. For details, see Feature 1914: CS Fallback in EPS for MSS, Feature Activation Manual in M-release Product Documentation.
l) (Optional step) Set the PSI pre-page control timer. For details, see Feature 1914: CS Fallback in EPS for MSS, Feature Activation Manual in M-release Product Documentation.
m) (Optional step) Set the Roaming Retry delay timer in MAP SendIdentification. For
details, see Feature 1914: CS Fallback in EPS for MSS, Feature Activation
Manual in M-release Product Documentation.
n) Set ATI delay timer. For details, see Feature 1914: CS Fallback in EPS for MSS,
Feature Activation Manual in M-release Product Documentation.
o) Activate PSI search over the SGs interface. For details, see Feature 1914: CS Fallback in EPS for MSS, Feature Activation Manual in M-release Product Documentation.
p) Configure the sending method for PSI paging over the SGs interface. For details, see Feature 1914: CS Fallback in EPS for MSS, Feature Activation Manual in M-release Product Documentation.
q) Set AGELOC time. For details, see Feature 1914: CS Fallback in EPS for MSS,
Feature Activation Manual in M-release Product Documentation.
r) Configure CS Fallback - Rel9 baseline update. For details, see Feature 1914: CS
Fallback in EPS for MSS, Feature Activation Manual in M-release Product
Documentation.
s) Set Call Forwarding on No Reply (CFNRy) if no final paging response is received after SGsAP-SERVICE-REQUEST. For details, see Feature 1914: CS Fallback in EPS for MSS, Feature Activation Manual in M-release Product Documentation.
t) Configure the SGs Address Complete Message (ACM) sending timer. For details, see Feature 1914: CS Fallback in EPS for MSS, Feature Activation Manual in M-release Product Documentation.
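The optional timers and controls in sub-steps j) through t) are typically tuned via PRFILE parameters. A hedged generic example using the ZWOC MML command for PRFILE parameter modification; the concrete parameter class and number are assumptions that must be taken from Feature 1914: CS Fallback in EPS for MSS, Feature Activation Manual:
ZWOC:<parameter class>,<parameter number>,<new value>;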
18 In the NNSF nodes (RNC, BSC or MGW, and SGSN and MME): Create the NRI configuration and activate BSC/RNC NAS node selection.
Sub-steps
a) In case of RAN independent Iu, see section Configuring IP-based Iu-interface for
RAN independent multipoint A/Iu support in the MGW in Integrating MGW into
the MSC Server System in Ui-Release Product Documentation.
c) In case of a RAN-based implementation, see the documentation of the RAN node.
d) Define the SGSN configuration according to the SGSN documentation.
19 In the overlay MSS: Deactivate the Overlay CSFB feature code after successful migration.
ZW7M:FEA=3602:OFF:;
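To verify that the feature code is off, the interrogation command of the same W7 command group can be used. A hedged example, assuming the ZW7I interrogation form:
ZW7I:FEA=3602;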
20 Check the number of 4G LAC subscribers in each VLR. The number should be approximately equal in all MSSs under normal conditions. However, when there has been, for example, an MSS restart, the numbers may be unbalanced for a short period.
ZMVF::LAC=ALL:RAT=LTE;
Procedure
Deactivation steps
In case of problems, the situation can be reversed in the following way:
1 In the MME: Cancel multipoint in the MSS / revert the configuration to its state prior to the migration.
2 In the NNSF node (BSC, RNC or MGW): Delete the overlay MSS NRI-based selection from the configuration.