
Network Working Group                                     D. Oran, Editor
Request for Comments: 1142                        Digital Equipment Corp.
                                                            February 1990


                OSI IS-IS Intra-domain Routing Protocol

Status of this Memo

   This RFC is a republication of ISO DP 10589 as a service to the
   Internet community.  This is not an Internet standard.
   Distribution of this memo is unlimited.


NOTE:  This is a bad ASCII version of this document.  The official
document is the PostScript file, which has the diagrams in place.
Please use the PostScript version of this memo.


ISO/IEC DIS 10589

Information technology - Telecommunications and information exchange
between systems - Intermediate system to Intermediate system
Intra-Domain routeing exchange protocol for use in conjunction with
the Protocol for providing the Connectionless-mode Network Service
(ISO 8473)

Technologies de l'information - Communication de données et échange
d'information entre systèmes - Protocole intra-domaine de routage
d'un système intermédiaire à un système intermédiaire à utiliser
conjointement avec le protocole fournissant le service de réseau en
mode sans connexion (ISO 8473)

UDC 00000.000 : 000.0000000000

Descriptors:

Contents
        Introduction
        1       Scope and Field of Application
        2       References
        3       Definitions
        4       Symbols and Abbreviations
        5       Typographical Conventions
        6       Overview of the Protocol
        7       Subnetwork Independent Functions
        8       Subnetwork Dependent Functions
        9       Structure and Encoding of PDUs
        10      System Environment
        11      System Management
        12      Conformance
        Annex A         PICS Proforma
        Annex B         Supporting Technical Material
        Annex C         Implementation Guidelines and Examples
        Annex D         Congestion Control and Avoidance

Introduction

This Protocol is one of a set of International Standards produced to
facilitate the interconnection of open systems. The set of standards
covers the services and protocols required to achieve such
interconnection.

This Protocol is positioned with respect to other related standards
by the layers defined in ISO 7498 and by the structure defined in
ISO 8648. In particular, it is a protocol of the Network Layer. This
protocol permits Intermediate Systems within a routeing Domain to
exchange configuration and routeing information to facilitate the
operation of the routeing and relaying functions of the Network
Layer.

The protocol is designed to operate in close conjunction with
ISO 9542 and ISO 8473. ISO 9542 is used to establish connectivity and
reachability between End Systems and Intermediate Systems on
individual Subnetworks. Data is carried by ISO 8473. The related
algorithms for route calculation and maintenance are also described.

The intra-domain ISIS routeing protocol is intended to support large
routeing domains consisting of combinations of many types of
subnetworks. This includes point-to-point links, multipoint links,
X.25 subnetworks, and broadcast subnetworks such as ISO 8802 LANs.

In order to support large routeing domains, provision is made for
Intra-domain routeing to be organised hierarchically. A large domain
may be administratively divided into areas. Each system resides in
exactly one area. Routeing within an area is referred to as Level 1
routeing. Routeing between areas is referred to as Level 2 routeing.
Level 2 Intermediate systems keep track of the paths to destination
areas. Level 1 Intermediate systems keep track of the routeing within
their own area. For an NPDU destined to another area, a Level 1
Intermediate system sends the NPDU to the nearest Level 2 IS in its
own area, regardless of what the destination area is. The NPDU then
travels via Level 2 routeing to the destination area, where it again
travels via Level 1 routeing to the destination End System.

Information technology

Telecommunications and information exchange between systems
Intermediate system to Intermediate system Intra-Domain routeing
exchange protocol for use in conjunction with the Protocol for
providing the Connectionless-mode Network Service (ISO 8473)

1 Scope and Field of Application

This International Standard specifies a protocol which is used by
Network Layer entities operating ISO 8473 in Intermediate Systems to
maintain routeing information for the purpose of routeing within a
single routeing domain. The protocol described herein relies upon the
provision of a connectionless-mode underlying service. (See ISO 8473
and its Addendum 3 for the mechanisms necessary to realise this
service on subnetworks based on ISO 8208, ISO 8802, and the OSI Data
Link Service.)

This Standard specifies:

a) procedures for the transmission of configuration and routeing
   information between network entities residing in Intermediate
   Systems within a single routeing domain;

b) the encoding of the protocol data units used for the transmission
   of the configuration and routeing information;

c) procedures for the correct interpretation of protocol control
   information; and

d) the functional requirements for implementations claiming
   conformance to this Standard.

The procedures are defined in terms of:

a) the interactions between Intermediate system Network entities
   through the exchange of protocol data units;

b) the interactions between a Network entity and an underlying
   service provider through the exchange of subnetwork service
   primitives; and

c) the constraints on route determination which must be observed by
   each Intermediate system when each has a routeing information base
   which is consistent with the others.

2 References

2.1  Normative References

The following standards contain provisions which, through reference
in this text, constitute provisions of this International Standard.
At the time of publication, the editions indicated were valid. All
standards are subject to revision, and parties to agreements based on
this International Standard are encouraged to investigate the
possibility of applying the most recent editions of the standards
listed below. Members of IEC and ISO maintain registers of currently
valid International Standards.

ISO 7498:1984, Information processing systems - Open Systems
Interconnection - Basic Reference Model.

ISO 7498/Add.1:1984, Information processing systems - Open Systems
Interconnection - Basic Reference Model - Addendum 1:
Connectionless-mode Transmission.

ISO 7498-3:1989, Information processing systems - Open Systems
Interconnection - Basic Reference Model - Part 3: Naming and
Addressing.

ISO 7498-4:1989, Information processing systems - Open Systems
Interconnection - Basic Reference Model - Part 4: Management
Framework.

ISO 8348:1987, Information processing systems - Data communications -
Network Service Definition.

ISO 8348/Add.1:1987, Information processing systems - Data
communications - Network Service Definition - Addendum 1:
Connectionless-mode transmission.

ISO 8348/Add.2:1988, Information processing systems - Data
communications - Network Service Definition - Addendum 2: Network
layer addressing.

ISO 8473:1988, Information processing systems - Data communications -
Protocol for providing the connectionless-mode network service.

ISO 8473/Add.3:1989, Information processing systems -
Telecommunications and information exchange between systems -
Protocol for providing the connectionless-mode network service -
Addendum 3: Provision of the underlying service assumed by ISO 8473
over subnetworks which provide the OSI data link service.

ISO 8648:1988, Information processing systems - Open Systems
Interconnection - Internal organisation of the Network Layer.

ISO 9542:1988, Information processing systems - Telecommunications
and information exchange between systems - End system to Intermediate
system Routeing exchange protocol for use in conjunction with the
protocol for providing the connectionless-mode network service
(ISO 8473).

ISO 8208:1984, Information processing systems - Data communications -
X.25 packet level protocol for Data terminal equipment.

ISO 8802:1988, Information processing systems - Telecommunications
and information exchange between systems - Local area networks.

ISO/TR 9575:1989, Information technology - Telecommunications and
information exchange between systems - OSI Routeing Framework.

ISO/TR 9577:1990, Information technology - Telecommunications and
information exchange between systems - Protocol Identification in the
Network Layer.

ISO/IEC DIS 10165-4, Information technology - Open systems
interconnection - Management Information Services - Structure of
Management Information - Part 4: Guidelines for the Definition of
Managed Objects.

ISO/IEC 10039:1990, IPS-T&IEBS - MAC Service Definition.

2.2 Other References

The following references are helpful in describing some of the
routeing algorithms:

McQuillan, J. et al., The New Routeing Algorithm for the ARPANET,
IEEE Transactions on Communications, May 1980.

Perlman, Radia, Fault-Tolerant Broadcast of Routeing Information,
Computer Networks, Dec. 1983. Also in IEEE INFOCOM 83, April 1983.

Aho, Hopcroft, and Ullman, Data Structures and Algorithms,
pp. 204-208, Dijkstra algorithm.

3 Definitions

3.1 Reference Model definitions

This International Standard makes use of the following terms defined
in ISO 7498:

a) Network Layer
b) Network Service access point
c) Network Service access point address
d) Network entity
e) Routeing
f) Network protocol
g) Network relay
h) Network protocol data unit

3.2 Network Layer architecture definitions

This International Standard makes use of the following terms defined
in ISO 8648:

a) Subnetwork
b) End system
c) Intermediate system
d) Subnetwork service
e) Subnetwork Access Protocol
f) Subnetwork Dependent Convergence Protocol
g) Subnetwork Independent Convergence Protocol

3.3 Network Layer addressing definitions

This International Standard makes use of the following terms defined
in ISO 8348/Add.2:

a) Subnetwork address
b) Subnetwork point of attachment
c) Network Entity Title

3.4 Local Area Network Definitions

This International Standard makes use of the following terms defined
in ISO 8802:

a) Multi-destination address
b) Media access control
c) Broadcast medium

3.5 Routeing Framework Definitions

This document makes use of the following terms defined in
ISO/TR 9575:

a) Administrative Domain
b) Routeing Domain
c) Hop
d) Black hole


3.6 Additional Definitions

For the purposes of this International Standard, the following
definitions apply:

3.6.1  Area: A routeing subdomain which maintains detailed routeing
information about its own internal composition, and also maintains
routeing information which allows it to reach other routeing
subdomains. It corresponds to the Level 1 subdomain.

3.6.2  Neighbour: An adjacent system reachable by traversal of a
single subnetwork by a PDU.

3.6.3  Adjacency: A portion of the local routeing information which
pertains to the reachability of a single neighbour ES or IS over a
single circuit. Adjacencies are used as input to the Decision Process
for forming paths through the routeing domain. A separate adjacency
is created for each neighbour on a circuit, and for each level of
routeing (i.e. level 1 and level 2) on a broadcast circuit.

3.6.4  Circuit: The subset of the local routeing information base
pertinent to a single local SNPA.

3.6.5  Link: The communication path between two neighbours. A Link is
up when communication is possible between the two SNPAs.

3.6.6  Designated IS: The Intermediate system on a LAN which is
designated to perform additional duties. In particular it generates
Link State PDUs on behalf of the LAN, treating the LAN as a
pseudonode.

3.6.7  Pseudonode: Where a broadcast subnetwork has n connected
Intermediate systems, the broadcast subnetwork itself is considered
to be a pseudonode. The pseudonode has links to each of the n
Intermediate systems and each of the ISs has a single link to the
pseudonode (rather than n-1 links to each of the other Intermediate
systems). Link State PDUs are generated on behalf of the pseudonode
by the Designated IS. This is depicted below in figure 1.

3.6.8  Broadcast subnetwork: A subnetwork which supports an arbitrary
number of End systems and Intermediate systems and additionally is
capable of transmitting a single SNPDU to a subset of these systems
in response to a single SN_UNITDATA request.

3.6.9  General topology subnetwork: A subnetwork which supports an
arbitrary number of End systems and Intermediate systems, but does
not support a convenient multi-destination connectionless
transmission facility, as does a broadcast subnetwork.

3.6.10  Routeing Subdomain: a set of Intermediate systems and End
systems located within the same Routeing domain.

3.6.11  Level 2 Subdomain: the set of all Level 2 Intermediate
systems in a Routeing domain.

4 Symbols and Abbreviations
4.1 Data Units
PDU     Protocol Data Unit
SNSDU   Subnetwork Service Data Unit
NSDU    Network Service Data Unit
NPDU    Network Protocol Data Unit
SNPDU   Subnetwork Protocol Data Unit

4.2 Protocol Data Units
ESH PDU ISO 9542 End System Hello Protocol Data Unit
ISH PDU ISO 9542 Intermediate System Hello Protocol Data Unit
RD PDU  ISO 9542 Redirect Protocol Data Unit
IIH     Intermediate system to Intermediate system Hello Protocol
        Data Unit
LSP     Link State Protocol Data Unit
SNP     Sequence Numbers Protocol Data Unit
CSNP    Complete Sequence Numbers Protocol Data Unit
PSNP    Partial Sequence Numbers Protocol Data Unit


4.3 Addresses
AFI     Authority and Format Indicator
DSP     Domain Specific Part
IDI     Initial Domain Identifier
IDP     Initial Domain Part
NET     Network Entity Title
NSAP    Network Service Access Point
SNPA    Subnetwork Point of Attachment

4.4 Miscellaneous
DA      Dynamically Assigned
DED     Dynamically Established Data link
DTE     Data Terminal Equipment
ES      End System
IS      Intermediate System
L1      Level 1
L2      Level 2
LAN     Local Area Network
MAC     Media Access Control
NLPID   Network Layer Protocol Identifier
PCI     Protocol Control Information
QoS     Quality of Service
SN      Subnetwork
SNAcP   Subnetwork Access Protocol
SNDCP   Subnetwork Dependent Convergence Protocol
SNICP   Subnetwork Independent Convergence Protocol
SRM     Send Routeing Message
SSN     Send Sequence Numbers Message
SVC     Switched Virtual Circuit
5 Typographical Conventions

This International Standard makes use of the following typographical
conventions:

a) Important terms and concepts appear in italic type when introduced
   for the first time;

b) Protocol constants and management parameters appear in sansSerif
   type with multiple words run together. The first word is lower
   case, with the first character of subsequent words capitalised;

c) Protocol field names appear in Sans Serif type with each word
   capitalised;

d) Values of constants, parameters, and protocol fields appear
   enclosed in double quotes.

6 Overview of the Protocol
6.1 System Types

There are the following types of system:

End Systems: These systems deliver NPDUs to other systems and receive
   NPDUs from other systems, but do not relay NPDUs. This
   International Standard does not specify any additional End system
   functions beyond those supplied by ISO 8473 and ISO 9542.

Level 1 Intermediate Systems: These systems deliver and receive NPDUs
   from other systems, and relay NPDUs from other source systems to
   other destination systems. They route directly to systems within
   their own area, and route towards a level 2 Intermediate system
   when the destination system is in a different area.

Level 2 Intermediate Systems: These systems act as Level 1
   Intermediate systems in addition to acting as a system in the
   subdomain consisting of level 2 ISs. Systems in the level 2
   subdomain route towards a destination area, or another routeing
   domain.

6.2 Subnetwork Types

There are two generic types of subnetworks supported.

a) broadcast subnetworks: These are multi-access subnetworks that
   support the capability of addressing a group of attached systems
   with a single NPDU, for instance ISO 8802.3 LANs.

b) general topology subnetworks: These are modelled as a set of
   point-to-point links each of which connects exactly two systems.
   There are several generic types of general topology subnetworks:

   1) multipoint links: These are links between more than two
      systems, where one system is a primary system, and the
      remaining systems are secondary (or slave) systems. The primary
      is capable of direct communication with any of the secondaries,
      but the secondaries cannot communicate directly among
      themselves.

   2) permanent point-to-point links: These are links that stay
      connected at all times (unless broken, or turned off by system
      management), for instance leased lines or private links.

   3) dynamically established data links (DEDs): these are links over
      connection oriented facilities, for instance X.25, X.21, ISDN,
      or PSTN networks. Dynamically established data links can be
      used in one of two ways:

      i)  static point-to-point (Static): The call is established
          upon system management action and cleared only on system
          management action (or failure).

      ii) dynamically assigned (DA): The call is established upon
          receipt of traffic, and brought down on timer expiration
          when idle. The address to which the call is to be
          established is determined dynamically from information in
          the arriving NPDU(s). No ISIS routeing PDUs are exchanged
          between ISs on a DA circuit.

All subnetwork types are treated by the Subnetwork Independent
functions as though they were connectionless subnetworks, using the
Subnetwork Dependent Convergence functions of ISO 8473 where
necessary to provide a connectionless subnetwork service. The
Subnetwork Dependent functions do, however, operate differently on
connectionless and connection-oriented subnetworks.

6.3 Topologies

A single organisation may wish to divide its Administrative Domain
into a number of separate Routeing Domains. This has certain
advantages, as described in ISO/TR 9575. Furthermore, it is desirable
for an intra-domain routeing protocol to aid in the operation of an
inter-domain routeing protocol, where such a protocol exists for
interconnecting multiple administrative domains.

In order to facilitate the construction of such multi-domain
topologies, provision is made for the entering of static inter-domain
routeing information. This information is provided by a set of
Reachable Address Prefixes entered by System Management at the ISs
which have links which cross routeing domain boundaries. The prefix
indicates that any NSAPs whose NSAP address matches the prefix may be
reachable via the SNPA with which the prefix is associated. Where the
subnetwork to which this SNPA is connected is a general topology
subnetwork supporting dynamically established data links, the prefix
also has associated with it the required subnetwork addressing
information, or an indication that it may be derived from the
destination NSAP address (for example, an X.121 DTE address may
sometimes be obtained from the IDI of the NSAP address).

The Address Prefixes are handled by the level 2 routeing algorithm in
the same way as information about a level 1 area within the domain.
NPDUs with a destination address matching any of the prefixes present
on any Level 2 Intermediate System within the domain can therefore be
relayed (using level 2 routeing) by that IS and delivered out of the
domain. (It is assumed that the routeing functions of the other
domain will then be able to deliver the NPDU to its destination.)

6.4 Addresses

Within a routeing domain that conforms to this standard, the Network
entity titles of Intermediate systems shall be structured as
described in 7.1.1.

All systems shall be able to generate and forward data PDUs
containing NSAP addresses in any of the formats specified by
ISO 8348/Add.2. However, NSAP addresses of End systems should be
structured as described in 7.1.1 in order to take full advantage of
ISIS routeing. Within such a domain it is still possible for some End
Systems to have addresses assigned which do not conform to 7.1.1,
provided they meet the more general requirements of ISO 8348/Add.2,
but they may require additional configuration and be subject to
inferior routeing performance.

6.5 Functional Organisation

The intra-domain ISIS routeing functions are divided into two groups:

- Subnetwork Independent Functions

- Subnetwork Dependent Functions

6.5.1 Subnetwork Independent Functions

The Subnetwork Independent Functions supply full-duplex NPDU
transmission between any pair of neighbour systems. They are
independent of the specific subnetwork or data link service operating
below them, except for recognising two generic types of subnetworks:

- General Topology Subnetworks, which include HDLC point-to-point,
  HDLC multipoint, and dynamically established data links (such as
  X.25, X.21, and PSTN links), and

- Broadcast Subnetworks, which include ISO 8802 LANs.

The following Subnetwork Independent Functions are identified:

- Routeing. The routeing function determines NPDU paths. A path is
  the sequence of connected systems and links between a source ES and
  a destination ES. The combined knowledge of all the Network Layer
  entities of all the Intermediate systems within a routeing domain
  is used to ascertain the existence of a path, and route the NPDU to
  its destination. The routeing component at an Intermediate system
  has the following specific functions:

  - It extracts and interprets the routeing PCI in an NPDU.

  - It performs NPDU forwarding based on the destination address.

  - It manages the characteristics of the path. If a system or link
    fails on a path, it finds an alternate route.

  - It interfaces with the subnetwork dependent functions to receive
    reports concerning an SNPA which has become unavailable, a system
    that has failed, or the subsequent recovery of an SNPA or system.

  - It informs the ISO 8473 error reporting function when the
    forwarding function cannot relay an NPDU, for instance when the
    destination is unreachable or when the NPDU would have needed to
    be segmented and the NPDU requested no segmentation.

- Congestion control. Congestion control manages the resources used
  at each Intermediate system.

6.5.2 Subnetwork Dependent Functions

The subnetwork dependent functions mask the characteristics of the
subnetwork or data link service from the subnetwork independent
functions. These include:

- Operation of the Intermediate system functions of ISO 9542 on the
  particular subnetwork, in order to

  - Determine neighbour Network entity title(s) and SNPA address(es)

  - Determine the SNPA address(es) of operational Intermediate
    systems

- Operation of the requisite Subnetwork Dependent Convergence
  Function as defined in ISO 8473 and its Addendum 3, in order to
  perform

  - Data link initialisation

  - Hop by hop fragmentation over subnetworks with small maximum
    SNSDU sizes

  - Call establishment and clearing on dynamically established data
    links

6.6 Design Goals

This International Standard supports the following design
requirements. The correspondence with the goals for OSI routeing
stated in ISO/TR 9575 is noted.

- Network Layer Protocol Compatibility. It is compatible with
  ISO 8473 and ISO 9542. (See clause 7.5 of ISO/TR 9575.)

- Simple End systems. It requires no changes to end systems, nor any
  functions beyond those supplied by ISO 8473 and ISO 9542. (See
  clause 7.2.1 of ISO/TR 9575.)

- Multiple Organisations. It allows for multiple routeing and
  administrative domains through the provision of static routeing
  information at domain boundaries. (See clause 7.3 of ISO/TR 9575.)

- Deliverability. It accepts and delivers NPDUs addressed to
  reachable destinations and rejects NPDUs addressed to destinations
  known to be unreachable.

- Adaptability. It adapts to topological changes within the routeing
  domain, but not to traffic changes, except potentially as indicated
  by local queue lengths. It splits traffic load on multiple
  equivalent paths. (See clause 7.7 of ISO/TR 9575.)

- Promptness. The period of adaptation to topological changes in the
  domain is a reasonable function of the domain diameter (that is,
  the maximum logical distance between End Systems within the domain)
  and Data link speeds. (See clause 7.4 of ISO/TR 9575.)

- Efficiency. It is both processing and memory efficient. It does not
  create excessive routeing traffic overhead. (See clause 7.4 of
  ISO/TR 9575.)

- Robustness. It recovers from transient errors such as lost or
  temporarily incorrect routeing PDUs. It tolerates imprecise
  parameter settings. (See clause 7.7 of ISO/TR 9575.)

- Stability. It stabilises in finite time to good routes, provided no
  continuous topological changes or continuous data base corruptions
  occur.

- System Management control. System Management can control many
  routeing functions via parameter changes, and inspect parameters,
  counters, and routes. It will not, however, depend on system
  management action for correct behaviour.

- Simplicity. It is sufficiently simple to permit performance tuning
  and failure isolation.

- Maintainability. It provides mechanisms to detect, isolate, and
  repair most common errors that may affect the routeing computation
  and data bases. (See clause 7.8 of ISO/TR 9575.)

- Heterogeneity. It operates over a mixture of network and system
  types, communication technologies, and topologies. It is capable of
  running over a wide variety of subnetworks, including, but not
  limited to: ISO 8802 LANs, ISO 8208 and X.25 subnetworks, PSTN
  networks, and the OSI Data Link Service. (See clause 7.1 of
  ISO/TR 9575.)

- Extensibility. It accommodates increased routeing functions,
  leaving earlier functions as a subset.

- Evolution. It allows orderly transition from algorithm to algorithm
  without shutting down an entire domain.

- Deadlock Prevention. The congestion control component prevents
  buffer deadlock.

- Very Large Domains. With hierarchical routeing, and a very large
  address space, domains of essentially unlimited size can be
  supported. (See clause 7.2 of ISO/TR 9575.)

- Area Partition Repair. It permits the utilisation of level 2 paths
  to repair areas which become partitioned due to failing level 1
  links or ISs. (See clause 7.7 of ISO/TR 9575.)

- Determinism. Routes are a function only of the physical topology,
  and not of history. In other words, the same topology will always
  converge to the same set of routes.

- Protection from Mis-delivery. The probability of mis-delivering an
  NPDU, i.e. delivering it to a Transport entity in the wrong End
  System, is extremely low.

- Availability. For domain topologies with cut set greater than one,
  no single point of failure will partition the domain. (See clause
  7.7 of ISO/TR 9575.)

- Service Classes. The service classes of transit delay, expense, and
  residual error probability of ISO 8473 are supported through the
  optional inclusion of multiple routeing metrics. (Expense is
  referred to as cost in ISO 8473. The latter term is not used here
  because of possible confusion with the more general usage of the
  term to indicate path cost according to any routeing metric.)

- Authentication. The protocol is capable of carrying information to
  be used for the authentication of Intermediate systems in order to
  increase the security and robustness of a routeing domain. The
  specific mechanism supported in this International Standard,
  however, only supports a weak form of authentication using
  passwords, and thus is useful only for protection against
  accidental misconfiguration errors and does not protect against any
  serious security threat. In the future, the algorithms may be
  enhanced to provide stronger forms of authentication than can be
  provided with passwords without needing to change the PDU encoding
  or the protocol exchange machinery.

6.6.1 Non-Goals

The following are not within the design scope of the intra-domain
ISIS routeing protocol described in this International Standard:

- Traffic adaptation. It does not automatically modify routes based
  on global traffic load.

- Source-destination routeing. It does not determine routes by source
  as well as destination.

- Guaranteed delivery. It does not guarantee delivery of all offered
  NPDUs.

- Level 2 Subdomain Partition Repair. It will not utilise Level 1
  paths to repair a level 2 subdomain partition. For full logical
  connectivity to be available, a connected level 2 subdomain is
  required.

- Equal treatment for all ES Implementations. The End system poll
  function defined in 8.4.5 presumes that End systems have
  implemented the Suggested ES Configuration Timer option of
  ISO 9542. An End system which does not implement this option may
  experience a temporary loss of connectivity following certain types
  of topology changes on its local subnetwork.

6.7 Environmental Requirements

For correct operation of the protocol, certain guarantees are
required from the local environment and the Data Link Layer.

The required local environment guarantees are:

a) Resource allocation such that certain minimum resource guarantees
   can be met, including

   1) memory (for code, data, and buffers); and

   2) processing.

   See 12.2.5 for specific performance levels required for
   conformance.

b) A quota of buffers sufficient to perform routeing functions;

c) Access to a timer or notification of specific timer expiration;
   and

d) A very low probability of corrupting data.

The required subnetwork guarantees for point-to-point links are:

a) Provision that both source and destination systems complete
   start-up before PDU exchange can occur;

b) Detection of remote start-up;

c) Provision that no old PDUs be received after start-up is complete;

d) Provision that no PDUs transmitted after a particular start-up is
   complete are delivered out of sequence;

e) Provision that failure to deliver a specific subnetwork SDU will
   result in the timely disconnection of the subnetwork connection in
   both directions and that this failure will be reported to both
   systems; and

f) Reporting of other subnetwork failures and degraded subnetwork
   conditions.

The required subnetwork guarantees for broadcast links are:

a) Multicast capability, i.e., the ability to address a subset of all
   connected systems with a single PDU;

b) The following events are low probability, which means that they
   occur sufficiently rarely so as not to impact performance, on the
   order of once per thousand PDUs:

   1) Routeing PDU non-sequentiality;

   2) Routeing PDU loss due to detected corruption; and

   3) Receiver overrun;

c) The following events are very low probability, which means
   performance will be impacted unless they are extremely rare, on
   the order of less than one event per four years:

   1) Delivery of NPDUs with undetected data corruption; and

   2) Non-transitive connectivity, i.e. where system A can receive
      transmissions from systems B and C, but system B cannot receive
      transmissions from system C.

The following services are assumed to be not available from broadcast
links:

a) Reporting of failures and degraded subnetwork conditions that
   result in NPDU loss, for instance receiver failure. The routeing
   functions are designed to account for these failures.

6.8 Functional Organisation of Subnetwork Independent Components

The Subnetwork Independent Functions are broken down into more
specific functional components. These are described briefly in this
sub-clause and in detail in clause 7.

This International Standard uses a functional decomposition adapted
from the model of routeing presented in clause 5.1 of ISO/TR 9575.
The decomposition is not identical to that in ISO/TR 9575, since that
model is more general and not specifically oriented toward a detailed
description of intra-domain routeing functions such as supplied by
this protocol.

The functional decomposition is shown below in figure 2.

6.8.1 Routeing

The routeing processes are:

- Decision Process

- Update Process

  NOTE - this comprises both the Information Collection and
  Information Distribution components identified in ISO/TR 9575.

- Forwarding Process

- Receive Process

6.8.1.1 Decision Process

This process calculates routes to each destination in the domain. It
is executed separately for level 1 and level 2 routeing, and
separately within each level for each of the routeing metrics
supported by the Intermediate system. It uses the Link State
Database, which consists of information from the latest Link State
PDUs from every other Intermediate system in the area, to compute
shortest paths from this IS to all other systems in the area (9 in
figure 2). The Link State Database is maintained by the Update
Process.

Execution of the Decision Process results in the determination of
[circuit, neighbour] pairs (known as adjacencies), which are stored
in the appropriate Forwarding Information base (10 in figure 2) and
used by the Forwarding process as paths along which to forward NPDUs.

Several of the parameters in the routeing data base that the Decision
Process uses are determined by the implementation. These include:

- maximum number of Intermediate and End systems within the IS's
  area;

- maximum number of Intermediate and End system neighbours of the IS,
  etc.,

so that databases can be sized appropriately. Also, parameters such
as

- routeing metrics for each circuit; and

- timers

can be adjusted for enhanced performance. The complete list of System
Management settable parameters is given in clause 11.

6.8.1.2 Update Process

This process constructs, receives and propagates Link State PDUs.
Each Link State PDU contains information about the identity and
routeing metric values of the adjacencies of the IS that originated
the Link State PDU.

The Update Process receives Link State and Sequence Numbers PDUs from
the Receive Process (4 in figure 2). It places new routeing
information in the routeing information base (6) and propagates
routeing information to other Intermediate systems (7 and 8).

General characteristics of the Update Process are:

- Link State PDUs are generated as a result of topological changes,
  and also periodically. They may also be generated indirectly as a
  result of System Management actions (such as changing one of the
  routeing metrics for a circuit).

- Level 1 Link State PDUs are propagated to all Intermediate systems
  within an area, but are not propagated out of an area.

- Level 2 Link State PDUs are propagated to all Level 2 Intermediate
  systems in the domain.

- Link State PDUs are not propagated outside of a domain.

- The update process, through a set of System Management parameters,
  enforces an upper bound on the amount of routeing traffic overhead
  it generates.

6.8.1.3 Forwarding Process

This process supplies and manages the buffers necessary to support
NPDU relaying to all destinations.

It receives, via the Receive Process, ISO 8473 PDUs to be forwarded
(5 in figure 2). It performs a lookup in the appropriate Forwarding
Database (11) to determine the possible output adjacencies to use for
forwarding to a given destination, chooses one adjacency (12),
generates error indications to ISO 8473 (14), and signals ISO 9542 to
issue Redirect PDUs (13). (The appropriate Forwarding Database is
selected by choosing a routeing metric based on fields in the QoS
Maintenance option of ISO 8473.)

6.8.1.4 Receive Process

The Receive Process obtains its inputs from the following sources:

- received PDUs with the NLPID of Intra-Domain routeing (2 in
  figure 2),

- routeing information derived by the ESIS protocol from the receipt
  of ISO 9542 PDUs (1); and

- ISO 8473 data PDUs handed to the routeing function by the ISO 8473
  protocol machine (3).

It then performs the appropriate actions, which may involve passing
the PDU to some other function (e.g. to the Forwarding Process for
forwarding (5)).

7 Subnetwork Independent Functions

This clause describes the algorithms and associated databases used by
the routeing functions. The managed objects and attributes defined
for System Management purposes are described in clause 11.

The following processes and databases are used internally by the
subnetwork independent functions. Following each process or database
title, in parentheses, is the type of system which must keep the
database. The system types are L2 (level 2 Intermediate system) and
L1 (level 1 Intermediate system). Note that a level 2 Intermediate
system is also a level 1 Intermediate system in its home area, so it
must keep level 1 databases as well as level 2 databases.

Processes:

- Decision Process (L2, L1)

- Update Process (L2, L1)

- Forwarding Process (L2, L1)

- Receive Process (L2, L1)

Databases:

- Level 1 Link State database (L2, L1)

- Level 2 Link State database (L2)

- Adjacency Database (L2, L1)

- Circuit Database (L2, L1)

- Level 1 Shortest Paths Database (L2, L1)

- Level 2 Shortest Paths Database (L2)

- Level 1 Forwarding Databases (one per routeing metric) (L2, L1)

- Level 2 Forwarding Database (one per routeing metric) (L2)

7.1 Addresses

The NSAP addresses and NETs of systems are variable length quantities
that conform to the requirements of ISO 8348/Add.2. The corresponding
NPAI contained in ISO 8473 PDUs and in this protocol's PDUs (such as
LSPs and IIHs) must use the preferred binary encoding; the underlying
syntax for this information may be either abstract binary syntax or
abstract decimal syntax. Any of the AFIs and their corresponding DSP
syntax may be used with this protocol.

7.1.1 NPAI Of Systems Within A Routeing Domain

Figure 3 illustrates the structure of an encoded NSAP address or NET.

The structure of the NPAI will be interpreted in the following way by
the protocol described in this International Standard:

Area Address
        Address of one area within a routeing domain - a variable
        length quantity consisting of the entire high-order part of
        the NPAI, excluding the ID and SEL fields, defined below.

ID      System identifier - a variable length field from 1 to 8
        octets (inclusive). Each routeing domain employing this
        protocol shall select a single size for the ID field and all
        Intermediate systems in the routeing domain shall use this
        length for the system IDs of all systems in the routeing
        domain.

        The set of ID lengths supported by an implementation is an
        implementation choice, provided that at least one value in
        the permitted range can be accepted. The routeing domain
        administrator must ensure that all ISs included in a routeing
        domain are able to use the ID length chosen for that domain.

SEL     NSAP Selector - a 1-octet field which acts as a selector for
        the entity which is to receive the PDU (this may be a
        Transport entity or the Intermediate system Network entity
        itself). It is the least significant (last) octet of the
        NPAI.

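As an informal illustration of the field layout described above, the
following Python sketch splits an encoded NSAP address or NET (taken
here as a byte string) into its Area Address, ID, and SEL components,
assuming the routeing domain has configured a fixed ID length. This
is non-normative; the function name and error handling are
illustrative only and are not defined by this International Standard.

   def split_npai(npai: bytes, id_length: int = 6):
       """Split an encoded NSAP address or NET into (area, id, sel).

       npai       - the encoded address, high-order octets first
       id_length  - the ID field length (1..8 octets) chosen for the
                    routeing domain; 6 is a common choice, but any
                    value in the permitted range may be configured
       """
       if not 1 <= id_length <= 8:
           raise ValueError("ID length must be between 1 and 8 octets")
       if len(npai) < id_length + 1:
           raise ValueError("address too short for configured ID length")
       sel = npai[-1:]                        # SEL: last octet
       system_id = npai[-1 - id_length:-1]    # ID: preceding id_length octets
       area = npai[:-1 - id_length]           # Area Address: remaining prefix
       return area, system_id, sel

   # Example: an 11-octet address with a 6-octet ID field.
   area, sid, sel = split_npai(bytes.fromhex("490001aabbccddeeff0200"), 6)
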
7.1.2 Deployment of Systems

For correct operation of the routeing protocol defined in this
International Standard, systems deployed in a routeing domain must
meet the following requirements:

a) For all systems:

   1) Each system in an area must have a unique system ID: that is,
      no two systems (IS or ES) in an area can use the same ID value.

   2) Each area address must be unique within the global OSIE: that
      is, a given area address can be associated with only one area.

   3) All systems having a given value of area address must be
      located in the same area.

b) Additional Requirements for Intermediate systems:

   1) Each Level 2 Intermediate system within a routeing domain must
      have a unique value for its ID field: that is, no two level 2
      ISs in a routeing domain can have the same value in their ID
      fields.

c) Additional Requirements for End systems:

   1) No two End systems in an area may have addresses that match in
      all but the SEL fields.

d) An End system can be attached to a level 1 IS only if its area
   address matches one of the entries in the adjacent IS's
   manualAreaAddresses parameter.

It is the responsibility of the routeing domain's administrative
authority to enforce the requirements of 7.1.2. The protocol defined
in this International Standard assumes that these requirements are
met, but has no means to verify compliance with them.

7.1.3 Manual area addresses

The use of several synonymous area addresses by an IS is accommodated
through the use of the management parameter manualAreaAddresses. This
parameter is set locally for each level 1 IS by system management; it
contains a list of all synonymous area addresses associated with the
IS, including the IS's area address as contained in its own NET.

Each level 1 IS distributes its manualAreaAddresses in its Level 1
LSP's Area Addresses field, thus allowing level 2 ISs to create a
composite list of all area addresses supported within a given area.
Level 2 ISs in turn advertise the composite list throughout the
level 2 subdomain by including it in their Level 2 LSP's Area
Addresses field, thus distributing information on all the area
addresses associated with the entire routeing domain.

The procedures for establishing an adjacency between two level 1 ISs
require that there be at least one area address in common between
their two manualAreaAddresses lists, and the procedures for
establishing an adjacency between a level 1 IS and an End system
require that the End system's area address must match an entry in the
IS's manualAreaAddresses list. Therefore, it is the responsibility of
System Management to ensure that each area address associated with an
IS is included: in particular, system management must ensure that the
area addresses of all ESs and Level 1 ISs adjacent to a given level 1
IS are included in that IS's manualAreaAddresses list.

If the area address field of the destination address of an ISO 8473
PDU (or of the next entry in its source routeing field, when present)
is not listed in the parameter areaAddresses of a level 1 IS
receiving the PDU, then the destination system does not reside in the
IS's area. Such PDUs will be routed by level 2 routeing.

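The level 1 forwarding decision described in the paragraph above can
be expressed informally as follows. This Python fragment is a
non-normative sketch; the names used for the parameter and the
returned labels are placeholders for the corresponding local database
and processes.

   def forward_decision(dest_area: bytes, area_addresses: set) -> str:
       """Decide whether a PDU is routed by level 1 or level 2 routeing.

       dest_area      - area address portion of the destination NSAP
                        (or of the next source routeing entry, if any)
       area_addresses - the receiving level 1 IS's areaAddresses set
       """
       if dest_area in area_addresses:
           # Destination is in this IS's area: use level 1 routeing.
           return "level-1"
       # Destination is outside the area: relay towards the nearest
       # level 2 IS, which routes by destination area (level 2).
       return "level-2"
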
7.1.4 Encoding of Level 2 Addresses

When a full NSAP address is encoded according to the preferred binary
encoding specified in ISO 8348/Add.2, the IDI is padded with leading
digits (if necessary) to obtain the maximum IDP length specified for
that AFI.

A Level 2 address prefix consists of a leading sub-string of a full
NSAP address, such that it matches a set of full NSAP addresses that
have the same leading sub-string. However, this truncation and
matching is performed on the NSAP represented by the abstract syntax
of the NSAP address, not on the encoded (and hence padded) form. (An
example of prefix matching may be found in annex B, clause B.1.)

Level 2 address prefixes are encoded in LSPs in the same way as full
NSAP addresses, except when the end of the prefix falls within the
IDP. In this case the prefix is directly encoded as the string of
semi-octets with no padding.

7.1.5 Comparison of Addresses

Unless otherwise stated, numerical comparison of addresses shall be
performed on the encoded form of the address, by padding the shorter
address with trailing zeros to the length of the longer address, and
then performing a numerical comparison.

The addresses to which this procedure applies include NSAP addresses,
Network Entity Titles, and SNPA addresses.

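A minimal, non-normative sketch of this comparison rule in Python is
shown below; it assumes addresses are handled as byte strings in
their encoded form.

   def compare_addresses(a: bytes, b: bytes) -> int:
       """Compare two encoded addresses as described in 7.1.5.

       The shorter address is padded with trailing zero octets to the
       length of the longer one, then the two are compared
       numerically.  Returns -1, 0, or 1 (a < b, a == b, a > b).
       """
       width = max(len(a), len(b))
       a_padded = a.ljust(width, b"\x00")
       b_padded = b.ljust(width, b"\x00")
       if a_padded == b_padded:
           return 0
       return -1 if a_padded < b_padded else 1
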
7.2 The Decision Process

This process uses the database of Link State information to calculate
the forwarding database(s), from which the forwarding process can
know the proper next hop for each NPDU. The Level 1 Link State
Database is used for calculating the Level 1 Forwarding Database(s),
and the Level 2 Link State Database is used for calculating the
Level 2 Forwarding Database(s).

7.2.1 Input and output

INPUT

- Link State Database - This database is a set of information from
  the latest Link State PDUs from all known Intermediate systems
  (within this area, for Level 1, or within the level 2 subdomain,
  for Level 2). This database is received from the Update Process.

- Notification of an Event - This is a signal from the Update Process
  that a change to a link has occurred somewhere in the domain.

OUTPUT

- Level 1 Forwarding Databases (one per routeing metric)

- (Level 2 Intermediate systems only) Level 2 Forwarding Databases
  (one per routeing metric)

- (Level 2 Intermediate systems only) The Level 1 Decision Process
  informs the Level 2 Update Process of the ID of the Level 2
  Intermediate system within the area with lowest ID reachable with
  real level 1 links (as opposed to a virtual link consisting of a
  path through the level 2 subdomain)

- (Level 2 Intermediate systems only) If this Intermediate system is
  the Partition Designated Level 2 Intermediate system in this
  partition, the Level 2 Decision Process informs the Level 1 Update
  Process of the values of the default routeing metric to, and ID of,
  the partition designated level 2 Intermediate system in each other
  partition of this area.

7.2.2 Routeing metrics

There are four routeing metrics defined, corresponding to the four
possible orthogonal qualities of service defined by the QoS
Maintenance field of ISO 8473. Each circuit emanating from an
Intermediate system shall be assigned a value for one or more of
these metrics by System Management. The four metrics are as follows:

a) Default metric: This is a metric understood by every Intermediate
   system in the domain. Each circuit shall have a positive integral
   value assigned for this metric. The value may be associated with
   any objective function of the circuit, but by convention is
   intended to measure the capacity of the circuit for handling
   traffic, for example, its throughput in bits-per-second. Higher
   values indicate a lower capacity.

b) Delay metric: This metric measures the transit delay of the
   associated circuit. It is an optional metric, which if assigned to
   a circuit shall have a positive integral value. Higher values
   indicate a longer transit delay.

c) Expense metric: This metric measures the monetary cost of
   utilising the associated circuit. It is an optional metric, which
   if assigned to a circuit shall have a positive integral value.
   Higher values indicate a larger monetary expense. (The path
   computation algorithm utilised in this International Standard
   requires that all circuits be assigned a positive value for a
   metric. Therefore, it is not possible to represent a free circuit
   by a zero value of the expense metric. By convention, the value 1
   is used to indicate a free circuit.)

d) Error metric: This metric measures the residual error probability
   of the associated circuit. It is an optional metric, which if
   assigned to a circuit shall have a non-zero value. Higher values
   indicate a larger probability of undetected errors on the circuit.

NOTE - The decision process combines metric values by simple
addition. It is important, therefore, that the values of the metrics
be chosen accordingly.

Every Intermediate system shall be capable of calculating routes
based on the default metric. Support of any or all of the other
metrics is optional. If an Intermediate system supports the
calculation of routes based on a metric, its update process may
report the metric value in the LSPs for the associated circuit;
otherwise, the IS shall not report the metric.

When calculating paths for one of the optional routeing metrics, the
decision process only utilises LSPs with a value reported for the
corresponding metric. If no value is associated with a metric for any
of the IS's circuits, the system shall not calculate routes based on
that metric.

NOTE - A consequence of the above is that a system reachable via the
default metric may not be reachable by another metric.

See 7.4.2 for a description of how the forwarding process selects one
of these metrics based on the contents of the ISO 8473 QoS
Maintenance option.

Each of the four metrics described above may be of two types: an
Internal metric or an External metric. Internal metrics are used to
describe links/routes to destinations internal to the routeing
domain. External metrics are used to describe links/routes to
destinations outside of the routeing domain. These two types of
metrics are not directly comparable, except that internal routes are
always preferred over external routes. In other words, an internal
route will always be selected even if an external route with a lower
total cost exists.

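The path cost comparison implied by the text above (simple addition
of per-circuit metric values, with internal routes always preferred
over external ones) could be expressed along the following lines.
This is an illustrative, non-normative sketch; the tuple-based
ordering is simply one convenient way to encode the stated
preference.

   # Illustrative sketch: combine per-circuit metric values by simple
   # addition, and order candidate routes so that any internal route
   # is preferred over any external route, regardless of total cost.

   def path_cost(circuit_metrics):
       """Total cost of a path is the sum of its circuit metric values."""
       return sum(circuit_metrics)

   def route_key(is_external: bool, circuit_metrics) -> tuple:
       """Sort key: internal routes (False < True) win before cost."""
       return (is_external, path_cost(circuit_metrics))

   # Example: an internal route of total cost 30 beats an external
   # route of total cost 12, because internal/external is compared
   # before the metric sum.
   candidates = [
       ("external route", route_key(True, [4, 8])),
       ("internal route", route_key(False, [10, 20])),
   ]
   best = min(candidates, key=lambda entry: entry[1])  # internal route
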
7.2.3 Broadcast Subnetworks

Instead of treating a broadcast subnetwork as a fully connected
topology, the broadcast subnetwork is treated as a pseudonode, with
links to each attached system. Attached systems shall only report
their link to the pseudonode. The Designated Intermediate system, on
behalf of the pseudonode, shall construct Link State PDUs reporting
the links to all the systems on the broadcast subnetwork with a zero
value for each supported routeing metric. (They are set to zero
metric values since they have already been assigned metrics by the
link to the pseudonode. Assigning a non-zero value in the pseudonode
LSP would have the effect of doubling the actual value.)

The pseudonode shall be identified by the sourceID of the Designated
Intermediate system, followed by a non-zero pseudonodeID assigned by
the Designated Intermediate system. The pseudonodeID is locally
unique to the Designated Intermediate system.

Designated Intermediate systems are determined separately for level 1
and level 2. They are known as the LAN Level 1 Designated IS and the
LAN Level 2 Designated IS respectively. See 8.4.4.

An Intermediate system may resign as Designated Intermediate System
on a broadcast circuit either because it (or its SNPA on the
broadcast subnetwork) is being shut down or because some other
Intermediate system of higher priority has taken over that function.
When an Intermediate system resigns as Designated Intermediate
System, it shall initiate a network wide purge of its pseudonode Link
State PDU(s) by setting their Remaining Lifetime to zero and
performing the actions described in 7.3.16.4. A LAN Level 1
Designated Intermediate System purges Level 1 Link State PDUs and a
LAN Level 2 Designated Intermediate System purges Level 2 Link State
PDUs. An Intermediate system which has resigned as both Level 1 and
Level 2 Designated Intermediate System shall purge both sets of LSPs.

When an Intermediate system declares itself as Designated
Intermediate system and it is in possession of a Link State PDU of
the same level issued by the previous Designated Intermediate System
for that circuit (if any), it shall initiate a network wide purge of
that (or those) Link State PDU(s) as above.

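As an informal illustration of the identifier structure described
above, the following Python sketch forms a pseudonode identifier from
the Designated Intermediate system's sourceID and a locally assigned
non-zero pseudonodeID. The function and variable names are
illustrative and are not defined by this International Standard.

   def pseudonode_identifier(source_id: bytes, pseudonode_id: int) -> bytes:
       """Form a pseudonode identifier: the Designated IS's sourceID
       followed by a non-zero, locally unique pseudonodeID octet.
       (A value of zero in that octet denotes the IS itself rather
       than a pseudonode.)
       """
       if not 1 <= pseudonode_id <= 255:
           raise ValueError("pseudonodeID must be non-zero and one octet")
       return source_id + bytes([pseudonode_id])

   # Example: a Designated IS with a 6-octet sourceID assigns
   # pseudonodeID 1 to the broadcast circuit it represents.
   pnode = pseudonode_identifier(bytes.fromhex("00000000aa01"), 1)
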
7.2.4 Links

Two Intermediate systems are not considered neighbours unless each
reports the other as directly reachable over one of their SNPAs. On a
connection-oriented subnetwork (either point-to-point or general
topology), the two Intermediate systems in question shall ascertain
their neighbour relationship when a connection is established and
hello PDUs are exchanged. A malfunctioning IS might, however, report
another IS to be a neighbour when in fact it is not. To detect this
class of failure the decision process checks that each link reported
as up in an LSP is so reported by both Intermediate systems. If an
Intermediate system considers a link down it shall not mention the
link in its Link State PDUs.

On broadcast subnetworks, this class of failure shall be detected by
the Designated IS, which has the responsibility to ascertain the set
of Intermediate systems that can all communicate on the subnetwork.
The Designated IS shall include these Intermediate systems (and no
others) in the Link State PDU it generates for the pseudonode
representing the broadcast subnetwork.

7.2.5 Multiple LSPs for the same system

The Update Process is capable of dividing a single logical LSP into a
number of separate PDUs for the purpose of conserving link bandwidth
and processing (see 7.3.4). The Decision Process, on the other hand,
shall regard the LSP with LSP Number zero in a special way. If the
LSP with LSP Number zero and remaining lifetime > 0 is not present
for a particular system, then the Decision Process shall not process
any LSPs with non-zero LSP Number which may be stored for that
system.

The following information shall be taken only from the LSP with LSP
Number zero. Any values which may be present in other LSPs for that
system shall be disregarded by the Decision Process.

a) The setting of the LSP Database Overload bit.

b) The value of the IS Type field.

c) The Area Addresses option.

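The special treatment of the LSP with LSP Number zero might be
captured informally as below; this is a non-normative sketch, and the
StoredLSP type and its field names are placeholders for the local
Link State Database, not structures defined by this International
Standard.

   from dataclasses import dataclass

   @dataclass
   class StoredLSP:
       lsp_number: int
       remaining_lifetime: int
       # other LSP contents (links, options) omitted in this sketch

   def usable_lsps_for_system(lsps_by_number: dict):
       """Return the LSPs of one system that the Decision Process may
       use.  If the LSP with LSP Number zero is absent, or its
       Remaining Lifetime is not greater than zero, none of the
       system's LSPs (including stored non-zero-numbered ones) are
       processed.
       """
       lsp_zero = lsps_by_number.get(0)
       if lsp_zero is None or lsp_zero.remaining_lifetime <= 0:
           return []
       # The Overload bit, IS Type, and Area Addresses option are
       # taken only from LSP number zero; other LSPs contribute the
       # remaining link information.
       return [lsps_by_number[n] for n in sorted(lsps_by_number)]
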
7.2.6 Routeing Algorithm Overview

The routeing algorithm used by the Decision Process is a shortest
path first (SPF) algorithm. Instances of the algorithm are run
independently and concurrently by all Intermediate systems in a
routeing domain. Intra-Domain routeing of a PDU occurs on a
hop-by-hop basis: that is, the algorithm determines only the next
hop, not the complete path, that a data PDU will take to reach its
destination. To guarantee correct and consistent route computation by
every Intermediate system in a routeing domain, this International
Standard depends on the following properties:

a) All Intermediate systems in the routeing domain converge to using
   identical topology information; and

b) Each Intermediate system in the routeing domain generates the same
   set of routes from the same input topology and set of metrics.

The first property is necessary in order to prevent inconsistent,
potentially looping paths. The second property is necessary to meet
the goal of determinism stated in 6.6.

A system executes the SPF algorithm to find a set of legal paths to a
destination system in the routeing domain. The set may consist of:

a) a single path of minimum metric sum: these are termed minimum cost
   paths;

b) a set of paths of equal minimum metric sum: these are termed equal
   minimum cost paths; or

c) a set of paths which will get a PDU closer to its destination than
   the local system: these are called downstream paths.

Paths which do not meet the above conditions are illegal and shall
not be used.

The Decision Process, in determining its paths, also ascertains the
identity of the adjacency which lies on the first hop to the
destination on each path. These adjacencies are used to form the
Forwarding Database, which the forwarding process uses for relaying
PDUs.

Separate route calculations are made for each pairing of a level in
the routeing hierarchy (i.e. L1 and L2) with a supported routeing
metric. Since there are four routeing metrics and two levels, some
systems may execute multiple instances of the SPF algorithm. For
example:

- if an IS is a L2 Intermediate system which supports all four
  metrics and computes minimum cost paths for all metrics, it would
  execute the SPF calculation eight times.

- if an IS is a L1 Intermediate system which supports all four
  metrics, and additionally computes downstream paths, it would
  execute the algorithm 4 x (number of neighbours + 1) times.

Any implementation of an SPF algorithm meeting both the static and
dynamic conformance requirements of clause 12 of this International
Standard may be used. Recommended implementations are described in
detail in Annex C.

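For readers who want a concrete picture of the shortest path first
computation referred to above, the following Python sketch runs
Dijkstra's algorithm over a link state graph of positive metric
values. It is a simplified, non-normative illustration (single
metric, no pseudonodes, no equal-cost or downstream path handling)
and is not the recommended implementation of Annex C.

   import heapq

   def spf(link_state, source):
       """Compute minimum metric sums and first hops from 'source'.

       link_state maps a system to a list of (neighbour, metric)
       pairs, where every metric value is a positive integer, as
       required by 7.2.2.  Returns {system: (cost, first_hop)}.
       """
       best = {source: (0, None)}
       frontier = [(0, source, None)]   # (cost so far, system, first hop)
       while frontier:
           cost, system, first_hop = heapq.heappop(frontier)
           if cost > best.get(system, (float("inf"), None))[0]:
               continue                 # stale queue entry
           for neighbour, metric in link_state.get(system, []):
               candidate = cost + metric          # metrics add
               hop = neighbour if system == source else first_hop
               if candidate < best.get(neighbour, (float("inf"), None))[0]:
                   best[neighbour] = (candidate, hop)
                   heapq.heappush(frontier, (candidate, neighbour, hop))
       return best

   # Example: paths from A in a small topology.
   topology = {"A": [("B", 10), ("C", 20)],
               "B": [("A", 10), ("C", 5)],
               "C": [("A", 20), ("B", 5)]}
   routes = spf(topology, "A")   # C is reached via first hop B, cost 15
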
7.2.7 Removal of Excess Paths

When there are more than maximumPathSplits legal paths to a
destination, this set shall be pruned until only maximumPathSplits
remain. The Intermediate system shall discriminate based upon:

NOTE - The precise precedence among the paths is specified in order
to meet the goal of determinism defined in 6.6.

- adjacency type: Paths associated with End system or level 2
  reachable address prefix adjacencies are retained in preference to
  other adjacencies.

- metric sum: Paths having a lesser metric sum are retained in
  preference to paths having a greater metric sum. By metric sum is
  understood the sum of the metrics along the path to the
  destination.

- neighbour ID: where two or more paths are associated with
  adjacencies of the same type, an adjacency with a lower neighbour
  ID is retained in preference to an adjacency with a higher
  neighbour ID.

- circuit ID: where two or more paths are associated with adjacencies
  of the same type, and the same neighbour ID, an adjacency with a
  lower circuit ID is retained in preference to an adjacency with a
  higher circuit ID, where circuit ID is the value of:

  - ptPtCircuitID for non-broadcast circuits,

  - l1CircuitID for broadcast circuits when running the Level 1
    Decision Process, and

  - l2CircuitID for broadcast circuits when running the Level 2
    Decision Process.

- lANAddress: where two or more adjacencies are of the same type, the
  same neighbour ID, and the same circuit ID (e.g. a system with
  multiple LAN adapters on the same circuit), an adjacency with a
  lower lANAddress is retained in preference to an adjacency with a
  higher lANAddress.

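One way to realise the precedence rules above is to sort candidate
paths by a composite key and keep the first maximumPathSplits
entries. The sketch below is illustrative only; the dictionary keys
used for each path are placeholders rather than names defined by this
International Standard.

   def prune_paths(paths, maximum_path_splits):
       """Keep at most maximumPathSplits paths, applying the
       precedence rules of 7.2.7 in order: adjacency type, metric
       sum, neighbour ID, circuit ID, then lANAddress.

       Each path is represented here as a dict with the keys used
       below.
       """
       def precedence(path):
           return (
               0 if path["is_es_or_prefix_adjacency"] else 1,  # preferred types first
               path["metric_sum"],      # then lower metric sum
               path["neighbour_id"],    # then lower neighbour ID
               path["circuit_id"],      # then lower circuit ID
               path["lan_address"],     # then lower lANAddress
           )
       return sorted(paths, key=precedence)[:maximum_path_splits]
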
7.2.8 Robustness Checks
7.2.8.1 Computing Routes through Overloaded
Intermediate systems
The Decision Process shall not utilise a link to an Interme
diate system neighbour from an IS whose LSPs have the
LSP Database Overload indication set. Such paths may in
troduce loops since the overloaded IS does not have a com
plete routeing information base. The Decision Process shall,
however, utilise the link to reach End system neighbours
since these paths are guaranteed to be non-looping.
7.2.8.2 Two-way connectivity check
The Decision Process shall not utilise a link between two
Intermediate Systems unless both ISs report the link.
NOTE - the check is not applicable to links to an End Sys
tem.
Reporting the link indicates that it has a defined value for at
least the default routeing metric. It is permissible for two
endpoints to report different defined values of the same
metric for the same link. In this case, routes may be asym
metric.

7.2.9 Construction of a Forwarding Database
The information that is needed in the forwarding database
for routeing metric k is the set of adjacencies for each sys
tem N.
7.2.9.1 Identification of Nearest Level 2 IS by a
Level 1 IS
Level 1 Intermediate systems need one additional piece of
information per routeing metric: the next hop to the nearest
level 2 Intermediate system according to that routeing met
ric. A level 1 IS shall ascertain the set, R, of attached
level 2 Intermediate system(s) for metric k such that the to
tal cost to R for metric k is minimal.
If there are more adjacencies in this set than
maximumPathSplits, then the IS shall remove excess adjacencies as
described in 7.2.7.
7.2.9.2 Setting the Attached Flag in Level 2
Intermediate Systems
If a level 2 Intermediate system discovers, after computing
the level 2 routes for metric k, that it cannot reach any other
areas using that metric, it shall:
-set AttachedFlag for metric k to False;
-regenerate its Level 1 LSP with LSP number zero; and
-compute the nearest level 2 Intermediate system for
metric k for insertion in the appropriate forwarding
database, according to the algorithm described in
7.2.9.1 for level 1 Intermediate systems.
NOTE - AttachedFlag for each metric k is examined by the
Update Process, so that it will report the value in the ATT
field of its Link State PDUs.
If a level 2 Intermediate system discovers, after computing
the level 2 routes for metric k, that it can reach at least one
other area using that metric, it shall
-set AttachedFlag for metric k to True;
-regenerate its Level 1 LSP with LSP number zero; and
-set the level 1 forwarding database entry for metric k
which corresponds to nearest level 2 Intermediate
system to Self.
7.2.10 Information for Repairing Partitioned
Areas
An area may become partitioned as a result of failure of one
or more links in the area. However, if each of the partitions
has a connection to the level 2 subdomain, it is possible to
repair the partition via the level 2 subdomain, provided that
the level 2 subdomain itself is not partitioned. This is illus
trated in Figure 4.
All the systems A through I, R and P are in the same area n.
When the link between D and E is broken, the area be

comes partitioned. Within each of the partitions the Parti
tion Designated Level 2 Intermediate system is selected
from among the level 2 Intermediate systems in that parti
tion. In the case of partition 1 this is P, and in the case of
partition 2 this is R. The level 1 repair path is then estab
lished between these two level 2 Intermediate sys
tems. Note that the repaired link is now between P and R,
not between D and E.
The Partition Designated Level 2 Intermediate Systems re
pair the partition by forwarding NPDUs destined for other
partitions of the area through the level 2 subdomain. They
do this by acting in their capacity as Level 1 Intermediate
Systems and advertising in their Level 1 LSPs adjacencies
to each Partition Designated Level 2 Intermediate System
in the area. This adjacency is known as a Virtual Adja
cency or Virtual Link. Thus other Level 1 Intermediate
Systems in a partition calculate paths to the other partitions
through the Partition Designated Level 2 Intermediate Sys
tem. A Partition Designated Level 2 Intermediate System
forwards the Level 1 NPDUs through the level 2 subdomain
by encapsulating them in 8473 Data NPDUs with its Virtual
Network Entity Title as the source NSAP and the adja
cent Partition Designated Level 2 Intermediate System's
Virtual Network Entity Title as the destination NSAP. The
following sub-clauses describe this in more detail.
7.2.10.1 Partition Detection and Virtual Level 1
Link Creation
Partitions of a Level 1 area are detected by the Level 2 In
termediate System(s) operating within the area.  In order to
participate in the partition repair process, these Level 2 In
termediate systems must also act as Level 1 Intermediate
systems in the area. A partition of a given area exists when
ever two or more Level 2 ISs located in that area are re
ported in the L2 LSPs as being a Partition Designated
Level 2 IS. Conversely, when only one Level 2 IS in an
area is reported as being the Partition Designated Level 2

IS, then that area is not partitioned.  Partition repair is ac
complished by the Partition Designated Level 2 IS.  The
election of the Partition Designated Level 2 IS as described
in the next subsection must be done before the detection
and repair process can begin.
In order to repair a partition of a Level 1 area, the Partition
designated Level 2 IS creates a Virtual Network Entity to
represent the partition.  The Network Entity Title for this
virtual network entity shall be constructed from the first
listed area address from its Level 2 Link State PDU, and the
ID of the Partition Designated Level 2 IS.  The IS shall also
construct a virtual link (represented by a new Virtual Adja
cency managed object) to each Partition Designated Level 2
IS in the area, with the NET of the partition recorded in the
Identifier attribute.  The virtual links are the repair paths for
the partition.  They are reported by the Partition Designated
Level 2 IS into the entire Level 1 area by adding the ID of
each adjacent Partition Designated Level 2 IS to the In
termediate System Neighbours field of its Level 1 Link
State PDU.  The Virtual Flag shall be set True for these
Intermediate System neighbours.  The metric value for this
virtual link shall be the default metric value d(N) obtained
from this system's Level 2 PATHS database, where N is the
adjacent Partition Designated Level 2 IS via the Level 2
subdomain.
An Intermediate System which operates as the Partition
Designated Level 2 Intermediate System shall perform the
following steps after completing the Level 2 shortest path
computation in order to detect partitions in the Level 1 area
and create repair paths:
a)Examine Level 2 Link State PDUs of all Level 2 Inter
mediate systems. Search areaAddresses for any ad
dress that matches any of the addresses in
partitionAreaAddresses. If a match is found, and the Parti
tion Designated Level 2 Intermediate system's ID
does not equal this system's ID, then inform the level
1 update process at this system of the identity of the

Partition Designated Level 2 Intermediate system, to
gether with the path cost for the default routeing met
ric to that Intermediate system.
b)Continue examining Level 2 LSPs until all Partition
Designated Level 2 Intermediate systems in other par
titions of this area are found, and inform the Level 1
Update Process of all of the other Partition Designated
Level 2 Intermediate systems in other partitions of this
area, so that
1)Level 1 Link State PDUs can be propagated to all
other Partition designated level 2 Intermediate sys
tems for this area (via the level 2 subdomain).
2)All the Partition Designated Level 2 Intermediate
systems for other partitions of this area can be re
ported as adjacencies in this system's Level 1 Link
State PDUs.
If a partition has healed, the IS shall destroy the associated
virtual network entity and virtual link by deleting the Vir
tual Adjacency.  The Partition Designated Level 2 IS de
tects a healed partition when another Partition Designated
Level 2 IS listed as a virtual link in its Level 1 Link State
PDU was not found after running the partition detection and
virtual link creation algorithm described above.
If such a Virtual Adjacency is created or destroyed, the IS
shall generate a partitionVirtualLinkChange notification.
7.2.10.2 Election of Partition Designated Level 2
Intermediate System
 The Partition Designated Level 2 IS is a Level 2 IS which:
-reports itself as attached by the default metric in its
LSPs;
-reports itself as implementing the partition repair op
tion;
-operates as a Level 1 IS in the area;
-is reachable via Level 1 routeing without traversing
any virtual links; and
-has the lowest ID
The election of the Partition Designated Level 2 IS is per
formed by running the decision process algorithm after the
Level 1 decision process has finished, and before the
Level 2 decision process to determine Level 2 paths is exe
cuted.
In order to guarantee that the correct Partition Designated
Level 2 IS is elected, the decision process is run using only
the Level 1 LSPs for the area, and by examining only the
Intermediate System Neighbours whose Virtual Flag is
FALSE.  The result of this decision process is the set of all
the Level 1 Intermediate Systems in the area that can be
reached via Level 1, non-virtual link routeing.  From this
set, the Partition Designated Level 2 IS is selected by
choosing the IS for which
-IS Type (as reported in the Level 1 LSP) is Level 2
Intermediate System;

-ATT  indicates attached by the default metric;
-P indicates support for the partition repair option;  and
-ID is the lowest among the subset of attached Level 2
Intermediate Systems.
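A minimal sketch of this election, assuming a record per
reachable Level 1 system with the three LSP flags named below
(the record type and field names are illustrative, not defined
by this International Standard):

    # Illustrative sketch of the election in 7.2.10.2 (not normative).
    from dataclasses import dataclass

    @dataclass
    class L1SystemInfo:
        system_id: bytes
        is_level2: bool          # IS Type reported in the Level 1 LSP
        attached_default: bool   # ATT indicates attached by the default metric
        partition_repair: bool   # P indicates support for partition repair

    def elect_partition_designated_l2_is(reachable_systems):
        # reachable_systems: every IS reachable via Level 1 routeing
        # over non-virtual links only (Virtual Flag FALSE neighbours).
        candidates = [s for s in reachable_systems
                      if s.is_level2 and s.attached_default and s.partition_repair]
        if not candidates:
            return None
        # The lowest ID among the attached Level 2 candidates is elected.
        return min(candidates, key=lambda s: s.system_id)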
7.2.10.3 Computation of Partition area addresses
A Level 2 Intermediate System shall compute the set of
partitionAreaAddresses, which is the union of all
manualAreaAddresses as reported in the Level 1 Link
State PDUs of all Level 2 Intermediate systems reachable in
the partition by the traversal of non-virtual links.  If more
than maximumAreaAddresses are present, the Interme
diate system shall retain only those areas with numerically
lowest area address (as described in 7.1.5). If one of the lo
cal system's manualAreaAddresses is so rejected the
notification manualAddressDroppedFromArea shall be
generated.
7.2.10.4 Encapsulation of NPDUs Across the
Virtual Link
All NPDUs sent over virtual links shall be encapsulated as
ISO 8473 Data NPDUs.  The encapsulating Data NPDU
shall contain the Virtual Network Entity Title of the Parti
tion Designated Level 2 IS that is forwarding the NPDU
over the virtual link in the Source Address field, and the
Virtual NET of the adjacent Partition Designated Level 2
IS in the Destination Address field.  The SEL field in
both NSAPs shall contain the IS-IS routeing selector
value.  The QoS Maintenance field of the outer PDU shall
be set to indicate forwarding via the default routeing metric
(see table 1 on page 32).
For Data and  Error Report NPDUs the Segmentation
Permitted and Error Report flags and the Lifetime field
of the outer NPDU shall be copied from the inner NPDU.
When the inner NPDU is decapsulated, its Lifetime field
shall be set to the value of the Lifetime field in the outer
NPDU.
For LSPs and SNPs the Segmentation Permitted flag
shall be set to True and the Error Report flag shall be set
to False.  The Lifetime field shall be set to 255.  When an
inner LSP is decapsulated, its remaining lifetime shall be
decremented by half the difference between 255 and the
value of the Lifetime field in the outer NPDU.
Data NPDUs shall not be fragmented before encapsulation,
unless the total length of the Data NPDU (including header)
exceeds 65535 octets.  In that case, the original Data NPDU
shall first be fragmented, then encapsulated.  In all cases,
the encapsulated Data NPDU may need to be fragmented
by ISO 8473 before transmission in which case it must be
reassembled and decapsulated by the destination Partition
Designated Level 2 IS.  The encapsulation is further de
scribed as part of the forwarding process in 7.4.3.2.  The
decapsulation is described as part of the Receive process in
7.4.4.
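The Lifetime handling above can be summarised in a small
sketch; the function and parameter names are illustrative only,
and the integer halving is an assumption of the example.

    # Illustrative sketch of the Lifetime rules of 7.2.10.4 (not normative).
    def outer_lifetime(inner_kind, inner_lifetime):
        # Data and Error Report NPDUs copy the inner Lifetime into the
        # outer NPDU; LSPs and SNPs always use an outer Lifetime of 255.
        return inner_lifetime if inner_kind in ("data", "error_report") else 255

    def lifetime_after_decapsulation(inner_kind, inner_lifetime, outer_lifetime_now):
        if inner_kind in ("data", "error_report"):
            # The inner Lifetime is set to the outer Lifetime on arrival.
            return outer_lifetime_now
        # An inner LSP has its remaining lifetime decremented by half the
        # difference between 255 and the outer Lifetime on arrival.
        return inner_lifetime - (255 - outer_lifetime_now) // 2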
7.2.11 Computation of area addresses
A Level 1 or Level 2 Intermediate System shall compute
the values of areaAddresses (the set of area addresses
for this Level 1 area), by forming the union of the sets of
manualAreaAddresses reported in the Area Addresses
field of all Level 1 LSPs with LSP number zero in the local
Intermediate system's link state database.
NOTE - This includes all source systems, whether currently
reachable or not. It also includes the local Intermediate sys
tem's own Level 1 LSP with LSP number zero.
NOTE - There is no requirement for this set to be updated
immediately on each change to the database contents. It is
permitted to defer the computation until the next running of
the Decision Process.
If more than maximumAreaAddresses are present, the
Intermediate system shall retain only those areas with nu
merically lowest area address (as described in 7.1.5). If one
of the local system's manualAreaAddresses is rejected
the notification manualAddressDroppedFromArea shall
be generated.
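A minimal sketch of this computation, assuming each zeroth
Level 1 LSP is presented as an object with an area_addresses
attribute (an assumption of the example):

    # Illustrative sketch of 7.2.11 (not normative).
    def compute_area_addresses(level1_zero_lsps, maximum_area_addresses):
        # Union of the Area Addresses fields of every Level 1 LSP with
        # LSP number zero in the local link state database, including
        # the local system's own zeroth LSP, reachable or not.
        union = set()
        for lsp in level1_zero_lsps:
            union.update(lsp.area_addresses)
        # If the union is too large, only the numerically lowest
        # addresses are retained (see 7.1.5); a real implementation would
        # also generate manualAddressDroppedFromArea when one of its own
        # manualAreaAddresses is not retained.
        return sorted(union)[:maximum_area_addresses]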
7.2.12 Order of Preference of Routes
If an Intermediate system takes part in level 1 routeing, and
determines (by looking at the area address) that a given des
tination is reachable within its area, then that destination
will be reached exclusively by use of level 1 routeing. In
particular:
a)Level 1 routeing is always based on internal metrics.
b)Amongst routes in the area, routes on which the re
quested QoS (if any) is supported are always preferred
to routes on which the requested QoS is not supported.
c)Amongst routes in the area of the same QoS, the short
est routes are preferred. For determination of the
shortest path, if a route with specific QoS support is
available, then the specified QoS metric is used, other
wise the default metric is used.
d)Amongst routes of equal cost, load splitting may be
performed.
If an Intermediate system takes part in level 1 routeing,
does not take part in level 2 routeing, and determines (by
looking at the area address) that a given destination is not
reachable within its area, and at least one attached level 2
IS is reachable in the area, then that destination will be
reached by routeing to a level 2 Intermediate system as fol
lows:
a)Level 1 routeing is always based on internal metrics.
b)Amongst routes in the area to attached level 2 ISs,
routes on which the requested QoS (if any) is sup
ported are always preferred to routes on which the re
quested QoS is not supported.
c)Amongst routes in the area of the same QoS to at
tached level 2 ISs, the shortest route is preferred. For
determination of the shortest path, if a route supporting
the specified QoS is available, then the specified QoS
metric is used, otherwise the default metric is used.

d)Amongst routes of equal cost, load splitting may be
performed.
If an Intermediate system takes part in level 2 routeing and
is attached, and the IS determines (by looking at the area
address) that a given destination is not reachable within its
area, then that destination will be reached as follows:
a)Routes on which the requested QoS (if any) is sup
ported are always preferred to routes on which the re
quested QoS is not supported.
b)Amongst routes of the same QoS, routes are priori
tised as follows:
1)Highest precedence: routes matching the area ad
dress of any area in the routeing domain
2)Medium precedence: Routes matching a reachable
address prefix with an internal metric. For destina
tions matching multiple reachable address prefix
entries all with internal metrics, the longest prefix
shall be preferred.
3)Lowest precedence: Routes matching a reachable
address prefix with an external metric. For destina
tions matching multiple reachable address prefix
entries all with external metrics, the longest prefix
shall be preferred.
c)For routes with equal precedence as specified above,
the shortest path shall be preferred. For determination
of the shortest path, a route supporting the specified
QoS is used if available; otherwise a route using the
default metric shall be used. Amongst routes of equal
cost, load splitting may be performed.
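The Level 2 preference rules above can be expressed as a single
ordering key; the Route record below and its field names are
assumptions of this illustrative sketch, not definitions from
this International Standard.

    # Illustrative sketch of the Level 2 route preference of 7.2.12
    # (not normative).
    from dataclasses import dataclass

    @dataclass
    class Route:
        qos_supported: bool   # requested QoS (if any) supported on this route
        kind: str             # "area", "prefix_internal" or "prefix_external"
        prefix_length: int    # 0 for area address matches
        cost: int             # metric sum along the path

    PRECEDENCE = {"area": 0, "prefix_internal": 1, "prefix_external": 2}

    def preference_key(r):
        # Smaller tuples are preferred: QoS support first, then the
        # precedence class, then the longest matching prefix within the
        # class, then the smallest cost.
        return (0 if r.qos_supported else 1,
                PRECEDENCE[r.kind],
                -r.prefix_length,
                r.cost)

    def preferred_routes(candidates):
        # Routes sharing the best key may be used for load splitting.
        best = min(preference_key(r) for r in candidates)
        return [r for r in candidates if preference_key(r) == best]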
7.3 The Update Process
The Update Process is responsible for generating and
propagating Link State information reliably throughout the
routeing domain.
The Link State information is used by the Decision Process
to calculate routes.
7.3.1 Input and Output
INPUT
-Adjacency Database  maintained by the Subnetwork
Dependent Functions
-Reachable Address managed objects - maintained by
System Management
-Notification of Adjacency Database Change  notifi
cation by the Subnetwork Dependent Functions that
an adjacency has come up, gone down, or changed
cost. (Circuit up, Circuit down, Adjacency Up, Adja
cency Down, and Cost change events)
-AttachedFlag  (level 2 Intermediate systems only),
a flag computed by the Level 2 Decision Process indi
cating whether this system can reach (via level 2
routeing) other areas

-Link State PDUs  The Receive Process passes Link
State PDUs to the Update Process, along with an indi
cation of which adjacency it was received on.
-Sequence Numbers PDUs  The Receive Process
passes Sequence Numbers PDUs to the Update Proc
ess, along with an indication of which adjacency it
was received on.
-Other Partitions  The Level 2 Decision Process
makes available (to the Level 1 Update Process on a
Level 2 Intermediate system) a list of (Partition Desig
nated Level 2 Intermediate system, Level 2 default
metric value) pairs, for other partitions of this area.
 OUTPUT
-Link State Database
-Signal to the Decision Process of an event, which is
either the receipt of a Link State PDU with different
information from the stored one, or the purging of a
Link State PDU from the database. The reception of a
Link State PDU which has a different sequence num
ber or Remaining Lifetime from one already stored in
the database, but has an identical variable length por
tion, shall not cause such an event.
NOTE - An implementation may compare the checksum of
the stored Link State PDU, modified according to the
change in sequence number, with the checksum of the re
ceived Link State PDU. If they differ, it may assume that the
variable length portions are different and an event signalled
to the Decision Process. However, if the checksums are the
same, an octet for octet comparison must be made in order
to determine whether or not to signal the event.
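A minimal sketch of the event rule above, assuming an LSP is
presented with its variable length portion as a byte string (an
assumption of the example):

    # Illustrative sketch of the event signalling rule (not normative).
    def should_signal_decision_process(stored_lsp, received_lsp):
        # Purges and LSPs not previously held always signal an event.
        if stored_lsp is None:
            return True
        # A change only in sequence number or Remaining Lifetime must not
        # signal an event, so the variable length portions are compared
        # octet for octet.  (As the NOTE observes, an implementation may
        # first compare checksums, adjusted for the sequence number change,
        # and fall back to this comparison only when they are equal.)
        return stored_lsp.variable_part != received_lsp.variable_part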
7.3.2 Generation of Local Link State
Information
The Update Process is responsible for constructing a set of
Link State PDUs. The purpose of these Link State PDUs is
to inform all the other Intermediate systems (in the area, in
the case of Level 1, or in the Level 2 subdomain, in the case
of Level 2), of the state of the links between the Intermedi
ate system that generated the PDUs and its neighbours.
The Update Process in an Intermediate system shall gener
ate one or more new Link State PDUs under the following
circumstances:
a)upon timer expiration;
b)when notified by the Subnetwork Dependent Func
tions of an Adjacency Database Change;
c)when a change to some Network Management charac
teristic would cause the information in the LSP to
change (for example, a change in
manualAreaAddresses).
7.3.3 Use of Manual Routeing Information
Manual routeing information is routeing information en
tered by system management. It may be specified in two
forms.

a)Manual Adjacencies
b)Reachable Addresses
These are described in the following sub-clauses.
7.3.3.1 Manual Adjacencies
An End system adjacency may be created by System Man
agement. Such an adjacency is termed a manual End sys
tem adjacency. In order to create a manual End system ad
jacency, system management shall specify:
a)the (set of) system IDs reachable over that adjacency;
and
b)the corresponding SNPA Address.
 These adjacencies shall appear as adjacencies with type
Manual, neighbourSystemType End system and
state Up. Such adjacencies provide input to the Update
Process in a similar way to adjacencies created through the
operation of ISO 9542. When the state changes to Up the
adjacency information is included in the Intermediate Sys
tem's own Level 1 LSPs.
NOTE - Manual End system adjacencies shall not be in
cluded in Level 1 LSPs issued on behalf of a pseudonode,
since that would presuppose that all Intermediate systems on
a broadcast subnetwork had the same set of manual adjacen
cies as defined for this circuit.
Metrics assigned to Manual adjacencies must be Internal
metrics.
7.3.3.2 Reachable Addresses
A Level 2 Intermediate system may have a number of
Reachable Address managed objects created by System
management. When a Reachable Address is in state On
and its parent Circuit is also in state On, the name and
each of its defined routeing metrics shall be included in
Level 2 LSPs generated by this system.
Metrics assigned to Reachable Address managed objects
may be either Internal or External.
A reachable address is considered to be active when all
the following conditions are true:
a)The parent circuit is in state On;
b)the Reachable Address is in state On; and
c)the parent circuit is of type broadcast or is in data link
state Running.
Whenever a reachable address changes from being inac
tive to active a signal shall be generated to the Update
process to cause it to include the Address Prefix of the
reachable address in the Level 2 LSPs generated by that
system as described in 7.3.9.
Whenever a reachable address changes from being active
to inactive, a signal shall be generated to the Update

process to cause it to cease including the Address Prefix of
the reachable address in the Level 2 LSPs.
7.3.4 Multiple LSPs
Because a Link State PDU is limited in size to
ReceiveLSPBufferSize, it may not be possible to include infor
mation about all of a system's neighbours in a single LSP.
In such cases, a system may use multiple LSPs to convey
this information. Each LSP in the set carries the same
sourceID field (see clause 9), but sets its own LSP Num
ber field individually. Each of the several LSPs is handled
independently by the Update Process, thus allowing distri
bution of topology updates to be pipelined. However, the
Decision Process recognises that they all pertain to a com
mon originating system because they all use the same
sourceID.
NOTE - Even if the amount of information is small enough
to fit in a single LSP, a system may optionally choose to use
several LSPs to convey it; use of a single LSP in this situ
ation is not mandatory.
NOTE - In order to minimise the transmission of redundant
information, it is advisable for an IS to group Reachable
Address Prefix information by the circuit with which it is as
sociated. Doing so will ensure that the minimum  number of
LSP fragments need be transmitted if a circuit to another
routeing domain changes state.
The maximum sized Level 1 or Level 2 LSP which may be
generated by a system is controlled by the values of the
management parameters originatingL1LSPBufferSize or
originatingL2LSPBufferSize respectively.
NOTE - These parameters should be set consistently by sys
tem management. If this is not done, some adjacencies will
fail to initialise.
The IS shall treat the LSP with LSP Number zero in a spe
cial way, as follows:
a)The following fields are meaningful to the decision
process only when they are present in the LSP with
LSP Number zero:
1)The setting of the LSP Database Overload bit.
2)The value of the IS Type field.
3)The Area Addresses option. (This is only present
in the LSP with LSP Number zero, see below).
b)When the values of any of the above items are
changed, an Intermediate System shall re-issue the
LSP with LSP Number zero, to inform other Interme
diate Systems of the change. Other LSPs need not be
reissued.
Once a particular adjacency has been assigned to a particu
lar LSP Number, it is desirable that it not be moved to an
other LSP Number. This is because moving an adjacency
from one LSP to another can cause temporary loss of

connectivity to that system. This can occur if the new ver
sion of the LSP which originally contained information
about the adjacency (which now does not contain that infor
mation) is propagated before the new version of the other
LSP (which now contains the information about the adja
cency). In order to minimise the impact of this, the follow
ing restrictions are placed on the assignment of information
to LSPs.
a)The Area Addresses option field shall occur only in
the LSP with LSP Number  zero.
b)Intermediate System Neighbours options shall occur
after the Area Addresses option and before any End
System (or in the case of Level 2, Prefix) Neigh
bours options.
c)End System (or Prefix) Neighbour options (if any)
shall occur after any Area Address or Intermediate
System Neighbour options.
NOTE - In this context, "after" means at a higher octet
number from the start of the same LSP or in an LSP with
a higher LSP Number.
NOTE  An implementation is recommended to ensure
that the number of LSPs generated for a particular system
is within approximately 10% of the optimal number
which would be required if all LSPs were densely packed
with neighbour options. Where possible this should be
accomplished by re-using space in LSPs with a lower
LSP Number for new adjacencies. If it is necessary to
move an adjacency from one LSP to another, the
SRMflags (see 7.3.15) for the two new LSPs shall be
set as an atomic action.  (If the two SRMflags are not set
atomically, a race condition will exist in which one of the
two LSPs may be propagated quickly, while the other waits for
an entire propagation cycle. If this occurs, adjacencies will
be falsely eliminated from the topology and routes may become
unstable for a period of time potentially as large as
maximumLSPGenerationInterval.)

When some event requires changing the LSP information
for a system, the system shall reissue that (or those) LSPs
which would have different contents. It is not required to
reissue the unchanged LSPs. Thus a single End system ad
jacency change only requires the reissuing of the LSP con
taining the End System Neighbours option referring to
that adjacency. The parameters
maximumLSPGenerationInterval and
minimumLSPGenerationInterval shall
apply to each LSP individually.
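One way of packing neighbour options into LSP fragments while
respecting the ordering rules a) to c) above is sketched below;
the option encodings, the size limit and the helper names are
assumptions of this example (header overhead is ignored).

    # Illustrative sketch of assigning options to LSP Numbers (not normative).
    def pack_fragments(area_addresses_option, is_neighbour_options,
                       es_neighbour_options, max_fragment_size):
        # The Area Addresses option occurs only in the LSP with LSP
        # Number zero; Intermediate System Neighbours options precede any
        # End System (or Prefix) Neighbours options.
        fragments = [[area_addresses_option]]
        used = len(area_addresses_option)
        for option in list(is_neighbour_options) + list(es_neighbour_options):
            if used + len(option) > max_fragment_size:
                fragments.append([])      # start the next LSP Number
                used = 0
            fragments[-1].append(option)
            used += len(option)
        return fragments                  # list index is the LSP Number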
7.3.5 Periodic LSP Generation
The Update Process shall periodically re-generate and
propagate on every circuit with an IS adjacency of the ap
propriate level (by setting SRMflag on each circuit), all the
LSPs (Level 1 and/or Level 2) for the local system and any
pseudonodes for which it is responsible. The Intermediate
system shall re-generate each LSP at intervals of at most
maximumLSPGenerationInterval seconds, with jitter
applied as described in 10.1.
These LSPs may all be generated on expiration of a single
timer or alternatively separate timers may be kept for each
LSP Number and the individual LSP generated on expira
tion of this timer.

7.3.6 Event Driven LSP Generation
In addition to the periodic generation of LSPs, an Interme
diate system shall generate an LSP when an event occurs
which would cause the information content to change. The
following events may cause such a change.
-an Adjacency or Circuit Up/Down event
- a change in Circuit metric
-a change in Reachable Address metric
-a change in manualAreaAddresses
-a change in systemID
-a change in Designated  Intermediate System status
-a change in the waiting status
When such an event occurs the IS shall re-generate changed
LSP(s) with a new sequence number. If the event necessi
tated the generation of an LSP which had not previously
been generated (for example, an adjacency Up event for
an adjacency which could not be accommodated in an exist
ing LSP), the sequence number shall be set to one. The IS
shall then propagate the LSP(s) on every circuit by setting
SRMflag for each circuit. The timer
maximumLSPGenerationInterval shall not be reset.
There is a hold-down timer
(minimumLSPGenerationInterval) on the generation of
each individual LSP.
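A minimal sketch of the hold-down, assuming one timer per LSP
Number (the class and attribute names are illustrative only):

    # Illustrative sketch of event driven generation with the
    # minimumLSPGenerationInterval hold-down (not normative).
    import time

    class LspGenerator:
        def __init__(self, minimum_lsp_generation_interval):
            self.hold_down = minimum_lsp_generation_interval
            self.last_generated = {}   # LSP Number -> time of last generation
            self.pending = set()       # LSP Numbers awaiting the hold-down

        def on_event(self, lsp_number, now=None):
            now = time.monotonic() if now is None else now
            last = self.last_generated.get(lsp_number, float("-inf"))
            if now - last >= self.hold_down:
                self.generate(lsp_number, now)
            else:
                # Regenerated when the hold-down expires (timer not shown).
                self.pending.add(lsp_number)

        def generate(self, lsp_number, now):
            self.last_generated[lsp_number] = now
            self.pending.discard(lsp_number)
            # ... build the LSP with a new sequence number and set SRMflag
            # on every circuit; the maximumLSPGenerationInterval timer is
            # not reset by event driven generation.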
7.3.7 Generation of Level 1 LSPs
(non-pseudonode)
The Level 1 Link State PDU not generated on behalf of a
pseudonode contains the following information in its vari
able length fields.
-In the Area Addresses option the set of
manualAreaAddresses for this Intermediate System.
-In the Intermediate System Neighbours option
the set of Intermediate system IDs of neighbouring In
termediate systems formed from:
* The set of neighbourSystemIDs with an ap
pended zero octet (indicating non-pseudonode)
from adjacencies in the state Up, on circuits of
type Point-to-Point, In or Out, with
+ neighbourSystemType L1 Intermediate
System
+ neighbourSystemType L2 Intermediate
System and adjacencyUsage Level 2 or
Level1 and 2.
The metrics shall be set to the values of Level 1
metrick of the circuit for each supported routeing
metric.
* The set of l1CircuitIDs for all circuits of type
Broadcast (i.e. the neighbouring pseudonode
IDs).

The metrics shall be set to the values of Level 1
metrick of the circuit for each supported routeing
metric.
* The set of IDs with an appended zero octet derived
from the Network Entity Titles of all Virtual Adja
cencies of this IS. (Note that the Virtual Flag is set
when encoding these entries in the LSP, see
7.2.10.)
The default metric shall be set to the total cost to
the virtual NET for the default routeing metric.
The remaining metrics shall be set to the value in
dicating unsupported.
-In the End System Neighbours option  the set of
IDs of neighbouring End systems formed from:
* The systemID of the Intermediate System itself,
with a value of zero for all supported metrics.
* The set of endSystemIDs from all adjacencies
with type Auto-configured, in state Up, on
circuits of type Point-to-Point, In or Out,
with neighbourSystemType End system.
The metrics shall be set to the values of Level 1
metrick of the circuit for each supported routeing
metric.
* The set of endSystemIDs from all adjacencies
with type Manual in state Up, on all circuits.
The metrics shall be set to the values of Level 1
metrick of the circuit for each supported routeing
metric.
-In the Authentication Information field  if the
system's areaTransmitPassword is non-null, in
clude the Authentication Information field contain
ing an Authentication Type  of Password, and the
value of the areaTransmitPassword.
7.3.8 Generation of Level 1 Pseudonode LSPs
An IS shall generate a  Level 1 pseudonode Link State PDU
for each circuit for which this Intermediate System is the
Level 1 LAN Designated Intermediate System. The LSP
shall specify the following information in its variable length
fields. In all cases a value of zero shall be used for all sup
ported routeing metrics.
-The Area Addresses option is not present.
Note - This information is not required since the set of
area addresses for the node issuing the pseudonode
LSP will already have been made available via its own
non-pseudonode LSP.
-In the Intermediate System Neighbours option
the set of Intermediate System IDs of neighbouring In
termediate Systems on the circuit for which this
pseudonode LSP is being generated formed from:
* The Designated Intermediate System's own sys
temID with an appended zero octet (indicating
non-pseudonode).

* The set of neighbourSystemIDs with an ap
pended zero octet (indicating non-pseudonode)
from adjacencies on this circuit in the state Up,
with
+ neighbourSystemType L1 Intermediate
System
+ neighbourSystemType L2 Intermediate
System and adjacencyUsage Level 1.
-In the End System Neighbours option  the set of
IDs of neighbouring End systems formed from:
* The set of endSystemIDs from all adjacencies
with type Auto-configured, in state Up, on
the circuit for which this pseudonode is being gen
erated, with neighbourSystemType End sys
tem.
-In the Authentication Information field  if the
system's areaTransmitPassword is non-null, in
clude the Authentication Information field contain
ing an Authentication Type  of Password, and the
value of the areaTransmitPassword.
7.3.9 Generation of Level 2 LSPs
(non-pseudonode)
The Level 2 Link State PDU not generated on behalf of a
pseudonode contains the following information in its vari
able length fields:
-In the Area Addresses option  the set of
areaAddresses for this Intermediate system computed as
described in 7.2.11.
-In the Partition Designated Level 2 IS option  the
ID of the Partition Designated Level 2 Intermediate
System for the partition.
-In the Intermediate System Neighbours option
the set of Intermediate system IDs of neighbouring In
termediate systems formed from:
* The set of neighbourSystemIDs with an ap
pended zero octet (indicating non-pseudonode)
from adjacencies in the state Up, on circuits of
type Point-to-Point, In or Out, with neigh
bourSystemType L2 Intermediate System.
* The set of l2CircuitIDs for all circuits of type
Broadcast. (i.e. the neighbouring pseudonode
IDs)
The metric and metric type shall be set to the val
ues of Level 2 metrick of the circuit for each sup
ported routeing metric.
-In the Prefix Neighbours option  the set of vari
able length prefixes formed from:
* The set of names of all Reachable Address man
aged objects in state On, on all circuits in state
On.

The metrics shall be set to the values of Level 2
metrick for the reachable address.
-In the Authentication Information field  if the
system's domainTransmitPassword is non-null,
include the Authentication Information field con
taining an Authentication Type  of Password, and
the value of the domainTransmitPassword.
7.3.10 Generation of Level 2 Pseudonode LSPs
A Level 2 pseudonode Link State PDU is generated for
each circuit for which this Intermediate System is the
Level 2 LAN Designated Intermediate System and contains
the following information in its variable length fields. In all
cases a value of zero shall be used for all supported route
ing metrics.
-The Area Addresses option is not present.
Note - This information is not required since the set of
area addresses for the node issuing the pseudonode
LSP will already have been made available via its own
non-pseudonode LSP.
-In the Intermediate System Neighbours option
the set of Intermediate System IDs of neighbouring In
termediate Systems on the circuit for which this
pseudonode LSP is being generated formed from:
* The Designated Intermediate System's own sys
temID with an appended zero octet (indicating
non-pseudonode).
* The set of neighbourSystemIDs with an ap
pended zero octet (indicating non-pseudonode)
from adjacencies on this circuit in the state Up
with neighbourSystemType L2 Intermediate
System.
-The Prefix Neighbours option is not present.
-In the Authentication Information field  if the
system's domainTransmitPassword is non-null,
include the Authentication Information field con
taining an Authentication Type  of Password, and
the value of the domainTransmitPassword.
7.3.11 Generation of the Checksum
This International Standard makes use of the checksum
function defined in ISO 8473.
The source IS shall compute the LSP Checksum when the
LSP is generated. The checksum shall never be modified by
any other system. The checksum allows the detection of
memory corruptions and thus prevents both the use of in
correct routeing information and its further propagation by
the Update Process.
The checksum shall be computed over all fields in the LSP
which appear after the Remaining Lifetime field. This
field (and those appearing before it) are excluded so that the
LSP may be aged by systems without requiring re-
computation.

As an additional precaution against hardware failure, when
the source computes the Checksum, it shall start with the
two checksum variables (C0 and C1) initialised to what
they would be after computing for the systemID portion
(i.e. the first 6 octets) of its Source ID. (This value is com
puted and stored when the Network entity is enabled and
whenever systemID changes.) The IS shall then resume
Checksum computation on the contents of the PDU after
the first ID Length octets of the Source ID field.
NOTE - All Checksum calculations on the LSP are per
formed treating the Source ID field as the first octet. This
procedure prevents the source from accidentally sending out
Link State PDUs with some other system's ID as source.
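The checksum function of ISO 8473 is the Fletcher checksum over
two eight-bit running sums.  The sketch below shows only the
running sums and the pre-initialisation precaution described
above; selection of the two check octet values so that a
receiver's verification succeeds is as specified in ISO 8473 and
is not shown.

    # Illustrative sketch of the Fletcher running sums (not normative).
    def fletcher_sums(data, c0=0, c1=0):
        for octet in data:
            c0 = (c0 + octet) % 255
            c1 = (c1 + c0) % 255
        return c0, c1

    def checksum_seed_for_source(system_id):
        # Precaution of 7.3.11: start from the sums over the systemID
        # portion of the Source ID, computed when the Network entity is
        # enabled and whenever systemID changes.
        return fletcher_sums(system_id)

    # At generation time (sketch):
    #   c0, c1 = checksum_seed_for_source(system_id)
    #   c0, c1 = fletcher_sums(pdu_after_source_id, c0, c1)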
7.3.12 Initiating Transmission
The IS shall store the generated Link State PDU in the Link
State Database, overwriting any previous Link State PDU
with the same LSP Number generated by this system. The
IS shall then set all SRMflags for that Link State PDU, in
dicating it is to be propagated on all circuits with Intermedi
ate System adjacencies.
An Intermediate system shall ensure (by reserving re
sources, or otherwise) that it will always be able to store
and internalise its own non-pseudonode zeroth LSP. In the
event that it is not capable of storing and internalising one
of its own LSPs it shall enter the overloaded state as de
scribed in 7.3.19.1.
NOTE - It is recommended that an Intermediate system en
sure (by reserving resources, or otherwise) that it will al
ways be able to store and internalise all its own (zero and
non-zero, pseudonode and non-pseudonode) LSPs.
7.3.13 Preservation of order
When an existing Link State PDU is re-transmitted (with
the same or a different sequence number), but with the
same information content (i.e. the variable length part) as a
result of there having been no changes in the local topology
databases, the order of the information in the variable
length part shall be the same as that in the previously trans
mitted LSP.
NOTE - If a sequence of changes result in the state of the
database returning to some previous value, there is no re
quirement to preserve the ordering. It is only required when
there have been no changes whatever. This allows the re
ceiver to detect that there has been no change in the infor
mation content by performing an octet for octet comparison
of the variable length part, and hence not re-run the decision
process.
7.3.14 Propagation of LSPs
The update process is responsible for propagating Link
State PDUs throughout the domain (or in the case of
Level 1, throughout the area).
The basic mechanism is flooding, in which each Intermedi
ate system propagates to all its neighbour Intermediate sys
tems except that neighbour from which it received the
PDU. Duplicates are detected and dropped.

Link state PDUs are received from the Receive Process.
The maximum size control PDU (Link State PDU or Se
quence Numbers PDU) which a system expects to receive
shall be ReceiveLSPBufferSize octets. (i.e. the Update
process must provide buffers of at least this size for the re
ception, storage and forwarding of received Link State
PDUs and Sequence Numbers PDUs.)  If a control PDU
larger than this size is received, it shall be treated as if it
had an invalid checksum (i.e. ignored by the Update Proc
ess and a corruptedLSPReceived notification generated).
Upon receipt of a Link State PDU the Update Process shall
perform the following functions:
a)Level 2 Link State PDUs shall be propagated on cir
cuits which have at least one Level 2 adjacency.
b)Level 1 Link State PDUs shall be propagated on cir
cuits which have at least one Level 1 adjacency or at
least one Level 2 adjacency not marked Level 2
only.
c)When propagating a Level 1 Link State PDU on a
broadcast subnetwork, the IS shall transmit to the
multi-destination subnetwork address AllL1IS.
d)When propagating a Level 2 Link State PDU on a
broadcast subnetwork, the IS shall transmit to the
multi-destination subnetwork address AllL2IS.
NOTE  When propagating a Link State PDU on a
general topology subnetwork the Data Link Address
is unambiguous (because Link State PDUs are not
propagated across Dynamically Assigned circuits).
e)An Intermediate system receiving a Link State PDU
with an incorrect LSP Checksum or with an invalid
PDU syntax shall
1)log a circuit notification, corruptedLSPRe
ceived,
2)overwrite the Checksum and Remaining Lifetime
with 0, and
3)treat the Link State PDU as though its Remaining
Lifetime had expired (see 7.3.16.4.)
f)An Intermediate system receiving a Link State PDU
which is new (as identified in 7.3.16) shall
1)store the Link State PDU into Link State database,
and
2)mark it as needing to be propagated upon all cir
cuits except that upon which it was received.
g)When an Intermediate system receives a Link State
PDU from source S, which it considers older than the
one stored in the database for S, it shall set the
SRMflag for S's Link State PDU associated with the
circuit from which the older Link State PDU was re
ceived. This indicates that the stored Link State PDU
needs to be sent on the link from which the older one
was received.

h)When a system receives a Link State PDU which is
the same (not newer or older) as the one stored, the In
termediate system shall
1)acknowledge it if necessary, as described in 7.3.17,
and
2)clear the SRMflag for that circuit for that Link
State PDU.
i)A Link State PDU received with a zero checksum
shall be treated as if the Remaining Lifetime were 0.
The age, if not 0, shall be overwritten with 0.
The Update Process scans the Link State Database for Link
State PDUs with SRMflags set. When one is found, pro
vided the timestamp lastSent indicates that it was propa
gated no more recently than
minimumLSPTransmissionInterval, the IS shall
a)transmit it on all circuits with SRMflags set, and
b)update lastSent.
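A minimal sketch of this scan, assuming each stored LSP carries
a set of circuits with SRMflag set and a lastSent timestamp
(names are assumptions of the example):

    # Illustrative sketch of the SRMflag scan (not normative).
    import time

    def scan_and_transmit(link_state_database, minimum_lsp_transmission_interval,
                          transmit, now=None):
        now = time.monotonic() if now is None else now
        for lsp in link_state_database:
            if not lsp.srm_flags:
                continue                 # nothing to propagate for this LSP
            if now - lsp.last_sent < minimum_lsp_transmission_interval:
                continue                 # propagated too recently
            for circuit in list(lsp.srm_flags):
                # SRMflag is cleared per 7.3.15: immediately after
                # transmission on broadcast circuits, on acknowledgement
                # on non-broadcast circuits.
                transmit(circuit, lsp)
            lsp.last_sent = now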
7.3.15 Manipulation of SRM and SSN Flags
For each Link State PDU, and for each circuit over which
routeing messages are to be exchanged (i.e. not on DA cir
cuits), there are two flags:
Send Routeing Message (SRMflag)  if set, indicates that
Link State PDU should be transmitted on that cir
cuit.  On broadcast circuits SRMflag is cleared as
soon as the LSP has been transmitted, but on non-
broadcast circuits SRMflag is only cleared on recep
tion of a Link State PDU or Sequence Numbers
PDU as described below.
        SRMflag shall never be set for an LSP with se
quence number zero, nor on a circuit whose exter
nalDomain attribute is True (See 7.3.15.2).
Send Sequence Numbers (SSNflag)  if set, indicates that
information about that Link State PDU should be in
cluded in a Partial Sequence Numbers PDU trans
mitted on that circuit.  When the Sequence Numbers
PDU has been transmitted SSNflag is cleared.  Note
that the Partial Sequence Numbers PDU serves as an
acknowledgement that a Link State PDU was re
ceived.
        SSNflag shall never be set on a circuit whose ex
ternalDomain attribute is True.
7.3.15.1 Action on Receipt of a Link State PDU
 When a Link State PDU is received on a circuit C, the IS
shall perform the following functions
a)Perform the following PDU acceptance tests:
1)If the LSP was received over a circuit whose ex
ternalDomain attribute is True, the IS shall dis
card the PDU.
2)If the ID Length field of the PDU is not equal to
the value of the IS's routeingDomainIDLength,

the PDU shall be discarded and an iDField
LengthMismatch notification generated.
3)If this is a level 1 LSP, and the set of areaRe
ceivePasswords is non-null, then perform the
following tests:
i)If the PDU does not contain the Authentica
tion Information field then the PDU shall be
discarded and an authenticationFailure no
tification generated.
ii)If the PDU contains the Authentication In
formation field, but the Authentication
Type is not equal to Password, then the
PDU shall be accepted unless the IS imple
ments the authentication procedure indicated
by the Authentication Type. In this case
whether the IS accepts or ignores the PDU is
outside the scope of this International Stan
dard.
iii)Otherwise, the IS shall compare the password
in the received PDU with the passwords in the
set of areaReceivePasswords, augmented
by the value of the areaTransmitPassword.
If the value in the PDU matches any of these
passwords, the IS shall accept the PDU for
further processing. If the value in the PDU
does not match any of the above values, then
the IS shall ignore the PDU and generate an
authenticationFailure notification.
4)If this is a level 2 LSP, and the set of domainRe
ceivePasswords is non-null, then perform the
following tests:
i)If the PDU does not contain the Authentica
tion Information field then the PDU shall be
discarded and an authenticationFailure no
tification generated.
ii)If the PDU contains the Authentication In
formation field, but the Authentication
Type is not equal to Password, then the
PDU shall be accepted unless the IS imple
ments the authentication procedure indicated
by the Authentication Type. In this case
whether the IS accepts or ignores the PDU is
outside the scope of this International Stan
dard.
iii)Otherwise, the IS shall compare the password
in the received PDU with the passwords in the
set of domainReceivePasswords, aug
mented by the value of the domainTransmit
Password. If the value in the PDU matches
any of these passwords, the IS shall accept the
PDU for further processing. If the value in the
PDU does not match any of the above values,
then the IS shall ignore the PDU and generate
an authenticationFailure notification.
b)If the LSP has zero Remaining Lifetime, perform the
actions described in 7.3.16.4.
c)If the source S of the LSP is an IS or pseudonode for
which all but the last octet are equal to the systemID

of the receiving Intermediate System, and the receiv
ing Intermediate System does not have that LSP in its
database, or has that LSP, but no longer considers it to
be in the set of LSPs generated by this system (e.g. it
was generated by a previous incarnation of the sys
tem), then initiate a network wide purge of that LSP as
described in 7.3.16.4.
d)If the source S of the LSP is a system (pseudonode or
otherwise) for which the first ID Length octets are
equal to the systemID of the receiving Intermediate
system, and the receiving Intermediate system has an
LSP in the set of currently generated LSPs from that
source in its database (i.e. it is an LSP generated by
this Intermediate system), perform the actions de
scribed in 7.3.16.1.
e)Otherwise, (the source S is some other system),
1)If the LSP is newer than the one in the database, or
if an LSP from that source does not yet exist in the
database:
i)Store the new LSP in the database, overwriting
the existing database LSP for that source (if
any) with the received LSP.
ii)Set SRMflag for that LSP for all circuits
other than C.
iii)Clear SRMflag for C.
iv)If C is a non-broadcast circuit, set SSNflag
for that LSP for C.
v)Clear SSNflag for that LSP for the circuits
other than C.
2)If the LSP is equal to the one in the database (same
Sequence Number, Remaining Lifetimes both zero
or both non-zero, same checksums):
i)Clear SRMflag for C.
ii)If C is a non-broadcast circuit, set SSNflag
for that LSP for C.
3)If the LSP is older than the one in the database:
i)Set SRMflag for C.
ii)Clear SSNflag for C.
When storing a new LSP, the Intermediate system shall first
ensure that it has sufficient memory resources to both store
the LSP and generate whatever internal data structures will
be required to process the LSP by the Update Process.  If
these resources are not available the LSP shall be ignored.
It shall neither be stored nor acknowledged. When an LSP
is ignored for this reason the IS shall enter the Waiting
State. (See 7.3.19).
When attempting to store a new version of an existing LSP
(with the same LSPID), which has a length less than or
equal to that of the existing LSP, the existing LSP shall be
removed from the routeing information base and the new
LSP stored as a single atomic action. This ensures that such
an LSP (which may be carrying the LSP Database Overload
indication from an overloaded IS) will never be ignored as
a result of a lack of memory resources.
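Step e) above can be summarised in a short sketch; the database,
circuit and comparison interfaces are assumptions of this
example (compare() applies the newer/equal/older rules of
7.3.16).

    # Illustrative sketch of step e) of 7.3.15.1 (not normative).
    def handle_lsp_from_other_source(db, lsp, C, all_circuits, compare):
        stored = db.get(lsp.lsp_id)
        if stored is None or compare(lsp, stored) == "newer":
            db[lsp.lsp_id] = lsp                      # overwrite any stored copy
            lsp.srm_flags = set(all_circuits) - {C}   # flood on all circuits but C
            # Acknowledge on C if it is non-broadcast; clear SSN elsewhere.
            lsp.ssn_flags = {C} if not C.broadcast else set()
        elif compare(lsp, stored) == "equal":
            stored.srm_flags.discard(C)
            if not C.broadcast:
                stored.ssn_flags.add(C)
        else:                                         # received copy is older
            stored.srm_flags.add(C)                   # send the newer stored copy
            stored.ssn_flags.discard(C)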

7.3.15.2 Action on Receipt of a Sequence Numbers
PDU
When a Sequence Numbers PDU (Complete or Partial, see
7.3.17) is received on circuit C the IS shall perform the fol
lowing functions:
a)Perform the following PDU acceptance tests:
1)If the SNP was received over a circuit whose ex
ternalDomain attribute is True, the IS shall dis
card the PDU.
2)If the ID Length field of the PDU is not equal to
the value of the IS's routeingDomainIDLength,
the PDU shall be discarded and an
iDFieldLengthMismatch notification generated.
3)If this is a level 1 SNP and the set of areaRe
ceivePasswords is non-null, then perform the
following tests:
i)If the PDU does not contain the Authentica
tion Information field then the PDU shall be
discarded and an authenticationFailure no
tification generated.
ii)If the PDU contains the Authentication In
formation field, but the Authentication
Type is not equal to Password, then the
PDU shall be accepted unless the IS imple
ments the authentication procedure indicated
by the Authentication Type. In this case
whether the IS accepts or ignores the PDU is
outside the scope of this International Stan
dard.
iii)Otherwise, the IS shall compare the password
in the received PDU with the passwords in the
set of areaReceivePasswords, augmented
by the value of the areaTransmitPassword.
If the value in the PDU matches any of these
passwords, the IS shall accept the PDU for
further processing. If the value in the PDU
does not match any of the above values, then
the IS shall ignore the PDU and generate an
authenticationFailure notification.
4)If this is a level 2 SNP, and the set of domainRe
ceivePasswords is non-null, then perform the
following tests:
i)If the PDU does not contain the Authentica
tion Information field then the PDU shall be
discarded and an authenticationFailure no
tification generated.
ii)If the PDU contains the Authentication In
formation field, but the Authentication
Type is not equal to Password, then the
PDU shall be accepted unless the IS imple
ments the authentication procedure indicated
by the Authentication Type. In this case
whether the IS accepts or ignores the PDU is
outside the scope of this International Stan
dard.

iii)Otherwise, the IS shall compare the password
in the received PDU with the passwords in the
set of domainReceivePasswords, aug
mented by the value of the domainTransmit
Password. If the value in the PDU matches
any of these passwords, the IS shall accept the
PDU for further processing. If the value in the
PDU does not match any of the above values,
then the IS shall ignore the PDU and generate
an authenticationFailure notification.
b)For each LSP reported in the Sequence Numbers
PDU:
1)If the reported value equals the database value and
C is a non-broadcast circuit, Clear SRMflag for C
for that LSP.
2)If the reported value is older than the database
value, Clear SSNflag, and Set SRMflag.
3)If the reported value is newer than the database
value, Set SSNflag, and if C is a non-broadcast
circuit Clear SRMflag.
4)If no database entry exists for the LSP, and the re
ported Remaining Lifetime, Checksum and Se
quence Number fields of the LSP are all non-
zero, create an entry with sequence number 0 (see
7.3.16.1), and set SSNflag for that entry and cir
cuit C.  Under no circumstances shall SRMflag be
set for such an LSP with zero sequence number.
NOTE - This is because possessing a zero sequence
number LSP is semantically equivalent to having no
information about that LSP.  If such LSPs were
propagated by setting SRMflag it would result in an
unnecessary consumption of both bandwidth and
memory resources.
c)If the Sequence Numbers PDU is a Complete Se
quence Numbers PDU, Set SRMflags for C for all
LSPs in the database (except those with zero sequence
number or zero remaining lifetime) with LSPIDs
within the range specified for the CSNP by the Start
LSPID and End LSPID fields, which were not men
tioned in the Complete Sequence Numbers PDU (i.e.
LSPs this system has, which the neighbour does not
claim to have).
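A minimal sketch of step b), assuming each reported entry
carries the LSPID, Remaining Lifetime, Checksum and Sequence
Number fields, and that make_lsp() is a hypothetical helper
creating a database entry with sequence number zero:

    # Illustrative sketch of step b) of 7.3.15.2 (not normative).
    def handle_snp_entry(db, entry, C, compare, make_lsp):
        stored = db.get(entry.lsp_id)
        if stored is None:
            if entry.remaining_lifetime and entry.checksum and entry.sequence_number:
                placeholder = make_lsp(entry.lsp_id, sequence_number=0)
                placeholder.ssn_flags = {C}    # request the LSP from the neighbour;
                placeholder.srm_flags = set()  # never set SRMflag for it
                db[entry.lsp_id] = placeholder
            return
        relation = compare(entry, stored)      # newer/equal/older per 7.3.16
        if relation == "equal":
            if not C.broadcast:
                stored.srm_flags.discard(C)
        elif relation == "older":
            stored.ssn_flags.discard(C)
            stored.srm_flags.add(C)
        else:                                  # the neighbour's copy is newer
            stored.ssn_flags.add(C)
            if not C.broadcast:
                stored.srm_flags.discard(C)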
7.3.15.3 Action on expiration of Complete SNP
Interval
The IS shall perform the following actions every
CompleteSNPInterval seconds for circuit C:
a)If C is a broadcast circuit, then
1)If this Intermediate system is a Level 1 Designated
Intermediate System on circuit C, transmit a com
plete set of Level 1 Complete Sequence Numbers
PDUs on circuit C. Ignore the setting of SSNflag
on Level 1 Link State PDUs.
If the value of the IS's areaTransmitPassword
is non-null, then the IS shall include the Authenti
cation Information field in the transmitted

CSNP, indicating an Authentication Type of
Password and containing the areaTransmit
Password as the authentication value.
2)If this Intermediate system is a Level 2 Designated
Intermediate System on circuit C, transmit a com
plete set of Level 2 Complete Sequence Numbers
PDUs on circuit C. Ignore the setting of SSNflag
on Level 2 Link State PDUs.
If the value of the IS's domainTransmitPass
word is non-null, then the IS shall include the
Authentication Information field in the trans
mitted CSNP, indicating an Authentication Type
of Password and containing the domainTrans
mitPassword as the authentication value.
A complete set of CSNPs is a set whose startLSPID
and endLSPID ranges cover the complete possible
range of LSPIDs. (i.e. there is no possible LSPID
value which does not appear within the range of one
of the CSNPs in the set).  Where more than one CSNP
is transmitted on a broadcast circuit, they shall be
separated by an interval of at least
minimumBroadcastLSPTransmissionInterval seconds.
NOTE - An IS is permitted to transmit a small number
of CSNPs (no more than 10) with a shorter separation in
terval, (or even back to back), provided that no more than
1000/minimumBroadcastLSPTransmissionInterval
CSNPs are transmitted in any one second period.
b)Otherwise (C is a point to point circuit, including non-
DA DED circuits and virtual links), do nothing.
CSNPs are only transmitted on point to point circuits
at initialisation.
7.3.15.4 Action on expiration of Partial SNP
Interval
The maximum sized Level 1 or Level 2 PSNP which may
be generated by a system is controlled by the values of
originatingL1LSPBufferSize or
originatingL2LSPBufferSize respectively. An Intermediate system shall per
form the following actions every partialSNPInterval sec
onds for circuit C with jitter applied as described in 10.1:
a)If C is a broadcast circuit, then
1)If this Intermediate system is a Level 1 Intermedi
ate System or a Level 2 Intermediate System with
manualL2OnlyMode False, but is not a
Level 1 Designated