Distributed competitive satellite optical burst switching mechanism for satellite networks with ultra-high link delays

Abstract. The ultra-high link latency of the conventional central reservation switching method in satellite optical networks limits link utilization. In this study, a novel fully distributed competitive satellite optical burst switching (DC-SOBS) mechanism was proposed to overcome the drawbacks of the optical circuit switching method and achieve link multiplexing. Furthermore, the proposed method addressed the low link utilization caused by the high link latency of central-control methods. In a ring network of six Geostationary Orbit (GEO) satellites, the DC-SOBS mechanism improved link utilization by more than two orders of magnitude compared with the reservation method. A data conflict processing method was proposed to ensure that the same information is provided to the various optical switching nodes, so that the same algorithm yields a consistent scheduling result at each node.

network capacity. In the methods proposed in Refs. 9-12, data are sent only after link resources have been obtained, because resources are limited and feedback is required at each switching node along the whole transmission link. In Ref. 13, a burst chain switching mechanism was proposed to improve link utilization and avoid conflict. However, this method requires link reservation, which is not suitable for long-delay satellite links. Yan et al. 14 proposed a method to implement OPS in a data center network. However, the method requires additional buffers and fiber delay lines, which is not suitable for engineering implementation on satellites. Tode et al. 15 proposed an OCS/OPS service offload method to reduce the probability of optical switching conflicts.
In Refs. 16-18, an optical time-slice switching method was proposed. This method requires system synchronization to complete the optical time-slice service demarcation. Furthermore, a centralized controller is required to allocate the link resources before service transmission. In Refs. 5, 19, and 20, a timeslot-based optical switching method was proposed; however, this approach requires strict network synchronization and a ring topology. References 3, 8, 21, and 22 provide a hardware basis for the distributed competitive satellite optical burst switching (DC-SOBS) mechanism in this study. An optical signal switching time of ∼2.3 μs was realized based on the experimental results in Ref. 8. Zhai et al. 3 implemented a novel OBS switching hardware platform that is suitable for implementing the DC-SOBS switching method proposed in this paper. Since a wavelength division multiplexing system requires high power, and wavelength-dependent crosstalk in space is currently difficult to reduce, 5 we considered a single-wavelength data channel for analysis and evaluation. The impact of dense wavelength division multiplexing (DWDM) on the algorithm performance and link bandwidth will be considered in future work. Wang et al. 1 and Kumar et al. 23 proposed a burst assembly algorithm and a satellite OBS node structure; however, only the implementation of the optical network accessing node was presented, and the optical switching node was not addressed.
In this study, a distributed switching mechanism based on OBS was proposed considering the characteristics of high-orbit satellite link transmission delay. The proposed mechanism does not require network time synchronization or a central control node. Instead, each node in the network completes the scheduling and switching independently and supports both end-to-end guaranteed and hop-to-hop competitive transmission services. Therefore, this technique not only ensures transmission reliability but also considerably improves the utilization of satellite links and is suitable for Geostationary Orbit (GEO) satellite optical networks with a small number of satellites.
In the conventional OBS method, a fixed processing time is preset between the burst data packet (BDP) and burst control packet (BCP) to ensure that each switching node in the network can successfully schedule and switch BDPs. In the proposed method, this fixed processing time is divided into four parts, namely, registration, conflict detection, conflict information distribution, and switching scheduling, to support the proposed DC-SOBS mechanism. The nodes in the network do not require time synchronization. To reduce the computational complexity of the nodes, a scheduling-time-window length variable is introduced to divide the scheduled BDPs into subsets for processing. The BDP collision detection and results distribution algorithm sends conflict information to downstream switching nodes. Each switching node obtains the conflict state of the previous nodes and adopts the same scheduling algorithm to produce switching results consistent with the related nodes. Because conflict chains form between BDPs, which could lead to an unbounded chain of conflict information, two BDP hop count constraint variables are introduced to limit each conflict chain to a specific range, ensuring that the transmission of conflict information completes within a limited time.

Satellite Network Topology
The satellite optical switching network includes two types of nodes, namely, boundary accessing nodes and internal switching nodes. Photoelectric aggregation and de-aggregation processing is completed at the accessing nodes of the satellite optical switching network. 1,24,25 The optical switching function is performed at the internal nodes of the network. As shown in Fig. 1(a), in a six-GEO-satellite ring network, each GEO satellite completes both the accessing and switching functions. The relationship between the number of GEO satellites and coverage can be seen in Ref. 26: as the number of GEO satellites increases, the global coverage gap decreases until multi-satellite coverage of key regions is achieved. In this study, the same six-GEO-satellite ring network as in Ref. 5 is used for analysis and latency performance comparison, and the proposed network topology can achieve better two-GEO-satellite coverage. The satellite optical switching network domain is independent of the electrical network domain: independent addressing and routing are performed in the optical switching network, and the accessing node of the optical switching network includes the gateway function.

Data Format and Switching Type
BDPs and BCPs are transmitted in the optical switching system. BDPs are transparently transmitted through aggregation at the boundary nodes of the network. BCPs include burst registration packets (BRPs), burst registration response packets (BRPas), and switching conflict detection packets (BCoLs). BRPs are generated at the boundary source node of the satellite optical network, BCoLs are generated at the optical switching nodes, and BRPas are generated at all switching nodes that BDPs pass through. Detailed descriptions of the various packet types are presented in Tables 1-6. Each optical link is composed of a control channel and a data channel, which transmit BCPs and BDPs, respectively. The accessing node of the satellite optical network completes the generation, transmission, and reception of BCPs and BDPs. A switching node can obtain the arrival time, type, destination address, and other information of each BDP through the corresponding BRP (one type of BCP) and send the switching collision results, BCoLs, to other nodes.
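To make the registration data concrete, the following is a minimal sketch of the per-node registration table using the BRP fields named in Algorithm 1 (Dst, Src, BdNum, BdArrT, BdHopC, BdHopR); the class and function names are illustrative, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class BrpRecord:
    """Hypothetical in-memory record for one registered BDP, built from
    the BRP fields listed in Algorithm 1."""
    dst: int         # destination accessing-node address (Dst)
    src: int         # source accessing-node address (Src)
    bd_num: int      # BDP identification number (BdNum)
    bd_arr_t: float  # expected BDP arrival time at this node, s (BdArrT)
    bd_hop_c: int    # hops already traversed (BdHopC)
    bd_hop_r: int    # remaining hops (BdHopR)

# ArriInfTable: the per-node registration table, keyed by BDP number.
arri_inf_table: dict = {}

def register_brp(rec: BrpRecord) -> None:
    """BRP relay step: record the arrival time and forwarding information
    so that later collision detection can look the BDP up by number."""
    arri_inf_table[rec.bd_num] = rec
```

Collision detection (Algorithm 2) then only needs to scan this table for entries whose arrival intervals overlap on the same egress port.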
To satisfy the transmission requirements of various types of services, the proposed satellite optical switching method supports both random collision switching and link reservation switching, as shown in Fig. 2. The boundary node of the satellite optical network sends the corresponding BRP before sending a BDP. The time interval between BRPs and BDPs is reserved according to the total processing time of the various types of BCPs in the optical network, so the arrival time and switching information of a BDP are available after its BRP is received. After being notified of a BDP transmission failure, the source node backs off by a random time value within one scheduling cycle and resends the BDP until it is successfully sent.
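The retransmission rule above (random backoff within one scheduling cycle) can be sketched as follows; the function name and the uniform backoff distribution are assumptions for illustration.

```python
import random

def backoff_resend_time(t_fail: float, t_cycle: float,
                        rng: random.Random) -> float:
    """On a reported BDP transmission failure at time t_fail, the source
    node waits a random fraction of one scheduling cycle t_cycle before
    resending; repeated until the BDP is successfully sent."""
    return t_fail + rng.uniform(0.0, t_cycle)
```

Randomizing the retry instant spreads out repeated collisions between the same pair of competing BDPs.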

DC-SOBS Mechanism
At the boundary source node of the optical switching network, the electrical packets sent to the same destination node through the optical network are aggregated according to the BDP format and temporarily stored in the high-speed BDP buffer for transmission. Because all-optical transparent transmission and switching is used for BDPs in the data channel of the network, no specific packet format is required when generating a BDP. A functional diagram of the accessing node is shown in Fig. 3. The length of a BDP is specified as a fixed value T_BDP, and T_G is the protection time of the optical switching operation at the beginning and end of a BDP. The effective length of a BDP is T_Bl, which is related to T_G as shown in Eq. (1):

T_Bl = T_BDP − 2T_G.  (1)

If the remaining space is not sufficient to encapsulate the last electrical frame, the current BDP encapsulation is completed and that electrical frame is encapsulated into the next BDP. When the transmission time of the current BDP is reached, regardless of whether the BDP is fully encapsulated, the electrical packets are transformed into a fixed-length BDP and sent to the optical network. BDP de-aggregation is performed at the destination boundary accessing node of the optical network. Each BRP and BDP form a group, and the transmissions of each group are independent of each other. Because the processing time of each optical switching node is reserved, the BRP is sent first at time t_Sbr, and the corresponding BDP is sent at time t_Sbd. The offset interval between each BRP and BDP is T_offset = t_Sbd − t_Sbr, which satisfies Eq. (2) (Fig. 4). After the optical switching nodes receive BRPs and BCoLs, they complete the switching processing independently. Each optical switching node executes four algorithms, namely, BRP relay, BDP collision detection, BDP collision results distribution, and BDP switching scheduling.
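The fixed-length encapsulation rule above (a frame that does not fit in the effective payload T_Bl = T_BDP − 2T_G is deferred to the next BDP) can be sketched as follows; the function name and the duration-based frame representation are illustrative assumptions.

```python
def aggregate_frames(frame_durs, t_bdp=100e-6, t_g=5e-6):
    """Pack electrical frames (given here as channel-time durations, s)
    into fixed-length BDPs with effective payload T_Bl = T_BDP - 2*T_G.
    A frame that does not fit in the remaining payload closes the current
    BDP and starts the next one."""
    t_bl = t_bdp - 2 * t_g        # effective BDP length, Eq. (1)
    bdps, cur, used = [], [], 0.0
    for d in frame_durs:
        if used + d > t_bl and cur:
            bdps.append(cur)      # close the partially filled BDP
            cur, used = [], 0.0
        cur.append(d)
        used += d
    if cur:
        bdps.append(cur)
    return bdps
```

With T_BDP = 100 μs and T_G = 5 μs, three 40 μs frames fill two BDPs: two frames fit in the 90 μs payload, and the third spills into the next BDP.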
BDP collision detection is performed at each current node, and the detection results are sent to the corresponding downstream nodes. After s_i receives the detection results of the previous-hop nodes, it executes the BDP switching scheduling iteration to obtain the scheduling results and configure the optical devices. A functional diagram of the optical switching node is shown in Fig. 3. When the timer reaches the scheduled time point, the switching node schedules all the BDPs in one subset schedule window. Each egress port has a scheduling unit, which obtains the results within a fixed time and configures the optical devices to complete the switching. The time relationship of each egress port is shown in Fig. 5. For each BDP_i, BdArrT_i is obtained when BRP_i arrives at the switching node. When the time remaining before the earliest BDP arrival equals the reserved scheduling time T_SP, the switching node starts the BDP switching scheduling. One switching scheduling pass covers all BRPs within one scheduling window time t_WD: all BRP_j with arrival times BdArrT_j satisfying Eq. (3) are scheduled in the current scheduling cycle. According to the scheduling result,

performing the configuration operation before BDP_i arrives completes the switching function. The trigger time of the next scheduling cycle is determined by the arrival time of the first BDP_k outside the current scheduling window:

BdArrT_i ≤ BdArrT_j < BdArrT_i + t_WD.  (3)

Formally, the topology of the network in Fig. 1(b) is defined by an undirected graph G(S, L), where the switching nodes in the satellites are the vertices S and the communication links connecting them are the edges L. s_i represents the i'th switching node, S = {s_1, s_2, ..., s_N}, and the total number of switching nodes in the network is N. l_i represents the i'th laser link, L = {l_1, l_2, ..., l_M}, and the total number of links is M. B represents the set of BDPs in the optical switching network, b_i represents the i'th BDP, and B = {b_1, b_2, ...}. Br represents the set of BRPs, br_i denotes the BRP corresponding to b_i, and Br = {br_1, br_2, ...}. V_i(S*_i, P_i, T_i) denotes the set of switching nodes, egress ports, and time slices that b_i occupies in the optical switching network, where S*_i is the switching-node set, P_i is the set of egress ports through which b_i is switched, and T_i is the set of b_i arrival times. h_i is the total number of switching nodes passed by b_i. v_ik(s*_ik, p_ik, t_ik) indicates that b_i arrives at s*_ik at time t_ik and is output from egress port p_ik. S*_i = {s*_i1, s*_i2, ..., s*_ih_i}, P_i = {p_i1, p_i2, ..., p_ih_i}, and T_i = {t_i1, t_i2, ..., t_ih_i}. Because the DC-SOBS mechanism is adopted in each switching node, node s_q must determine whether b_i can successfully arrive when performing b_i switching scheduling. Therefore, s_q must obtain all BCoLs of b_i from the preceding nodes s_j, j ∈ [1, q − 1], and apply the same scheduling algorithm to iteratively obtain the scheduling result at s_q.
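The scheduling-window grouping of Eq. (3) can be sketched as follows; `schedule_windows` and its plain-list representation are illustrative names, not the paper's implementation.

```python
def schedule_windows(arrivals, t_wd):
    """Partition BDP arrival times into scheduling-window subsets:
    each window starts at the earliest unscheduled arrival BdArrT_i and
    covers every BdArrT_j with BdArrT_i <= BdArrT_j < BdArrT_i + t_wd
    (Eq. (3)); the first arrival outside the window triggers the next
    scheduling cycle."""
    windows, i, arr = [], 0, sorted(arrivals)
    while i < len(arr):
        start = arr[i]
        w = [t for t in arr[i:] if t < start + t_wd]
        windows.append(w)
        i += len(w)
    return windows
```

Grouping by window bounds the number of BDPs each scheduling pass must consider, which is the stated purpose of the t_WD variable.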
Because a b_j that conflicts with b_i at one of the preceding q − 1 nodes may also conflict with some b_k at another node, a conflict chain is formed between b_i, b_j, and b_k in the network. Each collision requires a scheduling iteration to determine which of b_i, b_j, and b_k can pass. The collision chain of BDPs is shown in Fig. 6. When scheduling BDP1 at S7, not only the conflict at S7 but also the scheduling results of BDP1 to BDP5 from S1 to S6 must be considered because a collision chain is formed from BDP1 to BDP5. The dashed BDPs in Fig. 6 are those discarded at the current switching node.
To limit the number of scheduling iterations in each switching node to a certain range, the extent of the collision chain must be limited. Two hop count constraints were added for each BDP: TTL_bh, the collision hop limit, and TTL_bc, the cumulative collision limit. TTL_bh controls the length of each collision chain branch. One variable TTL_bhi corresponds to each b_i, with initial value TTL_bhi = TTL_bh. When b_i is involved in a conflict, TTL_bhi is reduced by 1, and b_i is discarded when TTL_bhi reaches 0. The function of TTL_bc is to control the number of branches in a collision chain. One variable TTL_bci also corresponds to each b_i, with initial value TTL_bci = TTL_bc. The counting rule takes the minimum TTL_bci over all conflicting BDPs, minus 1, as the new TTL_bci of all the colliding BDPs. Figure 7 shows a collision example with TTL_bc = 4. Because the two collision constraints (TTL_bh and TTL_bc) are set, the current BDP is directly discarded when a collision occurs at a hop count from TTL_bh to H_max, whereas when the conflict of the next BDP occurs at hops 1 to TTL_bh − 1, the collision chain continues to link to the next BDP, as displayed in Fig. 7. All BDPs in the collision chain are discarded at the next switching node when the TTL_bc constraint is reached. Thus, the maximum hop length of the collision chain, L_ColMax, is given by Eq. (4), and the iteration cycle parameter is set to N_sch = L_ColMax. H_max is the maximum number of hops of BDPs in the satellite optical switching network.
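The two TTL bookkeeping rules can be sketched as follows. The closed form used for L_ColMax mirrors the chain-length term that appears in Eq. (12); the exact form of Eq. (4) is an assumption here.

```python
def on_collision(ttl_bh_i, ttl_bc_values):
    """Apply the two hop-count constraints when BDPs collide:
    the BDP's own TTL_bh decreases by 1 (it is discarded at 0), and
    every colliding BDP takes min(TTL_bc over the set) - 1 as its
    new TTL_bc, which caps the number of chain branches."""
    new_bc = min(ttl_bc_values) - 1
    return ttl_bh_i - 1, new_bc

def l_col_max(h_max, ttl_bc, ttl_bh):
    """Assumed maximum collision-chain hop length (cf. the chain-length
    factor of Eq. (12)); the iteration count is N_sch = L_ColMax."""
    return h_max - 1 + (ttl_bc - 1) * (ttl_bh - 1)
```

With the simulation settings H_max = 3 and TTL_bc = TTL_bh = 2, this gives L_ColMax = N_sch = 3.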
The switching nodes execute the DC-SOBS mechanism proposed in this study in four steps: BRP relay, BDP collision detection, BDP collision results distribution, and BDP switching scheduling. The BRP relay registers the arrival time, forwarding address, remaining hops, and other information of BDPs. The BRP relay processing is presented in Algorithm 1.
After receiving a BRP, the switching node opens the BRP relay wait time window, as shown in Fig. 5, and starts BDP collision detection when the window timer satisfies the constraint. The relay wait time window of br_i is t_cwi, as given by Eq. (5). BDP collision detection determines whether br_i has a conflict and then generates BCoLs according to the detection results; the process is shown in Algorithm 2. When the accessing node of the network generates BDPs, the transmission interval between each BRP and BDP is fixed, and a processing time overhead is introduced at each switching node. The BRP processing delay is reserved according to the maximum BRP processing delay over all switching nodes; therefore, the time interval between the BRP and BDP is reduced by t_BRmax after each switching node. If a switching node receives the br_i of b_i, then the br_j of any other b_j in conflict with b_i will arrive within t_cwi. When the wait timer Timer_cwi = t_cwi, the node executes the steps in Algorithm 2, searching ArriInfTable for conflicting BDPs and generating a BCoL frame. When the BCoL processing timer Timer_BCoLi = t_BCmax, the switching node sends the BCoL frame. After the collision detection of s*_ij is complete, the node starts the BDP collision results distribution, as shown in Algorithm 3. The collision result distribution ensures that the BCoLs related to b_i reach all relevant subsequent switching nodes. For example, the partial switching path of BDP_1 is s_{i−1}, s_i, and s_j, and the partial switching path of BDP_2 is s_{i−1}, s_i, and s_{i+1}. At egress port A of switching node s_{i−1}, BDP_1 and BDP_2 conflict; with scheduling, BDP_2 is forwarded from egress port A of s_{i−1} to switching node s_i, and BDP_1 is discarded.
At egress port A of switching node s_i, BDP_2 and BDP_3 conflict; BDP_3 is forwarded from egress port A of s_i to switching node s_{i+1} with scheduling, and BDP_2 is discarded. If there is no conflict at switching node s_{i+1}, BDP_3 passes through s_{i+1} to switching node s_{i+2}. BDP_4 and BDP_3 conflict at switching node s_{i+2}. The scheduling result discards b_3 and allows b_4 to pass. Because the collision information of BDP_1 and BDP_2 can only be obtained at switching nodes on their switching paths, switching node s_{i+2} cannot obtain the collision information of BDP_1. However, when scheduling BDP_3 and BDP_4, switching node s_{i+2} must consider the collision between BDP_2 and BDP_3 at s_i and between BDP_1 and BDP_2 at s_{i−1} to determine whether BDP_3 was dropped at a previous node. Here, BDP_1 is a hidden conflict BDP for switching node s_{i+2}. Therefore, it is necessary to copy BCoL_12, the collision detection result of BDP_1, to switching node s_{i+2} and the related subsequent switching nodes. Each egress port performs BCoL processing independently.
As shown in Fig. 5, when the latest BDP arrival time satisfies the constraints in Eq. (6), the switching scheduling of all BDPs in the schedule window is performed. Here, t_SHmax is the scheduling computation time, and t_SCmax is the configuration time. During BDP switching scheduling, BDP validity detection is performed first, and each egress port conducts BDP validity detection independently. By iterating over the received BCoLs of the previous switching nodes, the BDPs that can reach the current switching node are screened, and these BDPs are then scheduled to obtain the results. According to the results, the optical switching device is configured, and switching is completed when the BDPs reach the switching node. The BDP switching scheduling is displayed in Algorithm 4.
First, the identification numbers of all b_i in the current scheduling cycle are obtained; the collision information of these b_i is then searched in ColTab to obtain their conflict status at the previous switching nodes. BDP validity detection and switching scheduling are performed according to the collision tree. Each node of the collision tree is a conflict switching node on the BDP switching path, and the information in the node is the collision information related to the BDP at that conflict switching node. The next-level collision tree nodes are established according to the conflict information: each upper-level switching node of a BDP, from the various ingress ports, that conflicts at the current node corresponds to a lower-level collision tree node. The root node is the current switching node, and the construction method of the collision tree is shown in Fig. 9, where BDP_1 and BDP_4 establish a collision tree at switching node s_i. All nodes at the same level of the collision tree are judged simultaneously. If the collision tree level of a BDP is greater than or equal to N_sch, the BDP exceeds the collision-restriction conditions and is directly discarded. BDPs that are not directly discarded are judged iteratively from the leaf nodes to the root switching node. According to TTL_bc and TTL_bh, BDP validity detection determines whether the BDP is effective at the current switching node: if the BDP passes, it is valid; if the BDP was discarded, it is invalid. The next BDP collision tree is then judged. After all BDP judgments of the current scheduling cycle are completed, the results of the current node are obtained.
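A strongly simplified sketch of the leaf-to-root judgment follows. The survival rule used here, that a BDP passes a conflict node only if every conflicting upstream branch was itself discarded, is an assumption standing in for the paper's full validity detection with TTL_bh and TTL_bc; the depth cutoff at N_sch matches the collision-restriction condition in the text.

```python
class ColNode:
    """One collision-tree node: the switching node where a conflict
    occurred, with the conflicting upstream branches as children."""
    def __init__(self, node_id, children=()):
        self.node_id = node_id
        self.children = list(children)

def judge(root, level=0, n_sch=3):
    """Judge a BDP's validity by recursing from the root toward the
    leaves and combining results on the way back up (leaf-to-root).
    A branch at depth >= N_sch exceeds the collision restrictions and
    counts as discarded."""
    if level >= n_sch:
        return False
    if not root.children:        # no recorded upstream conflict
        return True
    # Simplified rule: the BDP survives this conflict node only if all
    # conflicting competitors were already discarded upstream.
    return all(not judge(c, level + 1, n_sch) for c in root.children)
```

In practice the per-node decision would come from the scheduling algorithm (here FCFS) rather than this all-or-nothing rule; the sketch only shows how hidden conflicts propagate through the tree.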

In this study, a simple first-come-first-served (FCFS) scheduling strategy is used to verify the realizability and performance of the switching method. After BDP validity detection in each scheduling cycle, the BDPs valid at the current switching node are scheduled, with reserved BDPs scheduled first; a reserved BDP can pre-empt a competitive BDP. Because BDPs at the end of the current cycle may span into the next scheduling cycle and affect its scheduling, the BDPs of the previous cycle must be considered when scheduling the next cycle. The specific switching scheduling algorithm is as follows:
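The algorithm listing itself is given in the paper; as a minimal sketch, FCFS with reserved pre-emption on a single egress port could look like the following, where the tuple representation and function name are assumptions.

```python
def fcfs_schedule(bdps, t_bdp):
    """FCFS scheduling for one egress port. bdps is a list of
    (arrival_time, reserved_flag, bdp_id) tuples. Reserved BDPs are
    placed first (so they pre-empt competitive ones); within each class,
    BDPs are taken in arrival order, and a BDP is accepted only if its
    fixed-length slot does not overlap an already accepted slot."""
    accepted, busy = [], []           # busy: list of (start, end) slots
    for arr, res, bid in sorted(bdps, key=lambda b: (not b[1], b[0])):
        if all(arr + t_bdp <= s or arr >= e for s, e in busy):
            busy.append((arr, arr + t_bdp))
            accepted.append(bid)
    return accepted
```

In the example below, a reserved BDP overlapping an earlier competitive BDP wins the port, so the competitive BDP is dropped (and would be retransmitted after a random backoff).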

Analysis of transmission interval T_offset
The reserved processing time in T_offset includes four parts: the BRP relay wait time T_BR of the switching node, the collision detection time T_CD, the BDP collision results distribution time T_CR, and the switching scheduling time T_SP, as shown in Eq. (8). With the DC-SOBS mechanism, BRP processing in the electrical domain introduces processing overhead, whereas a BDP incurs no such overhead when passing through a switching node. Therefore, the time interval between a BDP and its BRP shortens at each network switching node. When a network accessing node sends a BRP and BDP pair, it must ensure that the interval T_offset still satisfies the processing time of the last optical switching node when the BDP reaches the last switching node on its path. The reservation-time relationship of each part of T_offset is shown in Fig. 10; a quantitative analysis follows.

First, T_BR is analyzed. The BRP relay processing time of b_i reserved in T_offset is T_BR(b_i), which satisfies the constraint of Eq. (9), where t_BRj is the BRP processing time of s*_ij and h_i is the number of switching nodes passed by b_i:

T_BR(b_i) ≥ Σ_{j=1}^{h_i} t_BRj.  (9)

Second, the BDP collision detection time T_CD is analyzed. The time switching node s_n takes to perform BDP collision detection is t_CDn. Each s_n has a different t_CDn, so it is difficult for an accessing node to obtain t_CD1, t_CD2, ..., t_CDN. Let t_CDmax be the upper bound of t_CDn, as shown in Eq. (10); thus, we take T_CD = t_CDmax. Third, the BDP collision results distribution time T_CR is analyzed.
After completing BDP collision detection, the detection results (BCoLs) should be sent to all relevant subsequent switching nodes, and all BCoLs must be received before those nodes perform switching scheduling. The maximum scheduling iteration period is N_sch, so T_CR reserves time for receiving the BCoLs of N_sch levels of switching nodes. The analyses of T_CR and T_BR are therefore similar. We set t_CRmax as the upper bound of t_CRn, as expressed in Eq. (11). T_CR satisfies the constraint in Eq. (12), where T_BCoL is the sending time of a BCoL:

T_CR = (H_max − 1 + (TTL_bc − 1)(TTL_bh − 1)) t_CRmax + T_BCoL.  (12)

Fourth, the BDP switching scheduling time T_SP is analyzed. T_SP includes the fixed schedule window time t_WD, the switching scheduling time t_SHmax, and the configuration time t_SCmax. The fixed window time t_WD is a user-set constant. The switching schedule time of each s_n is distinct; t_SHmax is set as the maximum of the t_SHn over S, as displayed in Eqs. (13)-(15).

t_BRmax = max(t_BRn), t_CDmax = max(t_CDn), t_CRmax = max(t_CRn), t_SHmax = max(t_SHn), t_SCmax = max(t_SCn).
In summary, the time interval T_offset is expressed by the following equation:

T_offset = T_BR + T_CD + T_CR + T_SP.  (16)
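The four-part reservation can be assembled numerically as follows. The forms T_BR = H_max · t_BRmax and T_SP = t_WD + t_SHmax + t_SCmax are assumed here from the worst-case reservations described in the text; T_CR follows Eq. (12).

```python
def t_offset(h_max, ttl_bc, ttl_bh,
             t_br_max, t_cd_max, t_cr_max, t_bcol,
             t_wd, t_sh_max, t_sc_max):
    """Assemble T_offset = T_BR + T_CD + T_CR + T_SP (Eq. (16)) from
    per-part worst-case reservations (all times in seconds)."""
    t_br = h_max * t_br_max                      # BRP relay, all hops
    t_cd = t_cd_max                              # collision detection
    t_cr = ((h_max - 1 + (ttl_bc - 1) * (ttl_bh - 1))
            * t_cr_max + t_bcol)                 # Eq. (12)
    t_sp = t_wd + t_sh_max + t_sc_max            # window + compute + config
    return t_br + t_cd + t_cr + t_sp
```

Because every term is a network-wide maximum, any accessing node can compute the same T_offset without coordinating with the switching nodes.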

Simulation and Discussion
The optical switching method proposed in this study was evaluated through simulation by constructing a network simulation environment consisting of six uniformly distributed ring-shaped GEO satellites labeled S_1 to S_6. Each GEO satellite had two receiving and two transmitting intersatellite laser links with its two adjacent GEO satellites, one of which was the BCP transmission link and the other the BDP transmission link. Each satellite switching node used port 0 for the clockwise direction and port 1 for the counterclockwise direction. GEO satellites provide communication services for ground equipment through microwave links on ground port 2. Each GEO satellite performs photoelectric convergence and de-convergence between the ground-equipment electrical-domain data and the intersatellite optical-domain data, as well as GEO intersatellite BDP switching. The network simulation topology is shown in Fig. 1(b). The orbital altitude of each GEO satellite was 35,786 km, and the distance between adjacent GEO satellites was 42,166 km. Therefore, the transmission delay of the intersatellite links was set to 141 ms, and the intersatellite BDP link rates were 10, 40, and 100 Gb/s on one wavelength. 16 Because a six-node ring network was formed, the maximum number of hops in the optical switching network was H_max = 3, with TTL_bc = 2 and TTL_bh = 2. The scheduling window width t_WD was set to 1 ms, and the pre- and post-protection interval was T_G = 5 μs. Here, T_BDP was set to 100, 200, and 500 μs.
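The link geometry above is easy to check: for a uniform ring of six satellites, the chord between neighbors equals the orbital radius (2r·sin(π/6) = r), so with r ≈ 6378 + 35,786 ≈ 42,164 km the inter-satellite distance and the ∼141 ms one-way delay follow directly. The function name and the exact Earth-radius value are illustrative assumptions.

```python
import math

def geo_ring_link(n_sats=6, r_orbit_km=42164.0):
    """Chord distance (km) between adjacent satellites in a uniform GEO
    ring of n_sats satellites, and the one-way propagation delay (s).
    r_orbit ~= 6378 km Earth radius + 35,786 km GEO altitude."""
    c_km_s = 299_792.458                       # speed of light, km/s
    d_km = 2.0 * r_orbit_km * math.sin(math.pi / n_sats)
    return d_km, d_km / c_km_s
```

For n_sats = 6 the chord equals the radius, giving ≈42,164 km and a delay that rounds to 141 ms, consistent with the simulation setting.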
The satellite platform adopted a field-programmable gate array (FPGA) to implement the scheduling algorithm. The BCP link rate was set to 10 Gb/s. A Xilinx xc7vx690tffg1927-2 FPGA was selected to process the platform's switching controls and to evaluate the settings of the simulation parameters t_BRmax, t_CRmax, t_CDmax, t_SCmax, and t_SHmax. The FPGA operating clock frequency was set to 156.25 MHz (a clock cycle of 6.4 ns). The interface processing section used two Xilinx 10G IP cores with a 64-bit data width and a 512-bit internal data width. Using the lengths of the different BCP types and the algorithm processing times, t_BRmax, t_CRmax, t_CDmax, t_SCmax, and t_SHmax were calculated (see Table 7). t_BRmax consists of the time to send and receive BRP data plus the processing time: eight clock cycles to send and receive BRP frames, and another 20 cycles each to process and to update the BRP registration and frame information. The estimated processing time of t_BRmax in the FPGA was therefore 307.2 ns, and 500 ns was selected as a typical value for simulation. The value of t_CRmax is similar to t_BRmax, but the BCoL frame is 84 bytes; therefore, four transceiver processing cycles were added. t_CRmax had a predicted processing time of 332.8 ns within the FPGA, and 500 ns was selected as a typical value for the simulation. The conflict detection process was completed by table lookup and comparison in the FPGA. Since the maximum number of hops in the network was three and the maximum depth of the conflict table was set to 100 entries, the FPGA can meet the requirements for storing conflict information. Five clock cycles are used for each table-entry read and conflict comparison; therefore, the estimated processing time of t_CDmax in the FPGA was 3200 ns, and 5 μs was selected as a typical value for simulation.
t_SCmax contains two parts, FPGA control signal generation and optical device configuration, where the control signal generation time is 20 cycles and the optical device configuration time is 2300 ns according to Ref. 8. Therefore, t_SCmax was 2428 ns, and 5 μs was selected as a typical value for simulation. Since the scheduling iterates over the collision tree and judges all the collision data, with a processing period of 1000 clock cycles per iteration and a maximum of three hops in the network, the estimated processing time of t_SHmax within the FPGA is 19.2 μs, and 20 μs was selected as a typical value for the simulation. The performances of the DC-SOBS, conventional OCS, and reservation optical burst switching (R-OBS) mechanisms were then compared through simulation in three respects: link utilization, packet loss rate, and delay.
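The Table 7 estimates can be checked numerically from the cycle counts in the text; the 48- and 52-cycle totals below are read off the stated 307.2 ns and 332.8 ns results (8 send/receive + 20 + 20 cycles, plus 4 transceiver cycles for the 84-byte BCoL), and are assumptions to that extent.

```python
def fpga_timings(f_clk_hz=156.25e6):
    """Worst-case processing-time estimates (s) at a 156.25 MHz FPGA
    clock (6.4 ns cycle), per the cycle counts given in the text."""
    t = 1.0 / f_clk_hz  # one clock cycle, 6.4 ns
    return {
        "t_BRmax": 48 * t,              # BRP send/recv + process + update
        "t_CRmax": 52 * t,              # as above + 4 cycles for 84-B BCoL
        "t_CDmax": 5 * 100 * t,         # 5 cycles per conflict-table entry
        "t_SCmax": 20 * t + 2300e-9,    # control signals + device config
        "t_SHmax": 1000 * 3 * t,        # 1000 cycles/iteration x 3 hops
    }
```

Each simulation value (500 ns, 500 ns, 5 μs, 5 μs, 20 μs) then simply rounds the estimate up with margin.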
Each satellite node sends BDPs to randomly chosen one-hop, two-hop, and three-hop satellite nodes to compare the satellite optical network throughput and link utilization of the DC-SOBS method proposed in this article with those of R-OBS (Refs. 9 and 10) and OCS (Refs. 27 and 28). The BDPs generated by each switching node obey a Poisson distribution. 29 Each intersatellite link load ranged from 10% to 100%, and the BDP lengths were 100, 200, and 500 μs. With R-OBS, a centralized control node was required to complete the management and allocation of network link resources; the centralized control function was placed at node 1 in Fig. 1(b).
The simulation results shown in Fig. 11 reveal that the satellite link utilization was extremely low under R-OBS because of the link transmission delay; the maximum link utilization was only ∼0.0038. With OCS, a link cannot be multiplexed among BDPs with various destinations. In the one-hop scenario, the link utilization of the circuit switching approach increased linearly with the network load and can reach the maximum link utilization of 1. However, in the two-hop and three-hop randomly generated BDP simulation environments, the link utilization decreased considerably as the hop count increased, reaching only 0.25 and 0.11, respectively, under 100% load. Compared with the two conventional switching approaches, the proposed DC-SOBS approach achieved the maximum link utilization in the one-hop, two-hop, and three-hop scenarios. In the three-hop simulation scenario, when the network load reached 100%, the maximum link utilization was 0.563, 0.415, and 0.37 for BDP lengths of 500, 200, and 100 μs, respectively. In the two-hop scenario, the link utilization was 0.659, 0.553, and 0.512 for BDP lengths of 500, 200, and 100 μs, respectively. In the one-hop scenario, the link utilization was the same as that of OCS.
The results for the BDP loss ratio are shown in Fig. 12. With the R-OBS method, the entire path for a BDP is reserved before sending, so the BDP loss ratio is 0. With the OCS method, the packet loss rate increased with the number of switching hops because the random BDP destination generation resulted in more BDPs falling outside the OCS-established links; the maximum packet loss rate, reached under the three-hop simulation conditions, was ∼88.7%. With the DC-SOBS method, the trend of the BDP loss ratio mirrored the trend of its link utilization, and the DC-SOBS mechanism exhibited a considerably lower BDP loss rate than OCS. Under the same network load, as the BDP length decreased, the BDP arrival rate increased and thus the conflict probability increased. The maximum packet loss rate was 63% when the network load reached 100% with a BDP length of 100 μs and three hops.
Because only one end-to-end connection can be established at a time on an optical link with the OCS mechanism, link multiplexing is not possible, which leads to an infinite transmission delay for some BDPs in the randomly generated destination address simulation scenarios. Therefore, the transmission delay simulation compared only the R-OBS and DC-SOBS mechanisms. The results of the time-delay simulations are shown in Fig. 13. With the R-OBS method, the average transmission delay increased as the network load and hop count increased and as the BDP length decreased; it reached a minimum of 14.1 s at the minimum network load of 10% and a maximum of 2878.9 s when the network load reached 100%. With the DC-SOBS method, the latency was primarily determined by the number of BDP retransmissions. The maximum average transmission delay was 0.55 s when the network load was 100%, the path was three hops, and the BDP length was 500 μs; the minimum was 0.14 s when the network load was 10%, the path was one hop, and the BDP length was 100 μs. Therefore, the simulation comparison reveals that the latency performance was improved by three orders of magnitude.
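Since the DC-SOBS latency is dominated by retransmissions, a back-of-envelope model (our sketch with hypothetical parameter names, not the paper's simulator) treats the number of attempts as geometric with success probability 1 − p, so on average p/(1 − p) retry rounds are paid on top of the multi-hop propagation delay:

```python
def expected_dcsobs_delay(per_hop_prop_s, hops, p_conflict, retry_round_s):
    """Expected end-to-end BDP delay: h-hop propagation plus, on
    average, p/(1-p) retransmission rounds (geometric attempt count
    with success probability 1-p), each costing one retry timeout."""
    extra_rounds = p_conflict / (1.0 - p_conflict)
    return per_hop_prop_s * hops + extra_rounds * retry_round_s
```

With no conflicts the delay reduces to pure propagation; raising the per-attempt conflict probability increases the delay steeply, consistent with the load dependence shown in Fig. 13.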
A BDP length of 500 μs was selected for the link throughput and buffer size simulation analysis. The single-link throughput and buffer sizes were compared for one, two, and three hops with random services at link rates of 10, 40, and 100 Gb∕s. Since data are buffered only while the source node forms BDPs, the buffer sizes are the average values required per optical link to reach the set transmit load ratio. As shown in Fig. 14(a), for the same link rate, the link throughput was mainly affected by the number of hops. At a 100 Gb∕s link rate, a maximum throughput of 98 Gb∕s (including a 10 μs protection interval) was achieved at one hop without conflict; as the number of hops increased, the link throughput decreased to 55 Gb∕s at three hops. This is because the probability of BDP conflict increases with the number of hops, reducing the link throughput. As shown in Fig. 14(b), the average buffer size required per link increased with the link rate and number of hops, reaching a maximum of 5600 MB at 100 Gb∕s under three-hop conditions. Since the average buffer size is proportional to the data transmission delay, increasing the hop count increases both the transmission delay and the BDP conflict probability, which raises the average end-to-end BDP delay; therefore, the average buffer size increases.
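The statement that the average buffer size is proportional to the transmission delay is a bandwidth-delay product relation. A minimal sketch of this reading (ours, not the paper's formula): back-solving the Fig. 14(b) maximum, an average delay of about 0.448 s at 100 Gb/s yields ≈5600 MB, comparable to the 0.55 s three-hop maximum average delay reported above.

```python
def avg_buffer_bytes(link_rate_gbps, avg_delay_s):
    """Bandwidth-delay product: bytes buffered at the source while
    previously assembled BDPs are still in flight on the link."""
    return link_rate_gbps * 1e9 / 8.0 * avg_delay_s
```

At a fixed delay, the estimate scales linearly with link rate, matching the growth of buffer size with rate in Fig. 14(b).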

Conclusion
In this study, a novel DC-SOBS mechanism was proposed for satellite ultra-high-delay laser links. This mechanism supports both end-to-end reservation and hop-to-hop competition switching, depending on the quality-of-service requirements. The processing time of the optical switching node is preset in the offset time between each BCP and BDP pair. Each optical switching node independently schedules BDPs according to the received information, realizing a distributed switching schedule. To solve the problem of BDP collisions in distributed scheduling, BDP collision detection, collision result distribution, and BDP switching scheduling were introduced. The simulation results revealed that the DC-SOBS mechanism achieved higher link utilization than R-OBS under the ultra-high delay of satellite links; the throughput and buffer sizes were also evaluated for link rates of 10, 40, and 100 Gb∕s, meeting satellite link rate requirements. Furthermore, compared with