
Commit d8a392f

Update revised codes for camera ready
1 parent 1414c26 commit d8a392f

7 files changed

Lines changed: 473 additions & 622 deletions

README.md

Lines changed: 18 additions & 23 deletions
@@ -1,17 +1,17 @@
 # SKYFALL

-SKYFALL for Low-earth orbit (LEO) satellite networks (LSNs). This repository contains code for paper #109: "Time-varying Bottleneck Links in LEO Satellite Networks: Identification, Exploits, and Countermeasures", to appear at the 32nd Network and Distributed System Security Symposium (NDSS 2025). The DOI is 10.5281/zenodo.13723143. The target URL is https://zenodo.org/doi/10.5281/zenodo.13723142.
+SKYFALL for low-Earth orbit (LEO) satellite networks (LSNs). This repository contains code for the paper "Time-varying Bottleneck Links in LEO Satellite Networks: Identification, Exploits, and Countermeasures", to appear at the 32nd Network and Distributed System Security Symposium (NDSS 2025).

 ## What is SKYFALL?

-SKYFALL helps you to analyze bottleneck ground-satellite links (GSLs) and how to deploy malicious terminals (where and how many) to accordingly achieve link-flooding attacks of various throughput degradations.
+SKYFALL helps you analyze bottleneck ground-satellite links (GSLs) and the possible consequences when they are congested.

 ## What are the components?

 1. A configuration file (`config.json`).
 2. A code directory (`skyfall`).
 3. Bash scripts to run the experiments (`*.sh`).
-4. Links of reproduced data, satellite geo-information, and network data ([`starlink_shell_one-3600-backup`](https://drive.google.com/file/d/1VauMH0Dm6CLrvr9cGfB6mLm6YlLt9QQf/view?usp=sharing) and [`starlink_shell_one-100-backup`](https://drive.google.com/file/d/1py1jELENHA4I_RcOwxnMk4lYSNEdhu92/view?usp=sharing)). The storage for the datasets are 3.3GB and 95MB respectively.
+4. Links to reproduced data, satellite geo-information, and network data ([`starlink_shell_one-3600-backup`](https://drive.google.com/file/d/1rTuCinLNDnB9q8lyPyZgIaXxpHscX5my/view?usp=drive_link) and [`starlink_shell_one-100-backup`](https://drive.google.com/file/d/1eNZg-OF8xsjjjJNGbJ_8kE_j0x-MtSFR/view?usp=drive_link)). The datasets are 3.3 GB and 104 MB, respectively.

 ## Preparation

@@ -44,19 +44,17 @@ Or if you have a virtual environment like Conda, you can simply run `bash ./inst
 bash find_vital_GS.sh 3600 64
 ```

-5. Time-Slot Analysis (as shown in Section 5.1. You should specify the number of timeslots, the number of available logical processors for multi-thread, and throughput degradation, e.g., 3600, 64, and 0.9). Throughput degradation could be 1, 0.9, 0.8, 0.7, 0.6, and 0.5 as shown in Section 6. Thus, run the following commands sequentially to analyze the deployment for various degradations: (three hours taken with our hardware, half an hour for each command below)
+5. Time-Slot Analysis (as shown in Section 5.1). Specify the number of timeslots, the number of available logical processors for multithreading, and the throughput degradation, e.g., 3600, 64, and 0.9. The degradation can be 0.9, 0.8, 0.7, 0.6, or 0.5, as shown in Section 6. Run the following commands sequentially to analyze the deployment for each degradation:
 ```
-bash time_slot_analysis.sh 3600 64 1
 bash time_slot_analysis.sh 3600 64 0.9
 bash time_slot_analysis.sh 3600 64 0.8
 bash time_slot_analysis.sh 3600 64 0.7
 bash time_slot_analysis.sh 3600 64 0.6
 bash time_slot_analysis.sh 3600 64 0.5
 ```
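The five invocations above differ only in the degradation argument; a minimal driver sketch (hypothetical, assuming `time_slot_analysis.sh` sits in the current directory) loops over them:

```python
import os
import subprocess

# Degradation levels evaluated in Section 6 of the paper.
DEGRADATIONS = ["0.9", "0.8", "0.7", "0.6", "0.5"]

def analysis_commands(timeslots="3600", processors="64"):
    """Build one time_slot_analysis.sh invocation per degradation level."""
    return [["bash", "time_slot_analysis.sh", timeslots, processors, d]
            for d in DEGRADATIONS]

if __name__ == "__main__":
    for cmd in analysis_commands():
        if os.path.exists(cmd[1]):
            subprocess.run(cmd, check=True)  # run for real when the script exists
        else:
            print("would run:", " ".join(cmd))
```

The same pattern applies to the demo run by passing `"100"` and `"8"` as arguments.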

-6. Aggregated_deployment (as shown in Section 5.2. You should specify the number of timeslots and throughput degradation, e.g., 3600 and 0.9). Throughput degradation could be like 1, 0.9, 0.8, 0.7, 0.6, 0.5 as shown in Section 6. Thus, run the following commands sequentially to Aggregate the deployment for various degradations: (one minute taken with our hardware)
+6. Aggregated deployment (as shown in Section 5.2). Specify the number of timeslots and the throughput degradation, e.g., 3600 and 0.9. The degradation can be 0.9, 0.8, 0.7, 0.6, or 0.5, as shown in Section 6. Run the following commands sequentially to aggregate the deployment for each degradation:
 ```
-bash aggregated_deployment.sh 3600 1
 bash aggregated_deployment.sh 3600 0.9
 bash aggregated_deployment.sh 3600 0.8
 bash aggregated_deployment.sh 3600 0.7
@@ -76,19 +74,17 @@ Follow all the seven steps and their shell commands above. **It is better to use

 After running step 2, a folder named `starlink_shell_one/sat_lla` will be in your current directory. It contains the satellite position information. This step is related to the experimental setting described in Section V.A of the paper.

-
 After running step 3, folders named `starlink_shell_one/+grid_traffic/link_traffic_data` and `starlink_shell_one/circle_traffic/link_traffic_data` contain the GSL and ISL legal traffic information, as well as satellite, ground station (GS), and block connection information. This step is also related to the experimental setting described in Section V.A of the paper.

-After running step 4, vital GSes will be generated in each timeslot folder of `starlink_shell_one/+grid_traffic/link_traffic_data` and `starlink_shell_one/circle_traffic/link_traffic_data`. Step four relates to the Analysis Stage (Section IV.C) of the paper.
-
-After running step 5, timeslot analysis results (malicious terminal number, blocks to deploy the malicious terminals, affected traffic and so on) will be generated in each timeslot folder of `starlink_shell_one/+grid_traffic/attack_traffic_data_land_only_bot` and `starlink_shell_one/circle_traffic/attack_traffic_data_land_only_bot`. Step five also relates to the Analysis Stage (Section IV.C) of the paper.
+After running step 4, vital GSes will be generated in each timeslot folder of `starlink_shell_one/+grid_traffic/link_traffic_data` and `starlink_shell_one/circle_traffic/link_traffic_data`. Step four relates to the Analysis Methodology (Section IV.C) of the paper.

-After running step 6, aggregated deployment results (malicious terminal number, blocks to deploy the malicious terminals, affected traffic, and so on) will be generated in each folder of `starlink_shell_one/+grid_traffic/attack_traffic_data_land_only_bot` and `starlink_shell_one/circle_traffic/attack_traffic_data_land_only_bot`. Step six also relates to the Analysis Stage (Section IV.C) of the paper.
+After running step 5, timeslot analysis results (malicious terminal number, blocks to deploy the malicious terminals, affected traffic, and so on) will be generated in each timeslot folder of `starlink_shell_one/+grid_traffic/attack_traffic_data_land_only_bot` and `starlink_shell_one/circle_traffic/attack_traffic_data_land_only_bot`. Step five also relates to the Analysis Methodology (Section IV.C) of the paper.

+After running step 6, aggregated deployment results (malicious terminal number, blocks to deploy the malicious terminals, affected traffic, and so on) will be generated in each folder of `starlink_shell_one/+grid_traffic/attack_traffic_data_land_only_bot` and `starlink_shell_one/circle_traffic/attack_traffic_data_land_only_bot`. Step six also relates to the Analysis Methodology (Section IV.C) of the paper.

-After running step 7, reproduced results and figures will be in `starlink_shell_one/results`.
+After running step 7, reproduced results will be in `starlink_shell_one/results`.

-If running such a reproduction is a burden, all the reproduced data is already available in [`starlink_shell_one-3600-backup`](https://drive.google.com/file/d/1VauMH0Dm6CLrvr9cGfB6mLm6YlLt9QQf/view?usp=sharing). The storage for the datasets is 3.3GB.
+If such a reproduction is a burden, all the reproduced data is already available in [`starlink_shell_one-3600-backup`](https://drive.google.com/file/d/1rTuCinLNDnB9q8lyPyZgIaXxpHscX5my/view?usp=drive_link). The dataset is 3.3 GB.


 ## How to run a small demo?
@@ -114,19 +110,17 @@ To run the demo, everything else could be kept the same except the parameter of
 bash find_vital_GS.sh 100 8
 ```

-5. **(Make the timeslots and the number of logical processors smaller, such as 100 and 8)** Time-Slot Analysis (as shown in Section 5.1. You should specify the number of timeslots, the number of available logical processors for multi-thread, and throughput degradation, e.g. 100 8 0.9). Throughput degradation could be like 1, 0.9, 0.8, 0.7, 0.6, 0.5 as shown in Section 6. Thus, run the following commands sequentially to analyze the deployment for various degradations: (ten to thirty minutes taken with our hardware)
+5. **(Make the timeslots and the number of logical processors smaller, such as 100 and 8)** Time-Slot Analysis (as shown in Section 5.1). Specify the number of timeslots, the number of available logical processors for multithreading, and the throughput degradation, e.g., 100, 8, and 0.9. The degradation can be 0.9, 0.8, 0.7, 0.6, or 0.5, as shown in Section 6. Run the following commands sequentially to analyze the deployment for each degradation:
 ```
-bash time_slot_analysis.sh 100 8 1
 bash time_slot_analysis.sh 100 8 0.9
 bash time_slot_analysis.sh 100 8 0.8
 bash time_slot_analysis.sh 100 8 0.7
 bash time_slot_analysis.sh 100 8 0.6
 bash time_slot_analysis.sh 100 8 0.5
 ```

-6. **(Make the timeslots smaller, such as 100)** Aggregated_deployment (as shown in Section 5.2. You should specify the number of timeslots and throughput degradation, e.g. 100 0.9). Throughput degradation could be like 1, 0.9, 0.8, 0.7, 0.6, 0.5 as shown in Section 6. Thus, run the following commands sequentially to Aggregate the deployment for various degradations: (one minute taken with our hardware)
+6. **(Make the timeslots smaller, such as 100)** Aggregated deployment (as shown in Section 5.2). Specify the number of timeslots and the throughput degradation, e.g., 100 and 0.9. The degradation can be 0.9, 0.8, 0.7, 0.6, or 0.5, as shown in Section 6. Run the following commands sequentially to aggregate the deployment for each degradation:
 ```
-bash aggregated_deployment.sh 100 1
 bash aggregated_deployment.sh 100 0.9
 bash aggregated_deployment.sh 100 0.8
 bash aggregated_deployment.sh 100 0.7
@@ -139,18 +133,19 @@ To run the demo, everything else could be kept the same except the parameter of
 bash get_results.sh
 ```

-If running such a reproduction is a burden, all the reproduced data is already available in [`starlink_shell_one-100-backup`](https://drive.google.com/file/d/1py1jELENHA4I_RcOwxnMk4lYSNEdhu92/view?usp=sharing). The storage for the dataset is 95MB.
+If such a reproduction is a burden, all the reproduced data is already available in [`starlink_shell_one-100-backup`](https://drive.google.com/file/d/1eNZg-OF8xsjjjJNGbJ_8kE_j0x-MtSFR/view?usp=drive_link). The dataset is 104 MB.

 ## Results
 Running the above seven steps allows you to get the reproduced results and figures in `starlink_shell_one/results`.

 ### Better Performance of SKYFALL’s Distributed Botnet
-SKYFALL is able to exploit the time-varying bottleneck and achieve good flooding attack performances. We compare it with a baseline, where both are given the same number of bot terminals. We then compare the throughput (ratio) of affected background traffic and number of affected GSLs over time. The results are shown in Figure 9. Under `starlink_shell_one/results/`, `fig-9a`, `fig-9b`, and `fig-9c` contain the corresponding throughput (ratio) data for each timeslot. `fig-10a` contains the number of attacked GSLs for each timeslot, while `fig-10b` documents the maximum, minimum, and average numbers.
+SKYFALL exploits the time-varying bottleneck to achieve strong flooding-attack performance. We compare it with a baseline, where both are given the same number of bot terminals, and measure the throughput (ratio) of affected background traffic and the number of affected GSLs over time. The results are shown in Figure 10. Under `starlink_shell_one/results/`, `fig-10a`, `fig-10b`, and `fig-10c` contain the corresponding throughput (ratio) data for each timeslot. `fig-11a` contains the CDF of the number of congested GSLs, while `fig-11b` documents the box plot.
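To inspect these per-timeslot files programmatically, a small loader sketch can help; it assumes (hypothetically) that each `fig-*` file holds one numeric value per line, which should be checked against the actual output of `get_results.sh`:

```python
import os
import tempfile

def load_series(path):
    """Read one float per non-empty line from a result file."""
    with open(path) as fh:
        return [float(line) for line in fh if line.strip()]

# Self-contained demo: a synthetic file stands in for a real fig-10a.
demo_dir = tempfile.mkdtemp()
demo_path = os.path.join(demo_dir, "fig-10a")
with open(demo_path, "w") as fh:
    fh.write("0.91\n0.88\n0.95\n")

series = load_series(demo_path)
print(sum(series) / len(series))  # mean throughput ratio over the timeslots
```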

-### Cost Analysis
-To achieve the same throughput degradation as the baseline approach, SKYFALL is able to leverage a smaller number of malicious terminals (botnet size) for both +Grid and Circular topologies. The results are shown in Figure 12. `fig-12a`, and `fig-12b` under `starlink_shell_one/results/` contain the number of malicious terminals under various degradations for both topologies respectively.
+### Variability Analysis
+The analysis results depend primarily on factors such as the number and regions of available compromised UTs, which determine the malicious traffic volume. The results are shown in Figure 12 and Figure 13. `fig-12a` and `fig-12b` under `starlink_shell_one/results/` contain the throughput degradation with varying numbers of compromised UTs. `fig-13a` and `fig-13b` under `starlink_shell_one/results/` contain the throughput degradation with varying numbers of regional blocks.

-### Detectability Analysis
+### Stealthiness Analysis
 During SKYFALL's attack, the total malicious traffic of each satellite from all accessed malicious terminals is small, which quantifies the detectability of SKYFALL. The results are shown in Figure 14. Only a small number of satellites are accessed with malicious traffic. Under `starlink_shell_one/results/`, `fig-14a` and `fig-14b` contain the throughput data of malicious traffic for each satellite, in ascending order, under various throughput degradations.

skyfall/aggregated_deployment_circle.py

Lines changed: 35 additions & 35 deletions
@@ -77,14 +77,33 @@ def block_to_xy(block_id):
 target_sat = user_connect_sat[max_block]
 for i in range(len(user_connect_sat)):
     if user_connect_sat[i] == target_sat:
-        total_bot += int(average_block[i] * 1.62)
+        total_bot += average_block[i]
         average_block[i] = 0

 bot_total_num += math.ceil(total_bot)
 bot_block.append(max_block)
 bot_block_bot_num_per_block.append(math.ceil(total_bot))

-# replace the blocks with a few malicious terminals to neaby blocks
+bot_block = np.array(bot_block, dtype=int)
+np.savetxt('../' + cons_name + '/circle_data/attack_traffic_data_land_only_bot/' + str(ratio) + "-" +
+           str(traffic_thre) + "-" + str(sat_per_cycle) + "-" +
+           str(GSL_capacity) + "-" + str(unit_traffic) + '/bot_block.txt',
+           bot_block,
+           fmt='%d')
+bot_block_bot_num_per_block = np.array(
+    bot_block_bot_num_per_block, dtype=int)
+np.savetxt('../' + cons_name + '/circle_data/attack_traffic_data_land_only_bot/' + str(ratio) + "-" +
+           str(traffic_thre) + "-" + str(sat_per_cycle) + "-" +
+           str(GSL_capacity) + "-" + str(unit_traffic) + '/bot_block_bot_num_per_block.txt',
+           bot_block_bot_num_per_block,
+           fmt='%d')
+bot_num = np.array([bot_total_num], dtype=int)
+np.savetxt('../' + cons_name + '/circle_data/attack_traffic_data_land_only_bot/' + str(ratio) + "-" +
+           str(traffic_thre) + "-" + str(sat_per_cycle) + "-" +
+           str(GSL_capacity) + "-" + str(unit_traffic) + '/bot_num.txt',
+           bot_num,
+           fmt='%d')
+
 bot_block_trim = []
 bot_block_bot_num_per_block_trim = []
 bot_block_trim_pos = []
@@ -96,36 +115,17 @@ def block_to_xy(block_id):
     lat, lon = block_to_xy(bot_block[block_index])
     bot_block_trim_pos.append(cir_to_car_np(lat, lon))

+# replace the blocks with a few malicious terminals to nearby blocks
 for block_index in range(len(bot_block_bot_num_per_block)):
-    if bot_block_bot_num_per_block[block_index] == 1:
-        lat, lon = block_to_xy(bot_block[block_index])
-        dis = np.sqrt(
-            np.sum(np.square(bot_block_trim_pos - cir_to_car_np(lat, lon)),
-                   axis=1))
-        target_block_index = np.argmin(dis)
-        bot_block_bot_num_per_block_trim[target_block_index] += 1
-    if bot_block_bot_num_per_block[block_index] == 2:
-        lat, lon = block_to_xy(bot_block[block_index])
-        dis = np.sqrt(
-            np.sum(np.square(bot_block_trim_pos - cir_to_car_np(lat, lon)),
-                   axis=1))
-        target_block_index = np.argmin(dis)
-        bot_block_bot_num_per_block_trim[target_block_index] += 2
-    if bot_block_bot_num_per_block[block_index] == 3:
-        lat, lon = block_to_xy(bot_block[block_index])
-        dis = np.sqrt(
-            np.sum(np.square(bot_block_trim_pos - cir_to_car_np(lat, lon)),
-                   axis=1))
-        target_block_index = np.argmin(dis)
-        bot_block_bot_num_per_block_trim[target_block_index] += 3
-    if bot_block_bot_num_per_block[block_index] == 4:
-        lat, lon = block_to_xy(bot_block[block_index])
-        dis = np.sqrt(
-            np.sum(np.square(bot_block_trim_pos - cir_to_car_np(lat, lon)),
-                   axis=1))
-        target_block_index = np.argmin(dis)
-        bot_block_bot_num_per_block_trim[target_block_index] += 4
-
+    for small_bot_num in [1, 2, 3, 4]:
+        if bot_block_bot_num_per_block[block_index] == small_bot_num:
+            lat, lon = block_to_xy(bot_block[block_index])
+            dis = np.sqrt(
+                np.sum(np.square(bot_block_trim_pos - cir_to_car_np(lat, lon)),
+                       axis=1))
+            target_block_index = np.argmin(dis)
+            bot_block_bot_num_per_block_trim[target_block_index] += small_bot_num
+
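The refactored loop above can be exercised in isolation; this hedged sketch (function and variable names are hypothetical, with plain 2-D coordinates standing in for `cir_to_car_np` output) reproduces the nearest-block merge for small terminal counts:

```python
import numpy as np

def merge_small_blocks(counts, positions, trim_counts, trim_positions, max_small=4):
    """Fold each block holding 1..max_small terminals into its nearest kept block."""
    trim_positions = np.asarray(trim_positions, dtype=float)
    trim_counts = list(trim_counts)
    for count, pos in zip(counts, positions):
        if 1 <= count <= max_small:
            # Euclidean distance to every kept block, as in the loop above.
            dis = np.sqrt(np.sum(np.square(trim_positions - np.asarray(pos, dtype=float)),
                                 axis=1))
            trim_counts[int(np.argmin(dis))] += count
    return trim_counts

merged = merge_small_blocks(
    counts=[2, 7],                  # 2 is "small"; 7 is left for the later pass
    positions=[[1, 0], [9, 0]],
    trim_counts=[5, 5],
    trim_positions=[[0, 0], [10, 0]],
)
print(merged)
```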
 # replace the blocks with too many malicious terminals to nearby blocks
 loop_times = 0
 while True:
@@ -153,7 +153,7 @@ def block_to_xy(block_id):
 bot_block_bot_num_per_block_trim.append(20)
 bot_block_trim.append(bot_block_trim[index + additional_block_num] + 1)
 bot_block_bot_num_per_block_trim.append(bot_num % 20)
-    bot_block_trim.append(bot_block_trim[index] - 1)
+bot_block_trim.append(bot_block_trim[index] - 1)

 bot_block_trim = np.array(bot_block_trim, dtype=int)
 np.savetxt('../' + cons_name + '/circle_data/attack_traffic_data_land_only_bot/' + str(ratio) + "-" +
@@ -168,10 +168,10 @@ def block_to_xy(block_id):
            str(GSL_capacity) + "-" + str(unit_traffic) + '/bot_block_bot_num_per_block_trim.txt',
            bot_block_bot_num_per_block_trim,
            fmt='%d')
-bot_num = np.array([bot_total_num], dtype=int)
+bot_num_trim = np.array([bot_total_num], dtype=int)
 np.savetxt('../' + cons_name + '/circle_data/attack_traffic_data_land_only_bot/' + str(ratio) + "-" +
            str(traffic_thre) + "-" + str(sat_per_cycle) + "-" +
-           str(GSL_capacity) + "-" + str(unit_traffic) + '/bot_num.txt',
-           bot_num,
+           str(GSL_capacity) + "-" + str(unit_traffic) + '/bot_num_trim.txt',
+           bot_num_trim,
            fmt='%d')
 print("Finished aggregating for circle!")
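Every `np.savetxt` call in this file rebuilds the same directory string; a hypothetical helper (names and the demo argument values are illustrative, not part of the repository) could factor the pattern out:

```python
import os

def attack_dir(cons_name, ratio, traffic_thre, sat_per_cycle, GSL_capacity, unit_traffic):
    """Output directory for the aggregation step (mirrors the concatenation above)."""
    leaf = "-".join(str(v) for v in (ratio, traffic_thre, sat_per_cycle,
                                     GSL_capacity, unit_traffic))
    return os.path.join("..", cons_name, "circle_data",
                        "attack_traffic_data_land_only_bot", leaf)

def out_path(base_dir, name):
    """Full path for one of the .txt result files, e.g. bot_num_trim."""
    return os.path.join(base_dir, name + ".txt")

# Illustrative parameter values only; real values come from config.json.
base = attack_dir("starlink_shell_one", 0.9, 1, 22, 20, 1)
print(out_path(base, "bot_num_trim"))
```

Each call could then read `np.savetxt(out_path(base, "bot_block"), bot_block, fmt='%d')`, keeping the path logic in one place.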
