Compare commits


37 Commits

Author SHA1 Message Date
sijie.sun
d0a3a40a0f fix bugs
add timeout for wss try_accept

public server should show stats

use default values for flags

bump version to 2.0.0
2024-09-29 17:49:14 +08:00
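The "add timeout for wss try_accept" fix above is the standard pattern of bounding an accept/handshake with a deadline so a stalled client cannot block the listener. A std-only sketch (names are illustrative; EasyTier's actual code is async and would use something like tokio's timeout around the accept):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Simulate an accept/handshake that may stall: the listener must not block forever.
fn try_accept_with_timeout(
    rx: &mpsc::Receiver<&'static str>,
    timeout: Duration,
) -> Result<&'static str, &'static str> {
    rx.recv_timeout(timeout).map_err(|_| "accept timed out")
}

fn main() {
    // A client that completes the handshake quickly.
    let (tx, rx) = mpsc::channel();
    let t = thread::spawn(move || tx.send("wss-conn").unwrap());
    assert_eq!(
        try_accept_with_timeout(&rx, Duration::from_secs(1)),
        Ok("wss-conn")
    );
    t.join().unwrap();

    // A stalled handshake: nobody sends, so the accept returns an error
    // instead of hanging the listener.
    let (_tx2, rx2) = mpsc::channel::<&'static str>();
    assert!(try_accept_with_timeout(&rx2, Duration::from_millis(50)).is_err());
}
```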
sijie.sun
ff5ee8a05e support forwarding foreign network packets between peers 2024-09-29 10:31:29 +08:00
Hs_Yeah
a50bcf3087 Fix IP address display in the status page of GUI
Signed-off-by: Hs_Yeah <bYeahq@gmail.com>
2024-09-27 15:58:02 +08:00
sijie.sun
e0b364d3e2 use ubuntu 24.04 apt source
GitHub Actions upgraded ubuntu-latest to 24.04

https://github.com/actions/runner-images/pull/10687
2024-09-27 11:05:52 +08:00
sijie.sun
2496cf51c3 fix connection loss when traffic is huge 2024-09-26 23:49:01 +08:00
sijie.sun
7b4a01e7fb fix ring buffer stuck when using multi thread runtime 2024-09-26 14:34:33 +08:00
Hs_Yeah
3f9a1d8f2e Get dev_name from the global_ctx of each instance 2024-09-24 16:52:38 +08:00
Hs_Yeah
0b927bcc91 Add TUN device name setting support to easytier-gui 2024-09-24 16:52:38 +08:00
Hs_Yeah
92397bf7b6 Set Category of the TUN device's network profile to 1 in Windows Registry 2024-09-24 14:23:42 +08:00
sijie.sun
d1e2e1db2b fix ospf foreign network info version 2024-09-23 13:42:25 +08:00
sijie.sun
783ba50c9e add cli command for global foreign network info 2024-09-23 00:03:57 +08:00
sijie.sun
aca9a0e35b use ospf route to propagate foreign network info 2024-09-22 22:12:18 +08:00
liyang
fb8d262554 Fix spelling errors 2024-09-22 20:58:37 +08:00
sijie.sun
bd60cfc2a0 add feature flag to ospf route 2024-09-21 20:54:19 +08:00
sijie.sun
06afd221d5 make ping smarter 2024-09-21 18:00:52 +08:00
sijie.sun
0171fb35a4 fix upload oss 2024-09-21 00:24:58 +08:00
Jiangqiu Shen
99c47813c3 add the options to enable latency first or not
In the old behavior, the flag was not set and was generated with its default value on the first read, so latency_first was set to true according to the Default settings of Flag.

The Vue code therefore initialized latency_first to true.
2024-09-19 20:09:17 +08:00
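The flag-default behavior described in the commit above can be sketched in std-only Rust (hypothetical types, not EasyTier's actual Flags struct): an unset flag is materialized from the Default impl on first read, so the GUI must initialize its own state to the same default to stay in sync.

```rust
// Illustrative model of the described behavior: nothing stored yet means
// the first read falls back to the defaults, where latency_first is true.
#[derive(Debug, PartialEq)]
struct Flags {
    latency_first: bool,
}

impl Default for Flags {
    fn default() -> Self {
        Flags { latency_first: true }
    }
}

fn read_flags(stored: Option<Flags>) -> Flags {
    // First read with no stored value yields the Default settings.
    stored.unwrap_or_default()
}

fn main() {
    // Unset flag: defaults apply, so the GUI must also start from true.
    assert_eq!(read_flags(None).latency_first, true);
    // An explicitly stored value wins over the default.
    assert_eq!(
        read_flags(Some(Flags { latency_first: false })).latency_first,
        false
    );
}
```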
sijie.sun
82f5dfd569 show nodes version correctly 2024-09-18 23:15:08 +08:00
sijie.sun
6d7edcd486 fix connection failure when setting up one of the sockets fails 2024-09-18 23:15:08 +08:00
M2kar
9f273dc887 modify compile command (#333)
* modify compile command

* fix(README.md): compile from git

* Update README_CN.md
2024-09-18 21:57:25 +08:00
Jiangqiu Shen
ac9cfa5040 make CLI parse code more ergonomic by removing some copies and unwraps (#347)
1. remove some unnecessary string copies in the CLI parsing code
2. turn some member functions into free functions to avoid taking the self reference
3. use if let Some(..) instead of if xxx.is_some() to avoid copy-and-unwrap
2024-09-18 21:57:12 +08:00
Sijie.Sun
1b03223537 use customized rpc implementation, remove Tarpc & Tonic (#348)
This patch removes Tarpc & Tonic gRPC and implements a customized RPC framework, which can be used by both peer RPC and the CLI interface.

The web config server can also use this RPC framework.

Moreover, it rewrites the public server logic to use OSPF routing for public-server-based networking, making a public server mesh possible.
2024-09-18 21:55:28 +08:00
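The core of a framework like the one described above is a dispatch table keyed by service and method. A minimal std-only sketch of that idea (entirely hypothetical names; not EasyTier's actual RPC API, which is async and protobuf-based):

```rust
use std::collections::HashMap;

// A handler takes an opaque request body and returns an opaque response body.
type Handler = Box<dyn Fn(&[u8]) -> Vec<u8>>;

struct RpcServer {
    // Dispatch table keyed by (service, method).
    handlers: HashMap<(String, String), Handler>,
}

impl RpcServer {
    fn new() -> Self {
        RpcServer { handlers: HashMap::new() }
    }

    fn register(&mut self, service: &str, method: &str, h: Handler) {
        self.handlers
            .insert((service.to_string(), method.to_string()), h);
    }

    // Dispatch by (service, method); unknown methods yield None.
    fn call(&self, service: &str, method: &str, body: &[u8]) -> Option<Vec<u8>> {
        self.handlers
            .get(&(service.to_string(), method.to_string()))
            .map(|h| h(body))
    }
}

fn main() {
    let mut server = RpcServer::new();
    server.register("peer", "echo", Box::new(|b| b.to_vec()));
    assert_eq!(server.call("peer", "echo", b"hi"), Some(b"hi".to_vec()));
    assert_eq!(server.call("peer", "missing", b"hi"), None);
}
```

The same registry can back both the peer RPC path and the CLI interface, which is what makes a single in-house framework attractive over two separate RPC stacks.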
m1m1sha
0467b0a3dc Merge pull request #342 from EasyTier/ci/issue-template
🐎 ci: Modify Text
2024-09-15 22:39:11 +08:00
m1m1sha
ba75167238 🐎 ci: Modify Text 2024-09-15 22:38:06 +08:00
m1m1sha
51e7daa26f Merge pull request #341 from EasyTier/ci/github-issue-template
🐎 ci: github issue template
2024-09-15 22:30:49 +08:00
m1m1sha
2ff653cc6f 🐎 ci: github issue template 2024-09-15 22:28:55 +08:00
m1m1sha
cfe4d080d5 🐞 fix: GUI relay display error (#335) 2024-09-14 11:41:38 +08:00
M2kar
9b28ecde8e fix compile error due to rust version format (#332) 2024-09-14 11:40:46 +08:00
Sijie.Sun
096ed39d23 fix udp proxy disconnecting unexpectedly (#321) 2024-09-11 23:46:26 +08:00
m1m1sha
6ea3adcef8 feat: show version & local node (#318)
*  feat: version

Display version information; incompatible with lower versions

* 🎈 perf: unknown

Show "unknown" when no version number is available

*  feat: Display local nodes

Display local nodes; incompatible with lower versions
2024-09-11 15:58:13 +08:00
m1m1sha
4342be29d7 Perf/front page (#316)
* 🐳 chore: dependencies

* 🐞 fix: minor style issues

fixed background white patches in dark mode
fixed the line height of the status label, which resulted in a bloated appearance

* 🌈 style: lint

*  feat: about
2024-09-11 09:13:00 +08:00
Sijie.Sun
1609c97574 fix panic when wireguard tunnel encounter udp recv error (#299) 2024-09-02 09:37:34 +08:00
Sijie.Sun
f07b3ee9c6 fix punching task leak (#298)
the punching task creator doesn't check whether a task is already
running, and may create many punching tasks to the same peer node.

this patch also improves hole punching by checking the hole punch packet
even when the punch RPC fails.
2024-08-31 14:37:34 +08:00
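The leak described above is fixed by a guard that refuses to spawn a second punching task for a peer that already has one in flight. A std-only sketch (illustrative names, not EasyTier's actual types):

```rust
use std::collections::HashSet;

// Tracks which peers currently have an active punching task.
struct PunchTaskGuard {
    running: HashSet<u32>, // peer ids with a task in flight
}

impl PunchTaskGuard {
    fn new() -> Self {
        PunchTaskGuard { running: HashSet::new() }
    }

    // Returns true only if a new task should be started for this peer;
    // HashSet::insert returns false when the id is already present.
    fn try_start(&mut self, peer_id: u32) -> bool {
        self.running.insert(peer_id)
    }

    fn finish(&mut self, peer_id: u32) {
        self.running.remove(&peer_id);
    }
}

fn main() {
    let mut guard = PunchTaskGuard::new();
    assert!(guard.try_start(42)); // first task starts
    assert!(!guard.try_start(42)); // duplicate is rejected: no task leak
    guard.finish(42);
    assert!(guard.try_start(42)); // a new task may start after the old one ends
}
```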
Sijie.Sun
2058dbc470 fix wg client hang after some time (#297)
the wg portal doesn't detect client disconnects, so messages pile up in the
queue and hang the entire peer packet processing pipeline.
2024-08-31 12:44:12 +08:00
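The hang above is the classic unbounded-queue failure: a consumer that silently went away lets messages accumulate until the whole pipeline stalls. One common remedy is a bounded queue with a non-blocking send, so traffic for a dead client is dropped instead of blocking everyone. A generic std-only sketch of that idea (not the actual wg portal code):

```rust
use std::sync::mpsc;

fn main() {
    // Bounded channel: at most 2 messages may be in flight.
    let (tx, _rx) = mpsc::sync_channel::<u32>(2);

    assert!(tx.try_send(1).is_ok());
    assert!(tx.try_send(2).is_ok());
    // Queue full and nobody is draining it: try_send fails immediately
    // rather than hanging the whole packet-processing pipeline.
    assert!(tx.try_send(3).is_err());
}
```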
3RDNature
6964fb71fc Add a setting "disable_udp_hole_punch" to disable UDP hole punch function (#291)
It tentatively addresses #289.

Co-authored-by: 3rdnature <root@natureblog.net>
2024-08-29 11:34:30 +08:00
Jiangqiu Shen
a8bb4ee7e5 Update Cargo.toml (#290)
fix compile error mentioned in #286
2024-08-29 09:06:48 +08:00
严浩
3fcd74ce4e fix: Different network methods server URL display (#283)
Co-authored-by: 严浩 <i@oo1.dev>
2024-08-27 10:09:46 +08:00
122 changed files with 8358 additions and 4964 deletions

.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file

@@ -0,0 +1,53 @@
# Copyright 2024-present Easytier Programme within The Commons Conservancy
# SPDX-License-Identifier: Apache-2.0
name: 🐞 问题报告 / Bug Report
title: '[bug] '
description: 报告一个问题 / Report a bug
labels: ['type: bug', 'status: needs triage']
body:
- type: markdown
attributes:
value: |
## 在提交问题之前 / First of all
1. 请先搜索有关此问题的 [现有问题](https://github.com/EasyTier/EasyTier/issues?q=is%3Aissue)。
1. Please search for [existing issues](https://github.com/EasyTier/EasyTier/issues?q=is%3Aissue) about this problem first.
2. 请确保所使用的 Easytier 版本都是最新的。
2. Make sure that all Easytier versions are up-to-date.
3. 请确保这是 EasyTier 的问题,而不是你正在使用的其他内容引起的问题。
3. Make sure it's an issue with EasyTier and not something else you are using.
4. 请记得遵守我们的社区准则并保持友好态度。
4. Remember to follow our community guidelines and be friendly.
- type: textarea
id: description
attributes:
label: 描述问题 / Describe the bug
description: 对 bug 的明确描述。如果条件允许,请包括屏幕截图。 / A clear description of what the bug is. Include screenshots if applicable.
placeholder: 问题描述 / Bug description
validations:
required: true
- type: textarea
id: reproduction
attributes:
label: 重现步骤 / Reproduction
description: 能够重现行为的步骤或指向能够复现的存储库链接。 / A link to a reproduction repo or steps to reproduce the behaviour.
placeholder: |
请提供一个最小化的复现示例或复现步骤,请参考这个指南 https://stackoverflow.com/help/minimal-reproducible-example
Please provide a minimal reproduction or steps to reproduce, see this guide https://stackoverflow.com/help/minimal-reproducible-example
为什么需要重现(问题)?请参阅这篇文章 https://antfu.me/posts/why-reproductions-are-required
Why reproduction is required? see this article https://antfu.me/posts/why-reproductions-are-required
- type: textarea
id: expected-behavior
attributes:
label: 预期结果 / Expected behavior
description: 清楚地描述您期望发生的事情。 / A clear description of what you expected to happen.
- type: textarea
id: context
attributes:
label: 额外上下文 / Additional context
description: 在这里添加关于问题的任何其他上下文。 / Add any other context about the problem here.


@@ -0,0 +1,38 @@
# Copyright 2024-present Easytier Programme within The Commons Conservancy
# SPDX-License-Identifier: Apache-2.0
name: 💡 新功能请求 / Feature Request
title: '[feat] '
description: 提出一个想法 / Suggest an idea
labels: ['type: feature request']
body:
- type: textarea
id: problem
attributes:
label: 描述问题 / Describe the problem
description: 明确描述此功能将解决的问题 / A clear description of the problem this feature would solve
placeholder: "我总是在...感觉困惑 / I'm always frustrated when..."
validations:
required: true
- type: textarea
id: solution
attributes:
label: "描述您想要的解决方案 / Describe the solution you'd like"
description: 明确说明您希望做出的改变 / A clear description of what change you would like
placeholder: '我希望... / I would like to...'
validations:
required: true
- type: textarea
id: alternatives
attributes:
label: 替代方案 / Alternatives considered
description: "您考虑过的任何替代解决方案 / Any alternative solutions you've considered"
- type: textarea
id: context
attributes:
label: 额外上下文 / Additional context
description: 在此处添加有关问题的任何其他上下文。 / Add any other context about the problem here.


@@ -2,7 +2,7 @@ name: EasyTier Core
on:
push:
branches: ["develop", "main"]
branches: ["develop", "main", "releases/**"]
pull_request:
branches: ["develop", "main"]
@@ -20,14 +20,16 @@ jobs:
runs-on: ubuntu-latest
# Map a step output to a job output
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
# do not skip push on branch starts with releases/
should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
# All of these options are optional, so you can remove them if you are happy with the defaults
concurrent_skipping: 'never'
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", ".github/workflows/core.yml", ".github/workflows/install_rust.sh"]'
build:
strategy:
@@ -86,6 +88,10 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Set current ref as env variable
run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- name: Cargo cache
uses: actions/cache@v4
with:
@@ -196,7 +202,7 @@ jobs:
endpoint: ${{ secrets.ALIYUN_OSS_ENDPOINT }}
bucket: ${{ secrets.ALIYUN_OSS_BUCKET }}
local-path: ./artifacts/
remote-path: /easytier-releases/${{ github.sha }}/
remote-path: /easytier-releases/${{env.GIT_DESC}}/easytier-${{ matrix.ARTIFACT_NAME }}
no-delete-remote-files: true
retry: 5
core-result:


@@ -2,7 +2,7 @@ name: EasyTier GUI
on:
push:
branches: ["develop", "main"]
branches: ["develop", "main", "releases/**"]
pull_request:
branches: ["develop", "main"]
@@ -20,14 +20,15 @@ jobs:
runs-on: ubuntu-latest
# Map a step output to a job output
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
# All of these options are optional, so you can remove them if you are happy with the defaults
concurrent_skipping: 'never'
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", ".github/workflows/gui.yml", ".github/workflows/install_rust.sh"]'
build-gui:
strategy:
@@ -69,6 +70,10 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Set current ref as env variable
run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- uses: actions/setup-node@v4
with:
node-version: 21
@@ -118,33 +123,31 @@ jobs:
if: ${{ matrix.TARGET == 'aarch64-unknown-linux-musl' }}
run: |
# see https://tauri.app/v1/guides/building/linux/
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy main restricted" | sudo tee /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ jammy-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ jammy-security multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble main restricted" | sudo tee /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://archive.ubuntu.com/ubuntu/ noble-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ noble-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ noble-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=amd64] http://security.ubuntu.com/ubuntu/ noble-security multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports jammy-security multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-updates main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-updates universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-updates multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-backports main restricted universe multiverse" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-security main restricted" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-security universe" | sudo tee -a /etc/apt/sources.list
echo "deb [arch=armhf,arm64] http://ports.ubuntu.com/ubuntu-ports noble-security multiverse" | sudo tee -a /etc/apt/sources.list
sudo dpkg --add-architecture arm64
sudo apt-get update && sudo apt-get upgrade -y
sudo apt install gcc-aarch64-linux-gnu
sudo apt install libwebkit2gtk-4.1-dev:arm64
sudo apt install libssl-dev:arm64
sudo apt install -f -o Dpkg::Options::="--force-overwrite" libwebkit2gtk-4.1-dev:arm64 libssl-dev:arm64 gcc-aarch64-linux-gnu
echo "PKG_CONFIG_SYSROOT_DIR=/usr/aarch64-linux-gnu/" >> "$GITHUB_ENV"
echo "PKG_CONFIG_PATH=/usr/lib/aarch64-linux-gnu/pkgconfig/" >> "$GITHUB_ENV"
@@ -197,7 +200,7 @@ jobs:
endpoint: ${{ secrets.ALIYUN_OSS_ENDPOINT }}
bucket: ${{ secrets.ALIYUN_OSS_BUCKET }}
local-path: ./artifacts/
remote-path: /easytier-releases/${{ github.sha }}/gui
remote-path: /easytier-releases/${{env.GIT_DESC}}/easytier-gui-${{ matrix.ARTIFACT_NAME }}
no-delete-remote-files: true
retry: 5
gui-result:


@@ -2,7 +2,7 @@ name: EasyTier Mobile
on:
push:
branches: ["develop", "main"]
branches: ["develop", "main", "releases/**"]
pull_request:
branches: ["develop", "main"]
@@ -20,14 +20,15 @@ jobs:
runs-on: ubuntu-latest
# Map a step output to a job output
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
should_skip: ${{ steps.skip_check.outputs.should_skip == 'true' && !startsWith(github.ref_name, 'releases/') }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
# All of these options are optional, so you can remove them if you are happy with the defaults
concurrent_skipping: 'never'
concurrent_skipping: 'same_content_newer'
skip_after_successful_duplicate: 'true'
cancel_others: 'true'
paths: '["Cargo.toml", "Cargo.lock", "easytier/**", "easytier-gui/**", "tauri-plugin-vpnservice/**", ".github/workflows/mobile.yml", ".github/workflows/install_rust.sh"]'
build-mobile:
strategy:
@@ -48,6 +49,10 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Set current ref as env variable
run: |
echo "GIT_DESC=$(git log -1 --format=%cd.%h --date=format:%Y-%m-%d_%H:%M:%S)" >> $GITHUB_ENV
- uses: actions/setup-java@v4
with:
distribution: 'oracle'
@@ -150,7 +155,7 @@ jobs:
endpoint: ${{ secrets.ALIYUN_OSS_ENDPOINT }}
bucket: ${{ secrets.ALIYUN_OSS_BUCKET }}
local-path: ./artifacts/
remote-path: /easytier-releases/${{ github.sha }}/mobile
remote-path: /easytier-releases/${{env.GIT_DESC}}/easytier-gui-${{ matrix.ARTIFACT_NAME }}
no-delete-remote-files: true
retry: 5
mobile-result:


@@ -21,7 +21,7 @@ on:
version:
description: 'Version for this release'
type: string
default: 'v1.2.3'
default: 'v2.0.0'
required: true
make_latest:
description: 'Mark this release as latest'

Cargo.lock generated

@@ -289,6 +289,16 @@ dependencies = [
"syn 2.0.74",
]
[[package]]
name = "async-ringbuf"
version = "0.3.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32690af15155711360e74119b99605416c9e4dfd45b0859bd9af795a50693bec"
dependencies = [
"futures",
"ringbuf",
]
[[package]]
name = "async-signal"
version = "0.2.10"
@@ -369,15 +379,6 @@ dependencies = [
"system-deps",
]
[[package]]
name = "atomic-polyfill"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8cf2bce30dfe09ef0bfaef228b9d414faaf7e563035494d7fe092dba54b300f4"
dependencies = [
"critical-section",
]
[[package]]
name = "atomic-shim"
version = "0.2.0"
@@ -427,53 +428,6 @@ version = "1.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0c4b4d0bd25bd0b74681c0ad21497610ce1b7c91b1022cd21c80c6fbdd9476b0"
[[package]]
name = "axum"
version = "0.7.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3a6c9af12842a67734c9a2e355436e5d03b22383ed60cf13cd0c18fbfe3dcbcf"
dependencies = [
"async-trait",
"axum-core",
"bytes",
"futures-util",
"http 1.1.0",
"http-body 1.0.1",
"http-body-util",
"itoa 1.0.11",
"matchit",
"memchr",
"mime",
"percent-encoding",
"pin-project-lite",
"rustversion",
"serde",
"sync_wrapper 1.0.1",
"tower",
"tower-layer",
"tower-service",
]
[[package]]
name = "axum-core"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a15c63fd72d41492dc4f497196f5da1fb04fb7529e631d73630d1b491e47a2e3"
dependencies = [
"async-trait",
"bytes",
"futures-util",
"http 1.1.0",
"http-body 1.0.1",
"http-body-util",
"mime",
"pin-project-lite",
"rustversion",
"sync_wrapper 0.1.2",
"tower-layer",
"tower-service",
]
[[package]]
name = "backtrace"
version = "0.3.73"
@@ -960,12 +914,6 @@ dependencies = [
"error-code",
]
[[package]]
name = "cobs"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "67ba02a97a2bd10f4b59b25c7973101c79642302776489e030cd13cdab09ed15"
[[package]]
name = "cocoa"
version = "0.25.0"
@@ -1176,12 +1124,6 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "critical-section"
version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7059fff8937831a9ae6f0fe4d658ffabf58f2ca96aa9dec1c889f936f705f216"
[[package]]
name = "crossbeam"
version = "0.8.4"
@@ -1597,11 +1539,12 @@ checksum = "0d6ef0072f8a535281e4876be788938b528e9a1d43900b82c2569af7da799125"
[[package]]
name = "easytier"
version = "1.2.3"
version = "2.0.0"
dependencies = [
"aes-gcm",
"anyhow",
"async-recursion",
"async-ringbuf",
"async-stream",
"async-trait",
"atomic-shim",
@@ -1623,7 +1566,9 @@ dependencies = [
"derivative",
"encoding",
"futures",
"futures-util",
"gethostname 0.5.0",
"git-version",
"globwalk",
"http 1.1.0",
"humansize",
@@ -1637,18 +1582,22 @@ dependencies = [
"petgraph",
"pin-project-lite",
"pnet",
"postcard",
"prost",
"prost-build",
"prost-types",
"quinn",
"rand 0.8.5",
"rcgen",
"regex",
"reqwest 0.11.27",
"ring 0.17.8",
"ringbuf",
"rpc_build",
"rstest",
"rust-i18n",
"rustls",
"serde",
"serde_json",
"serial_test",
"smoltcp",
"socket2",
@@ -1656,7 +1605,6 @@ dependencies = [
"sys-locale",
"tabled",
"tachyonix",
"tarpc",
"thiserror",
"time",
"timedmap",
@@ -1667,7 +1615,6 @@ dependencies = [
"tokio-util",
"tokio-websockets",
"toml 0.8.19",
"tonic",
"tonic-build",
"tracing",
"tracing-appender",
@@ -1684,7 +1631,7 @@ dependencies = [
[[package]]
name = "easytier-gui"
version = "1.2.3"
version = "2.0.0"
dependencies = [
"anyhow",
"chrono",
@@ -1735,12 +1682,6 @@ version = "1.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ef6b89e5b37196644d8796de5268852ff179b44e96276cf4290264843743bb7"
[[package]]
name = "embedded-io"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ef1a6892d9eef45c8fa6b9e0086428a2cca8491aca8f787c534a3d6d0bcb3ced"
[[package]]
name = "encoding"
version = "0.2.33"
@@ -2372,6 +2313,26 @@ dependencies = [
"winapi",
]
[[package]]
name = "git-version"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1ad568aa3db0fcbc81f2f116137f263d7304f512a1209b35b85150d3ef88ad19"
dependencies = [
"git-version-macro",
]
[[package]]
name = "git-version-macro"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "53010ccb100b96a67bc32c0175f0ed1426b31b655d562898e57325f81c023ac0"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.74",
]
[[package]]
name = "glib"
version = "0.18.5"
@@ -2531,25 +2492,6 @@ dependencies = [
"tracing",
]
[[package]]
name = "h2"
version = "0.4.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa82e28a107a8cc405f0839610bdc9b15f1e25ec7d696aa5cf173edbcb1486ab"
dependencies = [
"atomic-waker",
"bytes",
"fnv",
"futures-core",
"futures-sink",
"http 1.1.0",
"indexmap 2.4.0",
"slab",
"tokio",
"tokio-util",
"tracing",
]
[[package]]
name = "half"
version = "2.4.1"
@@ -2560,15 +2502,6 @@ dependencies = [
"crunchy",
]
[[package]]
name = "hash32"
version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b0c35f58762feb77d74ebe43bdbc3210f09be9fe6742234d573bacc26ed92b67"
dependencies = [
"byteorder",
]
[[package]]
name = "hash32"
version = "0.3.1"
@@ -2590,27 +2523,13 @@ version = "0.14.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5274423e17b7c9fc20b6e7e208532f9b19825d82dfd615708b70edd83df41f1"
[[package]]
name = "heapless"
version = "0.7.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cdc6457c0eb62c71aac4bc17216026d8410337c4126773b9c5daba343f17964f"
dependencies = [
"atomic-polyfill",
"hash32 0.2.1",
"rustc_version",
"serde",
"spin 0.9.8",
"stable_deref_trait",
]
[[package]]
name = "heapless"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0bfb9eb618601c89945a70e254898da93b13be0388091d42117462b265bb3fad"
dependencies = [
"hash32 0.3.1",
"hash32",
"stable_deref_trait",
]
@@ -2753,12 +2672,6 @@ dependencies = [
"libm",
]
[[package]]
name = "humantime"
version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9a3a5bfb195931eeb336b2a7b4d761daec841b97f947d34394601737a7bba5e4"
[[package]]
name = "hyper"
version = "0.14.30"
@@ -2769,7 +2682,7 @@ dependencies = [
"futures-channel",
"futures-core",
"futures-util",
"h2 0.3.26",
"h2",
"http 0.2.12",
"http-body 0.4.6",
"httparse",
@@ -2792,11 +2705,9 @@ dependencies = [
"bytes",
"futures-channel",
"futures-util",
"h2 0.4.5",
"http 1.1.0",
"http-body 1.0.1",
"httparse",
"httpdate",
"itoa 1.0.11",
"pin-project-lite",
"smallvec",
@@ -2804,19 +2715,6 @@ dependencies = [
"want",
]
[[package]]
name = "hyper-timeout"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3203a961e5c83b6f5498933e78b6b263e208c197b63e9c6c53cc82ffd3f63793"
dependencies = [
"hyper 1.4.1",
"hyper-util",
"pin-project-lite",
"tokio",
"tower-service",
]
[[package]]
name = "hyper-tls"
version = "0.5.0"
@@ -3379,12 +3277,6 @@ version = "0.1.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2532096657941c2fea9c289d370a250971c689d4f143798ff67113ec042024a5"
[[package]]
name = "matchit"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0e7465ac9959cc2b1404e8e2367b43684a6d13790fe23056cc8c6c5a6b7bcb94"
[[package]]
name = "md5"
version = "0.7.0"
@@ -3953,25 +3845,6 @@ dependencies = [
"vcpkg",
]
[[package]]
name = "opentelemetry"
version = "0.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6105e89802af13fdf48c49d7646d3b533a70e536d818aae7e78ba0433d01acb8"
dependencies = [
"async-trait",
"crossbeam-channel",
"futures-channel",
"futures-executor",
"futures-util",
"js-sys",
"lazy_static",
"percent-encoding",
"pin-project",
"rand 0.8.5",
"thiserror",
]
[[package]]
name = "option-ext"
version = "0.2.0"
@@ -4481,18 +4354,6 @@ dependencies = [
"universal-hash",
]
[[package]]
name = "postcard"
version = "1.0.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a55c51ee6c0db07e68448e336cf8ea4131a620edefebf9893e759b2d793420f8"
dependencies = [
"cobs",
"embedded-io",
"heapless 0.7.17",
"serde",
]
[[package]]
name = "powerfmt"
version = "0.2.0"
@@ -4605,9 +4466,9 @@ dependencies = [
[[package]]
name = "prost"
version = "0.13.1"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e13db3d3fde688c61e2446b4d843bc27a7e8af269a69440c0308021dc92333cc"
checksum = "3b2ecbe40f08db5c006b5764a2645f7f3f141ce756412ac9e1dd6087e6d32995"
dependencies = [
"bytes",
"prost-derive",
@@ -4615,9 +4476,9 @@ dependencies = [
[[package]]
name = "prost-build"
version = "0.13.1"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5bb182580f71dd070f88d01ce3de9f4da5021db7115d2e1c3605a754153b77c1"
checksum = "f8650aabb6c35b860610e9cff5dc1af886c9e25073b7b1712a68972af4281302"
dependencies = [
"bytes",
"heck 0.5.0",
@@ -4636,9 +4497,9 @@ dependencies = [
[[package]]
name = "prost-derive"
version = "0.13.1"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "18bec9b0adc4eba778b33684b7ba3e7137789434769ee3ce3930463ef904cfca"
checksum = "acf0c195eebb4af52c752bec4f52f645da98b6e92077a04110c7f349477ae5ac"
dependencies = [
"anyhow",
"itertools 0.13.0",
@@ -4649,9 +4510,9 @@ dependencies = [
[[package]]
name = "prost-types"
version = "0.13.1"
version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cee5168b05f49d4b0ca581206eb14a7b22fafd963efe729ac48eb03266e25cc2"
checksum = "60caa6738c7369b940c3d49246a8d1749323674c65cb13010134f5c9bad5b519"
dependencies = [
"prost",
]
@@ -4938,7 +4799,7 @@ dependencies = [
"encoding_rs",
"futures-core",
"futures-util",
"h2 0.3.26",
"h2",
"http 0.2.12",
"http-body 0.4.6",
"hyper 0.14.30",
@@ -5034,6 +4895,23 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "ringbuf"
version = "0.4.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fb0d14419487131a897031a7e81c3b23d092296984fac4eb6df48cc4e3b2f3c5"
dependencies = [
"crossbeam-utils",
]
[[package]]
name = "rpc_build"
version = "0.1.0"
dependencies = [
"heck 0.5.0",
"prost-build",
]
[[package]]
name = "rstest"
version = "0.18.2"
@@ -5666,7 +5544,7 @@ dependencies = [
"byteorder",
"cfg-if",
"defmt",
"heapless 0.8.0",
"heapless",
"managed",
]
@@ -5999,40 +5877,6 @@ version = "0.12.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "61c41af27dd6d1e27b1b16b489db798443478cef1f06a660c96db617ba5de3b1"
[[package]]
name = "tarpc"
version = "0.32.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f07cb5fb67b0a90ea954b5ffd2fac9944ffef5937c801b987d3f8913f0c37348"
dependencies = [
"anyhow",
"fnv",
"futures",
"humantime",
"opentelemetry",
"pin-project",
"rand 0.8.5",
"serde",
"static_assertions",
"tarpc-plugins",
"thiserror",
"tokio",
"tokio-util",
"tracing",
"tracing-opentelemetry",
]
[[package]]
name = "tarpc-plugins"
version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ee42b4e559f17bce0385ebf511a7beb67d5cc33c12c96b7f4e9789919d9c10f"
dependencies = [
"proc-macro2",
"quote",
"syn 1.0.109",
]
[[package]]
name = "tauri"
version = "2.0.0-rc.2"
@@ -6589,7 +6433,6 @@ dependencies = [
"futures-core",
"futures-sink",
"pin-project-lite",
"slab",
"tokio",
]
@@ -6695,36 +6538,6 @@ dependencies = [
"winnow 0.6.18",
]
[[package]]
name = "tonic"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "38659f4a91aba8598d27821589f5db7dddd94601e7a01b1e485a50e5484c7401"
dependencies = [
"async-stream",
"async-trait",
"axum",
"base64 0.22.1",
"bytes",
"h2 0.4.5",
"http 1.1.0",
"http-body 1.0.1",
"http-body-util",
"hyper 1.4.1",
"hyper-timeout",
"hyper-util",
"percent-encoding",
"pin-project",
"prost",
"socket2",
"tokio",
"tokio-stream",
"tower",
"tower-layer",
"tower-service",
"tracing",
]
[[package]]
name = "tonic-build"
version = "0.12.1"
@@ -6746,16 +6559,11 @@ checksum = "b8fa9be0de6cf49e536ce1851f987bd21a43b771b09473c3549a6c853db37c1c"
dependencies = [
"futures-core",
"futures-util",
"indexmap 1.9.3",
"pin-project",
"pin-project-lite",
"rand 0.8.5",
"slab",
"tokio",
"tokio-util",
"tower-layer",
"tower-service",
"tracing",
]
[[package]]
@@ -6826,19 +6634,6 @@ dependencies = [
"tracing-core",
]
[[package]]
name = "tracing-opentelemetry"
version = "0.17.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fbbe89715c1dbbb790059e2565353978564924ee85017b5fff365c872ff6721f"
dependencies = [
"once_cell",
"opentelemetry",
"tracing",
"tracing-core",
"tracing-subscriber",
]
[[package]]
name = "tracing-subscriber"
version = "0.3.18"

@@ -10,4 +10,3 @@ panic = "unwind"
panic = "abort"
lto = true
codegen-units = 1
strip = true

@@ -11,7 +11,7 @@
}
],
"settings": {
"eslint.experimental.useFlatConfig": true,
"eslint.useFlatConfig": true,
"prettier.enable": false,
"editor.formatOnSave": false,
"editor.codeActionsOnSave": {

@@ -47,7 +47,7 @@ EasyTier is a simple, safe and decentralized VPN networking solution implemented
3. **Install from source code**
```sh
cargo install --git https://github.com/EasyTier/EasyTier.git
cargo install --git https://github.com/EasyTier/EasyTier.git easytier
```
4. **Install by Docker Compose**

@@ -47,7 +47,7 @@
3. **通过源码安装**
```sh
cargo install --git https://github.com/EasyTier/EasyTier.git
cargo install --git https://github.com/EasyTier/EasyTier.git easytier
```
4. **通过Docker Compose安装**

@@ -14,6 +14,11 @@ npm install -g pnpm
### For Desktop (Win/Mac/Linux)
```
cd ../tauri-plugin-vpnservice
pnpm install
pnpm build
cd ../easytier-gui
pnpm install
pnpm tauri build
```
@@ -34,7 +39,6 @@ rustup target add aarch64-linux-android
install java 20
```
Java version depend on gradle version specified in (easytier-gui\src-tauri\gen\android\build.gradle.kts)
See [Gradle compatibility matrix](https://docs.gradle.org/current/userguide/compatibility.html) for detail .
@@ -43,4 +47,4 @@ See [Gradle compatibility matrix](https://docs.gradle.org/current/userguide/comp
pnpm install
pnpm tauri android init
pnpm tauri android build
```
```

@@ -13,6 +13,7 @@ proxy_cidrs: 子网代理CIDR
enable_vpn_portal: 启用VPN门户
vpn_portal_listen_port: 监听端口
vpn_portal_client_network: 客户端子网
dev_name: TUN接口名称
advanced_settings: 高级设置
basic_settings: 基础设置
listener_urls: 监听地址
@@ -45,11 +46,13 @@ enable_auto_launch: 开启开机自启
exit: 退出
chips_placeholder: 例如: {0}, 按回车添加
hostname_placeholder: '留空默认为主机名: {0}'
dev_name_placeholder: 注意当多个网络同时使用相同的TUN接口名称时将会在设置TUN的IP时产生冲突留空以自动生成随机名称
off_text: 点击关闭
on_text: 点击开启
show_config: 显示配置
close: 关闭
use_latency_first: 延迟优先模式
my_node_info: 当前节点信息
peer_count: 已连接
upload: 上传
@@ -66,6 +69,10 @@ upload_bytes: 上传
download_bytes: 下载
loss_rate: 丢包率
status:
version: 内核版本
local: 本机
run_network: 运行网络
stop_network: 停止网络
network_running: 运行中
@@ -75,3 +82,12 @@ dhcp_experimental_warning: 实验性警告使用DHCP时如果组网环境中
tray:
show: 显示 / 隐藏
exit: 退出
about:
title: 关于
version: 版本
author: 作者
homepage: 主页
license: 许可证
description: 一个简单、安全、去中心化的内网穿透 VPN 组网方案,使用 Rust 语言和 Tokio 框架实现。
check_update: 检查更新

@@ -13,6 +13,7 @@ proxy_cidrs: Subnet Proxy CIDRs
enable_vpn_portal: Enable VPN Portal
vpn_portal_listen_port: VPN Portal Listen Port
vpn_portal_client_network: Client Sub Network
dev_name: TUN interface name
advanced_settings: Advanced Settings
basic_settings: Basic Settings
listener_urls: Listener URLs
@@ -43,9 +44,10 @@ logging_copy_dir: Copy Log Path
disable_auto_launch: Disable Launch on Reboot
enable_auto_launch: Enable Launch on Reboot
exit: Exit
use_latency_first: Latency First Mode
chips_placeholder: 'e.g: {0}, press Enter to add'
hostname_placeholder: 'Leave blank and default to host name: {0}'
dev_name_placeholder: 'Note: When multiple networks use the same TUN interface name at the same time, there will be a conflict when setting the TUN''s IP. Leave blank to automatically generate a random name.'
off_text: Press to disable
on_text: Press to enable
show_config: Show Config
@@ -66,6 +68,10 @@ upload_bytes: Upload
download_bytes: Download
loss_rate: Loss Rate
status:
version: Version
local: Local
run_network: Run Network
stop_network: Stop Network
network_running: running
@@ -75,3 +81,12 @@ dhcp_experimental_warning: Experimental warning! if there is an IP conflict in t
tray:
show: Show / Hide
exit: Exit
about:
title: About
version: Version
author: Author
homepage: Homepage
license: License
description: 'EasyTier is a simple, safe and decentralized VPN networking solution implemented with the Rust language and Tokio framework.'
check_update: Check Update

@@ -1,7 +1,7 @@
{
"name": "easytier-gui",
"type": "module",
"version": "1.2.3",
"version": "2.0.0",
"private": true,
"scripts": {
"dev": "vite",
@@ -12,50 +12,50 @@
"lint:fix": "eslint . --ignore-pattern src-tauri --fix"
},
"dependencies": {
"@primevue/themes": "^4.0.4",
"@tauri-apps/plugin-autostart": "2.0.0-rc.0",
"@tauri-apps/plugin-clipboard-manager": "2.0.0-rc.0",
"@tauri-apps/plugin-os": "2.0.0-rc.0",
"@tauri-apps/plugin-process": "2.0.0-rc.0",
"@tauri-apps/plugin-shell": "2.0.0-rc.0",
"@primevue/themes": "^4.0.5",
"@tauri-apps/plugin-autostart": "2.0.0-rc.1",
"@tauri-apps/plugin-clipboard-manager": "2.0.0-rc.1",
"@tauri-apps/plugin-os": "2.0.0-rc.1",
"@tauri-apps/plugin-process": "2.0.0-rc.1",
"@tauri-apps/plugin-shell": "2.0.0-rc.1",
"aura": "link:@primevue/themes/aura",
"pinia": "^2.2.1",
"ip-num": "1.5.1",
"pinia": "^2.2.2",
"primeflex": "^3.3.1",
"primeicons": "^7.0.0",
"primevue": "^4.0.4",
"primevue": "^4.0.5",
"tauri-plugin-vpnservice-api": "link:../tauri-plugin-vpnservice",
"vue": "^3.4.38",
"vue-i18n": "^9.13.1",
"vue": "^3.5.3",
"vue-i18n": "^10.0.0",
"vue-router": "^4.4.3"
},
"devDependencies": {
"@antfu/eslint-config": "^2.25.1",
"@intlify/unplugin-vue-i18n": "^4.0.0",
"@primevue/auto-import-resolver": "^4.0.4",
"@sveltejs/vite-plugin-svelte": "^3.1.1",
"@antfu/eslint-config": "^3.5.0",
"@intlify/unplugin-vue-i18n": "^5.0.0",
"@primevue/auto-import-resolver": "^4.0.5",
"@tauri-apps/api": "2.0.0-rc.0",
"@tauri-apps/cli": "2.0.0-rc.3",
"@types/node": "^20.14.15",
"@types/uuid": "^9.0.8",
"@vitejs/plugin-vue": "^5.1.2",
"@vue-macros/volar": "^0.19.1",
"@types/node": "^22.5.4",
"@types/uuid": "^10.0.0",
"@vitejs/plugin-vue": "^5.1.3",
"@vue-macros/volar": "^0.29.1",
"autoprefixer": "^10.4.20",
"eslint": "^9.9.0",
"eslint": "^9.10.0",
"eslint-plugin-format": "^0.1.2",
"internal-ip": "^8.0.0",
"postcss": "^8.4.41",
"postcss": "^8.4.45",
"tailwindcss": "^3.4.10",
"typescript": "^5.5.4",
"unplugin-auto-import": "^0.17.8",
"typescript": "^5.6.2",
"unplugin-auto-import": "^0.18.2",
"unplugin-vue-components": "^0.27.4",
"unplugin-vue-macros": "^2.11.5",
"unplugin-vue-macros": "^2.11.11",
"unplugin-vue-markdown": "^0.26.2",
"unplugin-vue-router": "^0.8.8",
"uuid": "^9.0.1",
"vite": "^5.4.1",
"vite-plugin-vue-devtools": "^7.3.8",
"unplugin-vue-router": "^0.10.8",
"uuid": "^10.0.0",
"vite": "^5.4.3",
"vite-plugin-vue-devtools": "^7.4.4",
"vite-plugin-vue-layouts": "^0.11.0",
"vue-i18n": "^9.13.1",
"vue-tsc": "^2.0.29"
"vue-i18n": "^10.0.0",
"vue-tsc": "^2.1.6"
}
}
}

File diff suppressed because it is too large

@@ -1,6 +1,6 @@
[package]
name = "easytier-gui"
version = "1.2.3"
version = "2.0.0"
description = "EasyTier GUI"
authors = ["you"]
edition = "2021"

@@ -7,7 +7,7 @@ use anyhow::Context;
use dashmap::DashMap;
use easytier::{
common::config::{
ConfigLoader, FileLoggerConfig, NetworkIdentity, PeerConfig, TomlConfigLoader,
ConfigLoader, FileLoggerConfig, Flags, NetworkIdentity, PeerConfig, TomlConfigLoader,
VpnPortalConfig,
},
launcher::{NetworkInstance, NetworkInstanceRunningInfo},
@@ -60,6 +60,9 @@ struct NetworkConfig {
listener_urls: Vec<String>,
rpc_port: i32,
latency_first: bool,
dev_name: String,
}
impl NetworkConfig {
@@ -136,7 +139,7 @@ impl NetworkConfig {
}
cfg.set_rpc_portal(
format!("127.0.0.1:{}", self.rpc_port)
format!("0.0.0.0:{}", self.rpc_port)
.parse()
.with_context(|| format!("failed to parse rpc portal port: {}", self.rpc_port))?,
);
@@ -160,7 +163,10 @@ impl NetworkConfig {
})?,
});
}
let mut flags = Flags::default();
flags.latency_first = self.latency_first;
flags.dev_name = self.dev_name.clone();
cfg.set_flags(flags);
Ok(cfg)
}
}
@@ -171,6 +177,11 @@ static INSTANCE_MAP: once_cell::sync::Lazy<DashMap<String, NetworkInstance>> =
static mut LOGGER_LEVEL_SENDER: once_cell::sync::Lazy<Option<NewFilterSender>> =
once_cell::sync::Lazy::new(Default::default);
#[tauri::command]
fn easytier_version() -> Result<String, String> {
Ok(easytier::VERSION.to_string())
}
#[tauri::command]
fn is_autostart() -> Result<bool, String> {
let args: Vec<String> = std::env::args().collect();
@@ -365,7 +376,8 @@ pub fn run() {
get_os_hostname,
set_logging_level,
set_tun_fd,
is_autostart
is_autostart,
easytier_version
])
.on_window_event(|_win, event| match event {
#[cfg(not(target_os = "android"))]

@@ -17,7 +17,7 @@
"createUpdaterArtifacts": false
},
"productName": "easytier-gui",
"version": "1.2.3",
"version": "2.0.0",
"identifier": "com.kkrainbow.easytier",
"plugins": {},
"app": {

@@ -24,6 +24,7 @@ declare global {
const getActivePinia: typeof import('pinia')['getActivePinia']
const getCurrentInstance: typeof import('vue')['getCurrentInstance']
const getCurrentScope: typeof import('vue')['getCurrentScope']
const getEasytierVersion: typeof import('./composables/network')['getEasytierVersion']
const getOsHostname: typeof import('./composables/network')['getOsHostname']
const h: typeof import('vue')['h']
const initMobileService: typeof import('./composables/mobile_vpn')['initMobileService']
@@ -44,8 +45,8 @@ declare global {
const nextTick: typeof import('vue')['nextTick']
const onActivated: typeof import('vue')['onActivated']
const onBeforeMount: typeof import('vue')['onBeforeMount']
const onBeforeRouteLeave: typeof import('vue-router/auto')['onBeforeRouteLeave']
const onBeforeRouteUpdate: typeof import('vue-router/auto')['onBeforeRouteUpdate']
const onBeforeRouteLeave: typeof import('vue-router')['onBeforeRouteLeave']
const onBeforeRouteUpdate: typeof import('vue-router')['onBeforeRouteUpdate']
const onBeforeUnmount: typeof import('vue')['onBeforeUnmount']
const onBeforeUpdate: typeof import('vue')['onBeforeUpdate']
const onDeactivated: typeof import('vue')['onDeactivated']
@@ -90,8 +91,8 @@ declare global {
const useI18n: typeof import('vue-i18n')['useI18n']
const useLink: typeof import('vue-router/auto')['useLink']
const useNetworkStore: typeof import('./stores/network')['useNetworkStore']
const useRoute: typeof import('vue-router/auto')['useRoute']
const useRouter: typeof import('vue-router/auto')['useRouter']
const useRoute: typeof import('vue-router')['useRoute']
const useRouter: typeof import('vue-router')['useRouter']
const useSlots: typeof import('vue')['useSlots']
const useTray: typeof import('./composables/tray')['useTray']
const watch: typeof import('vue')['watch']
@@ -121,13 +122,13 @@ declare module 'vue' {
readonly customRef: UnwrapRef<typeof import('vue')['customRef']>
readonly defineAsyncComponent: UnwrapRef<typeof import('vue')['defineAsyncComponent']>
readonly defineComponent: UnwrapRef<typeof import('vue')['defineComponent']>
readonly definePage: UnwrapRef<typeof import('unplugin-vue-router/runtime')['definePage']>
readonly defineStore: UnwrapRef<typeof import('pinia')['defineStore']>
readonly effectScope: UnwrapRef<typeof import('vue')['effectScope']>
readonly generateMenuItem: UnwrapRef<typeof import('./composables/tray')['generateMenuItem']>
readonly getActivePinia: UnwrapRef<typeof import('pinia')['getActivePinia']>
readonly getCurrentInstance: UnwrapRef<typeof import('vue')['getCurrentInstance']>
readonly getCurrentScope: UnwrapRef<typeof import('vue')['getCurrentScope']>
readonly getEasytierVersion: UnwrapRef<typeof import('./composables/network')['getEasytierVersion']>
readonly getOsHostname: UnwrapRef<typeof import('./composables/network')['getOsHostname']>
readonly h: UnwrapRef<typeof import('vue')['h']>
readonly initMobileVpnService: UnwrapRef<typeof import('./composables/mobile_vpn')['initMobileVpnService']>
@@ -146,8 +147,8 @@ declare module 'vue' {
readonly nextTick: UnwrapRef<typeof import('vue')['nextTick']>
readonly onActivated: UnwrapRef<typeof import('vue')['onActivated']>
readonly onBeforeMount: UnwrapRef<typeof import('vue')['onBeforeMount']>
readonly onBeforeRouteLeave: UnwrapRef<typeof import('vue-router/auto')['onBeforeRouteLeave']>
readonly onBeforeRouteUpdate: UnwrapRef<typeof import('vue-router/auto')['onBeforeRouteUpdate']>
readonly onBeforeRouteLeave: UnwrapRef<typeof import('vue-router')['onBeforeRouteLeave']>
readonly onBeforeRouteUpdate: UnwrapRef<typeof import('vue-router')['onBeforeRouteUpdate']>
readonly onBeforeUnmount: UnwrapRef<typeof import('vue')['onBeforeUnmount']>
readonly onBeforeUpdate: UnwrapRef<typeof import('vue')['onBeforeUpdate']>
readonly onDeactivated: UnwrapRef<typeof import('vue')['onDeactivated']>
@@ -191,8 +192,8 @@ declare module 'vue' {
readonly useI18n: UnwrapRef<typeof import('vue-i18n')['useI18n']>
readonly useLink: UnwrapRef<typeof import('vue-router/auto')['useLink']>
readonly useNetworkStore: UnwrapRef<typeof import('./stores/network')['useNetworkStore']>
readonly useRoute: UnwrapRef<typeof import('vue-router/auto')['useRoute']>
readonly useRouter: UnwrapRef<typeof import('vue-router/auto')['useRouter']>
readonly useRoute: UnwrapRef<typeof import('vue-router')['useRoute']>
readonly useRouter: UnwrapRef<typeof import('vue-router')['useRouter']>
readonly useSlots: UnwrapRef<typeof import('vue')['useSlots']>
readonly useTray: UnwrapRef<typeof import('./composables/tray')['useTray']>
readonly watch: UnwrapRef<typeof import('vue')['watch']>

@@ -0,0 +1,27 @@
<script setup lang="ts">
import { getEasytierVersion } from '~/composables/network'
const { t } = useI18n()
const etVersion = ref('')
onMounted(async () => {
etVersion.value = await getEasytierVersion()
})
</script>
<template>
<Card>
<template #title>
Easytier - {{ t('about.version') }}: {{ etVersion }}
</template>
<template #content>
<p class="mb-1">
{{ t('about.description') }}
</p>
</template>
</Card>
</template>
<style scoped lang="postcss">
</style>

@@ -1,11 +1,10 @@
<script setup lang="ts">
import InputGroup from 'primevue/inputgroup'
import InputGroupAddon from 'primevue/inputgroupaddon'
import { getOsHostname } from '~/composables/network'
import { NetworkingMethod } from '~/types/network'
const { t } = useI18n()
import { ping } from 'tauri-plugin-vpnservice-api'
import { getOsHostname } from '~/composables/network'
import { NetworkingMethod } from '~/types/network'
const props = defineProps<{
configInvalid?: boolean
@@ -14,6 +13,8 @@ const props = defineProps<{
defineEmits(['runNetwork'])
const { t } = useI18n()
const networking_methods = ref([
{ value: NetworkingMethod.PublicServer, label: () => t('public_server') },
{ value: NetworkingMethod.Manual, label: () => t('manual') },
@@ -32,24 +33,26 @@ const curNetwork = computed(() => {
return networkStore.curNetwork
})
const protos:{ [proto: string] : number; } = {'tcp': 11010, 'udp': 11010, 'wg':11011, 'ws': 11011, 'wss': 11012}
const protos: { [proto: string]: number } = { tcp: 11010, udp: 11010, wg: 11011, ws: 11011, wss: 11012 }
function searchUrlSuggestions(e: { query: string }): string[] {
const query = e.query
let ret = []
const ret = []
// if query match "^\w+:.*", then no proto prefix
if (query.match(/^\w+:.*/)) {
// if query is a valid url, then add to suggestions
try {
new URL(query)
ret.push(query)
} catch (e) {}
} else {
for (let proto in protos) {
let item = proto + '://' + query
}
catch (e) {}
}
else {
for (const proto in protos) {
let item = `${proto}://${query}`
// if query match ":\d+$", then no port suffix
if (!query.match(/:\d+$/)) {
item += ':' + protos[proto]
item += `:${protos[proto]}`
}
ret.push(item)
}
@@ -58,45 +61,45 @@ function searchUrlSuggestions(e: { query: string }): string[] {
return ret
}
const publicServerSuggestions = ref([''])
const searchPresetPublicServers = (e: { query: string }) => {
const presetPublicServers = [
'tcp://easytier.public.kkrainbow.top:11010',
]
function searchPresetPublicServers(e: { query: string }) {
const presetPublicServers = [
'tcp://easytier.public.kkrainbow.top:11010',
]
let query = e.query
// if query is sub string of presetPublicServers, add to suggestions
let ret = presetPublicServers.filter((item) => item.includes(query))
// add additional suggestions
if (query.length > 0) {
ret = ret.concat(searchUrlSuggestions(e))
}
const query = e.query
// if query is sub string of presetPublicServers, add to suggestions
let ret = presetPublicServers.filter(item => item.includes(query))
// add additional suggestions
if (query.length > 0) {
ret = ret.concat(searchUrlSuggestions(e))
}
publicServerSuggestions.value = ret
publicServerSuggestions.value = ret
}
const peerSuggestions = ref([''])
const searchPeerSuggestions = (e: { query: string }) => {
function searchPeerSuggestions(e: { query: string }) {
peerSuggestions.value = searchUrlSuggestions(e)
}
const listenerSuggestions = ref([''])
const searchListenerSuggestiong = (e: { query: string }) => {
let ret = []
function searchListenerSuggestiong(e: { query: string }) {
const ret = []
for (let proto in protos) {
let item = proto + '://0.0.0.0:';
for (const proto in protos) {
let item = `${proto}://0.0.0.0:`
// if query is a number, use it as port
if (e.query.match(/^\d+$/)) {
item += e.query
} else {
}
else {
item += protos[proto]
}
if (item.includes(e.query)) {
ret.push(item)
}
@@ -112,7 +115,7 @@ const searchListenerSuggestiong = (e: { query: string }) => {
function validateHostname() {
if (curNetwork.value.hostname) {
// eslint no-useless-escape
let name = curNetwork.value.hostname!.replaceAll(/[^\u4E00-\u9FA5a-zA-Z0-9\-]*/g, '')
let name = curNetwork.value.hostname!.replaceAll(/[^\u4E00-\u9FA5a-z0-9\-]*/gi, '')
if (name.length > 32)
name = name.substring(0, 32)
@@ -132,7 +135,7 @@ onMounted(async () => {
<template>
<div class="flex flex-column h-full">
<div class="flex flex-column">
<div class="w-10/12 self-center ">
<div class="w-10/12 self-center mb-3">
<Message severity="warn">
{{ t('dhcp_experimental_warning') }}
</Message>
@@ -151,8 +154,10 @@ onMounted(async () => {
</label>
</div>
<InputGroup>
<InputText id="virtual_ip" v-model="curNetwork.virtual_ipv4" :disabled="curNetwork.dhcp"
aria-describedby="virtual_ipv4-help" />
<InputText
id="virtual_ip" v-model="curNetwork.virtual_ipv4" :disabled="curNetwork.dhcp"
aria-describedby="virtual_ipv4-help"
/>
<InputGroupAddon>
<span>/24</span>
</InputGroupAddon>
@@ -167,23 +172,29 @@ onMounted(async () => {
</div>
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="network_secret">{{ t('network_secret') }}</label>
<InputText id="network_secret" v-model="curNetwork.network_secret"
aria-describedby=" network_secret-help" />
<InputText
id="network_secret" v-model="curNetwork.network_secret"
aria-describedby="network_secret-help"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="nm">{{ t('networking_method') }}</label>
<SelectButton v-model="curNetwork.networking_method" :options="networking_methods" :option-label="(v) => v.label()" option-value="value"></SelectButton>
<SelectButton v-model="curNetwork.networking_method" :options="networking_methods" :option-label="(v) => v.label()" option-value="value" />
<div class="items-center flex flex-row p-fluid gap-x-1">
<AutoComplete v-if="curNetwork.networking_method === NetworkingMethod.Manual" id="chips"
<AutoComplete
v-if="curNetwork.networking_method === NetworkingMethod.Manual" id="chips"
v-model="curNetwork.peer_urls" :placeholder="t('chips_placeholder', ['tcp://8.8.8.8:11010'])"
class="grow" multiple fluid :suggestions="peerSuggestions" @complete="searchPeerSuggestions"/>
class="grow" multiple fluid :suggestions="peerSuggestions" @complete="searchPeerSuggestions"
/>
<AutoComplete v-if="curNetwork.networking_method === NetworkingMethod.PublicServer" :suggestions="publicServerSuggestions"
:virtualScrollerOptions="{ itemSize: 38 }" class="grow" dropdown @complete="searchPresetPublicServers" :completeOnFocus="true"
v-model="curNetwork.public_server_url"/>
<AutoComplete
v-if="curNetwork.networking_method === NetworkingMethod.PublicServer" v-model="curNetwork.public_server_url"
:suggestions="publicServerSuggestions" :virtual-scroller-options="{ itemSize: 38 }" class="grow" dropdown :complete-on-focus="true"
@complete="searchPresetPublicServers"
/>
</div>
</div>
</div>
@@ -194,67 +205,102 @@ onMounted(async () => {
<Panel :header="t('advanced_settings')" toggleable collapsed>
<div class="flex flex-column gap-y-2">
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<div class="flex align-items-center">
<Checkbox v-model="curNetwork.latency_first" input-id="use_latency_first" :binary="true" />
<label for="use_latency_first" class="ml-2"> {{ t('use_latency_first') }} </label>
</div>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="hostname">{{ t('hostname') }}</label>
<InputText id="hostname" v-model="curNetwork.hostname" aria-describedby="hostname-help" :format="true"
:placeholder="t('hostname_placeholder', [osHostname])" @blur="validateHostname" />
<InputText
id="hostname" v-model="curNetwork.hostname" aria-describedby="hostname-help" :format="true"
:placeholder="t('hostname_placeholder', [osHostname])" @blur="validateHostname"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap w-full">
<div class="flex flex-column gap-2 grow p-fluid">
<label for="username">{{ t('proxy_cidrs') }}</label>
<Chips id="chips" v-model="curNetwork.proxy_cidrs"
:placeholder="t('chips_placeholder', ['10.0.0.0/24'])" separator=" " class="w-full" />
<Chips
id="chips" v-model="curNetwork.proxy_cidrs"
:placeholder="t('chips_placeholder', ['10.0.0.0/24'])" separator=" " class="w-full"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap ">
<div class="flex flex-column gap-2 grow">
<label for="username">VPN Portal</label>
<ToggleButton v-model="curNetwork.enable_vpn_portal" on-icon="pi pi-check" off-icon="pi pi-times"
:on-label="t('off_text')" :off-label="t('on_text')" class="w-48"/>
<div class="items-center flex flex-row gap-x-4" v-if="curNetwork.enable_vpn_portal">
<div class="min-w-64">
<InputGroup>
<InputText v-model="curNetwork.vpn_portal_client_network_addr"
:placeholder="t('vpn_portal_client_network')" />
<InputGroupAddon>
<span>/{{ curNetwork.vpn_portal_client_network_len }}</span>
</InputGroupAddon>
</InputGroup>
<ToggleButton
v-model="curNetwork.enable_vpn_portal" on-icon="pi pi-check" off-icon="pi pi-times"
:on-label="t('off_text')" :off-label="t('on_text')" class="w-48"
/>
<div v-if="curNetwork.enable_vpn_portal" class="items-center flex flex-row gap-x-4">
<div class="min-w-64">
<InputGroup>
<InputText
v-model="curNetwork.vpn_portal_client_network_addr"
:placeholder="t('vpn_portal_client_network')"
/>
<InputGroupAddon>
<span>/{{ curNetwork.vpn_portal_client_network_len }}</span>
</InputGroupAddon>
</InputGroup>
<InputNumber v-model="curNetwork.vpn_portal_listen_port" :allow-empty="false"
:format="false" :min="0" :max="65535" class="w-8" fluid/>
</div>
<InputNumber
v-model="curNetwork.vpn_portal_listen_port" :allow-empty="false"
:format="false" :min="0" :max="65535" class="w-8" fluid
/>
</div>
</div>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 grow p-fluid">
<label for="listener_urls">{{ t('listener_urls') }}</label>
<AutoComplete id="listener_urls" :suggestions="listenerSuggestions"
class="w-full" dropdown @complete="searchListenerSuggestiong" :completeOnFocus="true"
:placeholder="t('chips_placeholder', ['tcp://1.1.1.1:11010'])"
v-model="curNetwork.listener_urls" multiple/>
<AutoComplete
id="listener_urls" v-model="curNetwork.listener_urls"
:suggestions="listenerSuggestions" class="w-full" dropdown :complete-on-focus="true"
:placeholder="t('chips_placeholder', ['tcp://1.1.1.1:11010'])"
multiple @complete="searchListenerSuggestiong"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="rpc_port">{{ t('rpc_port') }}</label>
<InputNumber id="rpc_port" v-model="curNetwork.rpc_port" aria-describedby="username-help"
:format="false" :min="0" :max="65535" />
<InputNumber
id="rpc_port" v-model="curNetwork.rpc_port" aria-describedby="rpc_port-help"
:format="false" :min="0" :max="65535"
/>
</div>
</div>
<div class="flex flex-row gap-x-9 flex-wrap">
<div class="flex flex-column gap-2 basis-5/12 grow">
<label for="dev_name">{{ t('dev_name') }}</label>
<InputText
id="dev_name" v-model="curNetwork.dev_name" aria-describedby="dev_name-help" :format="true"
:placeholder="t('dev_name_placeholder')"
/>
</div>
</div>
</div>
</Panel>
<div class="flex pt-4 justify-content-center">
<Button :label="t('run_network')" icon="pi pi-arrow-right" icon-pos="right" :disabled="configInvalid"
@click="$emit('runNetwork', curNetwork)" />
<Button
:label="t('run_network')" icon="pi pi-arrow-right" icon-pos="right" :disabled="configInvalid"
@click="$emit('runNetwork', curNetwork)"
/>
</div>
</div>
</div>

@@ -1,11 +1,13 @@
<script setup lang="ts">
import type { NodeInfo } from '~/types/network'
const { t } = useI18n()
import { IPv4, IPv6 } from 'ip-num/IPNumber'
import type { NodeInfo, PeerRoutePair } from '~/types/network'
const props = defineProps<{
instanceId?: string
}>()
const { t } = useI18n()
const networkStore = useNetworkStore()
const curNetwork = computed(() => {
@@ -24,8 +26,16 @@ const curNetworkInst = computed(() => {
})
const peerRouteInfos = computed(() => {
if (curNetworkInst.value)
return curNetworkInst.value.detail?.peer_route_pairs || []
if (curNetworkInst.value) {
const my_node_info = curNetworkInst.value.detail?.my_node_info
return [{
route: {
ipv4_addr: my_node_info?.virtual_ipv4,
hostname: my_node_info?.hostname,
version: my_node_info?.version,
},
}, ...(curNetworkInst.value.detail?.peer_route_pairs || [])]
}
return []
})
@@ -33,8 +43,9 @@ const peerRouteInfos = computed(() => {
function routeCost(info: any) {
if (info.route) {
const cost = info.route.cost
return cost === 1 ? 'p2p' : `relay(${cost})`
return cost ? cost === 1 ? 'p2p' : `relay(${cost})` : t('status.local')
}
return '?'
}
@@ -73,29 +84,33 @@ function humanFileSize(bytes: number, si = false, dp = 1) {
return `${bytes.toFixed(dp)} ${units[u]}`
}
function latencyMs(info: any) {
function latencyMs(info: PeerRoutePair) {
let lat_us_sum = statsCommon(info, 'stats.latency_us')
if (lat_us_sum === undefined)
return ''
lat_us_sum = lat_us_sum / 1000 / info.peer.conns.length
lat_us_sum = lat_us_sum / 1000 / info.peer!.conns.length
return `${lat_us_sum % 1 > 0 ? Math.round(lat_us_sum) + 1 : Math.round(lat_us_sum)}ms`
}
function txBytes(info: any) {
function txBytes(info: PeerRoutePair) {
const tx = statsCommon(info, 'stats.tx_bytes')
return tx ? humanFileSize(tx) : ''
}
function rxBytes(info: any) {
function rxBytes(info: PeerRoutePair) {
const rx = statsCommon(info, 'stats.rx_bytes')
return rx ? humanFileSize(rx) : ''
}
function lossRate(info: any) {
function lossRate(info: PeerRoutePair) {
const lossRate = statsCommon(info, 'loss_rate')
return lossRate !== undefined ? `${Math.round(lossRate * 100)}%` : ''
}
function version(info: PeerRoutePair) {
return info.route.version === '' ? 'unknown' : info.route.version
}
const myNodeInfo = computed(() => {
if (!curNetworkInst.value)
return {} as NodeInfo
@@ -117,8 +132,16 @@ const myNodeInfoChips = computed(() => {
if (!my_node_info)
return chips
// virtual ipv4
// TUN Device Name
const dev_name = curNetworkInst.value.detail?.dev_name
if (dev_name) {
chips.push({
label: `TUN Device Name: ${dev_name}`,
icon: '',
} as Chip)
}
// virtual ipv4
chips.push({
label: `Virtual IPv4: ${my_node_info.virtual_ipv4}`,
icon: '',
@@ -128,7 +151,7 @@ const myNodeInfoChips = computed(() => {
const local_ipv4s = my_node_info.ips?.interface_ipv4s
for (const [idx, ip] of local_ipv4s?.entries()) {
chips.push({
label: `Local IPv4 ${idx}: ${ip}`,
label: `Local IPv4 ${idx}: ${IPv4.fromNumber(ip.addr)}`,
icon: '',
} as Chip)
}
@@ -137,7 +160,11 @@ const myNodeInfoChips = computed(() => {
const local_ipv6s = my_node_info.ips?.interface_ipv6s
for (const [idx, ip] of local_ipv6s?.entries()) {
chips.push({
label: `Local IPv6 ${idx}: ${ip}`,
label: `Local IPv6 ${idx}: ${IPv6.fromBigInt((BigInt(ip.part1) << BigInt(96))
+ (BigInt(ip.part2) << BigInt(64))
+ (BigInt(ip.part3) << BigInt(32))
+ BigInt(ip.part4),
)}`,
icon: '',
} as Chip)
}
@@ -146,7 +173,19 @@ const myNodeInfoChips = computed(() => {
const public_ip = my_node_info.ips?.public_ipv4
if (public_ip) {
chips.push({
label: `Public IP: ${public_ip}`,
label: `Public IP: ${IPv4.fromNumber(public_ip.addr)}`,
icon: '',
} as Chip)
}
const public_ipv6 = my_node_info.ips?.public_ipv6
if (public_ipv6) {
chips.push({
label: `Public IPv6: ${IPv6.fromBigInt((BigInt(public_ipv6.part1) << BigInt(96))
+ (BigInt(public_ipv6.part2) << BigInt(64))
+ (BigInt(public_ipv6.part3) << BigInt(32))
+ BigInt(public_ipv6.part4),
)}`,
icon: '',
} as Chip)
}
@@ -373,6 +412,7 @@ function showEventLogs() {
<Column :field="txBytes" style="width: 80px;" :header="t('upload_bytes')" />
<Column :field="rxBytes" style="width: 80px;" :header="t('download_bytes')" />
<Column :field="lossRate" style="width: 100px;" :header="t('loss_rate')" />
<Column :field="version" style="width: 100px;" :header="t('status.version')" />
</DataTable>
</template>
</Card>

@@ -1,183 +1,184 @@
import { addPluginListener } from '@tauri-apps/api/core';
import { prepare_vpn, start_vpn, stop_vpn } from 'tauri-plugin-vpnservice-api';
import { Route } from '~/types/network';
import { addPluginListener } from '@tauri-apps/api/core'
import { prepare_vpn, start_vpn, stop_vpn } from 'tauri-plugin-vpnservice-api'
import type { Route } from '~/types/network'
const networkStore = useNetworkStore()
interface vpnStatus {
running: boolean
ipv4Addr: string | null | undefined
ipv4Cidr: number | null | undefined
routes: string[]
running: boolean
ipv4Addr: string | null | undefined
ipv4Cidr: number | null | undefined
routes: string[]
}
var curVpnStatus: vpnStatus = {
running: false,
ipv4Addr: undefined,
ipv4Cidr: undefined,
routes: []
const curVpnStatus: vpnStatus = {
running: false,
ipv4Addr: undefined,
ipv4Cidr: undefined,
routes: [],
}
async function waitVpnStatus(target_status: boolean, timeout_sec: number) {
let start_time = Date.now()
while (curVpnStatus.running !== target_status) {
if (Date.now() - start_time > timeout_sec * 1000) {
throw new Error('wait vpn status timeout')
}
await new Promise(r => setTimeout(r, 50))
const start_time = Date.now()
while (curVpnStatus.running !== target_status) {
if (Date.now() - start_time > timeout_sec * 1000) {
throw new Error('wait vpn status timeout')
}
await new Promise(r => setTimeout(r, 50))
}
}
async function doStopVpn() {
if (!curVpnStatus.running) {
return
}
console.log('stop vpn')
let stop_ret = await stop_vpn()
console.log('stop vpn', JSON.stringify((stop_ret)))
await waitVpnStatus(false, 3)
if (!curVpnStatus.running) {
return
}
console.log('stop vpn')
const stop_ret = await stop_vpn()
console.log('stop vpn', JSON.stringify((stop_ret)))
await waitVpnStatus(false, 3)
curVpnStatus.ipv4Addr = undefined
curVpnStatus.routes = []
curVpnStatus.ipv4Addr = undefined
curVpnStatus.routes = []
}
async function doStartVpn(ipv4Addr: string, cidr: number, routes: string[]) {
if (curVpnStatus.running) {
return
}
if (curVpnStatus.running) {
return
}
console.log('start vpn')
let start_ret = await start_vpn({
"ipv4Addr": ipv4Addr + '/' + cidr,
"routes": routes,
"disallowedApplications": ["com.kkrainbow.easytier"],
"mtu": 1300,
});
if (start_ret?.errorMsg?.length) {
throw new Error(start_ret.errorMsg)
}
await waitVpnStatus(true, 3)
console.log('start vpn')
const start_ret = await start_vpn({
ipv4Addr: `${ipv4Addr}/${cidr}`,
routes,
disallowedApplications: ['com.kkrainbow.easytier'],
mtu: 1300,
})
if (start_ret?.errorMsg?.length) {
throw new Error(start_ret.errorMsg)
}
await waitVpnStatus(true, 3)
curVpnStatus.ipv4Addr = ipv4Addr
curVpnStatus.routes = routes
curVpnStatus.ipv4Addr = ipv4Addr
curVpnStatus.routes = routes
}
async function onVpnServiceStart(payload: any) {
  console.log('vpn service start', JSON.stringify(payload))
  curVpnStatus.running = true
  if (payload.fd) {
    setTunFd(networkStore.networkInstanceIds[0], payload.fd)
  }
}
async function onVpnServiceStop(payload: any) {
  console.log('vpn service stop', JSON.stringify(payload))
  curVpnStatus.running = false
}
async function registerVpnServiceListener() {
  console.log('register vpn service listener')
  await addPluginListener(
    'vpnservice',
    'vpn_service_start',
    onVpnServiceStart,
  )
  await addPluginListener(
    'vpnservice',
    'vpn_service_stop',
    onVpnServiceStop,
  )
}
function getRoutesForVpn(routes: Route[]): string[] {
  if (!routes) {
    return []
  }
  const ret = []
  for (const r of routes) {
    for (let cidr of r.proxy_cidrs) {
      if (!cidr.includes('/')) {
        cidr += '/32'
      }
      ret.push(cidr)
    }
  }
  // sort and dedup
  return Array.from(new Set(ret)).sort()
}
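The normalization performed above (appending `/32` to bare addresses, then deduplicating and sorting) can be isolated into a pure helper; a minimal sketch, where `normalizeCidrs` is an illustrative name rather than an existing function in this codebase:

```typescript
// Bare IPs become host routes (/32); the result is deduped via Set and sorted.
function normalizeCidrs(cidrs: string[]): string[] {
  const withMask = cidrs.map(c => (c.includes('/') ? c : `${c}/32`))
  return Array.from(new Set(withMask)).sort()
}
```

Note the sort is lexicographic, matching the original code's behavior, not a numeric ordering of addresses.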
async function onNetworkInstanceChange() {
  const insts = networkStore.networkInstanceIds
  if (!insts) {
    await doStopVpn()
    return
  }
  const curNetworkInfo = networkStore.networkInfos[insts[0]]
  if (!curNetworkInfo || curNetworkInfo?.error_msg?.length) {
    await doStopVpn()
    return
  }
  const virtual_ip = curNetworkInfo?.node_info?.virtual_ipv4
  if (!virtual_ip || !virtual_ip.length) {
    await doStopVpn()
    return
  }
  const routes = getRoutesForVpn(curNetworkInfo?.routes)
  const ipChanged = virtual_ip !== curVpnStatus.ipv4Addr
  const routesChanged = JSON.stringify(routes) !== JSON.stringify(curVpnStatus.routes)
  if (ipChanged || routesChanged) {
    console.log('virtual ip changed', JSON.stringify(curVpnStatus), virtual_ip)
    try {
      await doStopVpn()
    }
    catch (e) {
      console.error(e)
    }
    try {
      await doStartVpn(virtual_ip, 24, routes)
    }
    catch (e) {
      console.error('start vpn failed, clear all network insts.', e)
      networkStore.clearNetworkInstances()
      await retainNetworkInstance(networkStore.networkInstanceIds)
    }
  }
}
async function watchNetworkInstance() {
  let subscribe_running = false
  networkStore.$subscribe(async () => {
    if (subscribe_running) {
      return
    }
    subscribe_running = true
    try {
      await onNetworkInstanceChange()
    }
    catch (_) {
    }
    subscribe_running = false
  })
}
export async function initMobileVpnService() {
  await registerVpnServiceListener()
  await watchNetworkInstance()
}
export async function prepareVpnService() {
  console.log('prepare vpn')
  const prepare_ret = await prepare_vpn()
  console.log('prepare vpn', JSON.stringify(prepare_ret))
  if (prepare_ret?.errorMsg?.length) {
    throw new Error(prepare_ret.errorMsg)
  }
}

View File

@@ -1,4 +1,4 @@
import { invoke } from "@tauri-apps/api/core"
import { invoke } from '@tauri-apps/api/core'
import type { NetworkConfig, NetworkInstanceRunningInfo } from '~/types/network'
@@ -33,3 +33,7 @@ export async function setLoggingLevel(level: string) {
export async function setTunFd(instanceId: string, fd: number) {
return await invoke('set_tun_fd', { instanceId, fd })
}
export async function getEasytierVersion() {
return await invoke<string>('easytier_version')
}

View File

@@ -1,6 +1,6 @@
import { getCurrentWindow } from '@tauri-apps/api/window'
import { Menu, MenuItem, PredefinedMenuItem } from '@tauri-apps/api/menu'
import { TrayIcon } from '@tauri-apps/api/tray'
import { getCurrentWindow } from '@tauri-apps/api/window'
import pkg from '~/../package.json'
const DEFAULT_TRAY_NAME = 'main'
@@ -8,14 +8,15 @@ const DEFAULT_TRAY_NAME = 'main'
async function toggleVisibility() {
if (await getCurrentWindow().isVisible()) {
await getCurrentWindow().hide()
} else {
}
else {
await getCurrentWindow().show()
await getCurrentWindow().setFocus()
}
}
export async function useTray(init: boolean = false) {
let tray;
let tray
try {
tray = await TrayIcon.getById(DEFAULT_TRAY_NAME)
if (!tray) {
@@ -29,17 +30,18 @@ export async function useTray(init: boolean = false) {
}),
action: async () => {
toggleVisibility()
}
},
})
}
} catch (error) {
}
catch (error) {
console.warn('Error while creating tray icon:', error)
return null
}
if (init) {
tray.setTooltip(`EasyTier\n${pkg.version}`)
tray.setMenuOnLeftClick(false);
tray.setMenuOnLeftClick(false)
tray.setMenu(await Menu.new({
id: 'main',
items: await generateMenuItem(),
@@ -59,7 +61,7 @@ export async function generateMenuItem() {
export async function MenuItemExit(text: string) {
return await PredefinedMenuItem.new({
text: text,
text,
item: 'Quit',
})
}
@@ -69,14 +71,15 @@ export async function MenuItemShow(text: string) {
id: 'show',
text,
action: async () => {
await toggleVisibility();
await toggleVisibility()
},
})
}
export async function setTrayMenu(items: (MenuItem | PredefinedMenuItem)[] | undefined = undefined) {
const tray = await useTray()
if (!tray) return
if (!tray)
return
const menu = await Menu.new({
id: 'main',
items: items || await generateMenuItem(),
@@ -86,15 +89,17 @@ export async function setTrayMenu(items: (MenuItem | PredefinedMenuItem)[] | und
export async function setTrayRunState(isRunning: boolean = false) {
const tray = await useTray()
if (!tray) return
if (!tray)
return
tray.setIcon(isRunning ? 'icons/icon-inactive.ico' : 'icons/icon.ico')
}
export async function setTrayTooltip(tooltip: string) {
if (tooltip) {
const tray = await useTray()
if (!tray) return
if (!tray)
return
tray.setTooltip(`EasyTier\n${pkg.version}\n${tooltip}`)
tray.setTitle(`EasyTier\n${pkg.version}\n${tooltip}`)
}
}
}

View File

@@ -1,16 +1,16 @@
import { setupLayouts } from 'virtual:generated-layouts'
import { createRouter, createWebHistory } from 'vue-router/auto'
import Aura from '@primevue/themes/aura'
import PrimeVue from 'primevue/config'
import ToastService from 'primevue/toastservice'
import App from '~/App.vue'
import { createRouter, createWebHistory } from 'vue-router/auto'
import { routes } from 'vue-router/auto-routes'
import App from '~/App.vue'
import { i18n, loadLanguageAsync } from '~/modules/i18n'
import { getAutoLaunchStatusAsync, loadAutoLaunchStatusAsync } from './modules/auto_launch'
import '~/styles.css'
import Aura from '@primevue/themes/aura'
import 'primeicons/primeicons.css'
import 'primeflex/primeflex.css'
import { i18n, loadLanguageAsync } from '~/modules/i18n'
import { loadAutoLaunchStatusAsync, getAutoLaunchStatusAsync } from './modules/auto_launch'
if (import.meta.env.PROD) {
document.addEventListener('keydown', (event) => {
@@ -18,8 +18,9 @@ if (import.meta.env.PROD) {
event.key === 'F5'
|| (event.ctrlKey && event.key === 'r')
|| (event.metaKey && event.key === 'r')
)
) {
event.preventDefault()
}
})
document.addEventListener('contextmenu', (event) => {
@@ -35,7 +36,7 @@ async function main() {
const router = createRouter({
history: createWebHistory(),
extendRoutes: routes => setupLayouts(routes),
routes,
})
app.use(router)
@@ -45,11 +46,12 @@ async function main() {
theme: {
preset: Aura,
options: {
prefix: 'p',
darkModeSelector: 'system',
cssLayer: false
}
}})
prefix: 'p',
darkModeSelector: 'system',
cssLayer: false,
},
},
})
app.use(ToastService)
app.mount('#app')
}

View File

@@ -1,17 +1,17 @@
import { disable, enable, isEnabled } from '@tauri-apps/plugin-autostart'
export async function loadAutoLaunchStatusAsync(target_enable: boolean): Promise<boolean> {
try {
target_enable ? await enable() : await disable()
localStorage.setItem('auto_launch', JSON.stringify(await isEnabled()))
return isEnabled()
}
catch (e) {
console.error(e)
return false
}
try {
target_enable ? await enable() : await disable()
localStorage.setItem('auto_launch', JSON.stringify(await isEnabled()))
return isEnabled()
}
catch (e) {
console.error(e)
return false
}
}
export function getAutoLaunchStatusAsync(): boolean {
return localStorage.getItem('auto_launch') === 'true'
return localStorage.getItem('auto_launch') === 'true'
}

View File

@@ -1,5 +1,5 @@
import type { Locale } from 'vue-i18n'
import { createI18n } from 'vue-i18n'
import type { Locale } from 'vue-i18n'
// Import i18n resources
// https://vitejs.dev/guide/features.html#glob-import

View File

@@ -1,24 +1,25 @@
<script setup lang="ts">
import { useToast } from 'primevue/usetoast'
import { exit } from '@tauri-apps/plugin-process'
import TieredMenu from 'primevue/tieredmenu'
import { open } from '@tauri-apps/plugin-shell'
import { appLogDir } from '@tauri-apps/api/path'
import { getCurrentWindow } from '@tauri-apps/api/window'
import { writeText } from '@tauri-apps/plugin-clipboard-manager'
import { type } from '@tauri-apps/plugin-os'
import { exit } from '@tauri-apps/plugin-process'
import { open } from '@tauri-apps/plugin-shell'
import TieredMenu from 'primevue/tieredmenu'
import { useToast } from 'primevue/usetoast'
import Config from '~/components/Config.vue'
import Status from '~/components/Status.vue'
import type { NetworkConfig } from '~/types/network'
import { loadLanguageAsync } from '~/modules/i18n'
import { getAutoLaunchStatusAsync as getAutoLaunchStatus, loadAutoLaunchStatusAsync } from '~/modules/auto_launch'
import Status from '~/components/Status.vue'
import { isAutostart, setLoggingLevel } from '~/composables/network'
import { useTray } from '~/composables/tray'
import { getCurrentWindow } from '@tauri-apps/api/window'
import { getAutoLaunchStatusAsync as getAutoLaunchStatus, loadAutoLaunchStatusAsync } from '~/modules/auto_launch'
import { loadLanguageAsync } from '~/modules/i18n'
import { type NetworkConfig, NetworkingMethod } from '~/types/network'
const { t, locale } = useI18n()
const visible = ref(false)
const aboutVisible = ref(false)
const tomlConfig = ref('')
useTray(true)
@@ -85,7 +86,8 @@ async function runNetworkCb(cfg: NetworkConfig, cb: () => void) {
if (type() === 'android') {
await prepareVpnService()
networkStore.clearNetworkInstances()
} else {
}
else {
networkStore.removeNetworkInstance(cfg.instance_id)
}
@@ -146,7 +148,7 @@ const setting_menu_items = ref([
await loadLanguageAsync((locale.value === 'en' ? 'cn' : 'en'))
await setTrayMenu([
await MenuItemExit(t('tray.exit')),
await MenuItemShow(t('tray.show'))
await MenuItemShow(t('tray.show')),
])
},
},
@@ -193,6 +195,13 @@ const setting_menu_items = ref([
return items
})(),
},
{
label: () => t('about.title'),
icon: 'pi pi-at',
command: async () => {
aboutVisible.value = true
},
},
{
label: () => t('exit'),
icon: 'pi pi-power-off',
@@ -244,11 +253,15 @@ function isRunning(id: string) {
</ScrollPanel>
</Panel>
<Divider />
<div class="flex justify-content-end gap-2">
<div class="flex gap-2 justify-content-end">
<Button type="button" :label="t('close')" @click="visible = false" />
</div>
</Dialog>
<Dialog v-model:visible="aboutVisible" modal :header="t('about.title')" :style="{ width: '70%' }">
<About />
</Dialog>
<div>
<Toolbar>
<template #start>
@@ -259,29 +272,44 @@ function isRunning(id: string) {
<template #center>
<div class="min-w-40">
<Dropdown v-model="networkStore.curNetwork" :options="networkStore.networkList" :highlight-on-select="false"
:placeholder="t('select_network')" class="w-full">
<Dropdown
v-model="networkStore.curNetwork" :options="networkStore.networkList" :highlight-on-select="false"
:placeholder="t('select_network')" class="w-full"
>
<template #value="slotProps">
<div class="flex items-start content-center">
<div class="mr-3 flex-column">
<span>{{ slotProps.value.network_name }}</span>
</div>
<Tag class="my-auto" :severity="isRunning(slotProps.value.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.value.instance_id) ? 'network_running' : 'network_stopped')" />
<Tag
class="my-auto leading-3" :severity="isRunning(slotProps.value.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.value.instance_id) ? 'network_running' : 'network_stopped')"
/>
</div>
</template>
<template #option="slotProps">
<div class="flex flex-col items-start content-center">
<div class="flex flex-col items-start content-center max-w-full">
<div class="flex">
<div class="mr-3">
{{ t('network_name') }}: {{ slotProps.option.network_name }}
</div>
<Tag class="my-auto" :severity="isRunning(slotProps.option.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.option.instance_id) ? 'network_running' : 'network_stopped')" />
<Tag
class="my-auto leading-3"
:severity="isRunning(slotProps.option.instance_id) ? 'success' : 'info'"
:value="t(isRunning(slotProps.option.instance_id) ? 'network_running' : 'network_stopped')"
/>
</div>
<div>{{ slotProps.option.public_server_url }}</div>
<div
v-if="isRunning(slotProps.option.instance_id) && networkStore.instances[slotProps.option.instance_id].detail && (networkStore.instances[slotProps.option.instance_id].detail?.my_node_info.virtual_ipv4 !== '')">
v-if="slotProps.option.networking_method !== NetworkingMethod.Standalone"
class="max-w-full overflow-hidden text-ellipsis"
>
{{ slotProps.option.networking_method === NetworkingMethod.Manual
? slotProps.option.peer_urls.join(', ')
: slotProps.option.public_server_url }}
</div>
<div
v-if="isRunning(slotProps.option.instance_id) && networkStore.instances[slotProps.option.instance_id].detail && (networkStore.instances[slotProps.option.instance_id].detail?.my_node_info.virtual_ipv4 !== '')"
>
{{ networkStore.instances[slotProps.option.instance_id].detail
? networkStore.instances[slotProps.option.instance_id].detail?.my_node_info.virtual_ipv4 : '' }}
</div>
@@ -292,8 +320,10 @@ function isRunning(id: string) {
</template>
<template #end>
<Button icon="pi pi-cog" severity="secondary" aria-haspopup="true" :label="t('settings')"
aria-controls="overlay_setting_menu" @click="toggle_setting_menu" />
<Button
icon="pi pi-cog" severity="secondary" aria-haspopup="true" :label="t('settings')"
aria-controls="overlay_setting_menu" @click="toggle_setting_menu"
/>
<TieredMenu id="overlay_setting_menu" ref="setting_menu" :model="setting_menu_items" :popup="true" />
</template>
</Toolbar>
@@ -311,16 +341,20 @@ function isRunning(id: string) {
</StepList>
<StepPanels value="1">
<StepPanel v-slot="{ activateCallback = (s: string) => { } } = {}" value="1">
<Config :instance-id="networkStore.curNetworkId" :config-invalid="messageBarSeverity !== Severity.None"
@run-network="runNetworkCb($event, () => activateCallback('2'))" />
<Config
:instance-id="networkStore.curNetworkId" :config-invalid="messageBarSeverity !== Severity.None"
@run-network="runNetworkCb($event, () => activateCallback('2'))"
/>
</StepPanel>
<StepPanel v-slot="{ activateCallback = (s: string) => { } } = {}" value="2">
<div class="flex flex-column">
<Status :instance-id="networkStore.curNetworkId" />
</div>
<div class="flex pt-4 justify-content-center">
<Button :label="t('stop_network')" severity="danger" icon="pi pi-arrow-left"
@click="stopNetworkCb(networkStore.curNetwork, () => activateCallback('1'))" />
<Button
:label="t('stop_network')" severity="danger" icon="pi pi-arrow-left"
@click="stopNetworkCb(networkStore.curNetwork, () => activateCallback('1'))"
/>
</div>
</StepPanel>
</StepPanels>
@@ -360,6 +394,10 @@ body {
margin: 0;
}
.p-select-overlay {
max-width: calc(100% - 2rem);
}
/*
.p-tabview-panel {

View File

@@ -108,7 +108,8 @@ export const useNetworkStore = defineStore('networkStore', {
loadAutoStartInstIdsFromLocalStorage() {
try {
this.autoStartInstIds = JSON.parse(localStorage.getItem('autoStartInstIds') || '[]')
} catch (e) {
}
catch (e) {
console.error(e)
this.autoStartInstIds = []
}

View File

@@ -16,7 +16,6 @@
font-weight: 400;
color: #0f0f0f;
background-color: white;
font-synthesis: none;
text-rendering: optimizeLegibility;

View File

@@ -12,7 +12,7 @@ declare module 'vue-router/auto-routes' {
ParamValueOneOrMore,
ParamValueZeroOrMore,
ParamValueZeroOrOne,
} from 'unplugin-vue-router/types'
} from 'vue-router'
/**
* Route name map generated by unplugin-vue-router

View File

@@ -31,6 +31,9 @@ export interface NetworkConfig {
listener_urls: string[]
rpc_port: number
latency_first: boolean
dev_name: string
}
export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
@@ -62,6 +65,8 @@ export function DEFAULT_NETWORK_CONFIG(): NetworkConfig {
'wg://0.0.0.0:11011',
],
rpc_port: 0,
latency_first: true,
dev_name: '',
}
}
@@ -75,6 +80,7 @@ export interface NetworkInstance {
}
export interface NetworkInstanceRunningInfo {
dev_name: string
my_node_info: NodeInfo
events: Record<string, any>
node_info: NodeInfo
@@ -85,13 +91,26 @@ export interface NetworkInstanceRunningInfo {
error_msg?: string
}
export interface Ipv4Addr {
addr: number
}
export interface Ipv6Addr {
part1: number
part2: number
part3: number
part4: number
}
export interface NodeInfo {
virtual_ipv4: string
hostname: string
version: string
ips: {
public_ipv4: string
interface_ipv4s: string[]
public_ipv6: string
interface_ipv6s: string[]
public_ipv4: Ipv4Addr
interface_ipv4s: Ipv4Addr[]
public_ipv6: Ipv6Addr
interface_ipv6s: Ipv6Addr[]
listeners: {
serialization: string
scheme_end: number
@@ -125,6 +144,7 @@ export interface Route {
hostname: string
stun_info?: StunInfo
inst_id: string
version: string
}
export interface PeerInfo {

View File

@@ -1,19 +1,19 @@
import path from 'node:path'
import { defineConfig } from 'vite'
import Vue from '@vitejs/plugin-vue'
import Layouts from 'vite-plugin-vue-layouts'
import Components from 'unplugin-vue-components/vite'
import AutoImport from 'unplugin-auto-import/vite'
import VueMacros from 'unplugin-vue-macros/vite'
import process from 'node:process'
import VueI18n from '@intlify/unplugin-vue-i18n/vite'
import VueDevTools from 'vite-plugin-vue-devtools'
import VueRouter from 'unplugin-vue-router/vite'
import { PrimeVueResolver } from '@primevue/auto-import-resolver'
import Vue from '@vitejs/plugin-vue'
import { internalIpV4Sync } from 'internal-ip'
import AutoImport from 'unplugin-auto-import/vite'
import Components from 'unplugin-vue-components/vite'
import VueMacros from 'unplugin-vue-macros/vite'
import { VueRouterAutoImports } from 'unplugin-vue-router'
import { PrimeVueResolver } from '@primevue/auto-import-resolver';
import { svelte } from '@sveltejs/vite-plugin-svelte';
import { internalIpV4Sync } from 'internal-ip';
import VueRouter from 'unplugin-vue-router/vite'
import { defineConfig } from 'vite'
import VueDevTools from 'vite-plugin-vue-devtools'
import Layouts from 'vite-plugin-vue-layouts'
const host = process.env.TAURI_DEV_HOST;
const host = process.env.TAURI_DEV_HOST
// https://vitejs.dev/config/
export default defineConfig(async () => ({
@@ -23,7 +23,6 @@ export default defineConfig(async () => ({
},
},
plugins: [
svelte(),
VueMacros({
plugins: {
vue: Vue({
@@ -100,10 +99,10 @@ export default defineConfig(async () => ({
},
hmr: host
? {
protocol: 'ws',
host: internalIpV4Sync(),
port: 1430,
}
protocol: 'ws',
host: internalIpV4Sync(),
port: 1430,
}
: undefined,
},
}))

View File

@@ -3,12 +3,12 @@ name = "easytier"
description = "A full meshed p2p VPN, connecting all your devices in one network with one command."
homepage = "https://github.com/EasyTier/EasyTier"
repository = "https://github.com/EasyTier/EasyTier"
version = "1.2.3"
version = "2.0.0"
edition = "2021"
authors = ["kkrainbow"]
keywords = ["vpn", "p2p", "network", "easytier"]
categories = ["network-programming", "command-line-utilities"]
rust-version = "1.75"
rust-version = "1.77.0"
license-file = "LICENSE"
readme = "README.md"
@@ -29,6 +29,8 @@ path = "src/lib.rs"
test = false
[dependencies]
git-version = "0.3.9"
tracing = { version = "0.1", features = ["log"] }
tracing-subscriber = { version = "0.3", features = [
"env-filter",
@@ -49,7 +51,7 @@ futures = { version = "0.3", features = ["bilock", "unstable"] }
tokio = { version = "1", features = ["full"] }
tokio-stream = "0.1"
tokio-util = { version = "0.7.9", features = ["codec", "net"] }
tokio-util = { version = "0.7.9", features = ["codec", "net", "io"] }
async-stream = "0.3.5"
async-trait = "0.1.74"
@@ -101,14 +103,10 @@ uuid = { version = "1.5.0", features = [
crossbeam-queue = "0.3"
once_cell = "1.18.0"
# for packet
postcard = { "version" = "1.0.8", features = ["alloc"] }
# for rpc
tonic = "0.12"
prost = "0.13"
prost-types = "0.13"
anyhow = "1.0"
tarpc = { version = "0.32", features = ["tokio1", "serde1"] }
url = { version = "2.5", features = ["serde"] }
percent-encoding = "2.3.1"
@@ -127,6 +125,7 @@ rand = "0.8.5"
serde = { version = "1.0", features = ["derive"] }
pnet = { version = "0.35.0", features = ["serde"] }
serde_json = "1"
clap = { version = "4.4.8", features = [
"string",
@@ -180,6 +179,9 @@ wildmatch = "2.3.4"
rust-i18n = "3"
sys-locale = "0.3"
ringbuf = "0.4.5"
async-ringbuf = "0.3.1"
[target.'cfg(windows)'.dependencies]
windows-sys = { version = "0.52", features = [
"Win32_Networking_WinSock",
@@ -194,6 +196,8 @@ winreg = "0.52"
tonic-build = "0.12"
globwalk = "0.8.1"
regex = "1"
prost-build = "0.13.2"
rpc_build = { path = "src/proto/rpc_build" }
[target.'cfg(windows)'.build-dependencies]
reqwest = { version = "0.11", features = ["blocking"] }
@@ -203,6 +207,7 @@ zip = "0.6.6"
[dev-dependencies]
serial_test = "3.0.0"
rstest = "0.18.2"
futures-util = "0.3.30"
[target.'cfg(target_os = "linux")'.dev-dependencies]
defguard_wireguard_rs = "0.4.2"

View File

@@ -129,14 +129,35 @@ fn main() -> Result<(), Box<dyn std::error::Error>> {
#[cfg(target_os = "windows")]
WindowsBuild::check_for_win();
tonic_build::configure()
.type_attribute(".", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute("cli.DirectConnectedPeerInfo", "#[derive(Hash)]")
.type_attribute("cli.PeerInfoForGlobalMap", "#[derive(Hash)]")
let proto_files = [
"src/proto/peer_rpc.proto",
"src/proto/common.proto",
"src/proto/error.proto",
"src/proto/tests.proto",
"src/proto/cli.proto",
];
for proto_file in &proto_files {
println!("cargo:rerun-if-changed={}", proto_file);
}
prost_build::Config::new()
.type_attribute(".common", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(".error", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(".cli", "#[derive(serde::Serialize, serde::Deserialize)]")
.type_attribute(
"peer_rpc.GetIpListResponse",
"#[derive(serde::Serialize, serde::Deserialize)]",
)
.type_attribute("peer_rpc.DirectConnectedPeerInfo", "#[derive(Hash)]")
.type_attribute("peer_rpc.PeerInfoForGlobalMap", "#[derive(Hash)]")
.type_attribute("peer_rpc.ForeignNetworkRouteInfoKey", "#[derive(Hash, Eq)]")
.type_attribute("common.RpcDescriptor", "#[derive(Hash, Eq)]")
.service_generator(Box::new(rpc_build::ServiceGenerator::new()))
.btree_map(&["."])
.compile(&["proto/cli.proto"], &["proto/"])
.compile_protos(&proto_files, &["src/proto/"])
.unwrap();
// tonic_build::compile_protos("proto/cli.proto")?;
check_locale();
Ok(())
}

View File

@@ -108,6 +108,9 @@ core_clap:
disable_p2p:
en: "disable p2p communication, will only relay packets with peers specified by --peers"
zh-CN: "禁用P2P通信只通过--peers指定的节点转发数据包"
disable_udp_hole_punching:
en: "disable udp hole punching"
zh-CN: "禁用UDP打洞功能"
relay_all_peer_rpc:
en: "relay all peer rpc packets, even if the peer is not in the relay network whitelist. this can help peers not in relay network whitelist to establish p2p connection."
zh-CN: "转发所有对等节点的RPC数据包即使对等节点不在转发网络白名单中。这可以帮助白名单外网络中的对等节点建立P2P连接。"

View File

@@ -72,7 +72,7 @@ pub trait ConfigLoader: Send + Sync {
pub type NetworkSecretDigest = [u8; 32];
#[derive(Debug, Clone, Deserialize, Serialize, Default)]
#[derive(Debug, Clone, Deserialize, Serialize, Default, Eq, Hash)]
pub struct NetworkIdentity {
pub network_name: String,
pub network_secret: Option<String>,
@@ -178,6 +178,8 @@ pub struct Flags {
pub disable_p2p: bool,
#[derivative(Default(value = "false"))]
pub relay_all_peer_rpc: bool,
#[derivative(Default(value = "false"))]
pub disable_udp_hole_punching: bool,
}
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
@@ -206,7 +208,10 @@ struct Config {
socks5_proxy: Option<url::Url>,
flags: Option<Flags>,
flags: Option<serde_json::Map<String, serde_json::Value>>,
#[serde(skip)]
flags_struct: Option<Flags>,
}
#[derive(Debug, Clone)]
@@ -222,13 +227,15 @@ impl Default for TomlConfigLoader {
impl TomlConfigLoader {
pub fn new_from_str(config_str: &str) -> Result<Self, anyhow::Error> {
let config = toml::de::from_str::<Config>(config_str).with_context(|| {
let mut config = toml::de::from_str::<Config>(config_str).with_context(|| {
format!(
"failed to parse config file: {}\n{}",
config_str, config_str
)
})?;
config.flags_struct = Some(Self::gen_flags(config.flags.clone().unwrap_or_default()));
Ok(TomlConfigLoader {
config: Arc::new(Mutex::new(config)),
})
@@ -246,6 +253,28 @@ impl TomlConfigLoader {
Ok(ret)
}
fn gen_flags(mut flags_hashmap: serde_json::Map<String, serde_json::Value>) -> Flags {
let default_flags_json = serde_json::to_string(&Flags::default()).unwrap();
let default_flags_hashmap =
serde_json::from_str::<serde_json::Map<String, serde_json::Value>>(&default_flags_json)
.unwrap();
tracing::debug!("default_flags_hashmap: {:?}", default_flags_hashmap);
let mut merged_hashmap = serde_json::Map::new();
for (key, value) in default_flags_hashmap {
if let Some(v) = flags_hashmap.remove(&key) {
merged_hashmap.insert(key, v);
} else {
merged_hashmap.insert(key, value);
}
}
tracing::debug!("merged_hashmap: {:?}", merged_hashmap);
serde_json::from_value(serde_json::Value::Object(merged_hashmap)).unwrap()
}
}
impl ConfigLoader for TomlConfigLoader {
@@ -472,13 +501,13 @@ impl ConfigLoader for TomlConfigLoader {
self.config
.lock()
.unwrap()
.flags
.flags_struct
.clone()
.unwrap_or_default()
}
fn set_flags(&self, flags: Flags) {
self.config.lock().unwrap().flags = Some(flags);
self.config.lock().unwrap().flags_struct = Some(flags);
}
fn get_exit_nodes(&self) -> Vec<Ipv4Addr> {

View File

@@ -21,4 +21,13 @@ macro_rules! set_global_var {
define_global_var!(MANUAL_CONNECTOR_RECONNECT_INTERVAL_MS, u64, 1000);
define_global_var!(OSPF_UPDATE_MY_GLOBAL_FOREIGN_NETWORK_INTERVAL_SEC, u64, 10);
pub const UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID: u32 = 2;
pub const EASYTIER_VERSION: &str = git_version::git_version!(
args = ["--abbrev=8", "--always", "--dirty=~"],
prefix = concat!(env!("CARGO_PKG_VERSION"), "-"),
suffix = "",
fallback = env!("CARGO_PKG_VERSION")
);

View File

@@ -31,8 +31,6 @@ pub enum Error {
// RpcListenError(String),
#[error("Rpc connect error: {0}")]
RpcConnectError(String),
#[error("Rpc error: {0}")]
RpcClientError(#[from] tarpc::client::RpcError),
#[error("Timeout error: {0}")]
Timeout(#[from] tokio::time::error::Elapsed),
#[error("url in blacklist")]

View File

@@ -4,7 +4,8 @@ use std::{
sync::{Arc, Mutex},
};
use crate::rpc::PeerConnInfo;
use crate::proto::cli::PeerConnInfo;
use crate::proto::common::PeerFeatureFlag;
use crossbeam::atomic::AtomicCell;
use super::{
@@ -68,6 +69,8 @@ pub struct GlobalCtx {
enable_exit_node: bool,
no_tun: bool,
feature_flags: AtomicCell<PeerFeatureFlag>,
}
impl std::fmt::Debug for GlobalCtx {
@@ -91,7 +94,7 @@ impl GlobalCtx {
let net_ns = NetNS::new(config_fs.get_netns());
let hostname = config_fs.get_hostname();
let (event_bus, _) = tokio::sync::broadcast::channel(100);
let (event_bus, _) = tokio::sync::broadcast::channel(1024);
let stun_info_collection = Arc::new(StunInfoCollector::new_with_default_servers());
@@ -119,6 +122,8 @@ impl GlobalCtx {
enable_exit_node,
no_tun,
feature_flags: AtomicCell::new(PeerFeatureFlag::default()),
}
}
@@ -179,6 +184,10 @@ impl GlobalCtx {
self.config.get_network_identity()
}
pub fn get_network_name(&self) -> String {
self.get_network_identity().network_name
}
pub fn get_ip_collector(&self) -> Arc<IPCollector> {
self.ip_collector.clone()
}
@@ -191,7 +200,6 @@ impl GlobalCtx {
self.stun_info_collection.as_ref()
}
#[cfg(test)]
pub fn replace_stun_info_collector(&self, collector: Box<dyn StunInfoCollectorTrait>) {
// force replace the stun_info_collection without mut and drop the old one
let ptr = &self.stun_info_collection as *const Box<dyn StunInfoCollectorTrait>;
@@ -219,6 +227,10 @@ impl GlobalCtx {
self.config.get_flags()
}
pub fn set_flags(&self, flags: Flags) {
self.config.set_flags(flags);
}
pub fn get_128_key(&self) -> [u8; 16] {
let mut key = [0u8; 16];
let secret = self
@@ -243,6 +255,14 @@ impl GlobalCtx {
pub fn no_tun(&self) -> bool {
self.no_tun
}
pub fn get_feature_flags(&self) -> PeerFeatureFlag {
self.feature_flags.load()
}
pub fn set_feature_flags(&self, flags: PeerFeatureFlag) {
self.feature_flags.store(flags);
}
}
#[cfg(test)]

View File

@@ -14,6 +14,7 @@ pub mod global_ctx;
pub mod ifcfg;
pub mod netns;
pub mod network;
pub mod scoped_task;
pub mod stun;
pub mod stun_codec_ext;

View File

@@ -1,12 +1,13 @@
use std::{net::IpAddr, ops::Deref, sync::Arc};
use crate::rpc::peer::GetIpListResponse;
use pnet::datalink::NetworkInterface;
use tokio::{
sync::{Mutex, RwLock},
task::JoinSet,
};
use crate::proto::peer_rpc::GetIpListResponse;
use super::{netns::NetNS, stun::StunInfoCollectorTrait};
pub const CACHED_IP_LIST_TIMEOUT_SEC: u64 = 60;
@@ -163,7 +164,7 @@ pub struct IPCollector {
impl IPCollector {
pub fn new<T: StunInfoCollectorTrait + 'static>(net_ns: NetNS, stun_info_collector: T) -> Self {
Self {
cached_ip_list: Arc::new(RwLock::new(GetIpListResponse::new())),
cached_ip_list: Arc::new(RwLock::new(GetIpListResponse::default())),
collect_ip_task: Mutex::new(JoinSet::new()),
net_ns,
stun_info_collector: Arc::new(Box::new(stun_info_collector)),
@@ -195,14 +196,18 @@ impl IPCollector {
let Ok(ip_addr) = ip.parse::<IpAddr>() else {
continue;
};
if ip_addr.is_ipv4() {
cached_ip_list.write().await.public_ipv4 = ip.clone();
} else {
cached_ip_list.write().await.public_ipv6 = ip.clone();
match ip_addr {
IpAddr::V4(v) => {
cached_ip_list.write().await.public_ipv4 = Some(v.into())
}
IpAddr::V6(v) => {
cached_ip_list.write().await.public_ipv6 = Some(v.into())
}
}
}
let sleep_sec = if !cached_ip_list.read().await.public_ipv4.is_empty() {
let sleep_sec = if !cached_ip_list.read().await.public_ipv4.is_none() {
CACHED_IP_LIST_TIMEOUT_SEC
} else {
3
@@ -236,7 +241,7 @@ impl IPCollector {
#[tracing::instrument(skip(net_ns))]
async fn do_collect_local_ip_addrs(net_ns: NetNS) -> GetIpListResponse {
let mut ret = crate::rpc::peer::GetIpListResponse::new();
let mut ret = GetIpListResponse::default();
let ifaces = Self::collect_interfaces(net_ns.clone()).await;
let _g = net_ns.guard();
@@ -246,25 +251,28 @@ impl IPCollector {
if ip.is_loopback() || ip.is_multicast() {
continue;
}
if ip.is_ipv4() {
ret.interface_ipv4s.push(ip.to_string());
} else if ip.is_ipv6() {
ret.interface_ipv6s.push(ip.to_string());
match ip {
std::net::IpAddr::V4(v4) => {
ret.interface_ipv4s.push(v4.into());
}
std::net::IpAddr::V6(v6) => {
ret.interface_ipv6s.push(v6.into());
}
}
}
}
if let Ok(v4_addr) = local_ipv4().await {
tracing::trace!("got local ipv4: {}", v4_addr);
if !ret.interface_ipv4s.contains(&v4_addr.to_string()) {
ret.interface_ipv4s.push(v4_addr.to_string());
if !ret.interface_ipv4s.contains(&v4_addr.into()) {
ret.interface_ipv4s.push(v4_addr.into());
}
}
if let Ok(v6_addr) = local_ipv6().await {
tracing::trace!("got local ipv6: {}", v6_addr);
if !ret.interface_ipv6s.contains(&v6_addr.to_string()) {
ret.interface_ipv6s.push(v6_addr.to_string());
if !ret.interface_ipv6s.contains(&v6_addr.into()) {
ret.interface_ipv6s.push(v6_addr.into());
}
}

View File

@@ -0,0 +1,134 @@
//! This module provides a wrapper around Tokio's `JoinHandle`: `ScopedTask`, which aborts the task when it is dropped.
//! A `ScopedTask` can still be awaited to join the child task, and abort-on-drop still triggers while it is being awaited.
//!
//! For example, if task A spawned task B but is doing something else, and task B is waiting for task C to join,
//! aborting A will also abort both B and C.
use std::future::Future;
use std::ops::Deref;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::task::JoinHandle;
#[derive(Debug)]
pub struct ScopedTask<T> {
inner: JoinHandle<T>,
}
impl<T> Drop for ScopedTask<T> {
fn drop(&mut self) {
self.inner.abort()
}
}
impl<T> Future for ScopedTask<T> {
type Output = <JoinHandle<T> as Future>::Output;
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
Pin::new(&mut self.inner).poll(cx)
}
}
impl<T> From<JoinHandle<T>> for ScopedTask<T> {
fn from(inner: JoinHandle<T>) -> Self {
Self { inner }
}
}
impl<T> Deref for ScopedTask<T> {
type Target = JoinHandle<T>;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
#[cfg(test)]
mod tests {
use super::ScopedTask;
use futures_util::future::pending;
use std::sync::{Arc, RwLock};
use tokio::task::yield_now;
struct Sentry(Arc<RwLock<bool>>);
impl Drop for Sentry {
fn drop(&mut self) {
*self.0.write().unwrap() = true
}
}
#[tokio::test]
async fn drop_while_not_waiting_for_join() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let task = ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}));
yield_now().await;
assert!(!*dropped.read().unwrap());
drop(task);
yield_now().await;
assert!(*dropped.read().unwrap());
}
#[tokio::test]
async fn drop_while_waiting_for_join() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let handle = tokio::spawn(async move {
ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}))
.await
.unwrap()
});
yield_now().await;
assert!(!*dropped.read().unwrap());
handle.abort();
yield_now().await;
assert!(*dropped.read().unwrap());
}
#[tokio::test]
async fn no_drop_only_join() {
assert_eq!(
ScopedTask::from(tokio::spawn(async {
yield_now().await;
5
}))
.await
.unwrap(),
5
)
}
#[tokio::test]
async fn manually_abort_before_drop() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let task = ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}));
yield_now().await;
assert!(!*dropped.read().unwrap());
task.abort();
yield_now().await;
assert!(*dropped.read().unwrap());
}
#[tokio::test]
async fn manually_abort_then_join() {
let dropped = Arc::new(RwLock::new(false));
let sentry = Sentry(dropped.clone());
let task = ScopedTask::from(tokio::spawn(async move {
let _sentry = sentry;
pending::<()>().await
}));
yield_now().await;
assert!(!*dropped.read().unwrap());
task.abort();
yield_now().await;
assert!(task.await.is_err());
}
}
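The `ScopedTask` file above ties a task's lifetime to its handle via `Drop`. The same RAII idea can be sketched with std threads only (no Tokio; `ScopedWorker` and its cancellation flag are illustrative names, not part of the diff — here drop requests cancellation and joins, rather than hard-aborting):

```rust
use std::sync::{
    atomic::{AtomicBool, Ordering},
    Arc,
};
use std::thread;

/// A std-thread analogue of `ScopedTask`: dropping the handle requests
/// cancellation and joins the worker, so no task outlives its owner.
struct ScopedWorker {
    cancel: Arc<AtomicBool>,
    handle: Option<thread::JoinHandle<()>>,
}

impl ScopedWorker {
    fn spawn<F: Fn(&AtomicBool) + Send + 'static>(f: F) -> Self {
        let cancel = Arc::new(AtomicBool::new(false));
        let c = cancel.clone();
        let handle = thread::spawn(move || f(&c));
        Self {
            cancel,
            handle: Some(handle),
        }
    }
}

impl Drop for ScopedWorker {
    fn drop(&mut self) {
        // Signal cancellation, then wait for the worker to observe it.
        self.cancel.store(true, Ordering::SeqCst);
        if let Some(h) = self.handle.take() {
            let _ = h.join();
        }
    }
}
```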

View File

@@ -1,9 +1,10 @@
use std::collections::BTreeSet;
use std::net::{IpAddr, SocketAddr};
use std::sync::atomic::AtomicBool;
use std::sync::{Arc, RwLock};
use std::time::{Duration, Instant};
use crate::rpc::{NatType, StunInfo};
use crate::proto::common::{NatType, StunInfo};
use anyhow::Context;
use chrono::Local;
use crossbeam::atomic::AtomicCell;
@@ -161,7 +162,7 @@ impl StunClient {
continue;
};
tracing::debug!(b = ?&udp_buf[..len], ?tids, ?remote_addr, ?stun_host, "recv stun response, msg: {:#?}", msg);
tracing::trace!(b = ?&udp_buf[..len], ?tids, ?remote_addr, ?stun_host, "recv stun response, msg: {:#?}", msg);
if msg.class() != MessageClass::SuccessResponse
|| msg.method() != BINDING
@@ -216,7 +217,7 @@ impl StunClient {
changed_addr
}
#[tracing::instrument(ret, err, level = Level::DEBUG)]
#[tracing::instrument(ret, level = Level::TRACE)]
pub async fn bind_request(
self,
change_ip: bool,
@@ -243,7 +244,7 @@ impl StunClient {
.encode_into_bytes(message.clone())
.with_context(|| "encode stun message")?;
tids.push(tid as u128);
tracing::debug!(?message, ?msg, tid, "send stun request");
tracing::trace!(?message, ?msg, tid, "send stun request");
self.socket
.send_to(msg.as_slice().into(), &stun_host)
.await?;
@@ -276,7 +277,7 @@ impl StunClient {
latency_us: now.elapsed().as_micros() as u32,
};
tracing::debug!(
tracing::trace!(
?stun_host,
?recv_addr,
?changed_socket_addr,
@@ -303,14 +304,14 @@ impl StunClientBuilder {
task_set.spawn(
async move {
let mut buf = [0; 1620];
tracing::info!("start stun packet listener");
tracing::trace!("start stun packet listener");
loop {
let Ok((len, addr)) = udp_clone.recv_from(&mut buf).await else {
tracing::error!("udp recv_from error");
break;
};
let data = buf[..len].to_vec();
tracing::debug!(?addr, ?data, "recv udp stun packet");
tracing::trace!(?addr, ?data, "recv udp stun packet");
let _ = stun_packet_sender_clone.send(StunPacket { data, addr });
}
}
@@ -552,12 +553,15 @@ pub struct StunInfoCollector {
udp_nat_test_result: Arc<RwLock<Option<UdpNatTypeDetectResult>>>,
nat_test_result_time: Arc<AtomicCell<chrono::DateTime<Local>>>,
redetect_notify: Arc<tokio::sync::Notify>,
tasks: JoinSet<()>,
tasks: std::sync::Mutex<JoinSet<()>>,
started: AtomicBool,
}
#[async_trait::async_trait]
impl StunInfoCollectorTrait for StunInfoCollector {
fn get_stun_info(&self) -> StunInfo {
self.start_stun_routine();
let Some(result) = self.udp_nat_test_result.read().unwrap().clone() else {
return Default::default();
};
@@ -572,6 +576,8 @@ impl StunInfoCollectorTrait for StunInfoCollector {
}
async fn get_udp_port_mapping(&self, local_port: u16) -> Result<SocketAddr, Error> {
self.start_stun_routine();
let stun_servers = self
.udp_nat_test_result
.read()
@@ -605,17 +611,14 @@ impl StunInfoCollectorTrait for StunInfoCollector {
impl StunInfoCollector {
pub fn new(stun_servers: Vec<String>) -> Self {
let mut ret = Self {
Self {
stun_servers: Arc::new(RwLock::new(stun_servers)),
udp_nat_test_result: Arc::new(RwLock::new(None)),
nat_test_result_time: Arc::new(AtomicCell::new(Local::now())),
redetect_notify: Arc::new(tokio::sync::Notify::new()),
tasks: JoinSet::new(),
};
ret.start_stun_routine();
ret
tasks: std::sync::Mutex::new(JoinSet::new()),
started: AtomicBool::new(false),
}
}
pub fn new_with_default_servers() -> Self {
@@ -648,12 +651,18 @@ impl StunInfoCollector {
.collect()
}
fn start_stun_routine(&mut self) {
fn start_stun_routine(&self) {
if self.started.load(std::sync::atomic::Ordering::Relaxed) {
return;
}
self.started
.store(true, std::sync::atomic::Ordering::Relaxed);
let stun_servers = self.stun_servers.clone();
let udp_nat_test_result = self.udp_nat_test_result.clone();
let udp_test_time = self.nat_test_result_time.clone();
let redetect_notify = self.redetect_notify.clone();
self.tasks.spawn(async move {
self.tasks.lock().unwrap().spawn(async move {
loop {
let servers = stun_servers.read().unwrap().clone();
// use the first three and randomly choose one from the rest
@@ -712,6 +721,31 @@ impl StunInfoCollector {
}
}
pub struct MockStunInfoCollector {
pub udp_nat_type: NatType,
}
#[async_trait::async_trait]
impl StunInfoCollectorTrait for MockStunInfoCollector {
fn get_stun_info(&self) -> StunInfo {
StunInfo {
udp_nat_type: self.udp_nat_type as i32,
tcp_nat_type: NatType::Unknown as i32,
last_update_time: std::time::Instant::now().elapsed().as_secs() as i64,
min_port: 100,
max_port: 200,
..Default::default()
}
}
async fn get_udp_port_mapping(&self, mut port: u16) -> Result<std::net::SocketAddr, Error> {
if port == 0 {
port = 40144;
}
Ok(format!("127.0.0.1:{}", port).parse().unwrap())
}
}
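The hunks above make `start_stun_routine` lazy behind a `started: AtomicBool`, checked with a Relaxed load and then a separate store. Note that a separate load + store can let two concurrent callers both see `false` and start the routine twice; `compare_exchange` performs the check-and-set as one atomic step. A sketch of the stricter form (helper name is hypothetical):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Run `start_fn` at most once, even with concurrent callers.
/// `compare_exchange` makes the "not started yet" check and the
/// "mark started" write a single atomic step, unlike a separate
/// load followed by a store, which two threads can both pass.
fn start_once(started: &AtomicBool, start_fn: impl FnOnce()) -> bool {
    if started
        .compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
        .is_ok()
    {
        start_fn();
        true
    } else {
        false
    }
}
```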
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -4,10 +4,21 @@ use std::{net::SocketAddr, sync::Arc};
use crate::{
common::{error::Error, global_ctx::ArcGlobalCtx, PeerId},
peers::{peer_manager::PeerManager, peer_rpc::PeerRpcManager},
peers::{
peer_manager::PeerManager, peer_rpc::PeerRpcManager,
peer_rpc_service::DirectConnectorManagerRpcServer,
},
proto::{
peer_rpc::{
DirectConnectorRpc, DirectConnectorRpcClientFactory, DirectConnectorRpcServer,
GetIpListRequest, GetIpListResponse,
},
rpc_types::controller::BaseController,
},
};
use crate::rpc::{peer::GetIpListResponse, PeerConnInfo};
use crate::proto::cli::PeerConnInfo;
use anyhow::Context;
use tokio::{task::JoinSet, time::timeout};
use tracing::Instrument;
use url::Host;
@@ -17,11 +28,6 @@ use super::create_connector_by_url;
pub const DIRECT_CONNECTOR_SERVICE_ID: u32 = 1;
pub const DIRECT_CONNECTOR_BLACKLIST_TIMEOUT_SEC: u64 = 300;
#[tarpc::service]
pub trait DirectConnectorRpc {
async fn get_ip_list() -> GetIpListResponse;
}
#[async_trait::async_trait]
pub trait PeerManagerForDirectConnector {
async fn list_peers(&self) -> Vec<PeerId>;
@@ -35,7 +41,10 @@ impl PeerManagerForDirectConnector for PeerManager {
let mut ret = vec![];
let routes = self.list_routes().await;
for r in routes.iter() {
for r in routes
.iter()
.filter(|r| r.feature_flag.map(|r| !r.is_public_server).unwrap_or(true))
{
ret.push(r.peer_id);
}
@@ -51,27 +60,6 @@ impl PeerManagerForDirectConnector for PeerManager {
}
}
#[derive(Clone)]
struct DirectConnectorManagerRpcServer {
// TODO: this only caches for one src peer, should make it global
global_ctx: ArcGlobalCtx,
}
#[tarpc::server]
impl DirectConnectorRpc for DirectConnectorManagerRpcServer {
async fn get_ip_list(self, _: tarpc::context::Context) -> GetIpListResponse {
let mut ret = self.global_ctx.get_ip_collector().collect_ip_addrs().await;
ret.listeners = self.global_ctx.get_running_listeners();
ret
}
}
impl DirectConnectorManagerRpcServer {
pub fn new(global_ctx: ArcGlobalCtx) -> Self {
Self { global_ctx }
}
}
#[derive(Hash, Eq, PartialEq, Clone)]
struct DstBlackListItem(PeerId, String);
@@ -130,10 +118,17 @@ impl DirectConnectorManager {
}
pub fn run_as_server(&mut self) {
self.data.peer_manager.get_peer_rpc_mgr().run_service(
DIRECT_CONNECTOR_SERVICE_ID,
DirectConnectorManagerRpcServer::new(self.global_ctx.clone()).serve(),
);
self.data
.peer_manager
.get_peer_rpc_mgr()
.rpc_server()
.registry()
.register(
DirectConnectorRpcServer::new(DirectConnectorManagerRpcServer::new(
self.global_ctx.clone(),
)),
&self.data.global_ctx.get_network_name(),
);
}
pub fn run_as_client(&mut self) {
@@ -238,7 +233,8 @@ impl DirectConnectorManager {
let enable_ipv6 = data.global_ctx.get_flags().enable_ipv6;
let available_listeners = ip_list
.listeners
.iter()
.into_iter()
.map(Into::<url::Url>::into)
.filter_map(|l| if l.scheme() != "ring" { Some(l) } else { None })
.filter(|l| l.port().is_some() && l.host().is_some())
.filter(|l| {
@@ -268,7 +264,7 @@ impl DirectConnectorManager {
Some(SocketAddr::V4(_)) => {
ip_list.interface_ipv4s.iter().for_each(|ip| {
let mut addr = (*listener).clone();
if addr.set_host(Some(ip.as_str())).is_ok() {
if addr.set_host(Some(ip.to_string().as_str())).is_ok() {
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
@@ -277,19 +273,27 @@ impl DirectConnectorManager {
}
});
let mut addr = (*listener).clone();
if addr.set_host(Some(ip_list.public_ipv4.as_str())).is_ok() {
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
if let Some(public_ipv4) = ip_list.public_ipv4 {
let mut addr = (*listener).clone();
if addr
.set_host(Some(public_ipv4.to_string().as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
}
}
}
Some(SocketAddr::V6(_)) => {
ip_list.interface_ipv6s.iter().for_each(|ip| {
let mut addr = (*listener).clone();
if addr.set_host(Some(format!("[{}]", ip).as_str())).is_ok() {
if addr
.set_host(Some(format!("[{}]", ip.to_string()).as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
@@ -298,16 +302,18 @@ impl DirectConnectorManager {
}
});
let mut addr = (*listener).clone();
if addr
.set_host(Some(format!("[{}]", ip_list.public_ipv6).as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
if let Some(public_ipv6) = ip_list.public_ipv6 {
let mut addr = (*listener).clone();
if addr
.set_host(Some(format!("[{}]", public_ipv6.to_string()).as_str()))
.is_ok()
{
tasks.spawn(Self::try_connect_to_ip(
data.clone(),
dst_peer_id.clone(),
addr.to_string(),
));
}
}
}
p => {
@@ -351,16 +357,21 @@ impl DirectConnectorManager {
tracing::trace!("try direct connect to peer: {}", dst_peer_id);
let ip_list = peer_manager
let rpc_stub = peer_manager
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, dst_peer_id, |c| async {
let client =
DirectConnectorRpcClient::new(tarpc::client::Config::default(), c).spawn();
let ip_list = client.get_ip_list(tarpc::context::current()).await;
tracing::info!(ip_list = ?ip_list, dst_peer_id = ?dst_peer_id, "got ip list");
ip_list
})
.await?;
.rpc_client()
.scoped_client::<DirectConnectorRpcClientFactory<BaseController>>(
peer_manager.my_peer_id(),
dst_peer_id,
data.global_ctx.get_network_name(),
);
let ip_list = rpc_stub
.get_ip_list(BaseController {}, GetIpListRequest {})
.await
.with_context(|| format!("get ip list from peer {}", dst_peer_id))?;
tracing::info!(ip_list = ?ip_list, dst_peer_id = ?dst_peer_id, "got ip list");
Self::do_try_direct_connect_internal(data, dst_peer_id, ip_list).await
}
@@ -380,7 +391,7 @@ mod tests {
connect_peer_manager, create_mock_peer_manager, wait_route_appear,
wait_route_appear_with_cost,
},
rpc::peer::GetIpListResponse,
proto::peer_rpc::GetIpListResponse,
};
#[rstest::rstest]
@@ -436,12 +447,14 @@ mod tests {
p_a.get_global_ctx(),
p_a.clone(),
));
let mut ip_list = GetIpListResponse::new();
let mut ip_list = GetIpListResponse::default();
ip_list
.listeners
.push("tcp://127.0.0.1:10222".parse().unwrap());
ip_list.interface_ipv4s.push("127.0.0.1".to_string());
ip_list
.interface_ipv4s
.push("127.0.0.1".parse::<std::net::Ipv4Addr>().unwrap().into());
DirectConnectorManager::do_try_direct_connect_internal(data.clone(), 1, ip_list.clone())
.await

View File

@@ -11,7 +11,12 @@ use tokio::{
use crate::{
common::PeerId,
peers::peer_conn::PeerConnId,
rpc as easytier_rpc,
proto::{
cli::{
ConnectorManageAction, ListConnectorResponse, ManageConnectorResponse, PeerConnInfo,
},
rpc_types::{self, controller::BaseController},
},
tunnel::{IpVersion, TunnelConnector},
};
@@ -23,9 +28,9 @@ use crate::{
},
connector::set_bind_addr_for_peer_connector,
peers::peer_manager::PeerManager,
rpc::{
connector_manage_rpc_server::ConnectorManageRpc, Connector, ConnectorStatus,
ListConnectorRequest, ManageConnectorRequest,
proto::cli::{
Connector, ConnectorManageRpc, ConnectorStatus, ListConnectorRequest,
ManageConnectorRequest,
},
use_global_var,
};
@@ -105,12 +110,18 @@ impl ManualConnectorManager {
Ok(())
}
pub async fn remove_connector(&self, url: &str) -> Result<(), Error> {
pub async fn remove_connector(&self, url: url::Url) -> Result<(), Error> {
tracing::info!("remove_connector: {}", url);
if !self.list_connectors().await.iter().any(|x| x.url == url) {
let url = url.into();
if !self
.list_connectors()
.await
.iter()
.any(|x| x.url.as_ref() == Some(&url))
{
return Err(Error::NotFound);
}
self.data.removed_conn_urls.insert(url.into());
self.data.removed_conn_urls.insert(url.to_string());
Ok(())
}
@@ -137,7 +148,7 @@ impl ManualConnectorManager {
ret.insert(
0,
Connector {
url: conn_url,
url: Some(conn_url.parse().unwrap()),
status: status.into(),
},
);
@@ -154,7 +165,7 @@ impl ManualConnectorManager {
ret.insert(
0,
Connector {
url: conn_url,
url: Some(conn_url.parse().unwrap()),
status: ConnectorStatus::Connecting.into(),
},
);
@@ -213,14 +224,14 @@ impl ManualConnectorManager {
}
async fn handle_event(event: &GlobalCtxEvent, data: &ConnectorManagerData) {
let need_add_alive = |conn_info: &easytier_rpc::PeerConnInfo| conn_info.is_client;
let need_add_alive = |conn_info: &PeerConnInfo| conn_info.is_client;
match event {
GlobalCtxEvent::PeerConnAdded(conn_info) => {
if !need_add_alive(conn_info) {
return;
}
let addr = conn_info.tunnel.as_ref().unwrap().remote_addr.clone();
data.alive_conn_urls.insert(addr);
data.alive_conn_urls.insert(addr.unwrap().to_string());
tracing::warn!("peer conn added: {:?}", conn_info);
}
@@ -229,7 +240,7 @@ impl ManualConnectorManager {
return;
}
let addr = conn_info.tunnel.as_ref().unwrap().remote_addr.clone();
data.alive_conn_urls.remove(&addr);
data.alive_conn_urls.remove(&addr.unwrap().to_string());
tracing::warn!("peer conn removed: {:?}", conn_info);
}
@@ -303,7 +314,7 @@ impl ManualConnectorManager {
tracing::info!("reconnect get tunnel succ: {:?}", tunnel);
assert_eq!(
dead_url,
tunnel.info().unwrap().remote_addr,
tunnel.info().unwrap().remote_addr.unwrap().to_string(),
"info: {:?}",
tunnel.info()
);
@@ -385,45 +396,43 @@ impl ManualConnectorManager {
}
}
#[derive(Clone)]
pub struct ConnectorManagerRpcService(pub Arc<ManualConnectorManager>);
#[tonic::async_trait]
#[async_trait::async_trait]
impl ConnectorManageRpc for ConnectorManagerRpcService {
type Controller = BaseController;
async fn list_connector(
&self,
_request: tonic::Request<ListConnectorRequest>,
) -> Result<tonic::Response<easytier_rpc::ListConnectorResponse>, tonic::Status> {
let mut ret = easytier_rpc::ListConnectorResponse::default();
_: BaseController,
_request: ListConnectorRequest,
) -> Result<ListConnectorResponse, rpc_types::error::Error> {
let mut ret = ListConnectorResponse::default();
let connectors = self.0.list_connectors().await;
ret.connectors = connectors;
Ok(tonic::Response::new(ret))
Ok(ret)
}
async fn manage_connector(
&self,
request: tonic::Request<ManageConnectorRequest>,
) -> Result<tonic::Response<easytier_rpc::ManageConnectorResponse>, tonic::Status> {
let req = request.into_inner();
let url = url::Url::parse(&req.url)
.map_err(|_| tonic::Status::invalid_argument("invalid url"))?;
if req.action == easytier_rpc::ConnectorManageAction::Remove as i32 {
self.0.remove_connector(url.path()).await.map_err(|e| {
tonic::Status::invalid_argument(format!("remove connector failed: {:?}", e))
})?;
return Ok(tonic::Response::new(
easytier_rpc::ManageConnectorResponse::default(),
));
_: BaseController,
req: ManageConnectorRequest,
) -> Result<ManageConnectorResponse, rpc_types::error::Error> {
let url: url::Url = req.url.ok_or(anyhow::anyhow!("url is empty"))?.into();
if req.action == ConnectorManageAction::Remove as i32 {
self.0
.remove_connector(url.clone())
.await
.with_context(|| format!("remove connector failed: {:?}", url))?;
return Ok(ManageConnectorResponse::default());
} else {
self.0
.add_connector_by_url(url.as_str())
.await
.map_err(|e| {
tonic::Status::invalid_argument(format!("add connector failed: {:?}", e))
})?;
.with_context(|| format!("add connector failed: {:?}", url))?;
}
Ok(tonic::Response::new(
easytier_rpc::ManageConnectorResponse::default(),
))
Ok(ManageConnectorResponse::default())
}
}

View File

@@ -32,14 +32,14 @@ async fn set_bind_addr_for_peer_connector(
if is_ipv4 {
let mut bind_addrs = vec![];
for ipv4 in ips.interface_ipv4s {
let socket_addr = SocketAddrV4::new(ipv4.parse().unwrap(), 0).into();
let socket_addr = SocketAddrV4::new(ipv4.into(), 0).into();
bind_addrs.push(socket_addr);
}
connector.set_bind_addrs(bind_addrs);
} else {
let mut bind_addrs = vec![];
for ipv6 in ips.interface_ipv6s {
let socket_addr = SocketAddrV6::new(ipv6.parse().unwrap(), 0, 0, 0).into();
let socket_addr = SocketAddrV6::new(ipv6.into(), 0, 0, 0).into();
bind_addrs.push(socket_addr);
}
connector.set_bind_addrs(bind_addrs);

View File

@@ -5,6 +5,7 @@ use std::{
Arc,
},
time::Duration,
u16,
};
use anyhow::Context;
@@ -21,12 +22,20 @@ use zerocopy::FromBytes;
use crate::{
common::{
constants, error::Error, global_ctx::ArcGlobalCtx, join_joinset_background, netns::NetNS,
stun::StunInfoCollectorTrait, PeerId,
error::Error, global_ctx::ArcGlobalCtx, join_joinset_background, netns::NetNS,
scoped_task::ScopedTask, stun::StunInfoCollectorTrait, PeerId,
},
defer,
peers::peer_manager::PeerManager,
rpc::NatType,
proto::{
common::NatType,
peer_rpc::{
TryPunchHoleRequest, TryPunchHoleResponse, TryPunchSymmetricRequest,
TryPunchSymmetricResponse, UdpHolePunchRpc, UdpHolePunchRpcClientFactory,
UdpHolePunchRpcServer,
},
rpc_types::{self, controller::BaseController},
},
tunnel::{
common::setup_sokcet2,
packet_def::{UDPTunnelHeader, UdpPacketType, UDP_TUNNEL_HEADER_SIZE},
@@ -186,21 +195,6 @@ impl std::fmt::Debug for UdpSocketArray {
}
}
#[tarpc::service]
pub trait UdpHolePunchService {
async fn try_punch_hole(local_mapped_addr: SocketAddr) -> Option<SocketAddr>;
async fn try_punch_symmetric(
listener_addr: SocketAddr,
port: u16,
public_ips: Vec<Ipv4Addr>,
min_port: u16,
max_port: u16,
transaction_id: u32,
round: u32,
last_port_index: usize,
) -> Option<usize>;
}
#[derive(Debug)]
struct UdpHolePunchListener {
socket: Arc<UdpSocket>,
@@ -324,23 +318,34 @@ impl UdpHolePunchConnectorData {
}
#[derive(Clone)]
struct UdpHolePunchRpcServer {
struct UdpHolePunchRpcService {
data: Arc<UdpHolePunchConnectorData>,
tasks: Arc<std::sync::Mutex<JoinSet<()>>>,
}
#[tarpc::server]
impl UdpHolePunchService for UdpHolePunchRpcServer {
#[async_trait::async_trait]
impl UdpHolePunchRpc for UdpHolePunchRpcService {
type Controller = BaseController;
#[tracing::instrument(skip(self))]
async fn try_punch_hole(
self,
_: tarpc::context::Context,
local_mapped_addr: SocketAddr,
) -> Option<SocketAddr> {
&self,
_: BaseController,
request: TryPunchHoleRequest,
) -> Result<TryPunchHoleResponse, rpc_types::error::Error> {
let local_mapped_addr = request.local_mapped_addr.ok_or(anyhow::anyhow!(
"try_punch_hole request missing local_mapped_addr"
))?;
let local_mapped_addr = std::net::SocketAddr::from(local_mapped_addr);
// local mapped addr will be unspecified if the peer is symmetric
let peer_is_symmetric = local_mapped_addr.ip().is_unspecified();
let (socket, mapped_addr) = self.select_listener(peer_is_symmetric).await?;
let (socket, mapped_addr) =
self.select_listener(peer_is_symmetric)
.await
.ok_or(anyhow::anyhow!(
"failed to select listener for hole punching"
))?;
tracing::warn!(?local_mapped_addr, ?mapped_addr, "start hole punching");
if !peer_is_symmetric {
@@ -380,32 +385,48 @@ impl UdpHolePunchService for UdpHolePunchRpcServer {
}
}
Some(mapped_addr)
Ok(TryPunchHoleResponse {
remote_mapped_addr: Some(mapped_addr.into()),
})
}
#[instrument(skip(self))]
async fn try_punch_symmetric(
self,
_: tarpc::context::Context,
listener_addr: SocketAddr,
port: u16,
public_ips: Vec<Ipv4Addr>,
mut min_port: u16,
mut max_port: u16,
transaction_id: u32,
round: u32,
last_port_index: usize,
) -> Option<usize> {
&self,
_: BaseController,
request: TryPunchSymmetricRequest,
) -> Result<TryPunchSymmetricResponse, rpc_types::error::Error> {
let listener_addr = request.listener_addr.ok_or(anyhow::anyhow!(
"try_punch_symmetric request missing listener_addr"
))?;
let listener_addr = std::net::SocketAddr::from(listener_addr);
let port = request.port as u16;
let public_ips = request
.public_ips
.into_iter()
.map(|ip| std::net::Ipv4Addr::from(ip))
.collect::<Vec<_>>();
let mut min_port = request.min_port as u16;
let mut max_port = request.max_port as u16;
let transaction_id = request.transaction_id;
let round = request.round;
let last_port_index = request.last_port_index as usize;
tracing::info!("try_punch_symmetric start");
let punch_predictablely = self.data.punch_predicablely.load(Ordering::Relaxed);
let punch_randomly = self.data.punch_randomly.load(Ordering::Relaxed);
let total_port_count = self.data.shuffled_port_vec.len();
let listener = self.find_listener(&listener_addr).await?;
let listener = self
.find_listener(&listener_addr)
.await
.ok_or(anyhow::anyhow!(
"try_punch_symmetric failed to find listener"
))?;
let ip_count = public_ips.len();
if ip_count == 0 {
tracing::warn!("try_punch_symmetric got zero len public ip");
return None;
return Err(anyhow::anyhow!("try_punch_symmetric got zero len public ip").into());
}
min_port = std::cmp::max(1, min_port);
@@ -417,12 +438,12 @@ impl UdpHolePunchService for UdpHolePunchRpcServer {
}
// send max k1 packets if we are predicting the dst port
let max_k1 = 180;
let max_k1 = 60;
// send max k2 packets if we are sending to random port
let max_k2 = rand::thread_rng().gen_range(600..800);
// this means the NAT is allocating ports in a predictable way
if max_port.abs_diff(min_port) <= max_k1 && round <= 6 && punch_predictablely {
if max_port.abs_diff(min_port) <= 3 * max_k1 && round <= 6 && punch_predictablely {
let (min_port, max_port) = {
// round begins from 0. if round is even, we guess ports in increasing order
let port_delta = (max_k1 as u32) / ip_count as u32;
@@ -447,7 +468,7 @@ impl UdpHolePunchService for UdpHolePunchRpcServer {
&ports,
)
.await
.ok()?;
.with_context(|| "failed to send symmetric hole punch packet predict")?;
}
if punch_randomly {
@@ -461,20 +482,22 @@ impl UdpHolePunchService for UdpHolePunchRpcServer {
&self.data.shuffled_port_vec[start..end],
)
.await
.ok()?;
.with_context(|| "failed to send symmetric hole punch packet randomly")?;
return if end >= self.data.shuffled_port_vec.len() {
Some(1)
Ok(TryPunchSymmetricResponse { last_port_index: 1 })
} else {
Some(end)
Ok(TryPunchSymmetricResponse {
last_port_index: end as u32,
})
};
}
return Some(1);
return Ok(TryPunchSymmetricResponse { last_port_index: 1 });
}
}
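The symmetric-NAT path above guesses ports in a window of `max_k1 / ip_count` per round, scanning upward on even rounds and downward on odd ones (per the comments in the hunk). The exact window formula is partially elided from the diff, so the sketch below is an extrapolation from those comments, not the real implementation:

```rust
/// Approximate per-round guess window for a predictable NAT: each round
/// covers `max_k1 / ip_count` ports, increasing from `min_port` on even
/// rounds and decreasing from `max_port` on odd rounds.
fn guess_window(min_port: u16, max_port: u16, round: u32, ip_count: u32, max_k1: u32) -> (u16, u16) {
    let port_delta = max_k1 / ip_count.max(1);
    // how far previous round pairs have already scanned
    let step = (round / 2) * port_delta;
    if round % 2 == 0 {
        // even round: scan upward from min_port
        let lo = min_port.saturating_add(step as u16).max(1);
        let hi = lo.saturating_add(port_delta as u16).min(max_port);
        (lo, hi)
    } else {
        // odd round: scan downward from max_port
        let hi = max_port.saturating_sub(step as u16);
        let lo = hi.saturating_sub(port_delta as u16).max(min_port).max(1);
        (lo, hi)
    }
}
```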
impl UdpHolePunchRpcServer {
impl UdpHolePunchRpcService {
pub fn new(data: Arc<UdpHolePunchConnectorData>) -> Self {
let tasks = Arc::new(std::sync::Mutex::new(JoinSet::new()));
join_joinset_background(tasks.clone(), "UdpHolePunchRpcServer".to_owned());
@@ -593,10 +616,15 @@ impl UdpHolePunchConnector {
}
pub async fn run_as_server(&mut self) -> Result<(), Error> {
self.data.peer_mgr.get_peer_rpc_mgr().run_service(
constants::UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID,
UdpHolePunchRpcServer::new(self.data.clone()).serve(),
);
self.data
.peer_mgr
.get_peer_rpc_mgr()
.rpc_server()
.registry()
.register(
UdpHolePunchRpcServer::new(UdpHolePunchRpcService::new(self.data.clone())),
&self.data.global_ctx.get_network_name(),
);
Ok(())
}
@@ -605,6 +633,9 @@ impl UdpHolePunchConnector {
if self.data.global_ctx.get_flags().disable_p2p {
return Ok(());
}
if self.data.global_ctx.get_flags().disable_udp_hole_punching {
return Ok(());
}
self.run_as_client().await?;
self.run_as_server().await?;
@@ -733,26 +764,26 @@ impl UdpHolePunchConnector {
.with_context(|| "failed to get udp port mapping")?;
// client -> server: tell the server the mapped port; the server will return the mapped address of its listening port.
let Some(remote_mapped_addr) = data
let rpc_stub = data
.peer_mgr
.get_peer_rpc_mgr()
.do_client_rpc_scoped(
constants::UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID,
.rpc_client()
.scoped_client::<UdpHolePunchRpcClientFactory<BaseController>>(
data.peer_mgr.my_peer_id(),
dst_peer_id,
|c| async {
let client =
UdpHolePunchServiceClient::new(tarpc::client::Config::default(), c).spawn();
let remote_mapped_addr = client
.try_punch_hole(tarpc::context::current(), local_mapped_addr)
.await;
tracing::info!(?remote_mapped_addr, ?dst_peer_id, "got remote mapped addr");
remote_mapped_addr
data.global_ctx.get_network_name(),
);
let remote_mapped_addr = rpc_stub
.try_punch_hole(
BaseController {},
TryPunchHoleRequest {
local_mapped_addr: Some(local_mapped_addr.into()),
},
)
.await?
else {
return Err(anyhow::anyhow!("failed to get remote mapped addr"));
};
.remote_mapped_addr
.ok_or(anyhow::anyhow!("failed to get remote mapped addr"))?;
// server: will send some punching responses, 10 packets in total.
// client: use the socket to create UdpTunnel with UdpTunnelConnector
@@ -766,9 +797,11 @@ impl UdpHolePunchConnector {
setup_sokcet2(&socket2_socket, &local_socket_addr)?;
let socket = Arc::new(UdpSocket::from_std(socket2_socket.into())?);
Ok(Self::try_connect_with_socket(socket, remote_mapped_addr)
.await
.with_context(|| "UdpTunnelConnector failed to connect remote")?)
Ok(
Self::try_connect_with_socket(socket, remote_mapped_addr.into())
.await
.with_context(|| "UdpTunnelConnector failed to connect remote")?,
)
}
#[tracing::instrument(err(level = Level::ERROR))]
@@ -780,30 +813,28 @@ impl UdpHolePunchConnector {
return Err(anyhow::anyhow!("udp array not started"));
};
let Some(remote_mapped_addr) = data
let rpc_stub = data
.peer_mgr
.get_peer_rpc_mgr()
.do_client_rpc_scoped(
constants::UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID,
.rpc_client()
.scoped_client::<UdpHolePunchRpcClientFactory<BaseController>>(
data.peer_mgr.my_peer_id(),
dst_peer_id,
|c| async {
let client =
UdpHolePunchServiceClient::new(tarpc::client::Config::default(), c).spawn();
let remote_mapped_addr = client
.try_punch_hole(tarpc::context::current(), "0.0.0.0:0".parse().unwrap())
.await;
tracing::debug!(
?remote_mapped_addr,
?dst_peer_id,
"hole punching symmetric got remote mapped addr"
);
remote_mapped_addr
data.global_ctx.get_network_name(),
);
let local_mapped_addr: SocketAddr = "0.0.0.0:0".parse().unwrap();
let remote_mapped_addr = rpc_stub
.try_punch_hole(
BaseController {},
TryPunchHoleRequest {
local_mapped_addr: Some(local_mapped_addr.into()),
},
)
.await?
else {
return Err(anyhow::anyhow!("failed to get remote mapped addr"));
};
.remote_mapped_addr
.ok_or(anyhow::anyhow!("failed to get remote mapped addr"))?
.into();
// try direct connect first
if data.try_direct_connect.load(Ordering::Relaxed) {
@@ -846,41 +877,38 @@ impl UdpHolePunchConnector {
return Err(anyhow::anyhow!("failed to get public ips"));
}
let mut last_port_idx = 0;
let mut last_port_idx = rand::thread_rng().gen_range(0..data.shuffled_port_vec.len());
for round in 0..30 {
let Some(next_last_port_idx) = data
.peer_mgr
.get_peer_rpc_mgr()
.do_client_rpc_scoped(
constants::UDP_HOLE_PUNCH_CONNECTOR_SERVICE_ID,
dst_peer_id,
|c| async {
let client =
UdpHolePunchServiceClient::new(tarpc::client::Config::default(), c)
.spawn();
let last_port_idx = client
.try_punch_symmetric(
tarpc::context::current(),
remote_mapped_addr,
port,
public_ips.clone(),
stun_info.min_port as u16,
stun_info.max_port as u16,
tid,
round,
last_port_idx,
)
.await;
tracing::info!(?last_port_idx, ?dst_peer_id, "punch symmetric return");
last_port_idx
for round in 0..5 {
let ret = rpc_stub
.try_punch_symmetric(
BaseController {},
TryPunchSymmetricRequest {
listener_addr: Some(remote_mapped_addr.into()),
port: port as u32,
public_ips: public_ips.clone().into_iter().map(|x| x.into()).collect(),
min_port: stun_info.min_port as u32,
max_port: stun_info.max_port as u32,
transaction_id: tid,
round,
last_port_index: last_port_idx as u32,
},
)
.await?
else {
return Err(anyhow::anyhow!("failed to get remote mapped addr"));
.await;
tracing::info!(?ret, "punch symmetric return");
let next_last_port_idx = match ret {
Ok(s) => s.last_port_index as usize,
Err(err) => {
tracing::error!(?err, "failed to get remote mapped addr");
rand::thread_rng().gen_range(0..data.shuffled_port_vec.len())
}
};
// wait for some time to increase the chance of receiving a hole punching packet
tokio::time::sleep(Duration::from_secs(2)).await;
// no matter what the result is, we should check if we received any hole punching packets
while let Some(socket) = udp_array.try_fetch_punched_socket(tid) {
if let Ok(tunnel) = Self::try_connect_with_socket(socket, remote_mapped_addr).await
{
@@ -898,8 +926,8 @@ impl UdpHolePunchConnector {
data: Arc<UdpHolePunchConnectorData>,
peer_id: PeerId,
) -> Result<(), anyhow::Error> {
const MAX_BACKOFF_TIME: u64 = 600;
let mut backoff_time = vec![15, 15, 30, 30, 60, 120, 300, MAX_BACKOFF_TIME];
const MAX_BACKOFF_TIME: u64 = 300;
let mut backoff_time = vec![15, 15, 30, 30, 60, 120, 180, MAX_BACKOFF_TIME];
let my_nat_type = data.my_nat_type();
loop {
@@ -939,7 +967,7 @@ impl UdpHolePunchConnector {
async fn main_loop(data: Arc<UdpHolePunchConnectorData>) {
type JoinTaskRet = Result<(), anyhow::Error>;
type JoinTask = tokio::task::JoinHandle<JoinTaskRet>;
type JoinTask = ScopedTask<JoinTaskRet>;
let punching_task = Arc::new(DashMap::<(PeerId, NatType), JoinTask>::new());
let mut last_my_nat_type = NatType::Unknown;
@@ -975,23 +1003,27 @@ impl UdpHolePunchConnector {
last_my_nat_type = my_nat_type;
if !peers_to_connect.is_empty() {
let my_nat_type = data.my_nat_type();
if my_nat_type == NatType::Symmetric || my_nat_type == NatType::SymUdpFirewall {
let mut udp_array = data.udp_array.lock().await;
if udp_array.is_none() {
*udp_array = Some(Arc::new(UdpSocketArray::new(
data.udp_array_size.load(Ordering::Relaxed),
data.global_ctx.net_ns.clone(),
)));
}
let udp_array = udp_array.as_ref().unwrap();
udp_array.start().await.unwrap();
}
for item in peers_to_connect {
if punching_task.contains_key(&item) {
continue;
}
let my_nat_type = data.my_nat_type();
if my_nat_type == NatType::Symmetric || my_nat_type == NatType::SymUdpFirewall {
let mut udp_array = data.udp_array.lock().await;
if udp_array.is_none() {
*udp_array = Some(Arc::new(UdpSocketArray::new(
data.udp_array_size.load(Ordering::Relaxed),
data.global_ctx.net_ns.clone(),
)));
}
let udp_array = udp_array.as_ref().unwrap();
udp_array.start().await.unwrap();
}
punching_task.insert(
item,
tokio::spawn(Self::peer_punching_task(data.clone(), item.0)),
tokio::spawn(Self::peer_punching_task(data.clone(), item.0)).into(),
);
}
} else if punching_task.is_empty() {
@@ -1011,11 +1043,11 @@ pub mod tests {
use tokio::net::UdpSocket;
use crate::rpc::{NatType, StunInfo};
use crate::common::stun::MockStunInfoCollector;
use crate::proto::common::NatType;
use crate::tunnel::common::tests::wait_for_condition;
use crate::{
common::{error::Error, stun::StunInfoCollectorTrait},
connector::udp_hole_punch::UdpHolePunchConnector,
peers::{
peer_manager::PeerManager,
@@ -1026,31 +1058,6 @@ pub mod tests {
},
};
struct MockStunInfoCollector {
udp_nat_type: NatType,
}
#[async_trait::async_trait]
impl StunInfoCollectorTrait for MockStunInfoCollector {
fn get_stun_info(&self) -> StunInfo {
StunInfo {
udp_nat_type: self.udp_nat_type as i32,
tcp_nat_type: NatType::Unknown as i32,
last_update_time: std::time::Instant::now().elapsed().as_secs() as i64,
min_port: 100,
max_port: 200,
..Default::default()
}
}
async fn get_udp_port_mapping(&self, mut port: u16) -> Result<std::net::SocketAddr, Error> {
if port == 0 {
port = 40144;
}
Ok(format!("127.0.0.1:{}", port).parse().unwrap())
}
}
pub fn replace_stun_info_collector(peer_mgr: Arc<PeerManager>, udp_nat_type: NatType) {
let collector = Box::new(MockStunInfoCollector { udp_nat_type });
peer_mgr
@@ -1170,9 +1177,9 @@ pub mod tests {
let udp_self = Arc::new(UdpSocket::bind("0.0.0.0:40144").await.unwrap());
let udp_inc = Arc::new(UdpSocket::bind("0.0.0.0:40147").await.unwrap());
let udp_inc2 = Arc::new(UdpSocket::bind("0.0.0.0:40400").await.unwrap());
let udp_inc2 = Arc::new(UdpSocket::bind("0.0.0.0:40200").await.unwrap());
let udp_dec = Arc::new(UdpSocket::bind("0.0.0.0:40140").await.unwrap());
let udp_dec2 = Arc::new(UdpSocket::bind("0.0.0.0:40350").await.unwrap());
let udp_dec2 = Arc::new(UdpSocket::bind("0.0.0.0:40050").await.unwrap());
let udps = vec![udp_self, udp_inc, udp_inc2, udp_dec, udp_dec2];
let counter = Arc::new(AtomicU32::new(0));
@@ -1183,7 +1190,7 @@ pub mod tests {
tokio::spawn(async move {
let mut buf = [0u8; 1024];
let (len, addr) = udp.recv_from(&mut buf).await.unwrap();
println!("{:?} {:?}", len, addr);
println!("{:?} {:?} {:?}", len, addr, udp.local_addr());
counter.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
});
}
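The `peer_punching_task` hunk above trims the retry schedule so it caps at 300 s instead of 600 s. A std-only sketch of the same drain-then-clamp backoff pattern (names are illustrative, not EasyTier's actual loop):

```rust
// Capped backoff: consume the next delay from a finite schedule;
// once the schedule is drained, keep retrying at the maximum interval.
const MAX_BACKOFF_TIME: u64 = 300;

fn next_backoff(schedule: &mut Vec<u64>) -> u64 {
    if schedule.is_empty() {
        MAX_BACKOFF_TIME
    } else {
        schedule.remove(0) // consume from the front, like a queue
    }
}

fn main() {
    let mut schedule = vec![15, 15, 30, 30, 60, 120, 180, MAX_BACKOFF_TIME];
    let mut seen = Vec::new();
    for _ in 0..10 {
        seen.push(next_backoff(&mut schedule));
    }
    // after the schedule is drained, the delay stays pinned at the cap
    assert_eq!(&seen[..8], &[15, 15, 30, 30, 60, 120, 180, 300]);
    assert_eq!(&seen[8..], &[300, 300]);
    println!("{:?}", seen);
}
```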


@@ -1,26 +1,29 @@
#![allow(dead_code)]
use std::{net::SocketAddr, time::Duration, vec};
use std::{net::SocketAddr, sync::Mutex, time::Duration, vec};
use anyhow::{Context, Ok};
use clap::{command, Args, Parser, Subcommand};
use common::stun::StunInfoCollectorTrait;
use rpc::vpn_portal_rpc_client::VpnPortalRpcClient;
use proto::{
common::NatType,
peer_rpc::{GetGlobalPeerMapRequest, PeerCenterRpc, PeerCenterRpcClientFactory},
rpc_impl::standalone::StandAloneClient,
rpc_types::controller::BaseController,
};
use tokio::time::timeout;
use tunnel::tcp::TcpTunnelConnector;
use utils::{list_peer_route_pair, PeerRoutePair};
mod arch;
mod common;
mod rpc;
mod proto;
mod tunnel;
mod utils;
use crate::{
common::stun::StunInfoCollector,
rpc::{
connector_manage_rpc_client::ConnectorManageRpcClient,
peer_center_rpc_client::PeerCenterRpcClient, peer_manage_rpc_client::PeerManageRpcClient,
*,
},
proto::cli::*,
utils::{cost_to_str, float_to_str},
};
use humansize::format_size;
@@ -69,6 +72,7 @@ enum PeerSubCommand {
Remove,
List(PeerListArgs),
ListForeign,
ListGlobalForeign,
}
#[derive(Args, Debug)]
@@ -114,58 +118,76 @@ struct NodeArgs {
sub_command: Option<NodeSubCommand>,
}
#[derive(thiserror::Error, Debug)]
enum Error {
#[error("tonic transport error")]
TonicTransportError(#[from] tonic::transport::Error),
#[error("tonic rpc error")]
TonicRpcError(#[from] tonic::Status),
#[error("anyhow error")]
Anyhow(#[from] anyhow::Error),
}
type Error = anyhow::Error;
struct CommandHandler {
addr: String,
client: Mutex<RpcClient>,
verbose: bool,
}
type RpcClient = StandAloneClient<TcpTunnelConnector>;
impl CommandHandler {
async fn get_peer_manager_client(
&self,
) -> Result<PeerManageRpcClient<tonic::transport::Channel>, Error> {
Ok(PeerManageRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn PeerManageRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<PeerManageRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get peer manager client")?)
}
async fn get_connector_manager_client(
&self,
) -> Result<ConnectorManageRpcClient<tonic::transport::Channel>, Error> {
Ok(ConnectorManageRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn ConnectorManageRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<ConnectorManageRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get connector manager client")?)
}
async fn get_peer_center_client(
&self,
) -> Result<PeerCenterRpcClient<tonic::transport::Channel>, Error> {
Ok(PeerCenterRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn PeerCenterRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<PeerCenterRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get peer center client")?)
}
async fn get_vpn_portal_client(
&self,
) -> Result<VpnPortalRpcClient<tonic::transport::Channel>, Error> {
Ok(VpnPortalRpcClient::connect(self.addr.clone()).await?)
) -> Result<Box<dyn VpnPortalRpc<Controller = BaseController>>, Error> {
Ok(self
.client
.lock()
.unwrap()
.scoped_client::<VpnPortalRpcClientFactory<BaseController>>("".to_string())
.await
.with_context(|| "failed to get vpn portal client")?)
}
async fn list_peers(&self) -> Result<ListPeerResponse, Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(ListPeerRequest::default());
let response = client.list_peer(request).await?;
Ok(response.into_inner())
let client = self.get_peer_manager_client().await?;
let request = ListPeerRequest::default();
let response = client.list_peer(BaseController {}, request).await?;
Ok(response)
}
async fn list_routes(&self) -> Result<ListRouteResponse, Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(ListRouteRequest::default());
let response = client.list_route(request).await?;
Ok(response.into_inner())
let client = self.get_peer_manager_client().await?;
let request = ListRouteRequest::default();
let response = client.list_route(BaseController {}, request).await?;
Ok(response)
}
async fn list_peer_route_pair(&self) -> Result<Vec<PeerRoutePair>, Error> {
@@ -197,6 +219,7 @@ impl CommandHandler {
tunnel_proto: String,
nat_type: String,
id: String,
version: String,
}
impl From<PeerRoutePair> for PeerTableItem {
@@ -212,6 +235,33 @@ impl CommandHandler {
tunnel_proto: p.get_conn_protos().unwrap_or(vec![]).join(",").to_string(),
nat_type: p.get_udp_nat_type(),
id: p.route.peer_id.to_string(),
version: if p.route.version.is_empty() {
"unknown".to_string()
} else {
p.route.version.to_string()
},
}
}
}
impl From<NodeInfo> for PeerTableItem {
fn from(p: NodeInfo) -> Self {
PeerTableItem {
ipv4: p.ipv4_addr.clone(),
hostname: p.hostname.clone(),
cost: "Local".to_string(),
lat_ms: "-".to_string(),
loss_rate: "-".to_string(),
rx_bytes: "-".to_string(),
tx_bytes: "-".to_string(),
tunnel_proto: "-".to_string(),
nat_type: if let Some(info) = p.stun_info {
info.udp_nat_type().as_str_name().to_string()
} else {
"Unknown".to_string()
},
id: p.peer_id.to_string(),
version: p.version,
}
}
}
@@ -223,6 +273,14 @@ impl CommandHandler {
return Ok(());
}
let client = self.get_peer_manager_client().await?;
let node_info = client
.show_node_info(BaseController {}, ShowNodeInfoRequest::default())
.await?
.node_info
.ok_or(anyhow::anyhow!("node info not found"))?;
items.push(node_info.into());
for p in peer_routes {
items.push(p.into());
}
@@ -236,18 +294,20 @@ impl CommandHandler {
}
async fn handle_route_dump(&self) -> Result<(), Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(DumpRouteRequest::default());
let response = client.dump_route(request).await?;
println!("response: {}", response.into_inner().result);
let client = self.get_peer_manager_client().await?;
let request = DumpRouteRequest::default();
let response = client.dump_route(BaseController {}, request).await?;
println!("response: {}", response.result);
Ok(())
}
async fn handle_foreign_network_list(&self) -> Result<(), Error> {
let mut client = self.get_peer_manager_client().await?;
let request = tonic::Request::new(ListForeignNetworkRequest::default());
let response = client.list_foreign_network(request).await?;
let network_map = response.into_inner();
let client = self.get_peer_manager_client().await?;
let request = ListForeignNetworkRequest::default();
let response = client
.list_foreign_network(BaseController {}, request)
.await?;
let network_map = response;
if self.verbose {
println!("{:#?}", network_map);
return Ok(());
@@ -266,7 +326,7 @@ impl CommandHandler {
"remote_addr: {}, rx_bytes: {}, tx_bytes: {}, latency_us: {}",
conn.tunnel
.as_ref()
.map(|t| t.remote_addr.clone())
.map(|t| t.remote_addr.clone().unwrap_or_default())
.unwrap_or_default(),
conn.stats.as_ref().map(|s| s.rx_bytes).unwrap_or_default(),
conn.stats.as_ref().map(|s| s.tx_bytes).unwrap_or_default(),
@@ -283,6 +343,30 @@ impl CommandHandler {
Ok(())
}
async fn handle_global_foreign_network_list(&self) -> Result<(), Error> {
let client = self.get_peer_manager_client().await?;
let request = ListGlobalForeignNetworkRequest::default();
let response = client
.list_global_foreign_network(BaseController {}, request)
.await?;
if self.verbose {
println!("{:#?}", response);
return Ok(());
}
for (k, v) in response.foreign_networks.iter() {
println!("Peer ID: {}", k);
for n in v.foreign_networks.iter() {
println!(
" Network Name: {}, Last Updated: {}, Version: {}, PeerIds: {:?}",
n.network_name, n.last_updated, n.version, n.peer_ids
);
}
}
Ok(())
}
async fn handle_route_list(&self) -> Result<(), Error> {
#[derive(tabled::Tabled)]
struct RouteTableItem {
@@ -293,9 +377,27 @@ impl CommandHandler {
next_hop_hostname: String,
next_hop_lat: f64,
cost: i32,
version: String,
}
let mut items: Vec<RouteTableItem> = vec![];
let client = self.get_peer_manager_client().await?;
let node_info = client
.show_node_info(BaseController {}, ShowNodeInfoRequest::default())
.await?
.node_info
.ok_or(anyhow::anyhow!("node info not found"))?;
items.push(RouteTableItem {
ipv4: node_info.ipv4_addr.clone(),
hostname: node_info.hostname.clone(),
proxy_cidrs: node_info.proxy_cidrs.join(", "),
next_hop_ipv4: "-".to_string(),
next_hop_hostname: "Local".to_string(),
next_hop_lat: 0.0,
cost: 0,
version: node_info.version.clone(),
});
let peer_routes = self.list_peer_route_pair().await?;
for p in peer_routes.iter() {
let Some(next_hop_pair) = peer_routes
@@ -314,6 +416,11 @@ impl CommandHandler {
next_hop_hostname: "".to_string(),
next_hop_lat: next_hop_pair.get_latency_ms().unwrap_or(0.0),
cost: p.route.cost,
version: if p.route.version.is_empty() {
"unknown".to_string()
} else {
p.route.version.to_string()
},
});
} else {
items.push(RouteTableItem {
@@ -324,6 +431,11 @@ impl CommandHandler {
next_hop_hostname: next_hop_pair.route.hostname.clone(),
next_hop_lat: next_hop_pair.get_latency_ms().unwrap_or(0.0),
cost: p.route.cost,
version: if p.route.version.is_empty() {
"unknown".to_string()
} else {
p.route.version.to_string()
},
});
}
}
@@ -337,10 +449,10 @@ impl CommandHandler {
}
async fn handle_connector_list(&self) -> Result<(), Error> {
let mut client = self.get_connector_manager_client().await?;
let request = tonic::Request::new(ListConnectorRequest::default());
let response = client.list_connector(request).await?;
println!("response: {:#?}", response.into_inner());
let client = self.get_connector_manager_client().await?;
let request = ListConnectorRequest::default();
let response = client.list_connector(BaseController {}, request).await?;
println!("response: {:#?}", response);
Ok(())
}
}
@@ -349,8 +461,13 @@ impl CommandHandler {
#[tracing::instrument]
async fn main() -> Result<(), Error> {
let cli = Cli::parse();
let client = RpcClient::new(TcpTunnelConnector::new(
format!("tcp://{}:{}", cli.rpc_portal.ip(), cli.rpc_portal.port())
.parse()
.unwrap(),
));
let handler = CommandHandler {
addr: format!("http://{}:{}", cli.rpc_portal.ip(), cli.rpc_portal.port()),
client: Mutex::new(client),
verbose: cli.verbose,
};
@@ -372,6 +489,9 @@ async fn main() -> Result<(), Error> {
Some(PeerSubCommand::ListForeign) => {
handler.handle_foreign_network_list().await?;
}
Some(PeerSubCommand::ListGlobalForeign) => {
handler.handle_global_foreign_network_list().await?;
}
None => {
handler.handle_peer_list(&peer_args).await?;
}
@@ -410,11 +530,10 @@ async fn main() -> Result<(), Error> {
.unwrap();
}
SubCommand::PeerCenter => {
let mut peer_center_client = handler.get_peer_center_client().await?;
let peer_center_client = handler.get_peer_center_client().await?;
let resp = peer_center_client
.get_global_peer_map(GetGlobalPeerMapRequest::default())
.await?
.into_inner();
.get_global_peer_map(BaseController {}, GetGlobalPeerMapRequest::default())
.await?;
#[derive(tabled::Tabled)]
struct PeerCenterTableItem {
@@ -444,11 +563,10 @@ async fn main() -> Result<(), Error> {
);
}
SubCommand::VpnPortal => {
let mut vpn_portal_client = handler.get_vpn_portal_client().await?;
let vpn_portal_client = handler.get_vpn_portal_client().await?;
let resp = vpn_portal_client
.get_vpn_portal_info(GetVpnPortalInfoRequest::default())
.get_vpn_portal_info(BaseController {}, GetVpnPortalInfoRequest::default())
.await?
.into_inner()
.vpn_portal_info
.unwrap_or_default();
println!("portal_name: {}", resp.vpn_type);
@@ -463,11 +581,10 @@ async fn main() -> Result<(), Error> {
println!("connected_clients:\n{:#?}", resp.connected_clients);
}
SubCommand::Node(sub_cmd) => {
let mut client = handler.get_peer_manager_client().await?;
let client = handler.get_peer_manager_client().await?;
let node_info = client
.show_node_info(ShowNodeInfoRequest::default())
.show_node_info(BaseController {}, ShowNodeInfoRequest::default())
.await?
.into_inner()
.node_info
.ok_or(anyhow::anyhow!("node info not found"))?;
match sub_cmd.sub_command {

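The CLI rewrite above replaces one tonic channel per service with a single `StandAloneClient` guarded by a `Mutex`, from which each helper derives a scoped service handle. A std-only sketch of that sharing pattern (`RpcClient` and `scoped_client` here are hypothetical stand-ins, not the real `rpc_impl` API):

```rust
use std::sync::Mutex;

// Hypothetical multiplexed transport shared by every service helper.
struct RpcClient {
    calls: u32,
}

impl RpcClient {
    fn scoped_client(&mut self, service: &str) -> String {
        self.calls += 1;
        format!("client for {}", service)
    }
}

struct CommandHandler {
    client: Mutex<RpcClient>,
}

impl CommandHandler {
    fn get_peer_manager_client(&self) -> String {
        // lock, derive a per-service handle, release the lock right away
        self.client.lock().unwrap().scoped_client("PeerManageRpc")
    }
}

fn main() {
    let h = CommandHandler {
        client: Mutex::new(RpcClient { calls: 0 }),
    };
    let c = h.get_peer_manager_client();
    assert_eq!(c, "client for PeerManageRpc");
    // every helper reuses the same underlying connection
    assert_eq!(h.client.lock().unwrap().calls, 1);
}
```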

@@ -21,7 +21,7 @@ mod gateway;
mod instance;
mod peer_center;
mod peers;
mod rpc;
mod proto;
mod tunnel;
mod utils;
mod vpn_portal;
@@ -266,6 +266,13 @@ struct Cli {
)]
disable_p2p: bool,
#[arg(
long,
help = t!("core_clap.disable_udp_hole_punching").to_string(),
default_value = "false"
)]
disable_udp_hole_punching: bool,
#[arg(
long,
help = t!("core_clap.relay_all_peer_rpc").to_string(),
@@ -284,15 +291,14 @@ struct Cli {
rust_i18n::i18n!("locales", fallback = "en");
impl Cli {
fn parse_listeners(&self) -> Vec<String> {
println!("parsing listeners: {:?}", self.listeners);
fn parse_listeners(no_listener: bool, listeners: Vec<String>) -> Vec<String> {
let proto_port_offset = vec![("tcp", 0), ("udp", 0), ("wg", 1), ("ws", 1), ("wss", 2)];
if self.no_listener || self.listeners.is_empty() {
if no_listener || listeners.is_empty() {
return vec![];
}
let origin_listners = self.listeners.clone();
let origin_listners = listeners;
let mut listeners: Vec<String> = Vec::new();
if origin_listners.len() == 1 {
if let Ok(port) = origin_listners[0].parse::<u16>() {
@@ -333,12 +339,12 @@ impl Cli {
}
fn check_tcp_available(port: u16) -> Option<SocketAddr> {
let s = format!("127.0.0.1:{}", port).parse::<SocketAddr>().unwrap();
let s = format!("0.0.0.0:{}", port).parse::<SocketAddr>().unwrap();
TcpSocket::new_v4().unwrap().bind(s).map(|_| s).ok()
}
fn parse_rpc_portal(&self) -> SocketAddr {
if let Ok(port) = self.rpc_portal.parse::<u16>() {
fn parse_rpc_portal(rpc_portal: String) -> SocketAddr {
if let Ok(port) = rpc_portal.parse::<u16>() {
if port == 0 {
// check tcp 15888 first
for i in 15888..15900 {
@@ -346,12 +352,12 @@ impl Cli {
return s;
}
}
return "127.0.0.1:0".parse().unwrap();
return "0.0.0.0:0".parse().unwrap();
}
return format!("127.0.0.1:{}", port).parse().unwrap();
return format!("0.0.0.0:{}", port).parse().unwrap();
}
self.rpc_portal.parse().unwrap()
rpc_portal.parse().unwrap()
}
}
@@ -369,14 +375,9 @@ impl From<Cli> for TomlConfigLoader {
let cfg = TomlConfigLoader::default();
cfg.set_inst_name(cli.instance_name.clone());
cfg.set_hostname(cli.hostname);
cfg.set_hostname(cli.hostname.clone());
cfg.set_network_identity(NetworkIdentity::new(
cli.network_name.clone(),
cli.network_secret.clone(),
));
cfg.set_network_identity(NetworkIdentity::new(cli.network_name, cli.network_secret));
cfg.set_dhcp(cli.dhcp);
@@ -401,7 +402,7 @@ impl From<Cli> for TomlConfigLoader {
);
cfg.set_listeners(
cli.parse_listeners()
Cli::parse_listeners(cli.no_listener, cli.listeners)
.into_iter()
.map(|s| s.parse().unwrap())
.collect(),
@@ -415,21 +416,15 @@ impl From<Cli> for TomlConfigLoader {
);
}
cfg.set_rpc_portal(cli.parse_rpc_portal());
cfg.set_rpc_portal(Cli::parse_rpc_portal(cli.rpc_portal));
if cli.external_node.is_some() {
if let Some(external_nodes) = cli.external_node {
let mut old_peers = cfg.get_peers();
old_peers.push(PeerConfig {
uri: cli
.external_node
.clone()
.unwrap()
uri: external_nodes
.parse()
.with_context(|| {
format!(
"failed to parse external node uri: {}",
cli.external_node.unwrap()
)
format!("failed to parse external node uri: {}", external_nodes)
})
.unwrap(),
});
@@ -438,7 +433,7 @@ impl From<Cli> for TomlConfigLoader {
if cli.console_log_level.is_some() {
cfg.set_console_logger_config(ConsoleLoggerConfig {
level: cli.console_log_level.clone(),
level: cli.console_log_level,
});
}
@@ -450,18 +445,12 @@ impl From<Cli> for TomlConfigLoader {
});
}
if cli.vpn_portal.is_some() {
let url: url::Url = cli
.vpn_portal
.clone()
.unwrap()
cfg.set_inst_name(cli.instance_name);
if let Some(vpn_portal) = cli.vpn_portal {
let url: url::Url = vpn_portal
.parse()
.with_context(|| {
format!(
"failed to parse vpn portal url: {}",
cli.vpn_portal.unwrap()
)
})
.with_context(|| format!("failed to parse vpn portal url: {}", vpn_portal))
.unwrap();
cfg.set_vpn_portal_config(VpnPortalConfig {
client_cidr: url.path()[1..]
@@ -482,11 +471,9 @@ impl From<Cli> for TomlConfigLoader {
});
}
if cli.manual_routes.is_some() {
if let Some(manual_routes) = cli.manual_routes {
cfg.set_routes(Some(
cli.manual_routes
.clone()
.unwrap()
manual_routes
.iter()
.map(|s| {
s.parse()
@@ -541,7 +528,7 @@ fn print_event(msg: String) {
);
}
fn peer_conn_info_to_string(p: crate::rpc::PeerConnInfo) -> String {
fn peer_conn_info_to_string(p: crate::proto::cli::PeerConnInfo) -> String {
format!(
"my_peer_id: {}, dst_peer_id: {}, tunnel_info: {:?}",
p.my_peer_id, p.peer_id, p.tunnel


@@ -187,10 +187,6 @@ pub enum SocksError {
#[error("Error with reply: {0}.")]
ReplyError(#[from] ReplyError),
#[cfg(feature = "socks4")]
#[error("Error with reply: {0}.")]
ReplySocks4Error(#[from] socks4::ReplyError),
#[error("Argument input error: `{0}`.")]
ArgumentInputError(&'static str),

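The removed `ReplySocks4Error` variant above, and the CLI's switch to `type Error = anyhow::Error`, both shrink hand-rolled error enums. A std-only sketch of what the `#[error(...)]`/`#[from]` attributes expand to (no thiserror; the variant names are illustrative):

```rust
use std::fmt;

#[derive(Debug)]
enum Error {
    Reply(String),
    ArgumentInput(&'static str),
}

impl fmt::Display for Error {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Error::Reply(e) => write!(f, "Error with reply: {}.", e),
            Error::ArgumentInput(s) => write!(f, "Argument input error: `{}`.", s),
        }
    }
}

// What `#[from]` generates under the hood: a From impl so that the
// `?` operator can convert the inner error automatically.
impl From<String> for Error {
    fn from(e: String) -> Self {
        Error::Reply(e)
    }
}

fn main() {
    let e: Error = String::from("general failure").into();
    assert_eq!(e.to_string(), "Error with reply: general failure.");
}
```

Dropping a variant like this is safe only when no `?` conversion site still relies on its `From` impl, which is what removing the `socks4` feature path guarantees here.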

@@ -4,6 +4,7 @@ use std::{
time::Duration,
};
use crossbeam::atomic::AtomicCell;
use dashmap::DashMap;
use pnet::packet::{
ip::IpNextHeaderProtocols,
@@ -11,12 +12,10 @@ use pnet::packet::{
udp::{self, MutableUdpPacket},
Packet,
};
use tachyonix::{channel, Receiver, Sender, TrySendError};
use tokio::{
net::UdpSocket,
sync::{
mpsc::{unbounded_channel, UnboundedReceiver, UnboundedSender},
Mutex,
},
sync::Mutex,
task::{JoinHandle, JoinSet},
time::timeout,
};
@@ -49,6 +48,7 @@ struct UdpNatEntry {
forward_task: Mutex<Option<JoinHandle<()>>>,
stopped: AtomicBool,
start_time: std::time::Instant,
last_active_time: AtomicCell<std::time::Instant>,
}
impl UdpNatEntry {
@@ -72,6 +72,7 @@ impl UdpNatEntry {
forward_task: Mutex::new(None),
stopped: AtomicBool::new(false),
start_time: std::time::Instant::now(),
last_active_time: AtomicCell::new(std::time::Instant::now()),
})
}
@@ -82,7 +83,7 @@ impl UdpNatEntry {
async fn compose_ipv4_packet(
self: &Arc<Self>,
packet_sender: &mut UnboundedSender<ZCPacket>,
packet_sender: &mut Sender<ZCPacket>,
buf: &mut [u8],
src_v4: &SocketAddrV4,
payload_len: usize,
@@ -119,11 +120,13 @@ impl UdpNatEntry {
p.fill_peer_manager_hdr(self.my_peer_id, self.src_peer_id, PacketType::Data as u8);
p.mut_peer_manager_header().unwrap().set_no_proxy(true);
if let Err(e) = packet_sender.send(p) {
tracing::error!("send icmp packet to peer failed: {:?}, may exiting..", e);
return Err(Error::AnyhowError(e.into()));
match packet_sender.try_send(p) {
Err(TrySendError::Closed(e)) => {
tracing::error!("send icmp packet to peer failed: {:?}, may exiting..", e);
Err(Error::Unknown)
}
_ => Ok(()),
}
Ok(())
},
)?;
@@ -132,7 +135,7 @@ impl UdpNatEntry {
async fn forward_task(
self: Arc<Self>,
mut packet_sender: UnboundedSender<ZCPacket>,
mut packet_sender: Sender<ZCPacket>,
virtual_ipv4: Ipv4Addr,
) {
let mut buf = [0u8; 65536];
@@ -141,7 +144,7 @@ impl UdpNatEntry {
loop {
let (len, src_socket) = match timeout(
Duration::from_secs(30),
Duration::from_secs(120),
self.socket.recv_from(&mut udp_body),
)
.await
@@ -167,6 +170,8 @@ impl UdpNatEntry {
continue;
};
self.mark_active();
if src_v4.ip().is_loopback() {
src_v4.set_ip(virtual_ipv4);
}
@@ -177,7 +182,7 @@ impl UdpNatEntry {
&mut buf,
&src_v4,
len,
1200,
1256,
ip_id,
)
.await
@@ -189,6 +194,14 @@ impl UdpNatEntry {
self.stop();
}
fn mark_active(&self) {
self.last_active_time.store(std::time::Instant::now());
}
fn is_active(&self) -> bool {
self.last_active_time.load().elapsed().as_secs() < 180
}
}
#[derive(Debug)]
@@ -200,8 +213,8 @@ pub struct UdpProxy {
nat_table: Arc<DashMap<UdpNatKey, Arc<UdpNatEntry>>>,
sender: UnboundedSender<ZCPacket>,
receiver: Mutex<Option<UnboundedReceiver<ZCPacket>>>,
sender: Sender<ZCPacket>,
receiver: Mutex<Option<Receiver<ZCPacket>>>,
tasks: Mutex<JoinSet<()>>,
@@ -287,6 +300,8 @@ impl UdpProxy {
)));
}
nat_entry.mark_active();
// TODO: should it be async.
let dst_socket = if Some(ipv4.get_destination()) == self.global_ctx.get_ipv4() {
format!("127.0.0.1:{}", udp_packet.get_destination())
@@ -335,7 +350,7 @@ impl UdpProxy {
peer_manager: Arc<PeerManager>,
) -> Result<Arc<Self>, Error> {
let cidr_set = CidrSet::new(global_ctx.clone());
let (sender, receiver) = unbounded_channel();
let (sender, receiver) = channel(64);
let ret = Self {
global_ctx,
peer_manager,
@@ -360,7 +375,7 @@ impl UdpProxy {
loop {
tokio::time::sleep(Duration::from_secs(15)).await;
nat_table.retain(|_, v| {
if v.start_time.elapsed().as_secs() > 120 {
if !v.is_active() {
tracing::info!(?v, "udp nat table entry removed");
v.stop();
false
@@ -383,7 +398,7 @@ impl UdpProxy {
let mut receiver = self.receiver.lock().await.take().unwrap();
let peer_manager = self.peer_manager.clone();
self.tasks.lock().await.spawn(async move {
while let Some(msg) = receiver.recv().await {
while let Ok(msg) = receiver.recv().await {
let to_peer_id: PeerId = msg.peer_manager_header().unwrap().to_peer_id.get();
tracing::trace!(?msg, ?to_peer_id, "udp nat packet response send");
let ret = peer_manager.send_msg(msg, to_peer_id).await;

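The udp proxy hunks above swap the unbounded tokio channel for a bounded tachyonix `channel(64)` and treat only `TrySendError::Closed` as fatal, so a full queue drops the packet instead of growing memory without bound. The same backpressure policy can be shown with std's `sync_channel` (a sketch of the policy, not the async code):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // capacity 2, standing in for the proxy's bounded channel(64)
    let (tx, rx) = sync_channel::<u32>(2);

    tx.try_send(1).unwrap();
    tx.try_send(2).unwrap();

    // queue full: drop the packet but keep the forwarding task alive
    match tx.try_send(3) {
        Err(TrySendError::Full(p)) => println!("dropped packet {}", p),
        Err(TrySendError::Disconnected(_)) => panic!("receiver gone, exit task"),
        Ok(()) => {}
    }

    drop(rx);
    // receiver closed: the only case the proxy treats as fatal
    assert!(matches!(tx.try_send(4), Err(TrySendError::Disconnected(4))));
}
```

For UDP this trade-off is sound: the protocol already tolerates loss, so shedding packets under pressure is preferable to the sender stalling or the queue growing until the connection collapses under heavy traffic.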

@@ -8,8 +8,6 @@ use anyhow::Context;
use cidr::Ipv4Inet;
use tokio::{sync::Mutex, task::JoinSet};
use tonic::transport::server::TcpIncoming;
use tonic::transport::Server;
use crate::common::config::ConfigLoader;
use crate::common::error::Error;
@@ -26,8 +24,13 @@ use crate::peers::peer_conn::PeerConnId;
use crate::peers::peer_manager::{PeerManager, RouteAlgoType};
use crate::peers::rpc_service::PeerManagerRpcService;
use crate::peers::PacketRecvChanReceiver;
use crate::rpc::vpn_portal_rpc_server::VpnPortalRpc;
use crate::rpc::{GetVpnPortalInfoRequest, GetVpnPortalInfoResponse, VpnPortalInfo};
use crate::proto::cli::VpnPortalRpc;
use crate::proto::cli::{GetVpnPortalInfoRequest, GetVpnPortalInfoResponse, VpnPortalInfo};
use crate::proto::peer_rpc::PeerCenterRpcServer;
use crate::proto::rpc_impl::standalone::StandAloneServer;
use crate::proto::rpc_types;
use crate::proto::rpc_types::controller::BaseController;
use crate::tunnel::tcp::TcpTunnelListener;
use crate::vpn_portal::{self, VpnPortal};
use super::listeners::ListenerManager;
@@ -104,8 +107,6 @@ pub struct Instance {
nic_ctx: ArcNicCtx,
tasks: JoinSet<()>,
peer_packet_receiver: Arc<Mutex<PacketRecvChanReceiver>>,
peer_manager: Arc<PeerManager>,
listener_manager: Arc<Mutex<ListenerManager<PeerManager>>>,
@@ -122,6 +123,8 @@ pub struct Instance {
#[cfg(feature = "socks5")]
socks5_server: Arc<Socks5Server>,
rpc_server: Option<StandAloneServer<TcpTunnelListener>>,
global_ctx: ArcGlobalCtx,
}
@@ -170,6 +173,12 @@ impl Instance {
#[cfg(feature = "socks5")]
let socks5_server = Socks5Server::new(global_ctx.clone(), peer_manager.clone(), None);
let rpc_server = global_ctx.config.get_rpc_portal().and_then(|s| {
Some(StandAloneServer::new(TcpTunnelListener::new(
format!("tcp://{}", s).parse().unwrap(),
)))
});
Instance {
inst_name: global_ctx.inst_name.clone(),
id,
@@ -177,7 +186,6 @@ impl Instance {
peer_packet_receiver: Arc::new(Mutex::new(peer_packet_receiver)),
nic_ctx: Arc::new(Mutex::new(None)),
tasks: JoinSet::new(),
peer_manager,
listener_manager,
conn_manager,
@@ -193,6 +201,8 @@ impl Instance {
#[cfg(feature = "socks5")]
socks5_server,
rpc_server,
global_ctx,
}
}
@@ -375,7 +385,7 @@ impl Instance {
self.check_dhcp_ip_conflict();
}
self.run_rpc_server()?;
self.run_rpc_server().await?;
// run after tun device created, so listener can bind to tun device, which may be required by win 10
self.ip_proxy = Some(IpProxy::new(
@@ -441,11 +451,8 @@ impl Instance {
Ok(())
}
pub async fn wait(&mut self) {
while let Some(ret) = self.tasks.join_next().await {
tracing::info!("task finished: {:?}", ret);
ret.unwrap();
}
pub async fn wait(&self) {
self.peer_manager.wait().await;
}
pub fn id(&self) -> uuid::Uuid {
@@ -456,24 +463,28 @@ impl Instance {
self.peer_manager.my_peer_id()
}
fn get_vpn_portal_rpc_service(&self) -> impl VpnPortalRpc {
fn get_vpn_portal_rpc_service(&self) -> impl VpnPortalRpc<Controller = BaseController> + Clone {
#[derive(Clone)]
struct VpnPortalRpcService {
peer_mgr: Weak<PeerManager>,
vpn_portal: Weak<Mutex<Box<dyn VpnPortal>>>,
}
#[tonic::async_trait]
#[async_trait::async_trait]
impl VpnPortalRpc for VpnPortalRpcService {
type Controller = BaseController;
async fn get_vpn_portal_info(
&self,
_request: tonic::Request<GetVpnPortalInfoRequest>,
) -> Result<tonic::Response<GetVpnPortalInfoResponse>, tonic::Status> {
_: BaseController,
_request: GetVpnPortalInfoRequest,
) -> Result<GetVpnPortalInfoResponse, rpc_types::error::Error> {
let Some(vpn_portal) = self.vpn_portal.upgrade() else {
return Err(tonic::Status::unavailable("vpn portal not available"));
return Err(anyhow::anyhow!("vpn portal not available").into());
};
let Some(peer_mgr) = self.peer_mgr.upgrade() else {
return Err(tonic::Status::unavailable("peer manager not available"));
return Err(anyhow::anyhow!("peer manager not available").into());
};
let vpn_portal = vpn_portal.lock().await;
@@ -485,7 +496,7 @@ impl Instance {
}),
};
Ok(tonic::Response::new(ret))
Ok(ret)
}
}
@@ -495,46 +506,36 @@ impl Instance {
}
}
fn run_rpc_server(&mut self) -> Result<(), Error> {
let Some(addr) = self.global_ctx.config.get_rpc_portal() else {
async fn run_rpc_server(&mut self) -> Result<(), Error> {
let Some(_) = self.global_ctx.config.get_rpc_portal() else {
tracing::info!("rpc server not enabled, because rpc_portal is not set.");
return Ok(());
};
use crate::proto::cli::*;
let peer_mgr = self.peer_manager.clone();
let conn_manager = self.conn_manager.clone();
let net_ns = self.global_ctx.net_ns.clone();
let peer_center = self.peer_center.clone();
let vpn_portal_rpc = self.get_vpn_portal_rpc_service();
let incoming = TcpIncoming::new(addr, true, None)
.map_err(|e| anyhow::anyhow!("create rpc server failed. addr: {}, err: {}", addr, e))?;
self.tasks.spawn(async move {
let _g = net_ns.guard();
Server::builder()
.add_service(
crate::rpc::peer_manage_rpc_server::PeerManageRpcServer::new(
PeerManagerRpcService::new(peer_mgr),
),
)
.add_service(
crate::rpc::connector_manage_rpc_server::ConnectorManageRpcServer::new(
ConnectorManagerRpcService(conn_manager.clone()),
),
)
.add_service(
crate::rpc::peer_center_rpc_server::PeerCenterRpcServer::new(
peer_center.get_rpc_service(),
),
)
.add_service(crate::rpc::vpn_portal_rpc_server::VpnPortalRpcServer::new(
vpn_portal_rpc,
))
.serve_with_incoming(incoming)
.await
.with_context(|| format!("rpc server failed. addr: {}", addr))
.unwrap();
});
Ok(())
let s = self.rpc_server.as_mut().unwrap();
s.registry().register(
PeerManageRpcServer::new(PeerManagerRpcService::new(peer_mgr)),
"",
);
s.registry().register(
ConnectorManageRpcServer::new(ConnectorManagerRpcService(conn_manager)),
"",
);
s.registry()
.register(PeerCenterRpcServer::new(peer_center.get_rpc_service()), "");
s.registry()
.register(VpnPortalRpcServer::new(vpn_portal_rpc), "");
let _g = self.global_ctx.net_ns.guard();
Ok(s.serve().await.with_context(|| "rpc server start failed")?)
}
pub fn get_global_ctx(&self) -> ArcGlobalCtx {

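The `run_rpc_server` rewrite above drops the tonic `Server::builder` chain for a single `StandAloneServer` whose registry maps service names to handlers. A std-only sketch of that register-then-dispatch shape (`Registry` and its methods are hypothetical, not the real `rpc_impl` registry):

```rust
use std::collections::HashMap;

// Each registered service answers requests addressed to it by name.
type Handler = Box<dyn Fn(&str) -> String>;

#[derive(Default)]
struct Registry {
    services: HashMap<String, Handler>,
}

impl Registry {
    fn register(&mut self, name: &str, handler: Handler) {
        self.services.insert(name.to_string(), handler);
    }

    fn dispatch(&self, service: &str, req: &str) -> Option<String> {
        self.services.get(service).map(|h| h(req))
    }
}

fn main() {
    let mut reg = Registry::default();
    reg.register("PeerManageRpc", Box::new(|req| format!("peers for {}", req)));
    reg.register("VpnPortalRpc", Box::new(|_| "portal info".to_string()));

    assert_eq!(reg.dispatch("VpnPortalRpc", ""), Some("portal info".to_string()));
    // unknown services are rejected instead of panicking
    assert_eq!(reg.dispatch("Unknown", ""), None);
}
```

One registry behind one listener also means the server binds a single TCP port inside the network namespace guard, rather than the namespace-entering spawn the old tonic path needed.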

@@ -159,8 +159,16 @@ impl<H: TunnelHandlerForListener + Send + Sync + 'static + Debug> ListenerManage
let tunnel_info = ret.info().unwrap();
global_ctx.issue_event(GlobalCtxEvent::ConnectionAccepted(
tunnel_info.local_addr.clone(),
tunnel_info.remote_addr.clone(),
tunnel_info
.local_addr
.clone()
.unwrap_or_default()
.to_string(),
tunnel_info
.remote_addr
.clone()
.unwrap_or_default()
.to_string(),
));
tracing::info!(ret = ?ret, "conn accepted");
let peer_manager = peer_manager.clone();
@@ -169,8 +177,8 @@ impl<H: TunnelHandlerForListener + Send + Sync + 'static + Debug> ListenerManage
let server_ret = peer_manager.handle_tunnel(ret).await;
if let Err(e) = &server_ret {
global_ctx.issue_event(GlobalCtxEvent::ConnectionError(
tunnel_info.local_addr,
tunnel_info.remote_addr,
tunnel_info.local_addr.unwrap_or_default().to_string(),
tunnel_info.remote_addr.unwrap_or_default().to_string(),
e.to_string(),
));
tracing::error!(error = ?e, "handle conn error");


@@ -242,6 +242,7 @@ pub struct VirtualNic {
ifname: Option<String>,
ifcfg: Box<dyn IfConfiguerTrait + Send + Sync + 'static>,
}
#[cfg(target_os = "windows")]
pub fn checkreg(dev_name: &str) -> io::Result<()> {
use winreg::{enums::HKEY_LOCAL_MACHINE, enums::KEY_ALL_ACCESS, RegKey};
@@ -352,20 +353,26 @@ impl VirtualNic {
Ok(_) => tracing::trace!("delete successful!"),
Err(e) => tracing::error!("An error occurred: {}", e),
}
use rand::distributions::Distribution as _;
let c = crate::arch::windows::interface_count()?;
let mut rng = rand::thread_rng();
let s: String = rand::distributions::Alphanumeric
.sample_iter(&mut rng)
.take(4)
.map(char::from)
.collect::<String>()
.to_lowercase();
if !dev_name.is_empty() {
config.tun_name(format!("{}", dev_name));
} else {
config.tun_name(format!("et_{}_{}", c, s));
use rand::distributions::Distribution as _;
let c = crate::arch::windows::interface_count()?;
let mut rng = rand::thread_rng();
let s: String = rand::distributions::Alphanumeric
.sample_iter(&mut rng)
.take(4)
.map(char::from)
.collect::<String>()
.to_lowercase();
let random_dev_name = format!("et_{}_{}", c, s);
config.tun_name(random_dev_name.clone());
let mut flags = self.global_ctx.get_flags();
flags.dev_name = random_dev_name.clone();
self.global_ctx.set_flags(flags);
}
config.platform_config(|config| {
@@ -484,6 +491,39 @@ impl VirtualNic {
}
}
#[cfg(target_os = "windows")]
pub fn reg_change_catrgory_in_profile(dev_name: &str) -> io::Result<()> {
use winreg::{enums::HKEY_LOCAL_MACHINE, enums::KEY_ALL_ACCESS, RegKey};
let hklm = RegKey::predef(HKEY_LOCAL_MACHINE);
let profiles_key = hklm.open_subkey_with_flags(
"SOFTWARE\\Microsoft\\Windows NT\\CurrentVersion\\NetworkList\\Profiles",
KEY_ALL_ACCESS,
)?;
for subkey_name in profiles_key.enum_keys().filter_map(Result::ok) {
let subkey = profiles_key.open_subkey_with_flags(&subkey_name, KEY_ALL_ACCESS)?;
match subkey.get_value::<String, _>("ProfileName") {
Ok(profile_name) => {
if !dev_name.is_empty() && dev_name == profile_name {
match subkey.set_value("Category", &1u32) {
Ok(_) => tracing::trace!("Successfully set Category in registry"),
Err(e) => tracing::error!("Failed to set Category in registry: {}", e),
}
}
}
Err(e) => {
tracing::error!(
"Failed to read ProfileName for subkey {}: {}",
subkey_name,
e
);
}
}
}
Ok(())
}
pub struct NicCtx {
global_ctx: ArcGlobalCtx,
peer_mgr: Weak<PeerManager>,
@@ -558,6 +598,7 @@ impl NicCtx {
}
Self::do_forward_nic_to_peers_ipv4(ret.unwrap(), mgr.as_ref()).await;
}
panic!("nic stream closed");
});
Ok(())
@@ -578,6 +619,7 @@ impl NicCtx {
tracing::error!(?ret, "do_forward_tunnel_to_nic sink error");
}
}
panic!("peer packet receiver closed");
});
}
@@ -673,6 +715,13 @@ impl NicCtx {
let mut nic = self.nic.lock().await;
match nic.create_dev().await {
Ok(ret) => {
#[cfg(target_os = "windows")]
{
let dev_name = self.global_ctx.get_flags().dev_name;
let _ = reg_change_catrgory_in_profile(&dev_name);
}
self.global_ctx
.issue_event(GlobalCtxEvent::TunDeviceReady(nic.ifname().to_string()));
ret


@@ -6,14 +6,16 @@ use std::{
use crate::{
common::{
config::{ConfigLoader, TomlConfigLoader},
constants::EASYTIER_VERSION,
global_ctx::GlobalCtxEvent,
stun::StunInfoCollectorTrait,
},
instance::instance::Instance,
peers::rpc_service::PeerManagerRpcService,
rpc::{
cli::{PeerInfo, Route, StunInfo},
peer::GetIpListResponse,
proto::{
cli::{PeerInfo, Route},
common::StunInfo,
peer_rpc::GetIpListResponse,
},
utils::{list_peer_route_pair, PeerRoutePair},
};
@@ -24,6 +26,8 @@ use tokio::task::JoinSet;
#[derive(Default, Clone, Debug, Serialize, Deserialize)]
pub struct MyNodeInfo {
pub virtual_ipv4: String,
pub hostname: String,
pub version: String,
pub ips: GetIpListResponse,
pub stun_info: StunInfo,
pub listeners: Vec<String>,
@@ -37,6 +41,7 @@ struct EasyTierData {
routes: Arc<RwLock<Vec<Route>>>,
peers: Arc<RwLock<Vec<PeerInfo>>>,
tun_fd: Arc<RwLock<Option<i32>>>,
tun_dev_name: Arc<RwLock<String>>,
}
pub struct EasyTierLauncher {
@@ -132,11 +137,17 @@ impl EasyTierLauncher {
let vpn_portal = instance.get_vpn_portal_inst();
tasks.spawn(async move {
loop {
// Update TUN Device Name
*data_c.tun_dev_name.write().unwrap() = global_ctx_c.get_flags().dev_name.clone();
let node_info = MyNodeInfo {
virtual_ipv4: global_ctx_c
.get_ipv4()
.map(|x| x.to_string())
.unwrap_or_default(),
hostname: global_ctx_c.get_hostname(),
version: EASYTIER_VERSION.to_string(),
ips: global_ctx_c.get_ip_collector().collect_ip_addrs().await,
stun_info: global_ctx_c.get_stun_info_collector().get_stun_info(),
listeners: global_ctx_c
@@ -229,6 +240,10 @@ impl EasyTierLauncher {
.load(std::sync::atomic::Ordering::Relaxed)
}
pub fn get_dev_name(&self) -> String {
self.data.tun_dev_name.read().unwrap().clone()
}
pub fn get_events(&self) -> Vec<(DateTime<Local>, GlobalCtxEvent)> {
let events = self.data.events.read().unwrap();
events.iter().cloned().collect()
@@ -261,6 +276,7 @@ impl Drop for EasyTierLauncher {
#[derive(Deserialize, Serialize, Debug)]
pub struct NetworkInstanceRunningInfo {
pub dev_name: String,
pub my_node_info: MyNodeInfo,
pub events: Vec<(DateTime<Local>, GlobalCtxEvent)>,
pub node_info: MyNodeInfo,
@@ -300,6 +316,7 @@ impl NetworkInstance {
let peer_route_pairs = list_peer_route_pair(peers.clone(), routes.clone());
Some(NetworkInstanceRunningInfo {
dev_name: launcher.get_dev_name(),
my_node_info: launcher.get_node_info(),
events: launcher.get_events(),
node_info: launcher.get_node_info(),


@@ -6,10 +6,12 @@ mod gateway;
mod instance;
mod peer_center;
mod peers;
mod proto;
mod vpn_portal;
pub mod common;
pub mod launcher;
pub mod rpc;
pub mod tunnel;
pub mod utils;
pub const VERSION: &str = common::constants::EASYTIER_VERSION;


@@ -1,7 +1,7 @@
use std::{
collections::BTreeSet,
sync::Arc,
time::{Duration, Instant, SystemTime},
time::{Duration, Instant},
};
use crossbeam::atomic::AtomicCell;
@@ -18,14 +18,17 @@ use crate::{
route_trait::{RouteCostCalculator, RouteCostCalculatorInterface},
rpc_service::PeerManagerRpcService,
},
rpc::{GetGlobalPeerMapRequest, GetGlobalPeerMapResponse},
proto::{
peer_rpc::{
GetGlobalPeerMapRequest, GetGlobalPeerMapResponse, GlobalPeerMap, PeerCenterRpc,
PeerCenterRpcClientFactory, PeerCenterRpcServer, PeerInfoForGlobalMap,
ReportPeersRequest, ReportPeersResponse,
},
rpc_types::{self, controller::BaseController},
},
};
use super::{
server::PeerCenterServer,
service::{GlobalPeerMap, PeerCenterService, PeerCenterServiceClient, PeerInfoForGlobalMap},
Digest, Error,
};
use super::{server::PeerCenterServer, Digest, Error};
struct PeerCenterBase {
peer_mgr: Arc<PeerManager>,
@@ -44,11 +47,14 @@ struct PeridicJobCtx<T> {
impl PeerCenterBase {
pub async fn init(&self) -> Result<(), Error> {
self.peer_mgr.get_peer_rpc_mgr().run_service(
SERVICE_ID,
PeerCenterServer::new(self.peer_mgr.my_peer_id()).serve(),
);
self.peer_mgr
.get_peer_rpc_mgr()
.rpc_server()
.registry()
.register(
PeerCenterRpcServer::new(PeerCenterServer::new(self.peer_mgr.my_peer_id())),
&self.peer_mgr.get_global_ctx().get_network_name(),
);
Ok(())
}
@@ -59,7 +65,10 @@ impl PeerCenterBase {
}
// find the peer with the alphabetically smallest id.
let mut min_peer = peer_mgr.my_peer_id();
for peer in peers.iter() {
for peer in peers
.iter()
.filter(|r| r.feature_flag.map(|r| !r.is_public_server).unwrap_or(true))
{
let peer_id = peer.peer_id;
if peer_id < min_peer {
min_peer = peer_id;
@@ -70,11 +79,17 @@ impl PeerCenterBase {
async fn init_periodic_job<
T: Send + Sync + 'static + Clone,
Fut: Future<Output = Result<u32, tarpc::client::RpcError>> + Send + 'static,
Fut: Future<Output = Result<u32, rpc_types::error::Error>> + Send + 'static,
>(
&self,
job_ctx: T,
job_fn: (impl Fn(PeerCenterServiceClient, Arc<PeridicJobCtx<T>>) -> Fut + Send + Sync + 'static),
job_fn: (impl Fn(
Box<dyn PeerCenterRpc<Controller = BaseController> + Send>,
Arc<PeridicJobCtx<T>>,
) -> Fut
+ Send
+ Sync
+ 'static),
) -> () {
let my_peer_id = self.peer_mgr.my_peer_id();
let peer_mgr = self.peer_mgr.clone();
@@ -96,14 +111,14 @@ impl PeerCenterBase {
tracing::trace!(?center_peer, "run periodic job");
let rpc_mgr = peer_mgr.get_peer_rpc_mgr();
let _g = lock.lock().await;
let ret = rpc_mgr
.do_client_rpc_scoped(SERVICE_ID, center_peer, |c| async {
let client =
PeerCenterServiceClient::new(tarpc::client::Config::default(), c)
.spawn();
job_fn(client, ctx.clone()).await
})
.await;
let stub = rpc_mgr
.rpc_client()
.scoped_client::<PeerCenterRpcClientFactory<BaseController>>(
my_peer_id,
center_peer,
peer_mgr.get_global_ctx().get_network_name(),
);
let ret = job_fn(stub, ctx.clone()).await;
drop(_g);
let Ok(sleep_time_ms) = ret else {
@@ -130,25 +145,34 @@ impl PeerCenterBase {
}
}
#[derive(Clone)]
pub struct PeerCenterInstanceService {
global_peer_map: Arc<RwLock<GlobalPeerMap>>,
global_peer_map_digest: Arc<AtomicCell<Digest>>,
}
#[tonic::async_trait]
impl crate::rpc::cli::peer_center_rpc_server::PeerCenterRpc for PeerCenterInstanceService {
#[async_trait::async_trait]
impl PeerCenterRpc for PeerCenterInstanceService {
type Controller = BaseController;
async fn get_global_peer_map(
&self,
_request: tonic::Request<GetGlobalPeerMapRequest>,
) -> Result<tonic::Response<GetGlobalPeerMapResponse>, tonic::Status> {
let global_peer_map = self.global_peer_map.read().unwrap().clone();
Ok(tonic::Response::new(GetGlobalPeerMapResponse {
global_peer_map: global_peer_map
.map
.into_iter()
.map(|(k, v)| (k, v))
.collect(),
}))
_: BaseController,
_: GetGlobalPeerMapRequest,
) -> Result<GetGlobalPeerMapResponse, rpc_types::error::Error> {
let global_peer_map = self.global_peer_map.read().unwrap();
Ok(GetGlobalPeerMapResponse {
global_peer_map: global_peer_map.map.clone(),
digest: Some(self.global_peer_map_digest.load()),
})
}
async fn report_peers(
&self,
_: BaseController,
_req: ReportPeersRequest,
) -> Result<ReportPeersResponse, rpc_types::error::Error> {
Err(anyhow::anyhow!("not implemented").into())
}
}
@@ -166,7 +190,7 @@ impl PeerCenterInstance {
PeerCenterInstance {
peer_mgr: peer_mgr.clone(),
client: Arc::new(PeerCenterBase::new(peer_mgr.clone())),
global_peer_map: Arc::new(RwLock::new(GlobalPeerMap::new())),
global_peer_map: Arc::new(RwLock::new(GlobalPeerMap::default())),
global_peer_map_digest: Arc::new(AtomicCell::new(Digest::default())),
global_peer_map_update_time: Arc::new(AtomicCell::new(Instant::now())),
}
@@ -193,35 +217,38 @@ impl PeerCenterInstance {
self.client
.init_periodic_job(ctx, |client, ctx| async move {
let mut rpc_ctx = tarpc::context::current();
rpc_ctx.deadline = SystemTime::now() + Duration::from_secs(3);
if ctx
.job_ctx
.global_peer_map_update_time
.load()
.elapsed()
.as_secs()
> 60
> 120
{
ctx.job_ctx.global_peer_map_digest.store(Digest::default());
}
let ret = client
.get_global_peer_map(rpc_ctx, ctx.job_ctx.global_peer_map_digest.load())
.await?;
.get_global_peer_map(
BaseController {},
GetGlobalPeerMapRequest {
digest: ctx.job_ctx.global_peer_map_digest.load(),
},
)
.await;
let Ok(resp) = ret else {
tracing::error!(
"get global info from center server got error result: {:?}",
ret
);
return Ok(1000);
return Ok(10000);
};
let Some(resp) = resp else {
return Ok(5000);
};
if resp == GetGlobalPeerMapResponse::default() {
// digest matches, no need to update
return Ok(15000);
}
tracing::info!(
"get global info from center server: {:?}, digest: {:?}",
@@ -229,13 +256,17 @@ impl PeerCenterInstance {
resp.digest
);
*ctx.job_ctx.global_peer_map.write().unwrap() = resp.global_peer_map;
ctx.job_ctx.global_peer_map_digest.store(resp.digest);
*ctx.job_ctx.global_peer_map.write().unwrap() = GlobalPeerMap {
map: resp.global_peer_map,
};
ctx.job_ctx
.global_peer_map_digest
.store(resp.digest.unwrap_or_default());
ctx.job_ctx
.global_peer_map_update_time
.store(Instant::now());
Ok(5000)
Ok(15000)
})
.await;
}
@@ -274,12 +305,15 @@ impl PeerCenterInstance {
return Ok(5000);
}
let mut rpc_ctx = tarpc::context::current();
rpc_ctx.deadline = SystemTime::now() + Duration::from_secs(3);
let ret = client
.report_peers(rpc_ctx, my_node_id.clone(), peers)
.await?;
.report_peers(
BaseController {},
ReportPeersRequest {
my_peer_id: my_node_id,
peer_infos: Some(peers),
},
)
.await;
if ret.is_ok() {
ctx.job_ctx.last_center_peer.store(ctx.center_peer.load());
@@ -311,15 +345,22 @@ impl PeerCenterInstance {
global_peer_map_update_time: Arc<AtomicCell<Instant>>,
}
impl RouteCostCalculatorInterface for RouteCostCalculatorImpl {
fn calculate_cost(&self, src: PeerId, dst: PeerId) -> i32 {
let ret = self
.global_peer_map_clone
impl RouteCostCalculatorImpl {
fn directed_cost(&self, src: PeerId, dst: PeerId) -> Option<i32> {
self.global_peer_map_clone
.map
.get(&src)
.and_then(|src_peer_info| src_peer_info.direct_peers.get(&dst))
.and_then(|info| Some(info.latency_ms));
ret.unwrap_or(80)
.and_then(|info| Some(info.latency_ms))
}
}
impl RouteCostCalculatorInterface for RouteCostCalculatorImpl {
fn calculate_cost(&self, src: PeerId, dst: PeerId) -> i32 {
if let Some(cost) = self.directed_cost(src, dst) {
return cost;
}
self.directed_cost(dst, src).unwrap_or(100)
}
fn begin_update(&mut self) {
@@ -339,7 +380,7 @@ impl PeerCenterInstance {
Box::new(RouteCostCalculatorImpl {
global_peer_map: self.global_peer_map.clone(),
global_peer_map_clone: GlobalPeerMap::new(),
global_peer_map_clone: GlobalPeerMap::default(),
last_update_time: AtomicCell::new(
self.global_peer_map_update_time.load() - Duration::from_secs(1),
),
@@ -395,7 +436,7 @@ mod tests {
false
}
},
Duration::from_secs(10),
Duration::from_secs(20),
)
.await;
@@ -404,7 +445,7 @@ mod tests {
let rpc_service = pc.get_rpc_service();
wait_for_condition(
|| async { rpc_service.global_peer_map.read().unwrap().map.len() == 3 },
Duration::from_secs(10),
Duration::from_secs(20),
)
.await;
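The reworked `RouteCostCalculatorImpl` above now tries the reverse direction (`dst -> src`) before giving up, and the fallback cost was raised from 80 to 100. A minimal std-only sketch of that lookup order, with plain `BTreeMap`s standing in for `GlobalPeerMap`/`PeerInfoForGlobalMap` and `u32` for `PeerId` (simplified, hypothetical types):

```rust
use std::collections::BTreeMap;

// Latency in ms between directly connected peers, keyed as src -> dst.
struct CostMap {
    map: BTreeMap<u32, BTreeMap<u32, i32>>,
}

impl CostMap {
    fn directed_cost(&self, src: u32, dst: u32) -> Option<i32> {
        self.map.get(&src).and_then(|direct| direct.get(&dst)).copied()
    }

    // Try src -> dst first; fall back to the reverse direction, then a default.
    fn calculate_cost(&self, src: u32, dst: u32) -> i32 {
        self.directed_cost(src, dst)
            .or_else(|| self.directed_cost(dst, src))
            .unwrap_or(100)
    }
}

fn main() {
    let mut map = BTreeMap::new();
    map.insert(1u32, BTreeMap::from([(2u32, 7i32)]));
    let costs = CostMap { map };
    assert_eq!(costs.calculate_cost(1, 2), 7);   // direct entry
    assert_eq!(costs.calculate_cost(2, 1), 7);   // reverse-direction fallback
    assert_eq!(costs.calculate_cost(3, 4), 100); // unknown pair: default cost
    println!("ok");
}
```

Since reported latency maps may be one-sided (only one peer has reported yet), using the reverse edge before the default keeps route costs closer to reality.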


@@ -5,9 +5,13 @@
// the peer center is not guaranteed to be stable and can change when peers enter or leave.
// it's used to reduce the cost of exchanging info between peers.
use std::collections::BTreeMap;
use crate::proto::cli::PeerInfo;
use crate::proto::peer_rpc::{DirectConnectedPeerInfo, PeerInfoForGlobalMap};
pub mod instance;
mod server;
mod service;
#[derive(thiserror::Error, Debug, serde::Deserialize, serde::Serialize)]
pub enum Error {
@@ -18,3 +22,29 @@ pub enum Error {
}
pub type Digest = u64;
impl From<Vec<PeerInfo>> for PeerInfoForGlobalMap {
fn from(peers: Vec<PeerInfo>) -> Self {
let mut peer_map = BTreeMap::new();
for peer in peers {
let Some(min_lat) = peer
.conns
.iter()
.map(|conn| conn.stats.as_ref().unwrap().latency_us)
.min()
else {
continue;
};
let dp_info = DirectConnectedPeerInfo {
latency_ms: std::cmp::max(1, (min_lat as u32 / 1000) as i32),
};
// sort conn info so hash result is stable
peer_map.insert(peer.peer_id, dp_info);
}
PeerInfoForGlobalMap {
direct_peers: peer_map,
}
}
}
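The `From<Vec<PeerInfo>>` impl above keeps, per peer, the minimum latency across that peer's connections, converted from microseconds to milliseconds and clamped to at least 1 ms; peers with no connections are skipped. A std-only sketch of the same conversion, with simplified stand-in structs for the real `PeerInfo`/`PeerConnInfo` protobuf types:

```rust
use std::collections::BTreeMap;

// Simplified stand-ins for the real protobuf types.
struct Conn { latency_us: u64 }
struct Peer { peer_id: u32, conns: Vec<Conn> }

// Minimum latency over all conns, microseconds -> milliseconds, floor of 1 ms.
fn to_direct_peer_map(peers: Vec<Peer>) -> BTreeMap<u32, i32> {
    let mut map = BTreeMap::new();
    for peer in peers {
        let Some(min_lat) = peer.conns.iter().map(|c| c.latency_us).min() else {
            continue; // a peer with no connections is dropped entirely
        };
        map.insert(peer.peer_id, std::cmp::max(1, (min_lat as u32 / 1000) as i32));
    }
    map
}

fn main() {
    let peers = vec![
        Peer { peer_id: 1, conns: vec![Conn { latency_us: 2500 }, Conn { latency_us: 1800 }] },
        Peer { peer_id: 2, conns: vec![Conn { latency_us: 300 }] }, // sub-millisecond
        Peer { peer_id: 3, conns: vec![] },                         // no conns: dropped
    ];
    let map = to_direct_peer_map(peers);
    assert_eq!(map.get(&1), Some(&2)); // min(2500, 1800) us -> 2 ms (integer division rounds up via... no: 1800/1000 = 1)
    assert_eq!(map.get(&2), Some(&1)); // clamped up to 1 ms
    assert_eq!(map.get(&3), None);
}
```

Note the 1 ms floor: a zero-cost edge would make every route through that peer look free to the cost calculator.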


@@ -7,15 +7,22 @@ use std::{
use crossbeam::atomic::AtomicCell;
use dashmap::DashMap;
use once_cell::sync::Lazy;
use tokio::{task::JoinSet};
use tokio::task::JoinSet;
use crate::{common::PeerId, rpc::DirectConnectedPeerInfo};
use super::{
service::{GetGlobalPeerMapResponse, GlobalPeerMap, PeerCenterService, PeerInfoForGlobalMap},
Digest, Error,
use crate::{
common::PeerId,
proto::{
peer_rpc::{
DirectConnectedPeerInfo, GetGlobalPeerMapRequest, GetGlobalPeerMapResponse,
GlobalPeerMap, PeerCenterRpc, PeerInfoForGlobalMap, ReportPeersRequest,
ReportPeersResponse,
},
rpc_types::{self, controller::BaseController},
},
};
use super::Digest;
#[derive(Debug, Clone, PartialEq, PartialOrd, Ord, Eq, Hash)]
pub(crate) struct SrcDstPeerPair {
src: PeerId,
@@ -95,15 +102,19 @@ impl PeerCenterServer {
}
}
#[tarpc::server]
impl PeerCenterService for PeerCenterServer {
#[async_trait::async_trait]
impl PeerCenterRpc for PeerCenterServer {
type Controller = BaseController;
#[tracing::instrument()]
async fn report_peers(
self,
_: tarpc::context::Context,
my_peer_id: PeerId,
peers: PeerInfoForGlobalMap,
) -> Result<(), Error> {
&self,
_: BaseController,
req: ReportPeersRequest,
) -> Result<ReportPeersResponse, rpc_types::error::Error> {
let my_peer_id = req.my_peer_id;
let peers = req.peer_infos.unwrap_or_default();
tracing::debug!("receive report_peers");
let data = get_global_data(self.my_node_id);
@@ -125,20 +136,23 @@ impl PeerCenterService for PeerCenterServer {
data.digest
.store(PeerCenterServer::calc_global_digest(self.my_node_id));
Ok(())
Ok(ReportPeersResponse::default())
}
#[tracing::instrument()]
async fn get_global_peer_map(
self,
_: tarpc::context::Context,
digest: Digest,
) -> Result<Option<GetGlobalPeerMapResponse>, Error> {
&self,
_: BaseController,
req: GetGlobalPeerMapRequest,
) -> Result<GetGlobalPeerMapResponse, rpc_types::error::Error> {
let digest = req.digest;
let data = get_global_data(self.my_node_id);
if digest == data.digest.load() && digest != 0 {
return Ok(None);
return Ok(GetGlobalPeerMapResponse::default());
}
let mut global_peer_map = GlobalPeerMap::new();
let mut global_peer_map = GlobalPeerMap::default();
for item in data.global_peer_map.iter() {
let (pair, entry) = item.pair();
global_peer_map
@@ -151,9 +165,9 @@ impl PeerCenterService for PeerCenterServer {
.insert(pair.dst, entry.info.clone());
}
Ok(Some(GetGlobalPeerMapResponse {
global_peer_map,
digest: data.digest.load(),
}))
Ok(GetGlobalPeerMapResponse {
global_peer_map: global_peer_map.map,
digest: Some(data.digest.load()),
})
}
}
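The `get_global_peer_map` handler above avoids retransferring the whole map: the client sends the digest of its cached copy, and if it matches the server's current digest (and is non-zero), the server answers with an empty default response. A std-only sketch of that handshake, using `DefaultHasher` over a `BTreeMap` as a stand-in for the real digest and protobuf map (simplified types, not the project's actual API):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

type Digest = u64;

// Hash the whole map so client and server can compare a single u64.
fn digest_of(map: &BTreeMap<u32, i32>) -> Digest {
    let mut h = DefaultHasher::new();
    map.hash(&mut h);
    h.finish()
}

// Server side: if the client's digest is current (and non-zero), send nothing.
fn get_global_peer_map(
    server_map: &BTreeMap<u32, i32>,
    client_digest: Digest,
) -> Option<(BTreeMap<u32, i32>, Digest)> {
    let cur = digest_of(server_map);
    if client_digest == cur && client_digest != 0 {
        return None; // digests match: the default/empty response in the real RPC
    }
    Some((server_map.clone(), cur))
}

fn main() {
    let mut map = BTreeMap::new();
    map.insert(1u32, 10i32);
    // first poll: client has no digest yet, gets the full map
    let (synced, d) = get_global_peer_map(&map, 0).unwrap();
    assert_eq!(synced, map);
    // second poll with the up-to-date digest: nothing to transfer
    assert!(get_global_peer_map(&map, d).is_none());
    // map changed: digest mismatch, full map is sent again
    map.insert(2, 20);
    assert!(get_global_peer_map(&map, d).is_some());
}
```

The `!= 0` guard matches the diff: zero doubles as "no cached map yet", so a fresh client always receives the full map.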


@@ -1,64 +0,0 @@
use std::collections::BTreeMap;
use crate::{common::PeerId, rpc::DirectConnectedPeerInfo};
use super::{Digest, Error};
use crate::rpc::PeerInfo;
pub type PeerInfoForGlobalMap = crate::rpc::cli::PeerInfoForGlobalMap;
impl From<Vec<PeerInfo>> for PeerInfoForGlobalMap {
fn from(peers: Vec<PeerInfo>) -> Self {
let mut peer_map = BTreeMap::new();
for peer in peers {
let Some(min_lat) = peer
.conns
.iter()
.map(|conn| conn.stats.as_ref().unwrap().latency_us)
.min()
else {
continue;
};
let dp_info = DirectConnectedPeerInfo {
latency_ms: std::cmp::max(1, (min_lat as u32 / 1000) as i32),
};
// sort conn info so hash result is stable
peer_map.insert(peer.peer_id, dp_info);
}
PeerInfoForGlobalMap {
direct_peers: peer_map,
}
}
}
// a global peer topology map, peers can use it to find optimal path to other peers
#[derive(Debug, Clone, serde::Serialize, serde::Deserialize)]
pub struct GlobalPeerMap {
pub map: BTreeMap<PeerId, PeerInfoForGlobalMap>,
}
impl GlobalPeerMap {
pub fn new() -> Self {
GlobalPeerMap {
map: BTreeMap::new(),
}
}
}
#[derive(Debug, Clone, serde::Deserialize, serde::Serialize)]
pub struct GetGlobalPeerMapResponse {
pub global_peer_map: GlobalPeerMap,
pub digest: Digest,
}
#[tarpc::service]
pub trait PeerCenterService {
// report center server which peer is directly connected to me
// digest is a hash of current peer map, if digest not match, we need to transfer the whole map
async fn report_peers(my_peer_id: PeerId, peers: PeerInfoForGlobalMap) -> Result<(), Error>;
async fn get_global_peer_map(digest: Digest)
-> Result<Option<GetGlobalPeerMapResponse>, Error>;
}


@@ -1,27 +1,11 @@
use std::{
sync::Arc,
time::{Duration, SystemTime},
};
use dashmap::DashMap;
use tokio::{sync::Mutex, task::JoinSet};
use std::sync::{Arc, Mutex};
use crate::{
common::{
error::Error,
global_ctx::{ArcGlobalCtx, NetworkIdentity},
PeerId,
},
common::{error::Error, global_ctx::ArcGlobalCtx, scoped_task::ScopedTask, PeerId},
tunnel::packet_def::ZCPacket,
};
use super::{
foreign_network_manager::{ForeignNetworkServiceClient, FOREIGN_NETWORK_SERVICE_ID},
peer_conn::PeerConn,
peer_map::PeerMap,
peer_rpc::PeerRpcManager,
PacketRecvChan,
};
use super::{peer_conn::PeerConn, peer_map::PeerMap, peer_rpc::PeerRpcManager, PacketRecvChan};
pub struct ForeignNetworkClient {
global_ctx: ArcGlobalCtx,
@@ -29,9 +13,7 @@ pub struct ForeignNetworkClient {
my_peer_id: PeerId,
peer_map: Arc<PeerMap>,
next_hop: Arc<DashMap<PeerId, PeerId>>,
tasks: Mutex<JoinSet<()>>,
task: Mutex<Option<ScopedTask<()>>>,
}
impl ForeignNetworkClient {
@@ -46,17 +28,13 @@ impl ForeignNetworkClient {
global_ctx.clone(),
my_peer_id,
));
let next_hop = Arc::new(DashMap::new());
Self {
global_ctx,
peer_rpc,
my_peer_id,
peer_map,
next_hop,
tasks: Mutex::new(JoinSet::new()),
task: Mutex::new(None),
}
}
@@ -65,91 +43,19 @@ impl ForeignNetworkClient {
self.peer_map.add_new_peer_conn(peer_conn).await
}
async fn collect_next_hop_in_foreign_network_task(
network_identity: NetworkIdentity,
peer_map: Arc<PeerMap>,
peer_rpc: Arc<PeerRpcManager>,
next_hop: Arc<DashMap<PeerId, PeerId>>,
) {
loop {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
peer_map.clean_peer_without_conn().await;
let new_next_hop = Self::collect_next_hop_in_foreign_network(
network_identity.clone(),
peer_map.clone(),
peer_rpc.clone(),
)
.await;
next_hop.clear();
for (k, v) in new_next_hop.into_iter() {
next_hop.insert(k, v);
}
}
}
async fn collect_next_hop_in_foreign_network(
network_identity: NetworkIdentity,
peer_map: Arc<PeerMap>,
peer_rpc: Arc<PeerRpcManager>,
) -> DashMap<PeerId, PeerId> {
let peers = peer_map.list_peers().await;
let mut tasks = JoinSet::new();
if !peers.is_empty() {
tracing::warn!(?peers, my_peer_id = ?peer_rpc.my_peer_id(), "collect next hop in foreign network");
}
for peer in peers {
let peer_rpc = peer_rpc.clone();
let network_identity = network_identity.clone();
tasks.spawn(async move {
let Ok(Some(peers_in_foreign)) = peer_rpc
.do_client_rpc_scoped(FOREIGN_NETWORK_SERVICE_ID, peer, |c| async {
let c =
ForeignNetworkServiceClient::new(tarpc::client::Config::default(), c)
.spawn();
let mut rpc_ctx = tarpc::context::current();
rpc_ctx.deadline = SystemTime::now() + Duration::from_secs(2);
let ret = c.list_network_peers(rpc_ctx, network_identity).await;
ret
})
.await
else {
return (peer, vec![]);
};
(peer, peers_in_foreign)
});
}
let new_next_hop = DashMap::new();
while let Some(join_ret) = tasks.join_next().await {
let Ok((gateway, peer_ids)) = join_ret else {
tracing::error!(?join_ret, "collect next hop in foreign network failed");
continue;
};
for ret in peer_ids {
new_next_hop.insert(ret, gateway);
}
}
new_next_hop
}
pub fn has_next_hop(&self, peer_id: PeerId) -> bool {
self.get_next_hop(peer_id).is_some()
}
pub fn is_peer_public_node(&self, peer_id: &PeerId) -> bool {
self.peer_map.has_peer(*peer_id)
pub async fn list_public_peers(&self) -> Vec<PeerId> {
self.peer_map.list_peers().await
}
pub fn get_next_hop(&self, peer_id: PeerId) -> Option<PeerId> {
if self.peer_map.has_peer(peer_id) {
return Some(peer_id.clone());
}
self.next_hop.get(&peer_id).map(|v| v.clone())
None
}
pub async fn send_msg(&self, msg: ZCPacket, peer_id: PeerId) -> Result<(), Error> {
@@ -162,40 +68,32 @@ impl ForeignNetworkClient {
?next_hop,
"foreign network client send msg failed"
);
} else {
tracing::info!(
?peer_id,
?next_hop,
"foreign network client send msg success"
);
}
return ret;
}
Err(Error::RouteError(Some("no next hop".to_string())))
}
pub fn list_foreign_peers(&self) -> Vec<PeerId> {
let mut peers = vec![];
for item in self.next_hop.iter() {
if item.key() != &self.my_peer_id {
peers.push(item.key().clone());
}
}
peers
}
pub async fn run(&self) {
self.tasks
.lock()
.await
.spawn(Self::collect_next_hop_in_foreign_network_task(
self.global_ctx.get_network_identity(),
self.peer_map.clone(),
self.peer_rpc.clone(),
self.next_hop.clone(),
));
}
pub fn get_next_hop_table(&self) -> DashMap<PeerId, PeerId> {
let next_hop = DashMap::new();
for item in self.next_hop.iter() {
next_hop.insert(item.key().clone(), item.value().clone());
}
next_hop
let peer_map = Arc::downgrade(&self.peer_map);
*self.task.lock().unwrap() = Some(
tokio::spawn(async move {
loop {
tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
let Some(peer_map) = peer_map.upgrade() else {
break;
};
peer_map.clean_peer_without_conn().await;
}
})
.into(),
);
}
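The rewritten `run()` above replaces the `JoinSet`-owned background task with a loop that holds only a `Weak<PeerMap>` and exits when the upgrade fails, so the cleanup task can no longer keep the peer map alive after the client is dropped. A thread-based std sketch of the same pattern (the real code uses a tokio task wrapped in `ScopedTask`; names here are illustrative):

```rust
use std::sync::{mpsc, Arc, Weak};
use std::thread;
use std::time::Duration;

// Background loop that stops on its own once the owner drops the Arc:
// each iteration upgrades the Weak; failure to upgrade means shutdown.
fn spawn_cleaner(state: Weak<Vec<u32>>, done: mpsc::Sender<()>) -> thread::JoinHandle<()> {
    thread::spawn(move || loop {
        thread::sleep(Duration::from_millis(10));
        let Some(state) = state.upgrade() else {
            let _ = done.send(()); // owner is gone; exit the loop
            break;
        };
        // ... periodic maintenance on `state` would go here ...
        drop(state); // do not hold the strong ref across the next sleep
    })
}

fn main() {
    let owner = Arc::new(vec![1, 2, 3]);
    let (tx, rx) = mpsc::channel();
    let handle = spawn_cleaner(Arc::downgrade(&owner), tx);
    drop(owner); // dropping the last strong ref lets the loop observe shutdown
    rx.recv_timeout(Duration::from_secs(1)).expect("cleaner should exit");
    handle.join().unwrap();
}
```

Dropping the strong reference inside each iteration matters: holding it across the sleep would delay (though not prevent) the task from observing that the owner is gone.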
pub fn get_peer_map(&self) -> Arc<PeerMap> {

(File diff suppressed because it is too large.)


@@ -5,8 +5,8 @@ pub mod peer_conn_ping;
pub mod peer_manager;
pub mod peer_map;
pub mod peer_ospf_route;
pub mod peer_rip_route;
pub mod peer_rpc;
pub mod peer_rpc_service;
pub mod route_trait;
pub mod rpc_service;


@@ -11,7 +11,7 @@ use super::{
peer_conn::{PeerConn, PeerConnId},
PacketRecvChan,
};
use crate::rpc::PeerConnInfo;
use crate::proto::cli::PeerConnInfo;
use crate::{
common::{
error::Error,


@@ -8,7 +8,7 @@ use std::{
},
};
use futures::{SinkExt, StreamExt, TryFutureExt};
use futures::{StreamExt, TryFutureExt};
use prost::Message;
@@ -18,23 +18,26 @@ use tokio::{
time::{timeout, Duration},
};
use tokio_util::sync::PollSender;
use tracing::Instrument;
use zerocopy::AsBytes;
use crate::{
common::{
config::{NetworkIdentity, NetworkSecretDigest},
defer,
error::Error,
global_ctx::ArcGlobalCtx,
PeerId,
},
rpc::{HandshakeRequest, PeerConnInfo, PeerConnStats, TunnelInfo},
tunnel::packet_def::PacketType,
proto::{
cli::{PeerConnInfo, PeerConnStats},
common::TunnelInfo,
peer_rpc::HandshakeRequest,
},
tunnel::{
filter::{StatsRecorderTunnelFilter, TunnelFilter, TunnelWithFilter},
mpsc::{MpscTunnel, MpscTunnelSender},
packet_def::ZCPacket,
packet_def::{PacketType, ZCPacket},
stats::{Throughput, WindowLatency},
Tunnel, TunnelError, ZCPacketStream,
},
@@ -100,7 +103,9 @@ impl PeerConn {
my_peer_id,
global_ctx,
tunnel: Arc::new(Mutex::new(Box::new(mpsc_tunnel))),
tunnel: Arc::new(Mutex::new(Box::new(defer::Defer::new(move || {
mpsc_tunnel.close()
})))),
sink,
recv: Arc::new(Mutex::new(Some(recv))),
tunnel_info,
@@ -240,7 +245,7 @@ impl PeerConn {
pub async fn start_recv_loop(&mut self, packet_recv_chan: PacketRecvChan) {
let mut stream = self.recv.lock().await.take().unwrap();
let sink = self.sink.clone();
let mut sender = PollSender::new(packet_recv_chan.clone());
let sender = packet_recv_chan.clone();
let close_event_sender = self.close_event_sender.clone().unwrap();
let conn_id = self.conn_id;
let ctrl_sender = self.ctrl_resp_sender.clone();
@@ -277,7 +282,9 @@ impl PeerConn {
tracing::error!(?e, "peer conn send ctrl resp error");
}
} else {
if sender.send(zc_packet).await.is_err() {
if zc_packet.is_lossy() {
let _ = sender.try_send(zc_packet);
} else if sender.send(zc_packet).await.is_err() {
break;
}
}
@@ -306,6 +313,7 @@ impl PeerConn {
self.ctrl_resp_sender.clone(),
self.latency_stats.clone(),
self.loss_rate_stats.clone(),
self.throughput.clone(),
);
let close_event_sender = self.close_event_sender.clone().unwrap();
@@ -385,6 +393,7 @@ mod tests {
use super::*;
use crate::common::global_ctx::tests::get_mock_global_ctx;
use crate::common::new_peer_id;
use crate::common::scoped_task::ScopedTask;
use crate::tunnel::filter::tests::DropSendTunnelFilter;
use crate::tunnel::filter::PacketRecorderTunnelFilter;
use crate::tunnel::ring::create_ring_tunnel_pair;
@@ -426,13 +435,25 @@ mod tests {
assert_eq!(c_peer.get_network_identity(), NetworkIdentity::default());
}
async fn peer_conn_pingpong_test_common(drop_start: u32, drop_end: u32, conn_closed: bool) {
async fn peer_conn_pingpong_test_common(
drop_start: u32,
drop_end: u32,
conn_closed: bool,
drop_both: bool,
) {
let (c, s) = create_ring_tunnel_pair();
// dropping 1-3 packets should not affect pingpong
let c_recorder = Arc::new(DropSendTunnelFilter::new(drop_start, drop_end));
let c = TunnelWithFilter::new(c, c_recorder.clone());
let s = if drop_both {
let s_recorder = Arc::new(DropSendTunnelFilter::new(drop_start, drop_end));
Box::new(TunnelWithFilter::new(s, s_recorder.clone()))
} else {
s
};
let c_peer_id = new_peer_id();
let s_peer_id = new_peer_id();
@@ -459,7 +480,15 @@ mod tests {
.start_recv_loop(tokio::sync::mpsc::channel(200).0)
.await;
// wait 15s, conn should not be disconnected
let throughput = c_peer.throughput.clone();
let _t = ScopedTask::from(tokio::spawn(async move {
// when not dropping both sides, mock some rx traffic for the client peer to exercise the pinger
while !drop_both {
tokio::time::sleep(Duration::from_millis(100)).await;
throughput.record_rx_bytes(3);
}
}));
tokio::time::sleep(Duration::from_secs(15)).await;
if conn_closed {
@@ -470,9 +499,18 @@ mod tests {
}
#[tokio::test]
async fn peer_conn_pingpong_timeout() {
peer_conn_pingpong_test_common(3, 5, false).await;
peer_conn_pingpong_test_common(5, 12, true).await;
async fn peer_conn_pingpong_timeout_not_close() {
peer_conn_pingpong_test_common(3, 5, false, false).await;
}
#[tokio::test]
async fn peer_conn_pingpong_oneside_timeout() {
peer_conn_pingpong_test_common(4, 12, false, false).await;
}
#[tokio::test]
async fn peer_conn_pingpong_bothside_timeout() {
peer_conn_pingpong_test_common(4, 12, true, true).await;
}
#[tokio::test]


@@ -6,18 +6,98 @@ use std::{
time::Duration,
};
use tokio::{sync::broadcast, task::JoinSet, time::timeout};
use rand::{thread_rng, Rng};
use tokio::{
sync::broadcast,
task::JoinSet,
time::{timeout, Interval},
};
use crate::{
common::{error::Error, PeerId},
tunnel::{
mpsc::MpscTunnelSender,
packet_def::{PacketType, ZCPacket},
stats::WindowLatency,
stats::{Throughput, WindowLatency},
TunnelError,
},
};
struct PingIntervalController {
throughput: Arc<Throughput>,
loss_rate_20: Arc<WindowLatency>,
interval: Interval,
logic_time: u64,
last_send_logic_time: u64,
backoff_idx: i32,
max_backoff_idx: i32,
last_throughput: Throughput,
}
impl PingIntervalController {
fn new(throughput: Arc<Throughput>, loss_rate_20: Arc<WindowLatency>) -> Self {
let last_throughput = *throughput;
Self {
throughput,
loss_rate_20,
interval: tokio::time::interval(Duration::from_secs(1)),
logic_time: 0,
last_send_logic_time: 0,
backoff_idx: 0,
max_backoff_idx: 5,
last_throughput,
}
}
async fn tick(&mut self) {
self.interval.tick().await;
self.logic_time += 1;
}
fn tx_increase(&self) -> bool {
self.throughput.tx_packets() > self.last_throughput.tx_packets()
}
fn rx_increase(&self) -> bool {
self.throughput.rx_packets() > self.last_throughput.rx_packets()
}
fn should_send_ping(&mut self) -> bool {
if self.loss_rate_20.get_latency_us::<f64>() > 0.0 {
self.backoff_idx = 0;
} else if self.tx_increase()
&& !self.rx_increase()
&& self.logic_time - self.last_send_logic_time > 2
{
// if tx increases but rx does not, we should ping more frequently
self.backoff_idx = 0;
}
self.last_throughput = *self.throughput;
if (self.logic_time - self.last_send_logic_time) < (1 << self.backoff_idx) {
return false;
}
self.backoff_idx = std::cmp::min(self.backoff_idx + 1, self.max_backoff_idx);
// add randomness so the two peers do not ping at exactly the same time
if self.backoff_idx > self.max_backoff_idx - 2 && thread_rng().gen_bool(0.2) {
self.backoff_idx -= 1;
}
self.last_send_logic_time = self.logic_time;
return true;
}
}
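The `PingIntervalController` above backs off exponentially while the link is idle and healthy, and resets to per-tick pinging when loss is observed or tx rises without rx following. A deterministic std-only sketch of the core `should_send_ping` schedule, folding the reset conditions into a single flag and omitting the tokio `Interval` and the random jitter (simplified, illustrative names):

```rust
// Exponential-backoff ping scheduler: tick once per "second" of logic time,
// send only when 2^idx ticks have elapsed since the last ping.
struct Backoff {
    logic_time: u64,
    last_send: u64,
    idx: u32,
    max_idx: u32,
}

impl Backoff {
    fn new() -> Self {
        Self { logic_time: 0, last_send: 0, idx: 0, max_idx: 5 }
    }

    // Returns true when a ping should be sent on this tick. `reset` models
    // the conditions that restore fast pinging (observed loss, or tx
    // increasing while rx stalls).
    fn tick(&mut self, reset: bool) -> bool {
        self.logic_time += 1;
        if reset {
            self.idx = 0;
        }
        if self.logic_time - self.last_send < (1u64 << self.idx) {
            return false;
        }
        self.idx = std::cmp::min(self.idx + 1, self.max_idx);
        self.last_send = self.logic_time;
        true
    }
}

fn main() {
    let mut b = Backoff::new();
    let sends: Vec<u64> = (0..40)
        .filter_map(|_| if b.tick(false) { Some(b.logic_time) } else { None })
        .collect();
    // gaps double (1, 2, 4, 8, 16 ticks) until capped at 2^5 = 32
    assert_eq!(sends, vec![1, 3, 7, 15, 31]);
    // a loss event collapses the backoff back to every-tick pinging
    assert!(b.tick(true));
}
```

With a cap of `max_idx = 5`, an idle-but-healthy connection settles at one ping every 32 seconds instead of one per second, which is the bandwidth saving the "make ping more smart" commit is after.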
pub struct PeerConnPinger {
my_peer_id: PeerId,
peer_id: PeerId,
@@ -25,6 +105,7 @@ pub struct PeerConnPinger {
ctrl_sender: broadcast::Sender<ZCPacket>,
latency_stats: Arc<WindowLatency>,
loss_rate_stats: Arc<AtomicU32>,
throughput_stats: Arc<Throughput>,
tasks: JoinSet<Result<(), TunnelError>>,
}
@@ -45,6 +126,7 @@ impl PeerConnPinger {
ctrl_sender: broadcast::Sender<ZCPacket>,
latency_stats: Arc<WindowLatency>,
loss_rate_stats: Arc<AtomicU32>,
throughput_stats: Arc<Throughput>,
) -> Self {
Self {
my_peer_id,
@@ -54,6 +136,7 @@ impl PeerConnPinger {
latency_stats,
ctrl_sender,
loss_rate_stats,
throughput_stats,
}
}
@@ -125,17 +208,23 @@ impl PeerConnPinger {
let (ping_res_sender, mut ping_res_receiver) = tokio::sync::mpsc::channel(100);
// one with 1% precision
let loss_rate_stats_1 = WindowLatency::new(100);
// one with 20% precision, so we can fast fail this conn.
let loss_rate_stats_20 = Arc::new(WindowLatency::new(5));
let stopped = Arc::new(AtomicU32::new(0));
// generate a pingpong task every 200ms
let mut pingpong_tasks = JoinSet::new();
let ctrl_resp_sender = self.ctrl_sender.clone();
let stopped_clone = stopped.clone();
let mut controller =
PingIntervalController::new(self.throughput_stats.clone(), loss_rate_stats_20.clone());
self.tasks.spawn(async move {
let mut req_seq = 0;
loop {
let receiver = ctrl_resp_sender.subscribe();
let ping_res_sender = ping_res_sender.clone();
controller.tick().await;
if stopped_clone.load(Ordering::Relaxed) != 0 {
return Ok(());
@@ -145,7 +234,13 @@ impl PeerConnPinger {
pingpong_tasks.join_next().await;
}
if !controller.should_send_ping() {
continue;
}
let mut sink = sink.clone();
let receiver = ctrl_resp_sender.subscribe();
let ping_res_sender = ping_res_sender.clone();
pingpong_tasks.spawn(async move {
let mut receiver = receiver.resubscribe();
let pingpong_once_ret = Self::do_pingpong_once(
@@ -163,16 +258,12 @@ impl PeerConnPinger {
});
req_seq = req_seq.wrapping_add(1);
tokio::time::sleep(Duration::from_millis(1000)).await;
}
});
// one with 1% precision
let loss_rate_stats_1 = WindowLatency::new(100);
// one with 20% precision, so we can fast fail this conn.
let loss_rate_stats_20 = WindowLatency::new(5);
let mut counter: u64 = 0;
let throughput = self.throughput_stats.clone();
let mut last_rx_packets = throughput.rx_packets();
while let Some(ret) = ping_res_receiver.recv().await {
counter += 1;
@@ -199,16 +290,29 @@ impl PeerConnPinger {
);
if (counter > 5 && loss_rate_20 > 0.74) || (counter > 150 && loss_rate_1 > 0.20) {
tracing::warn!(
?ret,
?self,
?loss_rate_1,
?loss_rate_20,
"pingpong loss rate too high, closing"
);
break;
let current_rx_packets = throughput.rx_packets();
let need_close = if last_rx_packets != current_rx_packets {
// if we received packets from the peer recently, relax the close condition
counter > 50 && loss_rate_1 > 0.5
} else {
true
};
if need_close {
tracing::warn!(
?ret,
?self,
?loss_rate_1,
?loss_rate_20,
?last_rx_packets,
?current_rx_packets,
"pingpong loss rate too high, closing"
);
break;
}
}
last_rx_packets = throughput.rx_packets();
self.loss_rate_stats
.store((loss_rate_1 * 100.0) as u32, Ordering::Relaxed);
}
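The hunk above replaces the unconditional close with a relaxed condition when the peer is still delivering traffic. A minimal standalone sketch of the new decision logic (function and parameter names are hypothetical, extracted from the diff for illustration):

```rust
// Decide whether to close a peer connection based on ping loss rates.
// `loss_rate_20` is the coarse (20%-precision) window, `loss_rate_1` the
// fine (1%-precision) window; the rx counters tell us if traffic still flows.
fn need_close(
    counter: u64,
    loss_rate_1: f64,
    loss_rate_20: f64,
    last_rx_packets: u64,
    current_rx_packets: u64,
) -> bool {
    let loss_too_high =
        (counter > 5 && loss_rate_20 > 0.74) || (counter > 150 && loss_rate_1 > 0.20);
    if !loss_too_high {
        return false;
    }
    if last_rx_packets != current_rx_packets {
        // packets arrived from the peer since the last check:
        // require sustained, much higher loss before giving up
        counter > 50 && loss_rate_1 > 0.5
    } else {
        true
    }
}
```

This keeps connections alive under heavy load, where ping responses can be starved even though data packets still get through.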


@@ -2,12 +2,13 @@ use std::{
fmt::Debug,
net::Ipv4Addr,
sync::{Arc, Weak},
time::SystemTime,
};
use anyhow::Context;
use async_trait::async_trait;
use futures::StreamExt;
use dashmap::DashMap;
use tokio::{
sync::{
@@ -16,17 +17,28 @@ use tokio::{
},
task::JoinSet,
};
use tokio_stream::wrappers::ReceiverStream;
use tokio_util::bytes::Bytes;
use crate::{
common::{error::Error, global_ctx::ArcGlobalCtx, stun::StunInfoCollectorTrait, PeerId},
common::{
constants::EASYTIER_VERSION,
error::Error,
global_ctx::{ArcGlobalCtx, NetworkIdentity},
stun::StunInfoCollectorTrait,
PeerId,
},
peers::{
peer_conn::PeerConn,
peer_rpc::PeerRpcManagerTransport,
route_trait::{NextHopPolicy, RouteInterface},
route_trait::{ForeignNetworkRouteInfoMap, NextHopPolicy, RouteInterface},
PeerPacketFilter,
},
proto::{
cli::{
self, list_global_foreign_network_response::OneForeignNetwork,
ListGlobalForeignNetworkResponse,
},
peer_rpc::{ForeignNetworkRouteInfoEntry, ForeignNetworkRouteInfoKey},
},
tunnel::{
self,
packet_def::{PacketType, ZCPacket},
@@ -37,11 +49,10 @@ use crate::{
use super::{
encrypt::{Encryptor, NullCipher},
foreign_network_client::ForeignNetworkClient,
foreign_network_manager::ForeignNetworkManager,
foreign_network_manager::{ForeignNetworkManager, GlobalForeignNetworkAccessor},
peer_conn::PeerConnId,
peer_map::PeerMap,
peer_ospf_route::PeerRoute,
peer_rip_route::BasicRoute,
peer_rpc::PeerRpcManager,
route_trait::{ArcRoute, Route},
BoxNicPacketFilter, BoxPeerPacketFilter, PacketRecvChanReceiver,
@@ -75,7 +86,15 @@ impl PeerRpcManagerTransport for RpcTransport {
.ok_or(Error::Unknown)?;
let peers = self.peers.upgrade().ok_or(Error::Unknown)?;
if let Some(gateway_id) = peers
if foreign_peers.has_next_hop(dst_peer_id) {
// do not encrypt data sent to the public server
tracing::debug!(
?dst_peer_id,
?self.my_peer_id,
"failed to send msg to peer, try foreign network",
);
foreign_peers.send_msg(msg, dst_peer_id).await
} else if let Some(gateway_id) = peers
.get_gateway_peer_id(dst_peer_id, NextHopPolicy::LeastHop)
.await
{
@@ -88,20 +107,11 @@ impl PeerRpcManagerTransport for RpcTransport {
self.encryptor
.encrypt(&mut msg)
.with_context(|| "encrypt failed")?;
peers.send_msg_directly(msg, gateway_id).await
} else if foreign_peers.has_next_hop(dst_peer_id) {
if !foreign_peers.is_peer_public_node(&dst_peer_id) {
// do not encrypt for msg sending to public node
self.encryptor
.encrypt(&mut msg)
.with_context(|| "encrypt failed")?;
if peers.has_peer(gateway_id) {
peers.send_msg_directly(msg, gateway_id).await
} else {
foreign_peers.send_msg(msg, gateway_id).await
}
tracing::debug!(
?dst_peer_id,
?self.my_peer_id,
"failed to send msg to peer, try foreign network",
);
foreign_peers.send_msg(msg, dst_peer_id).await
} else {
Err(Error::RouteError(Some(format!(
"peermgr RpcTransport no route for dst_peer_id: {}",
@@ -120,13 +130,11 @@ impl PeerRpcManagerTransport for RpcTransport {
}
pub enum RouteAlgoType {
Rip,
Ospf,
None,
}
enum RouteAlgoInst {
Rip(Arc<BasicRoute>),
Ospf(Arc<PeerRoute>),
None,
}
@@ -217,9 +225,6 @@ impl PeerManager {
let peer_rpc_mgr = Arc::new(PeerRpcManager::new(rpc_tspt.clone()));
let route_algo_inst = match route_algo {
RouteAlgoType::Rip => {
RouteAlgoInst::Rip(Arc::new(BasicRoute::new(my_peer_id, global_ctx.clone())))
}
RouteAlgoType::Ospf => RouteAlgoInst::Ospf(PeerRoute::new(
my_peer_id,
global_ctx.clone(),
@@ -232,6 +237,7 @@ impl PeerManager {
my_peer_id,
global_ctx.clone(),
packet_send.clone(),
Self::build_foreign_network_manager_accessor(&peers),
));
let foreign_network_client = Arc::new(ForeignNetworkClient::new(
global_ctx.clone(),
@@ -270,6 +276,34 @@ impl PeerManager {
}
}
fn build_foreign_network_manager_accessor(
peer_map: &Arc<PeerMap>,
) -> Box<dyn GlobalForeignNetworkAccessor> {
struct T {
peer_map: Weak<PeerMap>,
}
#[async_trait::async_trait]
impl GlobalForeignNetworkAccessor for T {
async fn list_global_foreign_peer(
&self,
network_identity: &NetworkIdentity,
) -> Vec<PeerId> {
let Some(peer_map) = self.peer_map.upgrade() else {
return vec![];
};
peer_map
.list_peers_own_foreign_network(network_identity)
.await
}
}
Box::new(T {
peer_map: Arc::downgrade(peer_map),
})
}
async fn add_new_peer_conn(&self, peer_conn: PeerConn) -> Result<(), Error> {
if self.global_ctx.get_network_identity() != peer_conn.get_network_identity() {
return Err(Error::SecretKeyError(
@@ -325,20 +359,85 @@ impl PeerManager {
Ok(())
}
async fn try_handle_foreign_network_packet(
packet: ZCPacket,
my_peer_id: PeerId,
peer_map: &PeerMap,
foreign_network_mgr: &ForeignNetworkManager,
) -> Result<(), ZCPacket> {
let pm_header = packet.peer_manager_header().unwrap();
if pm_header.packet_type != PacketType::ForeignNetworkPacket as u8 {
return Err(packet);
}
let from_peer_id = pm_header.from_peer_id.get();
let to_peer_id = pm_header.to_peer_id.get();
let foreign_hdr = packet.foreign_network_hdr().unwrap();
let foreign_network_name = foreign_hdr.get_network_name(packet.payload());
let foreign_peer_id = foreign_hdr.get_dst_peer_id();
if to_peer_id == my_peer_id {
// packet sent from another peer to me; extract the inner packet and forward it
if let Err(e) = foreign_network_mgr
.send_msg_to_peer(
&foreign_network_name,
foreign_peer_id,
packet.foreign_network_packet(),
)
.await
{
tracing::debug!(
?e,
?foreign_network_name,
?foreign_peer_id,
"foreign network mgr send_msg_to_peer failed"
);
}
Ok(())
} else if from_peer_id == my_peer_id {
// packet was generated by the foreign network mgr and should be forwarded to another peer
if let Err(e) = peer_map
.send_msg(packet, to_peer_id, NextHopPolicy::LeastHop)
.await
{
tracing::debug!(
?e,
?to_peer_id,
"send_msg_directly failed when forward local generated foreign network packet"
);
}
Ok(())
} else {
// target is not me, forward it
Err(packet)
}
}
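The new `try_handle_foreign_network_packet` performs a three-way dispatch on the peer-manager header. A self-contained sketch of that decision (enum and function names are hypothetical, chosen for illustration):

```rust
// Three-way dispatch for a ForeignNetworkPacket, mirroring the diff above.
#[derive(Debug, PartialEq)]
enum Dispatch {
    DeliverToForeignMgr, // to_peer_id == me: unwrap and hand to the foreign network mgr
    ForwardToPeer,       // from_peer_id == me: locally generated, route toward to_peer_id
    NotForMe,            // neither: fall through to the normal forwarding path
}

fn dispatch(my_peer_id: u32, from_peer_id: u32, to_peer_id: u32) -> Dispatch {
    if to_peer_id == my_peer_id {
        Dispatch::DeliverToForeignMgr
    } else if from_peer_id == my_peer_id {
        Dispatch::ForwardToPeer
    } else {
        Dispatch::NotForMe
    }
}
```

In the real code the "not for me" case is expressed by returning `Err(packet)`, which hands the packet back to the normal receive pipeline unchanged.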
async fn start_peer_recv(&self) {
let mut recv = ReceiverStream::new(self.packet_recv.lock().await.take().unwrap());
let mut recv = self.packet_recv.lock().await.take().unwrap();
let my_peer_id = self.my_peer_id;
let peers = self.peers.clone();
let pipe_line = self.peer_packet_process_pipeline.clone();
let foreign_client = self.foreign_network_client.clone();
let foreign_mgr = self.foreign_network_manager.clone();
let encryptor = self.encryptor.clone();
self.tasks.lock().await.spawn(async move {
tracing::trace!("start_peer_recv");
while let Some(mut ret) = recv.next().await {
while let Some(ret) = recv.recv().await {
let Err(mut ret) =
Self::try_handle_foreign_network_packet(ret, my_peer_id, &peers, &foreign_mgr)
.await
else {
continue;
};
let Some(hdr) = ret.mut_peer_manager_header() else {
tracing::warn!(?ret, "invalid packet, skip");
continue;
};
tracing::trace!(?hdr, "peer recv a packet...");
let from_peer_id = hdr.from_peer_id.get();
let to_peer_id = hdr.to_peer_id.get();
@@ -438,7 +537,10 @@ impl PeerManager {
impl PeerPacketFilter for PeerRpcPacketProcessor {
async fn try_process_packet_from_peer(&self, packet: ZCPacket) -> Option<ZCPacket> {
let hdr = packet.peer_manager_header().unwrap();
if hdr.packet_type == PacketType::TaRpc as u8 {
if hdr.packet_type == PacketType::TaRpc as u8
|| hdr.packet_type == PacketType::RpcReq as u8
|| hdr.packet_type == PacketType::RpcResp as u8
{
self.peer_rpc_tspt_sender.send(packet).unwrap();
None
} else {
@@ -464,6 +566,7 @@ impl PeerManager {
my_peer_id: PeerId,
peers: Weak<PeerMap>,
foreign_network_client: Weak<ForeignNetworkClient>,
foreign_network_manager: Weak<ForeignNetworkManager>,
}
#[async_trait]
@@ -477,36 +580,45 @@ impl PeerManager {
return vec![];
};
let mut peers = foreign_client.list_foreign_peers();
let mut peers = foreign_client.list_public_peers().await;
peers.extend(peer_map.list_peers_with_conn().await);
peers
}
async fn send_route_packet(
&self,
msg: Bytes,
_route_id: u8,
dst_peer_id: PeerId,
) -> Result<(), Error> {
let foreign_client = self
.foreign_network_client
.upgrade()
.ok_or(Error::Unknown)?;
let peer_map = self.peers.upgrade().ok_or(Error::Unknown)?;
let mut zc_packet = ZCPacket::new_with_payload(&msg);
zc_packet.fill_peer_manager_hdr(
self.my_peer_id,
dst_peer_id,
PacketType::Route as u8,
);
if foreign_client.has_next_hop(dst_peer_id) {
foreign_client.send_msg(zc_packet, dst_peer_id).await
} else {
peer_map.send_msg_directly(zc_packet, dst_peer_id).await
}
}
fn my_peer_id(&self) -> PeerId {
self.my_peer_id
}
async fn list_foreign_networks(&self) -> ForeignNetworkRouteInfoMap {
let ret = DashMap::new();
let Some(foreign_mgr) = self.foreign_network_manager.upgrade() else {
return ret;
};
let networks = foreign_mgr.list_foreign_networks().await;
for (network_name, info) in networks.foreign_networks.iter() {
if info.peers.is_empty() {
continue;
}
let last_update = foreign_mgr
.get_foreign_network_last_update(network_name)
.unwrap_or(SystemTime::now());
ret.insert(
ForeignNetworkRouteInfoKey {
peer_id: self.my_peer_id,
network_name: network_name.clone(),
},
ForeignNetworkRouteInfoEntry {
foreign_peer_ids: info.peers.iter().map(|x| x.peer_id).collect(),
last_update: Some(last_update.into()),
version: 0,
network_secret_digest: info.network_secret_digest.clone(),
},
);
}
ret
}
}
let my_peer_id = self.my_peer_id;
@@ -515,6 +627,7 @@ impl PeerManager {
my_peer_id,
peers: Arc::downgrade(&self.peers),
foreign_network_client: Arc::downgrade(&self.foreign_network_client),
foreign_network_manager: Arc::downgrade(&self.foreign_network_manager),
}))
.await
.unwrap();
@@ -525,13 +638,12 @@ impl PeerManager {
pub fn get_route(&self) -> Box<dyn Route + Send + Sync + 'static> {
match &self.route_algo_inst {
RouteAlgoInst::Rip(route) => Box::new(route.clone()),
RouteAlgoInst::Ospf(route) => Box::new(route.clone()),
RouteAlgoInst::None => panic!("no route"),
}
}
pub async fn list_routes(&self) -> Vec<crate::rpc::Route> {
pub async fn list_routes(&self) -> Vec<cli::Route> {
self.get_route().list_routes().await
}
@@ -539,6 +651,28 @@ impl PeerManager {
self.get_route().dump().await
}
pub async fn list_global_foreign_network(&self) -> ListGlobalForeignNetworkResponse {
let mut resp = ListGlobalForeignNetworkResponse::default();
let ret = self.get_route().list_foreign_network_info().await;
for info in ret.infos.iter() {
let entry = resp
.foreign_networks
.entry(info.key.as_ref().unwrap().peer_id)
.or_insert_with(|| Default::default());
let mut f = OneForeignNetwork::default();
f.network_name = info.key.as_ref().unwrap().network_name.clone();
f.peer_ids
.extend(info.value.as_ref().unwrap().foreign_peer_ids.iter());
f.last_updated = format!("{}", info.value.as_ref().unwrap().last_update.unwrap());
f.version = info.value.as_ref().unwrap().version;
entry.foreign_networks.push(f);
}
resp
}
async fn run_nic_packet_process_pipeline(&self, data: &mut ZCPacket) {
for pipeline in self.nic_packet_process_pipeline.read().await.iter().rev() {
pipeline.try_process_packet_from_nic(data).await;
@@ -649,13 +783,23 @@ impl PeerManager {
.get_gateway_peer_id(*peer_id, next_hop_policy.clone())
.await
{
if let Err(e) = self.peers.send_msg_directly(msg, gateway).await {
errs.push(e);
}
} else if self.foreign_network_client.has_next_hop(*peer_id) {
if let Err(e) = self.foreign_network_client.send_msg(msg, *peer_id).await {
errs.push(e);
if self.peers.has_peer(gateway) {
if let Err(e) = self.peers.send_msg_directly(msg, gateway).await {
errs.push(e);
}
} else if self.foreign_network_client.has_next_hop(gateway) {
if let Err(e) = self.foreign_network_client.send_msg(msg, gateway).await {
errs.push(e);
}
} else {
tracing::warn!(
?gateway,
?peer_id,
"cannot send msg to peer through gateway"
);
}
} else {
tracing::debug!(?peer_id, "no gateway for peer");
}
}
@@ -686,14 +830,12 @@ impl PeerManager {
.await
.replace(Arc::downgrade(&self.foreign_network_client));
self.foreign_network_manager.run().await;
self.foreign_network_client.run().await;
}
pub async fn run(&self) -> Result<(), Error> {
match &self.route_algo_inst {
RouteAlgoInst::Ospf(route) => self.add_route(route.clone()).await,
RouteAlgoInst::Rip(route) => self.add_route(route.clone()).await,
RouteAlgoInst::None => {}
};
@@ -732,13 +874,6 @@ impl PeerManager {
self.nic_channel.clone()
}
pub fn get_basic_route(&self) -> Arc<BasicRoute> {
match &self.route_algo_inst {
RouteAlgoInst::Rip(route) => route.clone(),
_ => panic!("not rip route"),
}
}
pub fn get_foreign_network_manager(&self) -> Arc<ForeignNetworkManager> {
self.foreign_network_manager.clone()
}
@@ -747,8 +882,8 @@ impl PeerManager {
self.foreign_network_client.clone()
}
pub fn get_my_info(&self) -> crate::rpc::NodeInfo {
crate::rpc::NodeInfo {
pub fn get_my_info(&self) -> cli::NodeInfo {
cli::NodeInfo {
peer_id: self.my_peer_id,
ipv4_addr: self
.global_ctx
@@ -771,6 +906,14 @@ impl PeerManager {
.map(|x| x.to_string())
.collect(),
config: self.global_ctx.config.dump(),
version: EASYTIER_VERSION.to_string(),
feature_flag: Some(self.global_ctx.get_feature_flags()),
}
}
pub async fn wait(&self) {
while !self.tasks.lock().await.is_empty() {
tokio::time::sleep(std::time::Duration::from_secs(1)).await;
}
}
}
@@ -788,12 +931,11 @@ mod tests {
instance::listeners::get_listener_by_url,
peers::{
peer_manager::RouteAlgoType,
peer_rpc::tests::{MockService, TestRpcService, TestRpcServiceClient},
peer_rpc::tests::register_service,
tests::{connect_peer_manager, wait_route_appear},
},
rpc::NatType,
tunnel::common::tests::wait_for_condition,
tunnel::{TunnelConnector, TunnelListener},
proto::common::NatType,
tunnel::{common::tests::wait_for_condition, TunnelConnector, TunnelListener},
};
use super::PeerManager;
@@ -856,25 +998,18 @@ mod tests {
#[values("tcp", "udp", "wg", "quic")] proto1: &str,
#[values("tcp", "udp", "wg", "quic")] proto2: &str,
) {
use crate::proto::{
rpc_impl::RpcController,
tests::{GreetingClientFactory, SayHelloRequest},
};
let peer_mgr_a = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
peer_mgr_a.get_peer_rpc_mgr().run_service(
100,
MockService {
prefix: "hello a".to_owned(),
}
.serve(),
);
register_service(&peer_mgr_a.peer_rpc_mgr, "", 0, "hello a");
let peer_mgr_b = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
let peer_mgr_c = create_mock_peer_manager_with_mock_stun(NatType::Unknown).await;
peer_mgr_c.get_peer_rpc_mgr().run_service(
100,
MockService {
prefix: "hello c".to_owned(),
}
.serve(),
);
register_service(&peer_mgr_c.peer_rpc_mgr, "", 0, "hello c");
let mut listener1 = get_listener_by_url(
&format!("{}://0.0.0.0:31013", proto1).parse().unwrap(),
@@ -912,16 +1047,26 @@ mod tests {
.await
.unwrap();
let ret = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(100, peer_mgr_c.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), "abc".to_owned()).await;
ret
})
let stub = peer_mgr_a
.peer_rpc_mgr
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id,
peer_mgr_c.my_peer_id,
"".to_string(),
);
let ret = stub
.say_hello(
RpcController {},
SayHelloRequest {
name: "abc".to_string(),
},
)
.await
.unwrap();
assert_eq!(ret, "hello c abc");
assert_eq!(ret.greeting, "hello c abc!");
}
#[tokio::test]


@@ -7,12 +7,11 @@ use tokio::sync::RwLock;
use crate::{
common::{
error::Error,
global_ctx::{ArcGlobalCtx, GlobalCtxEvent},
global_ctx::{ArcGlobalCtx, GlobalCtxEvent, NetworkIdentity},
PeerId,
},
rpc::PeerConnInfo,
tunnel::packet_def::ZCPacket,
tunnel::TunnelError,
proto::cli::PeerConnInfo,
tunnel::{packet_def::ZCPacket, TunnelError},
};
use super::{
@@ -66,7 +65,7 @@ impl PeerMap {
}
pub fn has_peer(&self, peer_id: PeerId) -> bool {
self.peer_map.contains_key(&peer_id)
peer_id == self.my_peer_id || self.peer_map.contains_key(&peer_id)
}
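The `has_peer` change above makes the local peer id always count as present, so callers can use one check for both self-addressed and remote packets. A minimal model of the new behavior (struct fields simplified; assumes `PeerId` is a plain integer here):

```rust
use std::collections::HashSet;

// Simplified PeerMap: the local peer id is always considered "present",
// so self-addressed lookups succeed without a map entry.
struct PeerMap {
    my_peer_id: u32,
    peers: HashSet<u32>,
}

impl PeerMap {
    fn has_peer(&self, peer_id: u32) -> bool {
        peer_id == self.my_peer_id || self.peers.contains(&peer_id)
    }
}
```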
pub async fn send_msg_directly(&self, msg: ZCPacket, dst_peer_id: PeerId) -> Result<(), Error> {
@@ -113,16 +112,28 @@ impl PeerMap {
.get_next_hop_with_policy(dst_peer_id, policy.clone())
.await
{
// for foreign network, gateway_peer_id may not connect to me
if self.has_peer(gateway_peer_id) {
return Some(gateway_peer_id);
}
// NOTICE: for foreign network, gateway_peer_id may not connect to me
return Some(gateway_peer_id);
}
}
None
}
pub async fn list_peers_own_foreign_network(
&self,
network_identity: &NetworkIdentity,
) -> Vec<PeerId> {
let mut ret = Vec::new();
for route in self.routes.read().await.iter() {
let peers = route
.list_peers_own_foreign_network(&network_identity)
.await;
ret.extend(peers);
}
ret
}
pub async fn send_msg(
&self,
msg: ZCPacket,
@@ -240,3 +251,13 @@ impl PeerMap {
route_map
}
}
impl Drop for PeerMap {
fn drop(&mut self) {
tracing::debug!(
self.my_peer_id,
network = ?self.global_ctx.get_network_identity(),
"PeerMap is dropped"
);
}
}

File diff suppressed because it is too large


@@ -1,753 +0,0 @@
use std::{
net::Ipv4Addr,
sync::{atomic::AtomicU32, Arc},
time::{Duration, Instant},
};
use async_trait::async_trait;
use dashmap::DashMap;
use tokio::{
sync::{Mutex, RwLock},
task::JoinSet,
};
use tokio_util::bytes::Bytes;
use tracing::Instrument;
use crate::{
common::{error::Error, global_ctx::ArcGlobalCtx, stun::StunInfoCollectorTrait, PeerId},
peers::route_trait::{Route, RouteInterfaceBox},
rpc::{NatType, StunInfo},
tunnel::packet_def::{PacketType, ZCPacket},
};
use super::PeerPacketFilter;
const SEND_ROUTE_PERIOD_SEC: u64 = 60;
const SEND_ROUTE_FAST_REPLY_SEC: u64 = 5;
const ROUTE_EXPIRED_SEC: u64 = 70;
type Version = u32;
#[derive(serde::Deserialize, serde::Serialize, Clone, Debug, PartialEq)]
// Derives can be passed through to the generated type:
pub struct SyncPeerInfo {
// means next hop in route table.
pub peer_id: PeerId,
pub cost: u32,
pub ipv4_addr: Option<Ipv4Addr>,
pub proxy_cidrs: Vec<String>,
pub hostname: Option<String>,
pub udp_stun_info: i8,
}
impl SyncPeerInfo {
pub fn new_self(from_peer: PeerId, global_ctx: &ArcGlobalCtx) -> Self {
SyncPeerInfo {
peer_id: from_peer,
cost: 0,
ipv4_addr: global_ctx.get_ipv4(),
proxy_cidrs: global_ctx
.get_proxy_cidrs()
.iter()
.map(|x| x.to_string())
.chain(global_ctx.get_vpn_portal_cidr().map(|x| x.to_string()))
.collect(),
hostname: Some(global_ctx.get_hostname()),
udp_stun_info: global_ctx
.get_stun_info_collector()
.get_stun_info()
.udp_nat_type as i8,
}
}
pub fn clone_for_route_table(&self, next_hop: PeerId, cost: u32, from: &Self) -> Self {
SyncPeerInfo {
peer_id: next_hop,
cost,
ipv4_addr: from.ipv4_addr.clone(),
proxy_cidrs: from.proxy_cidrs.clone(),
hostname: from.hostname.clone(),
udp_stun_info: from.udp_stun_info,
}
}
}
#[derive(serde::Deserialize, serde::Serialize, Clone, Debug)]
pub struct SyncPeer {
pub myself: SyncPeerInfo,
pub neighbors: Vec<SyncPeerInfo>,
// the route table version of myself
pub version: Version,
// the route table version of peer that we have received last time
pub peer_version: Option<Version>,
// if we do not have latest peer version, need_reply is true
pub need_reply: bool,
}
impl SyncPeer {
pub fn new(
from_peer: PeerId,
_to_peer: PeerId,
neighbors: Vec<SyncPeerInfo>,
global_ctx: ArcGlobalCtx,
version: Version,
peer_version: Option<Version>,
need_reply: bool,
) -> Self {
SyncPeer {
myself: SyncPeerInfo::new_self(from_peer, &global_ctx),
neighbors,
version,
peer_version,
need_reply,
}
}
}
#[derive(Debug)]
struct SyncPeerFromRemote {
packet: SyncPeer,
last_update: std::time::Instant,
}
type SyncPeerFromRemoteMap = Arc<DashMap<PeerId, SyncPeerFromRemote>>;
#[derive(Debug)]
struct RouteTable {
route_info: DashMap<PeerId, SyncPeerInfo>,
ipv4_peer_id_map: DashMap<Ipv4Addr, PeerId>,
cidr_peer_id_map: DashMap<cidr::IpCidr, PeerId>,
}
impl RouteTable {
fn new() -> Self {
RouteTable {
route_info: DashMap::new(),
ipv4_peer_id_map: DashMap::new(),
cidr_peer_id_map: DashMap::new(),
}
}
fn copy_from(&self, other: &Self) {
self.route_info.clear();
for item in other.route_info.iter() {
let (k, v) = item.pair();
self.route_info.insert(*k, v.clone());
}
self.ipv4_peer_id_map.clear();
for item in other.ipv4_peer_id_map.iter() {
let (k, v) = item.pair();
self.ipv4_peer_id_map.insert(*k, *v);
}
self.cidr_peer_id_map.clear();
for item in other.cidr_peer_id_map.iter() {
let (k, v) = item.pair();
self.cidr_peer_id_map.insert(*k, *v);
}
}
}
#[derive(Debug, Clone)]
struct RouteVersion(Arc<AtomicU32>);
impl RouteVersion {
fn new() -> Self {
// RouteVersion(Arc::new(AtomicU32::new(rand::random())))
RouteVersion(Arc::new(AtomicU32::new(0)))
}
fn get(&self) -> Version {
self.0.load(std::sync::atomic::Ordering::Relaxed)
}
fn inc(&self) {
self.0.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
}
}
pub struct BasicRoute {
my_peer_id: PeerId,
global_ctx: ArcGlobalCtx,
interface: Arc<Mutex<Option<RouteInterfaceBox>>>,
route_table: Arc<RouteTable>,
sync_peer_from_remote: SyncPeerFromRemoteMap,
tasks: Mutex<JoinSet<()>>,
need_sync_notifier: Arc<tokio::sync::Notify>,
version: RouteVersion,
myself: Arc<RwLock<SyncPeerInfo>>,
last_send_time_map: Arc<DashMap<PeerId, (Version, Option<Version>, Instant)>>,
}
impl BasicRoute {
pub fn new(my_peer_id: PeerId, global_ctx: ArcGlobalCtx) -> Self {
BasicRoute {
my_peer_id,
global_ctx: global_ctx.clone(),
interface: Arc::new(Mutex::new(None)),
route_table: Arc::new(RouteTable::new()),
sync_peer_from_remote: Arc::new(DashMap::new()),
tasks: Mutex::new(JoinSet::new()),
need_sync_notifier: Arc::new(tokio::sync::Notify::new()),
version: RouteVersion::new(),
myself: Arc::new(RwLock::new(SyncPeerInfo::new_self(
my_peer_id.into(),
&global_ctx,
))),
last_send_time_map: Arc::new(DashMap::new()),
}
}
fn update_route_table(
my_id: PeerId,
sync_peer_reqs: SyncPeerFromRemoteMap,
route_table: Arc<RouteTable>,
) {
tracing::trace!(my_id = ?my_id, route_table = ?route_table, "update route table");
let new_route_table = Arc::new(RouteTable::new());
for item in sync_peer_reqs.iter() {
Self::update_route_table_with_req(my_id, &item.value().packet, new_route_table.clone());
}
route_table.copy_from(&new_route_table);
}
async fn update_myself(
my_peer_id: PeerId,
myself: &Arc<RwLock<SyncPeerInfo>>,
global_ctx: &ArcGlobalCtx,
) -> bool {
let new_myself = SyncPeerInfo::new_self(my_peer_id, &global_ctx);
if *myself.read().await != new_myself {
*myself.write().await = new_myself;
true
} else {
false
}
}
fn update_route_table_with_req(my_id: PeerId, packet: &SyncPeer, route_table: Arc<RouteTable>) {
let peer_id = packet.myself.peer_id.clone();
let update = |cost: u32, peer_info: &SyncPeerInfo| {
let node_id: PeerId = peer_info.peer_id.into();
let ret = route_table
.route_info
.entry(node_id.clone().into())
.and_modify(|info| {
if info.cost > cost {
*info = info.clone_for_route_table(peer_id, cost, &peer_info);
}
})
.or_insert(
peer_info
.clone()
.clone_for_route_table(peer_id, cost, &peer_info),
)
.value()
.clone();
if ret.cost > 6 {
tracing::error!(
"cost too large: {}, may have lost connection, remove it",
ret.cost
);
route_table.route_info.remove(&node_id);
}
tracing::trace!(
"update route info, to: {:?}, gateway: {:?}, cost: {}, peer: {:?}",
node_id,
peer_id,
cost,
&peer_info
);
if let Some(ipv4) = peer_info.ipv4_addr {
route_table
.ipv4_peer_id_map
.insert(ipv4.clone(), node_id.clone().into());
}
for cidr in peer_info.proxy_cidrs.iter() {
let cidr: cidr::IpCidr = cidr.parse().unwrap();
route_table
.cidr_peer_id_map
.insert(cidr, node_id.clone().into());
}
};
for neighbor in packet.neighbors.iter() {
if neighbor.peer_id == my_id {
continue;
}
update(neighbor.cost + 1, &neighbor);
tracing::trace!("route info: {:?}", neighbor);
}
// add the sender peer to route info
update(1, &packet.myself);
tracing::trace!("my_id: {:?}, current route table: {:?}", my_id, route_table);
}
async fn send_sync_peer_request(
interface: &RouteInterfaceBox,
my_peer_id: PeerId,
global_ctx: ArcGlobalCtx,
peer_id: PeerId,
route_table: Arc<RouteTable>,
my_version: Version,
peer_version: Option<Version>,
need_reply: bool,
) -> Result<(), Error> {
let mut route_info_copy: Vec<SyncPeerInfo> = Vec::new();
// copy the route info
for item in route_table.route_info.iter() {
let (k, v) = item.pair();
route_info_copy.push(v.clone().clone_for_route_table(*k, v.cost, &v));
}
let msg = SyncPeer::new(
my_peer_id,
peer_id,
route_info_copy,
global_ctx,
my_version,
peer_version,
need_reply,
);
// TODO: this may exceed the MTU of the tunnel
interface
.send_route_packet(postcard::to_allocvec(&msg).unwrap().into(), 1, peer_id)
.await
}
async fn sync_peer_periodically(&self) {
let route_table = self.route_table.clone();
let global_ctx = self.global_ctx.clone();
let my_peer_id = self.my_peer_id.clone();
let interface = self.interface.clone();
let notifier = self.need_sync_notifier.clone();
let sync_peer_from_remote = self.sync_peer_from_remote.clone();
let myself = self.myself.clone();
let version = self.version.clone();
let last_send_time_map = self.last_send_time_map.clone();
self.tasks.lock().await.spawn(
async move {
loop {
if Self::update_myself(my_peer_id,&myself, &global_ctx).await {
version.inc();
tracing::info!(
my_id = ?my_peer_id,
version = version.get(),
"update route table version when myself changed"
);
}
let lockd_interface = interface.lock().await;
let interface = lockd_interface.as_ref().unwrap();
let last_send_time_map_new = DashMap::new();
let peers = interface.list_peers().await;
for peer in peers.iter() {
let last_send_time = last_send_time_map.get(peer).map(|v| *v).unwrap_or((0, None, Instant::now() - Duration::from_secs(3600)));
let my_version_peer_saved = sync_peer_from_remote.get(peer).and_then(|v| v.packet.peer_version);
let peer_have_latest_version = my_version_peer_saved == Some(version.get());
if peer_have_latest_version && last_send_time.2.elapsed().as_secs() < SEND_ROUTE_PERIOD_SEC {
last_send_time_map_new.insert(*peer, last_send_time);
continue;
}
tracing::trace!(
my_id = ?my_peer_id,
dst_peer_id = ?peer,
version = version.get(),
?my_version_peer_saved,
last_send_version = ?last_send_time.0,
last_send_peer_version = ?last_send_time.1,
last_send_elapse = ?last_send_time.2.elapsed().as_secs(),
"need send route info"
);
let peer_version_we_saved = sync_peer_from_remote.get(&peer).and_then(|v| Some(v.packet.version));
last_send_time_map_new.insert(*peer, (version.get(), peer_version_we_saved, Instant::now()));
let ret = Self::send_sync_peer_request(
interface,
my_peer_id.clone(),
global_ctx.clone(),
*peer,
route_table.clone(),
version.get(),
peer_version_we_saved,
!peer_have_latest_version,
)
.await;
match &ret {
Ok(_) => {
tracing::trace!("send sync peer request to peer: {}", peer);
}
Err(Error::PeerNoConnectionError(_)) => {
tracing::trace!("peer {} no connection", peer);
}
Err(e) => {
tracing::error!(
"send sync peer request to peer: {} error: {:?}",
peer,
e
);
}
};
}
last_send_time_map.clear();
for item in last_send_time_map_new.iter() {
let (k, v) = item.pair();
last_send_time_map.insert(*k, *v);
}
tokio::select! {
_ = notifier.notified() => {
tracing::trace!("sync peer request triggered by notifier");
}
_ = tokio::time::sleep(Duration::from_secs(1)) => {
tracing::trace!("sync peer request triggered by timeout");
}
}
}
}
.instrument(
tracing::info_span!("sync_peer_periodically", my_id = ?self.my_peer_id, global_ctx = ?self.global_ctx),
),
);
}
async fn check_expired_sync_peer_from_remote(&self) {
let route_table = self.route_table.clone();
let my_peer_id = self.my_peer_id.clone();
let sync_peer_from_remote = self.sync_peer_from_remote.clone();
let notifier = self.need_sync_notifier.clone();
let interface = self.interface.clone();
let version = self.version.clone();
self.tasks.lock().await.spawn(async move {
loop {
let mut need_update_route = false;
let now = std::time::Instant::now();
let mut need_remove = Vec::new();
let connected_peers = interface.lock().await.as_ref().unwrap().list_peers().await;
for item in sync_peer_from_remote.iter() {
let (k, v) = item.pair();
if now.duration_since(v.last_update).as_secs() > ROUTE_EXPIRED_SEC
|| !connected_peers.contains(k)
{
need_update_route = true;
need_remove.insert(0, k.clone());
}
}
for k in need_remove.iter() {
tracing::warn!("remove expired sync peer: {:?}", k);
sync_peer_from_remote.remove(k);
}
if need_update_route {
Self::update_route_table(
my_peer_id,
sync_peer_from_remote.clone(),
route_table.clone(),
);
version.inc();
tracing::info!(
my_id = ?my_peer_id,
version = version.get(),
"update route table when check expired peer"
);
notifier.notify_one();
}
tokio::time::sleep(Duration::from_secs(1)).await;
}
});
}
fn get_peer_id_for_proxy(&self, ipv4: &Ipv4Addr) -> Option<PeerId> {
let ipv4 = std::net::IpAddr::V4(*ipv4);
for item in self.route_table.cidr_peer_id_map.iter() {
let (k, v) = item.pair();
if k.contains(&ipv4) {
return Some(*v);
}
}
None
}
#[tracing::instrument(skip(self, packet), fields(my_id = ?self.my_peer_id, ctx = ?self.global_ctx))]
async fn handle_route_packet(&self, src_peer_id: PeerId, packet: Bytes) {
let packet = postcard::from_bytes::<SyncPeer>(&packet).unwrap();
let p = &packet;
let mut updated = true;
assert_eq!(packet.myself.peer_id, src_peer_id);
self.sync_peer_from_remote
.entry(packet.myself.peer_id.into())
.and_modify(|v| {
if v.packet.myself == p.myself && v.packet.neighbors == p.neighbors {
updated = false;
} else {
v.packet = p.clone();
}
v.packet.version = p.version;
v.packet.peer_version = p.peer_version;
v.last_update = std::time::Instant::now();
})
.or_insert(SyncPeerFromRemote {
packet: p.clone(),
last_update: std::time::Instant::now(),
});
if updated {
Self::update_route_table(
self.my_peer_id.clone(),
self.sync_peer_from_remote.clone(),
self.route_table.clone(),
);
self.version.inc();
tracing::info!(
my_id = ?self.my_peer_id,
?p,
version = self.version.get(),
"update route table when receive route packet"
);
}
if packet.need_reply {
self.last_send_time_map
.entry(packet.myself.peer_id.into())
.and_modify(|v| {
const FAST_REPLY_DURATION: u64 =
SEND_ROUTE_PERIOD_SEC - SEND_ROUTE_FAST_REPLY_SEC;
if v.0 != self.version.get() || v.1 != Some(p.version) {
v.2 = Instant::now() - Duration::from_secs(3600);
} else if v.2.elapsed().as_secs() < FAST_REPLY_DURATION {
// do not send same version route info too frequently
v.2 = Instant::now() - Duration::from_secs(FAST_REPLY_DURATION);
}
});
}
if updated || packet.need_reply {
self.need_sync_notifier.notify_one();
}
}
}
#[async_trait]
impl Route for BasicRoute {
async fn open(&self, interface: RouteInterfaceBox) -> Result<u8, ()> {
*self.interface.lock().await = Some(interface);
self.sync_peer_periodically().await;
self.check_expired_sync_peer_from_remote().await;
Ok(1)
}
async fn close(&self) {}
async fn get_next_hop(&self, dst_peer_id: PeerId) -> Option<PeerId> {
match self.route_table.route_info.get(&dst_peer_id) {
Some(info) => {
return Some(info.peer_id.clone().into());
}
None => {
tracing::error!("no route info for dst_peer_id: {}", dst_peer_id);
return None;
}
}
}
async fn list_routes(&self) -> Vec<crate::rpc::Route> {
let mut routes = Vec::new();
let parse_route_info = |real_peer_id: PeerId, route_info: &SyncPeerInfo| {
let mut route = crate::rpc::Route::default();
route.ipv4_addr = if let Some(ipv4_addr) = route_info.ipv4_addr {
ipv4_addr.to_string()
} else {
"".to_string()
};
route.peer_id = real_peer_id;
route.next_hop_peer_id = route_info.peer_id;
route.cost = route_info.cost as i32;
route.proxy_cidrs = route_info.proxy_cidrs.clone();
route.hostname = route_info.hostname.clone().unwrap_or_default();
let mut stun_info = StunInfo::default();
if let Ok(udp_nat_type) = NatType::try_from(route_info.udp_stun_info as i32) {
stun_info.set_udp_nat_type(udp_nat_type);
}
route.stun_info = Some(stun_info);
route
};
self.route_table.route_info.iter().for_each(|item| {
routes.push(parse_route_info(*item.key(), item.value()));
});
routes
}
async fn get_peer_id_by_ipv4(&self, ipv4_addr: &Ipv4Addr) -> Option<PeerId> {
if let Some(peer_id) = self.route_table.ipv4_peer_id_map.get(ipv4_addr) {
return Some(*peer_id);
}
if let Some(peer_id) = self.get_peer_id_for_proxy(ipv4_addr) {
return Some(peer_id);
}
tracing::info!("no peer id for ipv4: {}", ipv4_addr);
return None;
}
}
#[async_trait::async_trait]
impl PeerPacketFilter for BasicRoute {
async fn try_process_packet_from_peer(&self, packet: ZCPacket) -> Option<ZCPacket> {
let hdr = packet.peer_manager_header().unwrap();
if hdr.packet_type == PacketType::Route as u8 {
let b = packet.payload().to_vec();
self.handle_route_packet(hdr.from_peer_id.get(), b.into())
.await;
None
} else {
Some(packet)
}
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use crate::{
common::{global_ctx::tests::get_mock_global_ctx, PeerId},
connector::udp_hole_punch::tests::replace_stun_info_collector,
peers::{
peer_manager::{PeerManager, RouteAlgoType},
peer_rip_route::Version,
tests::{connect_peer_manager, wait_route_appear},
},
rpc::NatType,
};
async fn create_mock_pmgr() -> Arc<PeerManager> {
let (s, _r) = tokio::sync::mpsc::channel(1000);
let peer_mgr = Arc::new(PeerManager::new(
RouteAlgoType::Rip,
get_mock_global_ctx(),
s,
));
replace_stun_info_collector(peer_mgr.clone(), NatType::Unknown);
peer_mgr.run().await.unwrap();
peer_mgr
}
#[tokio::test]
async fn test_rip_route() {
let peer_mgr_a = create_mock_pmgr().await;
let peer_mgr_b = create_mock_pmgr().await;
let peer_mgr_c = create_mock_pmgr().await;
connect_peer_manager(peer_mgr_a.clone(), peer_mgr_b.clone()).await;
connect_peer_manager(peer_mgr_b.clone(), peer_mgr_c.clone()).await;
wait_route_appear(peer_mgr_a.clone(), peer_mgr_b.clone())
.await
.unwrap();
wait_route_appear(peer_mgr_a.clone(), peer_mgr_c.clone())
.await
.unwrap();
let mgrs = vec![peer_mgr_a.clone(), peer_mgr_b.clone(), peer_mgr_c.clone()];
tokio::time::sleep(tokio::time::Duration::from_secs(4)).await;
let check_version = |version: Version, peer_id: PeerId, mgrs: &Vec<Arc<PeerManager>>| {
for mgr in mgrs.iter() {
tracing::warn!(
"check version: {:?}, {:?}, {:?}, {:?}",
version,
peer_id,
mgr,
mgr.get_basic_route().sync_peer_from_remote
);
assert_eq!(
version,
mgr.get_basic_route()
.sync_peer_from_remote
.get(&peer_id)
.unwrap()
.packet
.version,
);
assert_eq!(
mgr.get_basic_route()
.sync_peer_from_remote
.get(&peer_id)
.unwrap()
.packet
.peer_version
.unwrap(),
mgr.get_basic_route().version.get()
);
}
};
let check_sanity = || {
// check that the peer versions held by the other peer mgrs are correct.
check_version(
peer_mgr_b.get_basic_route().version.get(),
peer_mgr_b.my_peer_id(),
&vec![peer_mgr_a.clone(), peer_mgr_c.clone()],
);
check_version(
peer_mgr_a.get_basic_route().version.get(),
peer_mgr_a.my_peer_id(),
&vec![peer_mgr_b.clone()],
);
check_version(
peer_mgr_c.get_basic_route().version.get(),
peer_mgr_c.my_peer_id(),
&vec![peer_mgr_b.clone()],
);
};
check_sanity();
let versions = mgrs
.iter()
.map(|x| x.get_basic_route().version.get())
.collect::<Vec<_>>();
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
let versions2 = mgrs
.iter()
.map(|x| x.get_basic_route().version.get())
.collect::<Vec<_>>();
assert_eq!(versions, versions2);
check_sanity();
assert!(peer_mgr_a.get_basic_route().version.get() <= 3);
assert!(peer_mgr_b.get_basic_route().version.get() <= 6);
assert!(peer_mgr_c.get_basic_route().version.get() <= 3);
}
}


@@ -1,27 +1,11 @@
use std::{
sync::{
atomic::{AtomicBool, AtomicU32, Ordering},
Arc,
},
time::Instant,
};
use std::sync::{Arc, Mutex};
use crossbeam::atomic::AtomicCell;
use dashmap::DashMap;
use futures::{SinkExt, StreamExt};
use prost::Message;
use tarpc::{server::Channel, transport::channel::UnboundedChannel};
use tokio::{
sync::mpsc::{self, UnboundedSender},
task::JoinSet,
};
use tracing::Instrument;
use futures::StreamExt;
use tokio::task::JoinSet;
use crate::{
common::{error::Error, PeerId},
rpc::TaRpcPacket,
proto::rpc_impl,
tunnel::packet_def::{PacketType, ZCPacket},
};
@@ -38,33 +22,13 @@ pub trait PeerRpcManagerTransport: Send + Sync + 'static {
async fn recv(&self) -> Result<ZCPacket, Error>;
}
type PacketSender = UnboundedSender<ZCPacket>;
struct PeerRpcEndPoint {
peer_id: PeerId,
packet_sender: PacketSender,
create_time: AtomicCell<Instant>,
finished: Arc<AtomicBool>,
tasks: JoinSet<()>,
}
type PeerRpcEndPointCreator =
Box<dyn Fn(PeerId, PeerRpcTransactId) -> PeerRpcEndPoint + Send + Sync + 'static>;
#[derive(Hash, Eq, PartialEq, Clone)]
struct PeerRpcClientCtxKey(PeerId, PeerRpcServiceId, PeerRpcTransactId);
// handle rpc request from one peer
pub struct PeerRpcManager {
service_map: Arc<DashMap<PeerRpcServiceId, PacketSender>>,
tasks: JoinSet<()>,
tspt: Arc<Box<dyn PeerRpcManagerTransport>>,
rpc_client: rpc_impl::client::Client,
rpc_server: rpc_impl::server::Server,
service_registry: Arc<DashMap<PeerRpcServiceId, PeerRpcEndPointCreator>>,
peer_rpc_endpoints: Arc<DashMap<PeerRpcClientCtxKey, PeerRpcEndPoint>>,
client_resp_receivers: Arc<DashMap<PeerRpcClientCtxKey, PacketSender>>,
transact_id: AtomicU32,
tasks: Arc<Mutex<JoinSet<()>>>,
}
impl std::fmt::Debug for PeerRpcManager {
@@ -75,470 +39,82 @@ impl std::fmt::Debug for PeerRpcManager {
}
}
struct PacketMerger {
first_piece: Option<TaRpcPacket>,
pieces: Vec<TaRpcPacket>,
}
impl PacketMerger {
fn new() -> Self {
Self {
first_piece: None,
pieces: Vec::new(),
}
}
fn try_merge_pieces(&self) -> Option<TaRpcPacket> {
if self.first_piece.is_none() || self.pieces.is_empty() {
return None;
}
for p in &self.pieces {
// some piece is missing
if p.total_pieces == 0 {
return None;
}
}
// all pieces are received
let mut content = Vec::new();
for p in &self.pieces {
content.extend_from_slice(&p.content);
}
let mut tmpl_packet = self.first_piece.as_ref().unwrap().clone();
tmpl_packet.total_pieces = 1;
tmpl_packet.piece_idx = 0;
tmpl_packet.content = content;
Some(tmpl_packet)
}
fn feed(
&mut self,
packet: ZCPacket,
expected_tid: Option<PeerRpcTransactId>,
) -> Result<Option<TaRpcPacket>, Error> {
let payload = packet.payload();
let rpc_packet =
TaRpcPacket::decode(payload).map_err(|e| Error::MessageDecodeError(e.to_string()))?;
if expected_tid.is_some() && rpc_packet.transact_id != expected_tid.unwrap() {
return Ok(None);
}
let total_pieces = rpc_packet.total_pieces;
let piece_idx = rpc_packet.piece_idx;
// for compatibility with old version
if total_pieces == 0 && piece_idx == 0 {
return Ok(Some(rpc_packet));
}
if total_pieces > 100 || total_pieces == 0 {
return Err(Error::MessageDecodeError(format!(
"total_pieces is invalid: {}",
total_pieces
)));
}
if piece_idx >= total_pieces {
return Err(Error::MessageDecodeError(
"piece_idx >= total_pieces".to_owned(),
));
}
if self.first_piece.is_none()
|| self.first_piece.as_ref().unwrap().transact_id != rpc_packet.transact_id
|| self.first_piece.as_ref().unwrap().from_peer != rpc_packet.from_peer
{
self.first_piece = Some(rpc_packet.clone());
self.pieces.clear();
}
self.pieces
.resize(total_pieces as usize, Default::default());
self.pieces[piece_idx as usize] = rpc_packet;
Ok(self.try_merge_pieces())
}
}
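`PacketMerger` above reassembles an RPC payload that `build_rpc_packet` split into `total_pieces` MTU-sized chunks, emitting the merged content only once every `piece_idx` slot is filled. The split/merge round trip can be sketched with plain vectors (function names here are illustrative, not the crate's API):

```rust
// Split a payload into fixed-size pieces, pairing each with its index,
// mirroring build_rpc_packet's ceil-division chunking.
fn split(content: &[u8], mtu: usize) -> Vec<(u32, Vec<u8>)> {
    content
        .chunks(mtu)
        .enumerate()
        .map(|(i, c)| (i as u32, c.to_vec()))
        .collect()
}

// Reassemble once all pieces are present, mirroring try_merge_pieces:
// an incomplete set yields None.
fn merge(total: u32, pieces: &[(u32, Vec<u8>)]) -> Option<Vec<u8>> {
    if pieces.len() != total as usize {
        return None; // some piece is still missing
    }
    let mut sorted = pieces.to_vec();
    sorted.sort_by_key(|(idx, _)| *idx);
    Some(sorted.into_iter().flat_map(|(_, c)| c).collect())
}

fn main() {
    let payload: Vec<u8> = (0u8..=255).cycle().take(1000).collect();
    let pieces = split(&payload, 300); // ceil(1000 / 300) = 4 pieces
    assert_eq!(pieces.len(), 4);
    assert_eq!(merge(4, &pieces).unwrap(), payload);
    // with a piece missing, nothing is emitted yet
    assert!(merge(4, &pieces[..3]).is_none());
    println!("round-tripped {} bytes", payload.len());
}
```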
impl PeerRpcManager {
pub fn new(tspt: impl PeerRpcManagerTransport) -> Self {
Self {
service_map: Arc::new(DashMap::new()),
tasks: JoinSet::new(),
tspt: Arc::new(Box::new(tspt)),
rpc_client: rpc_impl::client::Client::new(),
rpc_server: rpc_impl::server::Server::new(),
service_registry: Arc::new(DashMap::new()),
peer_rpc_endpoints: Arc::new(DashMap::new()),
client_resp_receivers: Arc::new(DashMap::new()),
transact_id: AtomicU32::new(0),
tasks: Arc::new(Mutex::new(JoinSet::new())),
}
}
pub fn run_service<S, Req>(self: &Self, service_id: PeerRpcServiceId, s: S) -> ()
where
S: tarpc::server::Serve<Req> + Clone + Send + Sync + 'static,
Req: Send + 'static + serde::Serialize + for<'a> serde::Deserialize<'a>,
S::Resp:
Send + std::fmt::Debug + 'static + serde::Serialize + for<'a> serde::Deserialize<'a>,
S::Fut: Send + 'static,
{
let tspt = self.tspt.clone();
let creator = Box::new(move |peer_id: PeerId, transact_id: PeerRpcTransactId| {
let mut tasks = JoinSet::new();
let (packet_sender, mut packet_receiver) = mpsc::unbounded_channel();
let (mut client_transport, server_transport) = tarpc::transport::channel::unbounded();
let server = tarpc::server::BaseChannel::with_defaults(server_transport);
let finished = Arc::new(AtomicBool::new(false));
let my_peer_id_clone = tspt.my_peer_id();
let peer_id_clone = peer_id.clone();
let o = server.execute(s.clone());
tasks.spawn(o);
let tspt = tspt.clone();
let finished_clone = finished.clone();
tasks.spawn(async move {
let mut packet_merger = PacketMerger::new();
loop {
tokio::select! {
Some(resp) = client_transport.next() => {
tracing::debug!(resp = ?resp, ?transact_id, ?peer_id, "server recv packet from service provider");
if resp.is_err() {
tracing::warn!(err = ?resp.err(),
"[PEER RPC MGR] client_transport in server side got channel error, ignore it.");
continue;
}
let resp = resp.unwrap();
let serialized_resp = postcard::to_allocvec(&resp);
if serialized_resp.is_err() {
tracing::error!(error = ?serialized_resp.err(), "serialize resp failed");
continue;
}
let msgs = Self::build_rpc_packet(
tspt.my_peer_id(),
peer_id,
service_id,
transact_id,
false,
serialized_resp.as_ref().unwrap(),
);
for msg in msgs {
if let Err(e) = tspt.send(msg, peer_id).await {
tracing::error!(error = ?e, peer_id = ?peer_id, service_id = ?service_id, "send resp to peer failed");
break;
}
}
finished_clone.store(true, Ordering::Relaxed);
}
Some(packet) = packet_receiver.recv() => {
tracing::trace!("recv packet from peer, packet: {:?}", packet);
let info = match packet_merger.feed(packet, None) {
Err(e) => {
tracing::error!(error = ?e, "feed packet to merger failed");
continue;
},
Ok(None) => {
continue;
},
Ok(Some(info)) => {
info
}
};
assert_eq!(info.service_id, service_id);
assert_eq!(info.from_peer, peer_id);
assert_eq!(info.transact_id, transact_id);
let decoded_ret = postcard::from_bytes(&info.content.as_slice());
if let Err(e) = decoded_ret {
tracing::error!(error = ?e, "decode rpc packet failed");
continue;
}
let decoded: tarpc::ClientMessage<Req> = decoded_ret.unwrap();
if let Err(e) = client_transport.send(decoded).await {
tracing::error!(error = ?e, "send req to client transport failed");
}
}
else => {
tracing::warn!("[PEER RPC MGR] service runner destroy, peer_id: {}, service_id: {}", peer_id, service_id);
}
}
}
}.instrument(tracing::info_span!("service_runner", my_id = ?my_peer_id_clone, peer_id = ?peer_id_clone, service_id = ?service_id)));
tracing::info!(
"[PEER RPC MGR] create new service endpoint for peer {}, service {}",
peer_id,
service_id
);
return PeerRpcEndPoint {
peer_id,
packet_sender,
create_time: AtomicCell::new(Instant::now()),
finished,
tasks,
};
// let resp = client_transport.next().await;
});
if let Some(_) = self.service_registry.insert(service_id, creator) {
panic!(
"[PEER RPC MGR] service {} is already registered",
service_id
);
}
tracing::info!(
"[PEER RPC MGR] register service {} succeed, my_node_id {}",
service_id,
self.tspt.my_peer_id()
)
}
fn parse_rpc_packet(packet: &ZCPacket) -> Result<TaRpcPacket, Error> {
let payload = packet.payload();
TaRpcPacket::decode(payload).map_err(|e| Error::MessageDecodeError(e.to_string()))
}
fn build_rpc_packet(
from_peer: PeerId,
to_peer: PeerId,
service_id: PeerRpcServiceId,
transact_id: PeerRpcTransactId,
is_req: bool,
content: &Vec<u8>,
) -> Vec<ZCPacket> {
let mut ret = Vec::new();
let content_mtu = RPC_PACKET_CONTENT_MTU;
let total_pieces = (content.len() + content_mtu - 1) / content_mtu;
let mut cur_offset = 0;
while cur_offset < content.len() {
let mut cur_len = content_mtu;
if cur_offset + cur_len > content.len() {
cur_len = content.len() - cur_offset;
}
let mut cur_content = Vec::new();
cur_content.extend_from_slice(&content[cur_offset..cur_offset + cur_len]);
let cur_packet = TaRpcPacket {
from_peer,
to_peer,
service_id,
transact_id,
is_req,
total_pieces: total_pieces as u32,
piece_idx: (cur_offset / content_mtu) as u32,
content: cur_content,
};
cur_offset += cur_len;
let mut buf = Vec::new();
cur_packet.encode(&mut buf).unwrap();
let mut zc_packet = ZCPacket::new_with_payload(&buf);
zc_packet.fill_peer_manager_hdr(from_peer, to_peer, PacketType::TaRpc as u8);
ret.push(zc_packet);
}
ret
}
pub fn run(&self) {
self.rpc_client.run();
self.rpc_server.run();
let (server_tx, mut server_rx) = (
self.rpc_server.get_transport_sink(),
self.rpc_server.get_transport_stream(),
);
let (client_tx, mut client_rx) = (
self.rpc_client.get_transport_sink(),
self.rpc_client.get_transport_stream(),
);
let tspt = self.tspt.clone();
let service_registry = self.service_registry.clone();
let peer_rpc_endpoints = self.peer_rpc_endpoints.clone();
let client_resp_receivers = self.client_resp_receivers.clone();
tokio::spawn(async move {
self.tasks.lock().unwrap().spawn(async move {
loop {
let packet = tokio::select! {
Some(Ok(packet)) = server_rx.next() => {
tracing::trace!(?packet, "recv rpc packet from server");
packet
}
Some(Ok(packet)) = client_rx.next() => {
tracing::trace!(?packet, "recv rpc packet from client");
packet
}
else => {
tracing::warn!("rpc transport read aborted, exiting");
break;
}
};
let dst_peer_id = packet.peer_manager_header().unwrap().to_peer_id.into();
if let Err(e) = tspt.send(packet, dst_peer_id).await {
tracing::error!(error = ?e, dst_peer_id = ?dst_peer_id, "send to peer failed");
}
}
});
let tspt = self.tspt.clone();
self.tasks.lock().unwrap().spawn(async move {
loop {
let Ok(o) = tspt.recv().await else {
tracing::warn!("peer rpc transport read aborted, exiting");
break;
};
let info = Self::parse_rpc_packet(&o).unwrap();
tracing::debug!(?info, "recv rpc packet from peer");
if info.is_req {
if !service_registry.contains_key(&info.service_id) {
tracing::warn!(
"service {} not found, my_node_id: {}",
info.service_id,
tspt.my_peer_id()
);
continue;
}
let endpoint = peer_rpc_endpoints
.entry(PeerRpcClientCtxKey(
info.from_peer,
info.service_id,
info.transact_id,
))
.or_insert_with(|| {
service_registry.get(&info.service_id).unwrap()(
info.from_peer,
info.transact_id,
)
});
endpoint.packet_sender.send(o).unwrap();
} else {
if let Some(a) = client_resp_receivers.get(&PeerRpcClientCtxKey(
info.from_peer,
info.service_id,
info.transact_id,
)) {
tracing::trace!("recv resp: {:?}", info);
if let Err(e) = a.send(o) {
tracing::error!(error = ?e, "send resp to client failed");
}
} else {
tracing::warn!("client resp receiver not found, info: {:?}", info);
}
if o.peer_manager_header().unwrap().packet_type == PacketType::RpcReq as u8 {
server_tx.send(o).await.unwrap();
continue;
} else if o.peer_manager_header().unwrap().packet_type == PacketType::RpcResp as u8
{
client_tx.send(o).await.unwrap();
continue;
}
}
});
let peer_rpc_endpoints = self.peer_rpc_endpoints.clone();
tokio::spawn(async move {
loop {
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
peer_rpc_endpoints.retain(|_, v| {
v.create_time.load().elapsed().as_secs() < 30
&& !v.finished.load(Ordering::Relaxed)
});
}
});
}
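The periodic task above garbage-collects `peer_rpc_endpoints` with `retain`, keeping an endpoint only while it is both recent (under 30 seconds old) and unfinished. The same retain rule with a plain `HashMap` (field and function names are illustrative):

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct Endpoint {
    create_time: Instant,
    finished: bool,
}

// Drop endpoints that have finished or outlived `max_age`,
// mirroring the periodic retain in PeerRpcManager::run.
fn gc(endpoints: &mut HashMap<u32, Endpoint>, max_age: Duration) {
    endpoints.retain(|_, v| v.create_time.elapsed() < max_age && !v.finished);
}

fn main() {
    let mut eps = HashMap::new();
    eps.insert(1, Endpoint { create_time: Instant::now(), finished: false });
    eps.insert(2, Endpoint { create_time: Instant::now(), finished: true });
    gc(&mut eps, Duration::from_secs(30));
    assert!(eps.contains_key(&1)); // fresh and unfinished: kept
    assert!(!eps.contains_key(&2)); // finished: dropped
    println!("{} endpoint(s) remain", eps.len());
}
```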
#[tracing::instrument(skip(f))]
pub async fn do_client_rpc_scoped<Resp, Req, RpcRet, Fut>(
&self,
service_id: PeerRpcServiceId,
dst_peer_id: PeerId,
f: impl FnOnce(UnboundedChannel<Resp, Req>) -> Fut,
) -> RpcRet
where
Resp: serde::Serialize
+ for<'a> serde::Deserialize<'a>
+ Send
+ Sync
+ std::fmt::Debug
+ 'static,
Req: serde::Serialize
+ for<'a> serde::Deserialize<'a>
+ Send
+ Sync
+ std::fmt::Debug
+ 'static,
Fut: std::future::Future<Output = RpcRet>,
{
let mut tasks = JoinSet::new();
let (packet_sender, mut packet_receiver) = mpsc::unbounded_channel();
pub fn rpc_client(&self) -> &rpc_impl::client::Client {
&self.rpc_client
}
let (client_transport, server_transport) =
tarpc::transport::channel::unbounded::<Resp, Req>();
let (mut server_s, mut server_r) = server_transport.split();
let transact_id = self
.transact_id
.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
let tspt = self.tspt.clone();
tasks.spawn(async move {
while let Some(a) = server_r.next().await {
if a.is_err() {
tracing::error!(error = ?a.err(), "channel error");
continue;
}
let req = postcard::to_allocvec(&a.unwrap());
if req.is_err() {
tracing::error!(error = ?req.err(), "postcard serialize failed");
continue;
}
let packets = Self::build_rpc_packet(
tspt.my_peer_id(),
dst_peer_id,
service_id,
transact_id,
true,
req.as_ref().unwrap(),
);
tracing::debug!(?packets, ?req, ?transact_id, "client send rpc packet to peer");
for packet in packets {
if let Err(e) = tspt.send(packet, dst_peer_id).await {
tracing::error!(error = ?e, dst_peer_id = ?dst_peer_id, "send to peer failed");
break;
}
}
}
tracing::warn!("[PEER RPC MGR] server transport read aborted");
});
tasks.spawn(async move {
let mut packet_merger = PacketMerger::new();
while let Some(packet) = packet_receiver.recv().await {
tracing::trace!("tunnel recv: {:?}", packet);
let info = match packet_merger.feed(packet, Some(transact_id)) {
Err(e) => {
tracing::error!(error = ?e, "feed packet to merger failed");
continue;
}
Ok(None) => {
continue;
}
Ok(Some(info)) => info,
};
let decoded = postcard::from_bytes(&info.content.as_slice());
tracing::debug!(?info, ?decoded, "client recv rpc packet from peer");
assert_eq!(info.transact_id, transact_id);
if let Err(e) = decoded {
tracing::error!(error = ?e, "decode rpc packet failed");
continue;
}
if let Err(e) = server_s.send(decoded.unwrap()).await {
tracing::error!(error = ?e, "send to rpc server channel failed");
}
}
tracing::warn!("[PEER RPC MGR] server packet read aborted");
});
let key = PeerRpcClientCtxKey(dst_peer_id, service_id, transact_id);
let _insert_ret = self
.client_resp_receivers
.insert(key.clone(), packet_sender);
let ret = f(client_transport).await;
self.client_resp_receivers.remove(&key);
ret
pub fn rpc_server(&self) -> &rpc_impl::server::Server {
&self.rpc_server
}
pub fn my_peer_id(&self) -> PeerId {
@@ -546,9 +122,15 @@ impl PeerRpcManager {
}
}
impl Drop for PeerRpcManager {
fn drop(&mut self) {
tracing::debug!("PeerRpcManager drop, my_peer_id: {:?}", self.my_peer_id());
}
}
#[cfg(test)]
pub mod tests {
use std::{pin::Pin, sync::Arc, time::Duration};
use std::{pin::Pin, sync::Arc};
use futures::{SinkExt, StreamExt};
use tokio::sync::Mutex;
@@ -559,31 +141,18 @@ pub mod tests {
peer_rpc::PeerRpcManager,
tests::{connect_peer_manager, create_mock_peer_manager, wait_route_appear},
},
proto::{
rpc_impl::RpcController,
tests::{GreetingClientFactory, GreetingServer, GreetingService, SayHelloRequest},
},
tunnel::{
common::tests::wait_for_condition, packet_def::ZCPacket, ring::create_ring_tunnel_pair,
Tunnel, ZCPacketSink, ZCPacketStream,
packet_def::ZCPacket, ring::create_ring_tunnel_pair, Tunnel, ZCPacketSink,
ZCPacketStream,
},
};
use super::PeerRpcManagerTransport;
#[tarpc::service]
pub trait TestRpcService {
async fn hello(s: String) -> String;
}
#[derive(Clone)]
pub struct MockService {
pub prefix: String,
}
#[tarpc::server]
impl TestRpcService for MockService {
async fn hello(self, _: tarpc::context::Context, s: String) -> String {
format!("{} {}", self.prefix, s)
}
}
fn random_string(len: usize) -> String {
use rand::distributions::Alphanumeric;
use rand::Rng;
@@ -595,6 +164,16 @@ pub mod tests {
String::from_utf8(s).unwrap()
}
pub fn register_service(rpc_mgr: &PeerRpcManager, domain: &str, delay_ms: u64, prefix: &str) {
rpc_mgr.rpc_server().registry().register(
GreetingServer::new(GreetingService {
delay_ms,
prefix: prefix.to_string(),
}),
domain,
);
}
#[tokio::test]
async fn peer_rpc_basic_test() {
struct MockTransport {
@@ -630,10 +209,7 @@ pub mod tests {
my_peer_id: new_peer_id(),
});
server_rpc_mgr.run();
let s = MockService {
prefix: "hello".to_owned(),
};
server_rpc_mgr.run_service(1, s.serve());
register_service(&server_rpc_mgr, "test", 0, "Hello");
let client_rpc_mgr = PeerRpcManager::new(MockTransport {
sink: Arc::new(Mutex::new(stsr)),
@@ -642,35 +218,27 @@ pub mod tests {
});
client_rpc_mgr.run();
let stub = client_rpc_mgr
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(1, 1, "test".to_string());
let msg = random_string(8192);
let ret = client_rpc_mgr
.do_client_rpc_scoped(1, server_rpc_mgr.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
let ret = stub
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
println!("ret: {:?}", ret);
assert_eq!(ret.unwrap(), format!("hello {}", msg));
assert_eq!(ret.greeting, format!("Hello {}!", msg));
let msg = random_string(10);
let ret = client_rpc_mgr
.do_client_rpc_scoped(1, server_rpc_mgr.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
let ret = stub
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
println!("ret: {:?}", ret);
assert_eq!(ret.unwrap(), format!("hello {}", msg));
wait_for_condition(
|| async { server_rpc_mgr.peer_rpc_endpoints.is_empty() },
Duration::from_secs(10),
)
.await;
assert_eq!(ret.greeting, format!("Hello {}!", msg));
}
#[tokio::test]
@@ -680,6 +248,7 @@ pub mod tests {
let peer_mgr_c = create_mock_peer_manager().await;
connect_peer_manager(peer_mgr_a.clone(), peer_mgr_b.clone()).await;
connect_peer_manager(peer_mgr_b.clone(), peer_mgr_c.clone()).await;
wait_route_appear(peer_mgr_a.clone(), peer_mgr_b.clone())
.await
.unwrap();
@@ -699,51 +268,42 @@ pub mod tests {
peer_mgr_b.my_peer_id()
);
let s = MockService {
prefix: "hello".to_owned(),
};
peer_mgr_b.get_peer_rpc_mgr().run_service(1, s.serve());
register_service(&peer_mgr_b.get_peer_rpc_mgr(), "test", 0, "Hello");
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
let stub = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
println!("ip_list: {:?}", ip_list);
assert_eq!(ip_list.unwrap(), format!("hello {}", msg));
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id(),
peer_mgr_b.my_peer_id(),
"test".to_string(),
);
let ret = stub
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
// call again
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
println!("ip_list: {:?}", ip_list);
assert_eq!(ip_list.unwrap(), format!("hello {}", msg));
let ret = stub
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_c
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
println!("ip_list: {:?}", ip_list);
assert_eq!(ip_list.unwrap(), format!("hello {}", msg));
let ret = stub
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
}
#[tokio::test]
async fn test_multi_service_with_peer_manager() {
async fn test_multi_domain_with_peer_manager() {
let peer_mgr_a = create_mock_peer_manager().await;
let peer_mgr_b = create_mock_peer_manager().await;
connect_peer_manager(peer_mgr_a.clone(), peer_mgr_b.clone()).await;
@@ -757,42 +317,37 @@ pub mod tests {
peer_mgr_b.my_peer_id()
);
let s = MockService {
prefix: "hello_a".to_owned(),
};
peer_mgr_b.get_peer_rpc_mgr().run_service(1, s.serve());
let b = MockService {
prefix: "hello_b".to_owned(),
};
peer_mgr_b.get_peer_rpc_mgr().run_service(2, b.serve());
register_service(&peer_mgr_b.get_peer_rpc_mgr(), "test1", 0, "Hello");
register_service(&peer_mgr_b.get_peer_rpc_mgr(), "test2", 20000, "Hello2");
let stub1 = peer_mgr_a
.get_peer_rpc_mgr()
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id(),
peer_mgr_b.my_peer_id(),
"test1".to_string(),
);
let stub2 = peer_mgr_a
.get_peer_rpc_mgr()
.rpc_client()
.scoped_client::<GreetingClientFactory<RpcController>>(
peer_mgr_a.my_peer_id(),
peer_mgr_b.my_peer_id(),
"test2".to_string(),
);
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(1, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
let ret = stub1
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await
.unwrap();
assert_eq!(ret.greeting, format!("Hello {}!", msg));
let ret = stub2
.say_hello(RpcController {}, SayHelloRequest { name: msg.clone() })
.await;
assert_eq!(ip_list.unwrap(), format!("hello_a {}", msg));
let msg = random_string(16 * 1024);
let ip_list = peer_mgr_a
.get_peer_rpc_mgr()
.do_client_rpc_scoped(2, peer_mgr_b.my_peer_id(), |c| async {
let c = TestRpcServiceClient::new(tarpc::client::Config::default(), c).spawn();
let ret = c.hello(tarpc::context::current(), msg.clone()).await;
ret
})
.await;
assert_eq!(ip_list.unwrap(), format!("hello_b {}", msg));
wait_for_condition(
|| async { peer_mgr_b.get_peer_rpc_mgr().peer_rpc_endpoints.is_empty() },
Duration::from_secs(10),
)
.await;
assert!(ret.is_err() && ret.unwrap_err().to_string().contains("Timeout"));
}
}


@@ -0,0 +1,39 @@
use crate::{
common::global_ctx::ArcGlobalCtx,
proto::{
peer_rpc::{DirectConnectorRpc, GetIpListRequest, GetIpListResponse},
rpc_types::{self, controller::BaseController},
},
};
#[derive(Clone)]
pub struct DirectConnectorManagerRpcServer {
// TODO: this only caches for one src peer; should make it global
global_ctx: ArcGlobalCtx,
}
#[async_trait::async_trait]
impl DirectConnectorRpc for DirectConnectorManagerRpcServer {
type Controller = BaseController;
async fn get_ip_list(
&self,
_: BaseController,
_: GetIpListRequest,
) -> rpc_types::error::Result<GetIpListResponse> {
let mut ret = self.global_ctx.get_ip_collector().collect_ip_addrs().await;
ret.listeners = self
.global_ctx
.get_running_listeners()
.into_iter()
.map(Into::into)
.collect();
Ok(ret)
}
}
impl DirectConnectorManagerRpcServer {
pub fn new(global_ctx: ArcGlobalCtx) -> Self {
Self { global_ctx }
}
}


@@ -1,9 +1,13 @@
use std::{net::Ipv4Addr, sync::Arc};
use async_trait::async_trait;
use tokio_util::bytes::Bytes;
use dashmap::DashMap;
use crate::common::{error::Error, PeerId};
use crate::{
common::{global_ctx::NetworkIdentity, PeerId},
proto::peer_rpc::{
ForeignNetworkRouteInfoEntry, ForeignNetworkRouteInfoKey, RouteForeignNetworkInfos,
},
};
#[derive(Clone, Debug)]
pub enum NextHopPolicy {
@@ -17,16 +21,16 @@ impl Default for NextHopPolicy {
}
}
#[async_trait]
pub type ForeignNetworkRouteInfoMap =
DashMap<ForeignNetworkRouteInfoKey, ForeignNetworkRouteInfoEntry>;
#[async_trait::async_trait]
pub trait RouteInterface {
async fn list_peers(&self) -> Vec<PeerId>;
async fn send_route_packet(
&self,
msg: Bytes,
route_id: u8,
dst_peer_id: PeerId,
) -> Result<(), Error>;
fn my_peer_id(&self) -> PeerId;
async fn list_foreign_networks(&self) -> ForeignNetworkRouteInfoMap {
DashMap::new()
}
}
pub type RouteInterfaceBox = Box<dyn RouteInterface + Send + Sync>;
@@ -56,7 +60,7 @@ impl RouteCostCalculatorInterface for DefaultRouteCostCalculator {}
pub type RouteCostCalculator = Box<dyn RouteCostCalculatorInterface>;
#[async_trait]
#[async_trait::async_trait]
#[auto_impl::auto_impl(Box, Arc)]
pub trait Route {
async fn open(&self, interface: RouteInterfaceBox) -> Result<u8, ()>;
@@ -71,12 +75,23 @@ pub trait Route {
self.get_next_hop(peer_id).await
}
async fn list_routes(&self) -> Vec<crate::rpc::Route>;
async fn list_routes(&self) -> Vec<crate::proto::cli::Route>;
async fn get_peer_id_by_ipv4(&self, _ipv4: &Ipv4Addr) -> Option<PeerId> {
None
}
async fn list_peers_own_foreign_network(
&self,
_network_identity: &NetworkIdentity,
) -> Vec<PeerId> {
vec![]
}
async fn list_foreign_network_info(&self) -> RouteForeignNetworkInfos {
Default::default()
}
async fn set_route_cost_fn(&self, _cost_fn: RouteCostCalculator) {}
async fn dump(&self) -> String {


@@ -1,14 +1,18 @@
use std::sync::Arc;
use crate::rpc::{
cli::PeerInfo, peer_manage_rpc_server::PeerManageRpc, DumpRouteRequest, DumpRouteResponse,
ListForeignNetworkRequest, ListForeignNetworkResponse, ListPeerRequest, ListPeerResponse,
ListRouteRequest, ListRouteResponse, ShowNodeInfoRequest, ShowNodeInfoResponse,
use crate::proto::{
cli::{
DumpRouteRequest, DumpRouteResponse, ListForeignNetworkRequest, ListForeignNetworkResponse,
ListGlobalForeignNetworkRequest, ListGlobalForeignNetworkResponse, ListPeerRequest,
ListPeerResponse, ListRouteRequest, ListRouteResponse, PeerInfo, PeerManageRpc,
ShowNodeInfoRequest, ShowNodeInfoResponse,
},
rpc_types::{self, controller::BaseController},
};
use tonic::{Request, Response, Status};
use super::peer_manager::PeerManager;
#[derive(Clone)]
pub struct PeerManagerRpcService {
peer_manager: Arc<PeerManager>,
}
@@ -19,7 +23,15 @@ impl PeerManagerRpcService {
}
pub async fn list_peers(&self) -> Vec<PeerInfo> {
let peers = self.peer_manager.get_peer_map().list_peers().await;
let mut peers = self.peer_manager.get_peer_map().list_peers().await;
peers.extend(
self.peer_manager
.get_foreign_network_client()
.get_peer_map()
.list_peers()
.await
.iter(),
);
let mut peer_infos = Vec::new();
for peer in peers {
let mut peer_info = PeerInfo::default();
@@ -27,6 +39,14 @@ impl PeerManagerRpcService {
if let Some(conns) = self.peer_manager.get_peer_map().list_peer_conns(peer).await {
peer_info.conns = conns;
} else if let Some(conns) = self
.peer_manager
.get_foreign_network_client()
.get_peer_map()
.list_peer_conns(peer)
.await
{
peer_info.conns = conns;
}
peer_infos.push(peer_info);
@@ -36,12 +56,14 @@ impl PeerManagerRpcService {
}
}
#[tonic::async_trait]
#[async_trait::async_trait]
impl PeerManageRpc for PeerManagerRpcService {
type Controller = BaseController;
async fn list_peer(
&self,
_request: Request<ListPeerRequest>, // Accept request of type HelloRequest
) -> Result<Response<ListPeerResponse>, Status> {
_: BaseController,
_request: ListPeerRequest, // Accept request of type HelloRequest
) -> Result<ListPeerResponse, rpc_types::error::Error> {
let mut reply = ListPeerResponse::default();
let peers = self.list_peers().await;
@@ -49,45 +71,57 @@ impl PeerManageRpc for PeerManagerRpcService {
reply.peer_infos.push(peer);
}
Ok(Response::new(reply))
Ok(reply)
}
async fn list_route(
&self,
_request: Request<ListRouteRequest>, // Accept request of type HelloRequest
) -> Result<Response<ListRouteResponse>, Status> {
_: BaseController,
_request: ListRouteRequest, // Accept request of type HelloRequest
) -> Result<ListRouteResponse, rpc_types::error::Error> {
let mut reply = ListRouteResponse::default();
reply.routes = self.peer_manager.list_routes().await;
Ok(Response::new(reply))
Ok(reply)
}
async fn dump_route(
&self,
_request: Request<DumpRouteRequest>, // Accept request of type HelloRequest
) -> Result<Response<DumpRouteResponse>, Status> {
_: BaseController,
_request: DumpRouteRequest, // Accept request of type DumpRouteRequest
) -> Result<DumpRouteResponse, rpc_types::error::Error> {
let mut reply = DumpRouteResponse::default();
reply.result = self.peer_manager.dump_route().await;
Ok(Response::new(reply))
Ok(reply)
}
async fn list_foreign_network(
&self,
_request: Request<ListForeignNetworkRequest>, // Accept request of type HelloRequest
) -> Result<Response<ListForeignNetworkResponse>, Status> {
_: BaseController,
_request: ListForeignNetworkRequest, // Accept request of type ListForeignNetworkRequest
) -> Result<ListForeignNetworkResponse, rpc_types::error::Error> {
let reply = self
.peer_manager
.get_foreign_network_manager()
.list_foreign_networks()
.await;
Ok(Response::new(reply))
Ok(reply)
}
async fn list_global_foreign_network(
&self,
_: BaseController,
_request: ListGlobalForeignNetworkRequest,
) -> Result<ListGlobalForeignNetworkResponse, rpc_types::error::Error> {
Ok(self.peer_manager.list_global_foreign_network().await)
}
async fn show_node_info(
&self,
_request: Request<ShowNodeInfoRequest>, // Accept request of type HelloRequest
) -> Result<Response<ShowNodeInfoResponse>, Status> {
Ok(Response::new(ShowNodeInfoResponse {
_: BaseController,
_request: ShowNodeInfoRequest, // Accept request of type ShowNodeInfoRequest
) -> Result<ShowNodeInfoResponse, rpc_types::error::Error> {
Ok(ShowNodeInfoResponse {
node_info: Some(self.peer_manager.get_my_info()),
}))
})
}
}


@@ -1,4 +1,7 @@
syntax = "proto3";
import "common.proto";
package cli;
message Status {
@@ -16,18 +19,12 @@ message PeerConnStats {
uint64 latency_us = 5;
}
message TunnelInfo {
string tunnel_type = 1;
string local_addr = 2;
string remote_addr = 3;
}
message PeerConnInfo {
string conn_id = 1;
uint32 my_peer_id = 2;
uint32 peer_id = 3;
repeated string features = 4;
TunnelInfo tunnel = 5;
common.TunnelInfo tunnel = 5;
PeerConnStats stats = 6;
float loss_rate = 7;
bool is_client = 8;
@@ -46,27 +43,6 @@ message ListPeerResponse {
NodeInfo my_info = 2;
}
enum NatType {
// has NAT; but own a single public IP, port is not changed
Unknown = 0;
OpenInternet = 1;
NoPAT = 2;
FullCone = 3;
Restricted = 4;
PortRestricted = 5;
Symmetric = 6;
SymUdpFirewall = 7;
}
message StunInfo {
NatType udp_nat_type = 1;
NatType tcp_nat_type = 2;
int64 last_update_time = 3;
repeated string public_ip = 4;
uint32 min_port = 5;
uint32 max_port = 6;
}
message Route {
uint32 peer_id = 1;
string ipv4_addr = 2;
@@ -74,8 +50,10 @@ message Route {
int32 cost = 4;
repeated string proxy_cidrs = 5;
string hostname = 6;
StunInfo stun_info = 7;
common.StunInfo stun_info = 7;
string inst_id = 8;
string version = 9;
common.PeerFeatureFlag feature_flag = 10;
}
message NodeInfo {
@@ -83,10 +61,12 @@ message NodeInfo {
string ipv4_addr = 2;
repeated string proxy_cidrs = 3;
string hostname = 4;
StunInfo stun_info = 5;
common.StunInfo stun_info = 5;
string inst_id = 6;
repeated string listeners = 7;
string config = 8;
string version = 9;
common.PeerFeatureFlag feature_flag = 10;
}
message ShowNodeInfoRequest {}
@@ -103,18 +83,40 @@ message DumpRouteResponse { string result = 1; }
message ListForeignNetworkRequest {}
message ForeignNetworkEntryPb { repeated PeerInfo peers = 1; }
message ForeignNetworkEntryPb {
repeated PeerInfo peers = 1;
bytes network_secret_digest = 2;
}
message ListForeignNetworkResponse {
// foreign networks known to the local node
map<string, ForeignNetworkEntryPb> foreign_networks = 1;
}
message ListGlobalForeignNetworkRequest {}
message ListGlobalForeignNetworkResponse {
// foreign networks across the entire network
message OneForeignNetwork {
string network_name = 1;
repeated uint32 peer_ids = 2;
string last_updated = 3;
uint32 version = 4;
}
message ForeignNetworks { repeated OneForeignNetwork foreign_networks = 1; }
map<uint32, ForeignNetworks> foreign_networks = 1;
}
service PeerManageRpc {
rpc ListPeer(ListPeerRequest) returns (ListPeerResponse);
rpc ListRoute(ListRouteRequest) returns (ListRouteResponse);
rpc DumpRoute(DumpRouteRequest) returns (DumpRouteResponse);
rpc ListForeignNetwork(ListForeignNetworkRequest)
returns (ListForeignNetworkResponse);
rpc ListGlobalForeignNetwork(ListGlobalForeignNetworkRequest)
returns (ListGlobalForeignNetworkResponse);
rpc ShowNodeInfo(ShowNodeInfoRequest) returns (ShowNodeInfoResponse);
}
@@ -125,7 +127,7 @@ enum ConnectorStatus {
}
message Connector {
string url = 1;
common.Url url = 1;
ConnectorStatus status = 2;
}
@@ -140,7 +142,7 @@ enum ConnectorManageAction {
message ManageConnectorRequest {
ConnectorManageAction action = 1;
string url = 2;
common.Url url = 2;
}
message ManageConnectorResponse {}
@@ -150,23 +152,6 @@ service ConnectorManageRpc {
rpc ManageConnector(ManageConnectorRequest) returns (ManageConnectorResponse);
}
message DirectConnectedPeerInfo { int32 latency_ms = 1; }
message PeerInfoForGlobalMap {
map<uint32, DirectConnectedPeerInfo> direct_peers = 1;
}
message GetGlobalPeerMapRequest {}
message GetGlobalPeerMapResponse {
map<uint32, PeerInfoForGlobalMap> global_peer_map = 1;
}
service PeerCenterRpc {
rpc GetGlobalPeerMap(GetGlobalPeerMapRequest)
returns (GetGlobalPeerMapResponse);
}
message VpnPortalInfo {
string vpn_type = 1;
string client_config = 2;
@@ -180,24 +165,3 @@ service VpnPortalRpc {
rpc GetVpnPortalInfo(GetVpnPortalInfoRequest)
returns (GetVpnPortalInfoResponse);
}
message HandshakeRequest {
uint32 magic = 1;
uint32 my_peer_id = 2;
uint32 version = 3;
repeated string features = 4;
string network_name = 5;
bytes network_secret_digrest = 6;
}
message TaRpcPacket {
uint32 from_peer = 1;
uint32 to_peer = 2;
uint32 service_id = 3;
uint32 transact_id = 4;
bool is_req = 5;
bytes content = 6;
uint32 total_pieces = 7;
uint32 piece_idx = 8;
}


@@ -0,0 +1 @@
include!(concat!(env!("OUT_DIR"), "/cli.rs"));


@@ -0,0 +1,99 @@
syntax = "proto3";
import "error.proto";
package common;
message RpcDescriptor {
// allows the same service to be registered multiple times under different domains
string domain_name = 1;
string proto_name = 2;
string service_name = 3;
uint32 method_index = 4;
}
message RpcRequest {
RpcDescriptor descriptor = 1;
bytes request = 2;
int32 timeout_ms = 3;
}
message RpcResponse {
bytes response = 1;
error.Error error = 2;
uint64 runtime_us = 3;
}
message RpcPacket {
uint32 from_peer = 1;
uint32 to_peer = 2;
int64 transaction_id = 3;
RpcDescriptor descriptor = 4;
bytes body = 5;
bool is_request = 6;
uint32 total_pieces = 7;
uint32 piece_idx = 8;
int32 trace_id = 9;
}
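`RpcPacket` carries `total_pieces`/`piece_idx` so a large request or response can be fragmented across several packets and merged on the receiving side. A minimal std-only sketch of such a split/merge scheme (the piece size and tuple representation here are assumptions for illustration, not the crate's actual `PacketMerger`):

```rust
// Sketch: split a body into fixed-size pieces and reassemble them.
// MAX_PIECE is an assumed limit, not EasyTier's real packet sizing.
const MAX_PIECE: usize = 4;

// Each piece carries (total_pieces, piece_idx, payload), like RpcPacket.
fn split(body: &[u8]) -> Vec<(u32, u32, Vec<u8>)> {
    let total = body.chunks(MAX_PIECE).count() as u32;
    body.chunks(MAX_PIECE)
        .enumerate()
        .map(|(i, c)| (total, i as u32, c.to_vec()))
        .collect()
}

// Returns Some(body) once all pieces have arrived, None otherwise.
fn merge(mut pieces: Vec<(u32, u32, Vec<u8>)>) -> Option<Vec<u8>> {
    let total = pieces.first()?.0 as usize;
    if pieces.len() != total {
        return None; // still waiting for more pieces
    }
    pieces.sort_by_key(|p| p.1); // order by piece_idx
    Some(pieces.into_iter().flat_map(|p| p.2).collect())
}

fn main() {
    let body = b"hello rpc world".to_vec();
    let pieces = split(&body);
    assert_eq!(merge(pieces).unwrap(), body);
}
```

A real merger must also handle interleaved transactions and drop stale fragments; that bookkeeping is omitted here.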
message UUID {
uint64 high = 1;
uint64 low = 2;
}
enum NatType {
Unknown = 0;
OpenInternet = 1;
// has NAT, but owns a single public IP; the port is not changed
NoPAT = 2;
FullCone = 3;
Restricted = 4;
PortRestricted = 5;
Symmetric = 6;
SymUdpFirewall = 7;
}
message Ipv4Addr { uint32 addr = 1; }
message Ipv6Addr {
uint32 part1 = 1;
uint32 part2 = 2;
uint32 part3 = 3;
uint32 part4 = 4;
}
message Url { string url = 1; }
message SocketAddr {
oneof ip {
Ipv4Addr ipv4 = 1;
Ipv6Addr ipv6 = 2;
};
uint32 port = 3;
}
message TunnelInfo {
string tunnel_type = 1;
common.Url local_addr = 2;
common.Url remote_addr = 3;
}
message StunInfo {
NatType udp_nat_type = 1;
NatType tcp_nat_type = 2;
int64 last_update_time = 3;
repeated string public_ip = 4;
uint32 min_port = 5;
uint32 max_port = 6;
}
message PeerFeatureFlag {
bool is_public_server = 1;
bool no_relay_data = 2;
}
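`common.proto` stores IP addresses as big-endian u32 words: one for `Ipv4Addr`, four for `Ipv6Addr`. A minimal std-only sketch of the v6 packing, mirroring the `From` conversions implemented later in `common.rs`:

```rust
use std::net::Ipv6Addr;

// Pack a v6 address into four big-endian u32 parts, as the proto message does.
fn to_parts(ip: Ipv6Addr) -> [u32; 4] {
    let b = ip.octets();
    [
        u32::from_be_bytes([b[0], b[1], b[2], b[3]]),
        u32::from_be_bytes([b[4], b[5], b[6], b[7]]),
        u32::from_be_bytes([b[8], b[9], b[10], b[11]]),
        u32::from_be_bytes([b[12], b[13], b[14], b[15]]),
    ]
}

// Inverse: rebuild the address from its four parts.
fn from_parts(p: [u32; 4]) -> Ipv6Addr {
    let mut b = [0u8; 16];
    for (i, part) in p.iter().enumerate() {
        b[i * 4..i * 4 + 4].copy_from_slice(&part.to_be_bytes());
    }
    Ipv6Addr::from(b)
}

fn main() {
    let ip: Ipv6Addr = "2001:db8::1".parse().unwrap();
    assert_eq!(to_parts(ip)[0], 0x2001_0db8);
    assert_eq!(from_parts(to_parts(ip)), ip);
}
```

Big-endian packing keeps the numeric parts comparable in network byte order; the v4 case is the same idea with a single word.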


@@ -0,0 +1,137 @@
use std::{fmt::Display, str::FromStr};
include!(concat!(env!("OUT_DIR"), "/common.rs"));
impl From<uuid::Uuid> for Uuid {
fn from(uuid: uuid::Uuid) -> Self {
let (high, low) = uuid.as_u64_pair();
Uuid { low, high }
}
}
impl From<Uuid> for uuid::Uuid {
fn from(uuid: Uuid) -> Self {
uuid::Uuid::from_u64_pair(uuid.high, uuid.low)
}
}
impl Display for Uuid {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", uuid::Uuid::from(self.clone()))
}
}
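The `UUID` message splits a 128-bit id into `high`/`low` u64 halves, matching `uuid::Uuid::as_u64_pair` / `from_u64_pair` used above. A std-only sketch of that split, with `u128` standing in for the `uuid` crate:

```rust
// Split a 128-bit value into (high, low) u64 halves and rejoin them,
// the same layout the proto UUID message uses.
fn split_u128(v: u128) -> (u64, u64) {
    ((v >> 64) as u64, v as u64)
}

fn join_u128(high: u64, low: u64) -> u128 {
    ((high as u128) << 64) | low as u128
}

fn main() {
    let v: u128 = 0x0123_4567_89ab_cdef_fedc_ba98_7654_3210;
    let (h, l) = split_u128(v);
    assert_eq!(h, 0x0123_4567_89ab_cdef);
    assert_eq!(join_u128(h, l), v);
}
```

Protobuf has no 128-bit scalar, so two fixed-width halves are the usual encoding; field order in the struct literal (`low` before `high` at the call site above) does not matter since the fields are named.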
impl From<std::net::Ipv4Addr> for Ipv4Addr {
fn from(value: std::net::Ipv4Addr) -> Self {
Self {
addr: u32::from_be_bytes(value.octets()),
}
}
}
impl From<Ipv4Addr> for std::net::Ipv4Addr {
fn from(value: Ipv4Addr) -> Self {
std::net::Ipv4Addr::from(value.addr)
}
}
impl ToString for Ipv4Addr {
fn to_string(&self) -> String {
std::net::Ipv4Addr::from(self.addr).to_string()
}
}
impl From<std::net::Ipv6Addr> for Ipv6Addr {
fn from(value: std::net::Ipv6Addr) -> Self {
let b = value.octets();
Self {
part1: u32::from_be_bytes([b[0], b[1], b[2], b[3]]),
part2: u32::from_be_bytes([b[4], b[5], b[6], b[7]]),
part3: u32::from_be_bytes([b[8], b[9], b[10], b[11]]),
part4: u32::from_be_bytes([b[12], b[13], b[14], b[15]]),
}
}
}
impl From<Ipv6Addr> for std::net::Ipv6Addr {
fn from(value: Ipv6Addr) -> Self {
let part1 = value.part1.to_be_bytes();
let part2 = value.part2.to_be_bytes();
let part3 = value.part3.to_be_bytes();
let part4 = value.part4.to_be_bytes();
std::net::Ipv6Addr::from([
part1[0], part1[1], part1[2], part1[3],
part2[0], part2[1], part2[2], part2[3],
part3[0], part3[1], part3[2], part3[3],
part4[0], part4[1], part4[2], part4[3]
])
}
}
impl ToString for Ipv6Addr {
fn to_string(&self) -> String {
std::net::Ipv6Addr::from(self.clone()).to_string()
}
}
impl From<url::Url> for Url {
fn from(value: url::Url) -> Self {
Url {
url: value.to_string(),
}
}
}
impl From<Url> for url::Url {
fn from(value: Url) -> Self {
url::Url::parse(&value.url).unwrap()
}
}
impl FromStr for Url {
type Err = url::ParseError;
fn from_str(s: &str) -> Result<Self, Self::Err> {
Ok(Url {
url: s.parse::<url::Url>()?.to_string(),
})
}
}
impl Display for Url {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.url)
}
}
impl From<std::net::SocketAddr> for SocketAddr {
fn from(value: std::net::SocketAddr) -> Self {
match value {
std::net::SocketAddr::V4(v4) => SocketAddr {
ip: Some(socket_addr::Ip::Ipv4(v4.ip().clone().into())),
port: v4.port() as u32,
},
std::net::SocketAddr::V6(v6) => SocketAddr {
ip: Some(socket_addr::Ip::Ipv6(v6.ip().clone().into())),
port: v6.port() as u32,
},
}
}
}
impl From<SocketAddr> for std::net::SocketAddr {
fn from(value: SocketAddr) -> Self {
match value.ip.unwrap() {
socket_addr::Ip::Ipv4(ip) => std::net::SocketAddr::V4(std::net::SocketAddrV4::new(
std::net::Ipv4Addr::from(ip),
value.port as u16,
)),
socket_addr::Ip::Ipv6(ip) => std::net::SocketAddr::V6(std::net::SocketAddrV6::new(
std::net::Ipv6Addr::from(ip),
value.port as u16,
0,
0,
)),
}
}
}


@@ -0,0 +1,34 @@
syntax = "proto3";
package error;
message OtherError { string error_message = 1; }
message InvalidMethodIndex {
string service_name = 1;
uint32 method_index = 2;
}
message InvalidService { string service_name = 1; }
message ProstDecodeError {}
message ProstEncodeError {}
message ExecuteError { string error_message = 1; }
message MalformatRpcPacket { string error_message = 1; }
message Timeout { string error_message = 1; }
message Error {
oneof error {
OtherError other_error = 1;
InvalidMethodIndex invalid_method_index = 2;
InvalidService invalid_service = 3;
ProstDecodeError prost_decode_error = 4;
ProstEncodeError prost_encode_error = 5;
ExecuteError execute_error = 6;
MalformatRpcPacket malformat_rpc_packet = 7;
Timeout timeout = 8;
}
}


@@ -0,0 +1,84 @@
use prost::DecodeError;
use super::rpc_types;
include!(concat!(env!("OUT_DIR"), "/error.rs"));
impl From<&rpc_types::error::Error> for Error {
fn from(e: &rpc_types::error::Error) -> Self {
use super::error::error::Error as ProtoError;
match e {
rpc_types::error::Error::ExecutionError(e) => Self {
error: Some(ProtoError::ExecuteError(ExecuteError {
error_message: e.to_string(),
})),
},
rpc_types::error::Error::DecodeError(_) => Self {
error: Some(ProtoError::ProstDecodeError(ProstDecodeError {})),
},
rpc_types::error::Error::EncodeError(_) => Self {
error: Some(ProtoError::ProstEncodeError(ProstEncodeError {})),
},
rpc_types::error::Error::InvalidMethodIndex(m, s) => Self {
error: Some(ProtoError::InvalidMethodIndex(InvalidMethodIndex {
method_index: *m as u32,
service_name: s.to_string(),
})),
},
rpc_types::error::Error::InvalidServiceKey(s, _) => Self {
error: Some(ProtoError::InvalidService(InvalidService {
service_name: s.to_string(),
})),
},
rpc_types::error::Error::MalformatRpcPacket(e) => Self {
error: Some(ProtoError::MalformatRpcPacket(MalformatRpcPacket {
error_message: e.to_string(),
})),
},
rpc_types::error::Error::Timeout(e) => Self {
error: Some(ProtoError::Timeout(Timeout {
error_message: e.to_string(),
})),
},
#[allow(unreachable_patterns)]
e => Self {
error: Some(ProtoError::OtherError(OtherError {
error_message: e.to_string(),
})),
},
}
}
}
impl From<&Error> for rpc_types::error::Error {
fn from(e: &Error) -> Self {
use super::error::error::Error as ProtoError;
match &e.error {
Some(ProtoError::ExecuteError(e)) => {
Self::ExecutionError(anyhow::anyhow!(e.error_message.clone()))
}
Some(ProtoError::ProstDecodeError(_)) => {
Self::DecodeError(DecodeError::new("decode error"))
}
Some(ProtoError::ProstEncodeError(_)) => {
// prost::EncodeError has no public constructor, so a DecodeError
// carries the encode failure across the wire boundary here
Self::DecodeError(DecodeError::new("encode error"))
}
Some(ProtoError::InvalidMethodIndex(e)) => {
Self::InvalidMethodIndex(e.method_index as u8, e.service_name.clone())
}
Some(ProtoError::InvalidService(e)) => {
Self::InvalidServiceKey(e.service_name.clone(), "".to_string())
}
Some(ProtoError::MalformatRpcPacket(e)) => {
Self::MalformatRpcPacket(e.error_message.clone())
}
Some(ProtoError::Timeout(e)) => {
Self::ExecutionError(anyhow::anyhow!(e.error_message.clone()))
}
Some(ProtoError::OtherError(e)) => {
Self::ExecutionError(anyhow::anyhow!(e.error_message.clone()))
}
None => Self::ExecutionError(anyhow::anyhow!("unknown error {:?}", e)),
}
}
}


@@ -0,0 +1,9 @@
pub mod rpc_impl;
pub mod rpc_types;
pub mod cli;
pub mod common;
pub mod error;
pub mod peer_rpc;
pub mod tests;


@@ -0,0 +1,153 @@
syntax = "proto3";
import "google/protobuf/timestamp.proto";
import "common.proto";
package peer_rpc;
message RoutePeerInfo {
// the next hop in the route table.
uint32 peer_id = 1;
common.UUID inst_id = 2;
uint32 cost = 3;
optional common.Ipv4Addr ipv4_addr = 4;
repeated string proxy_cidrs = 5;
optional string hostname = 6;
common.NatType udp_stun_info = 7;
google.protobuf.Timestamp last_update = 8;
uint32 version = 9;
string easytier_version = 10;
common.PeerFeatureFlag feature_flag = 11;
}
message PeerIdVersion {
uint32 peer_id = 1;
uint32 version = 2;
}
message RouteConnBitmap {
repeated PeerIdVersion peer_ids = 1;
bytes bitmap = 2;
}
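`RouteConnBitmap` compresses peer-to-peer connectivity: `peer_ids` fixes an ordering over n peers, and `bitmap` then only needs one bit per ordered pair. A sketch of a row-major pair bitmap under that assumption (the exact bit layout EasyTier uses is not spelled out here, so treat this as illustrative):

```rust
// Row-major pair bitmap: bit (i * n + j) set => peer i reports a
// connection to peer j. The layout is an assumption for illustration.
struct ConnBitmap {
    n: usize,
    bits: Vec<u8>,
}

impl ConnBitmap {
    fn new(n: usize) -> Self {
        Self { n, bits: vec![0; (n * n + 7) / 8] }
    }
    fn set(&mut self, i: usize, j: usize) {
        let idx = i * self.n + j;
        self.bits[idx / 8] |= 1 << (idx % 8);
    }
    fn get(&self, i: usize, j: usize) -> bool {
        let idx = i * self.n + j;
        self.bits[idx / 8] & (1 << (idx % 8)) != 0
    }
}

fn main() {
    let mut bm = ConnBitmap::new(3);
    bm.set(0, 2);
    assert!(bm.get(0, 2));
    assert!(!bm.get(2, 0)); // directed: links are reported per side
}
```

The per-peer `PeerIdVersion` entries let a receiver detect when its cached ordering is stale and request a fresh sync instead of misreading bits.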
message RoutePeerInfos { repeated RoutePeerInfo items = 1; }
message ForeignNetworkRouteInfoKey {
uint32 peer_id = 1;
string network_name = 2;
}
message ForeignNetworkRouteInfoEntry {
repeated uint32 foreign_peer_ids = 1;
google.protobuf.Timestamp last_update = 2;
uint32 version = 3;
bytes network_secret_digest = 4;
}
message RouteForeignNetworkInfos {
message Info {
ForeignNetworkRouteInfoKey key = 1;
ForeignNetworkRouteInfoEntry value = 2;
}
repeated Info infos = 1;
}
message SyncRouteInfoRequest {
uint32 my_peer_id = 1;
uint64 my_session_id = 2;
bool is_initiator = 3;
RoutePeerInfos peer_infos = 4;
RouteConnBitmap conn_bitmap = 5;
RouteForeignNetworkInfos foreign_network_infos = 6;
}
enum SyncRouteInfoError {
DuplicatePeerId = 0;
Stopped = 1;
}
message SyncRouteInfoResponse {
bool is_initiator = 1;
uint64 session_id = 2;
optional SyncRouteInfoError error = 3;
}
service OspfRouteRpc {
// Exchanges and syncs route info between two peers.
rpc SyncRouteInfo(SyncRouteInfoRequest) returns (SyncRouteInfoResponse);
}
message GetIpListRequest {}
message GetIpListResponse {
common.Ipv4Addr public_ipv4 = 1;
repeated common.Ipv4Addr interface_ipv4s = 2;
common.Ipv6Addr public_ipv6 = 3;
repeated common.Ipv6Addr interface_ipv6s = 4;
repeated common.Url listeners = 5;
}
service DirectConnectorRpc {
rpc GetIpList(GetIpListRequest) returns (GetIpListResponse);
}
message TryPunchHoleRequest { common.SocketAddr local_mapped_addr = 1; }
message TryPunchHoleResponse { common.SocketAddr remote_mapped_addr = 1; }
message TryPunchSymmetricRequest {
common.SocketAddr listener_addr = 1;
uint32 port = 2;
repeated common.Ipv4Addr public_ips = 3;
uint32 min_port = 4;
uint32 max_port = 5;
uint32 transaction_id = 6;
uint32 round = 7;
uint32 last_port_index = 8;
}
message TryPunchSymmetricResponse { uint32 last_port_index = 1; }
service UdpHolePunchRpc {
rpc TryPunchHole(TryPunchHoleRequest) returns (TryPunchHoleResponse);
rpc TryPunchSymmetric(TryPunchSymmetricRequest)
returns (TryPunchSymmetricResponse);
}
message DirectConnectedPeerInfo { int32 latency_ms = 1; }
message PeerInfoForGlobalMap {
map<uint32, DirectConnectedPeerInfo> direct_peers = 1;
}
message ReportPeersRequest {
uint32 my_peer_id = 1;
PeerInfoForGlobalMap peer_infos = 2;
}
message ReportPeersResponse {}
message GlobalPeerMap { map<uint32, PeerInfoForGlobalMap> map = 1; }
message GetGlobalPeerMapRequest { uint64 digest = 1; }
message GetGlobalPeerMapResponse {
map<uint32, PeerInfoForGlobalMap> global_peer_map = 1;
optional uint64 digest = 2;
}
service PeerCenterRpc {
rpc ReportPeers(ReportPeersRequest) returns (ReportPeersResponse);
rpc GetGlobalPeerMap(GetGlobalPeerMapRequest)
returns (GetGlobalPeerMapResponse);
}
message HandshakeRequest {
uint32 magic = 1;
uint32 my_peer_id = 2;
uint32 version = 3;
repeated string features = 4;
string network_name = 5;
bytes network_secret_digest = 6;
}


@@ -0,0 +1 @@
include!(concat!(env!("OUT_DIR"), "/peer_rpc.rs"));


@@ -0,0 +1,8 @@
[package]
name = "rpc_build"
version = "0.1.0"
edition = "2021"
[dependencies]
heck = "0.5"
prost-build = "0.13"


@@ -0,0 +1,383 @@
extern crate heck;
extern crate prost_build;
use std::fmt;
const NAMESPACE: &str = "crate::proto::rpc_types";
/// The service generator to be used with `prost-build` to generate RPC implementations for
/// `prost-simple-rpc`.
///
/// See the crate-level documentation for more info.
#[allow(missing_copy_implementations)]
#[derive(Clone, Debug)]
pub struct ServiceGenerator {
_private: (),
}
impl ServiceGenerator {
/// Create a new `ServiceGenerator` instance with the default options set.
pub fn new() -> ServiceGenerator {
ServiceGenerator { _private: () }
}
}
impl prost_build::ServiceGenerator for ServiceGenerator {
fn generate(&mut self, service: prost_build::Service, mut buf: &mut String) {
use std::fmt::Write;
let descriptor_name = format!("{}Descriptor", service.name);
let server_name = format!("{}Server", service.name);
let client_name = format!("{}Client", service.name);
let method_descriptor_name = format!("{}MethodDescriptor", service.name);
let mut trait_methods = String::new();
let mut enum_methods = String::new();
let mut list_enum_methods = String::new();
let mut client_methods = String::new();
let mut client_own_methods = String::new();
let mut match_name_methods = String::new();
let mut match_proto_name_methods = String::new();
let mut match_input_type_methods = String::new();
let mut match_input_proto_type_methods = String::new();
let mut match_output_type_methods = String::new();
let mut match_output_proto_type_methods = String::new();
let mut match_handle_methods = String::new();
let mut match_method_try_from = String::new();
for (idx, method) in service.methods.iter().enumerate() {
assert!(
!method.client_streaming,
"Client streaming not yet supported for method {}",
method.proto_name
);
assert!(
!method.server_streaming,
"Server streaming not yet supported for method {}",
method.proto_name
);
ServiceGenerator::write_comments(&mut trait_methods, 4, &method.comments).unwrap();
writeln!(
trait_methods,
r#" async fn {name}(&self, ctrl: Self::Controller, input: {input_type}) -> {namespace}::error::Result<{output_type}>;"#,
name = method.name,
input_type = method.input_type,
output_type = method.output_type,
namespace = NAMESPACE,
)
.unwrap();
ServiceGenerator::write_comments(&mut enum_methods, 4, &method.comments).unwrap();
writeln!(
enum_methods,
" {name} = {index},",
name = method.proto_name,
index = format!("{}", idx + 1)
)
.unwrap();
writeln!(
match_method_try_from,
" {index} => Ok({service_name}MethodDescriptor::{name}),",
service_name = service.name,
name = method.proto_name,
index = format!("{}", idx + 1),
)
.unwrap();
writeln!(
list_enum_methods,
" {service_name}MethodDescriptor::{name},",
service_name = service.name,
name = method.proto_name
)
.unwrap();
writeln!(
client_methods,
r#" async fn {name}(&self, ctrl: H::Controller, input: {input_type}) -> {namespace}::error::Result<{output_type}> {{
{client_name}::{name}_inner(self.0.clone(), ctrl, input).await
}}"#,
name = method.name,
input_type = method.input_type,
output_type = method.output_type,
client_name = format!("{}Client", service.name),
namespace = NAMESPACE,
)
.unwrap();
writeln!(
client_own_methods,
r#" async fn {name}_inner(handler: H, ctrl: H::Controller, input: {input_type}) -> {namespace}::error::Result<{output_type}> {{
{namespace}::__rt::call_method(handler, ctrl, {method_descriptor_name}::{proto_name}, input).await
}}"#,
name = method.name,
method_descriptor_name = method_descriptor_name,
proto_name = method.proto_name,
input_type = method.input_type,
output_type = method.output_type,
namespace = NAMESPACE,
).unwrap();
let case = format!(
" {service_name}MethodDescriptor::{proto_name} => ",
service_name = service.name,
proto_name = method.proto_name
);
writeln!(match_name_methods, "{}{:?},", case, method.name).unwrap();
writeln!(match_proto_name_methods, "{}{:?},", case, method.proto_name).unwrap();
writeln!(
match_input_type_methods,
"{}::std::any::TypeId::of::<{}>(),",
case, method.input_type
)
.unwrap();
writeln!(
match_input_proto_type_methods,
"{}{:?},",
case, method.input_proto_type
)
.unwrap();
writeln!(
match_output_type_methods,
"{}::std::any::TypeId::of::<{}>(),",
case, method.output_type
)
.unwrap();
writeln!(
match_output_proto_type_methods,
"{}{:?},",
case, method.output_proto_type
)
.unwrap();
write!(
match_handle_methods,
r#"{} {{
let decoded: {input_type} = {namespace}::__rt::decode(input)?;
let ret = service.{name}(ctrl, decoded).await?;
{namespace}::__rt::encode(ret)
}}
"#,
case,
input_type = method.input_type,
name = method.name,
namespace = NAMESPACE,
)
.unwrap();
}
ServiceGenerator::write_comments(&mut buf, 0, &service.comments).unwrap();
write!(
buf,
r#"
#[async_trait::async_trait]
#[auto_impl::auto_impl(&, Arc, Box)]
pub trait {name} {{
type Controller: {namespace}::controller::Controller;
{trait_methods}
}}
/// A service descriptor for a `{name}`.
#[derive(Clone, Debug, Eq, Ord, PartialEq, PartialOrd, Default)]
pub struct {descriptor_name};
/// Methods available on a `{name}`.
///
/// This can be used as a key when routing requests for servers/clients of a `{name}`.
#[derive(Clone, Copy, Debug, Eq, Ord, PartialEq, PartialOrd)]
#[repr(u8)]
pub enum {method_descriptor_name} {{
{enum_methods}
}}
impl std::convert::TryFrom<u8> for {method_descriptor_name} {{
type Error = {namespace}::error::Error;
fn try_from(value: u8) -> {namespace}::error::Result<Self> {{
match value {{
{match_method_try_from}
_ => Err({namespace}::error::Error::InvalidMethodIndex(value, "{name}".to_string())),
}}
}}
}}
/// A client for a `{name}`.
///
/// This implements the `{name}` trait by dispatching all method calls to the supplied `Handler`.
#[derive(Clone, Debug)]
pub struct {client_name}<H>(H) where H: {namespace}::handler::Handler;
impl<H> {client_name}<H> where H: {namespace}::handler::Handler<Descriptor = {descriptor_name}> {{
/// Creates a new client instance that delegates all method calls to the supplied handler.
pub fn new(handler: H) -> {client_name}<H> {{
{client_name}(handler)
}}
}}
impl<H> {client_name}<H> where H: {namespace}::handler::Handler<Descriptor = {descriptor_name}> {{
{client_own_methods}
}}
#[async_trait::async_trait]
impl<H> {name} for {client_name}<H> where H: {namespace}::handler::Handler<Descriptor = {descriptor_name}> {{
type Controller = H::Controller;
{client_methods}
}}
pub struct {client_name}Factory<C: {namespace}::controller::Controller>(std::marker::PhantomData<C>);
impl<C: {namespace}::controller::Controller> Clone for {client_name}Factory<C> {{
fn clone(&self) -> Self {{
Self(std::marker::PhantomData)
}}
}}
impl<C> {namespace}::__rt::RpcClientFactory for {client_name}Factory<C> where C: {namespace}::controller::Controller {{
type Descriptor = {descriptor_name};
type ClientImpl = Box<dyn {name}<Controller = C> + Send + 'static>;
type Controller = C;
fn new(handler: impl {namespace}::handler::Handler<Descriptor = Self::Descriptor, Controller = Self::Controller>) -> Self::ClientImpl {{
Box::new({client_name}::new(handler))
}}
}}
/// A server for a `{name}`.
///
/// This implements the `Server` trait by handling requests and dispatching them to methods on the
/// supplied `{name}`.
#[derive(Clone, Debug)]
pub struct {server_name}<A>(A) where A: {name} + Clone + Send + 'static;
impl<A> {server_name}<A> where A: {name} + Clone + Send + 'static {{
/// Creates a new server instance that dispatches all calls to the supplied service.
pub fn new(service: A) -> {server_name}<A> {{
{server_name}(service)
}}
async fn call_inner(
service: A,
method: {method_descriptor_name},
ctrl: A::Controller,
input: ::bytes::Bytes)
-> {namespace}::error::Result<::bytes::Bytes> {{
match method {{
{match_handle_methods}
}}
}}
}}
impl {namespace}::descriptor::ServiceDescriptor for {descriptor_name} {{
type Method = {method_descriptor_name};
fn name(&self) -> &'static str {{ {name:?} }}
fn proto_name(&self) -> &'static str {{ {proto_name:?} }}
fn package(&self) -> &'static str {{ {package:?} }}
fn methods(&self) -> &'static [Self::Method] {{
&[ {list_enum_methods} ]
}}
}}
#[async_trait::async_trait]
impl<A> {namespace}::handler::Handler for {server_name}<A>
where
A: {name} + Clone + Send + Sync + 'static {{
type Descriptor = {descriptor_name};
type Controller = A::Controller;
async fn call(
&self,
ctrl: A::Controller,
method: {method_descriptor_name},
input: ::bytes::Bytes)
-> {namespace}::error::Result<::bytes::Bytes> {{
{server_name}::call_inner(self.0.clone(), method, ctrl, input).await
}}
}}
impl {namespace}::descriptor::MethodDescriptor for {method_descriptor_name} {{
fn name(&self) -> &'static str {{
match *self {{
{match_name_methods}
}}
}}
fn proto_name(&self) -> &'static str {{
match *self {{
{match_proto_name_methods}
}}
}}
fn input_type(&self) -> ::std::any::TypeId {{
match *self {{
{match_input_type_methods}
}}
}}
fn input_proto_type(&self) -> &'static str {{
match *self {{
{match_input_proto_type_methods}
}}
}}
fn output_type(&self) -> ::std::any::TypeId {{
match *self {{
{match_output_type_methods}
}}
}}
fn output_proto_type(&self) -> &'static str {{
match *self {{
{match_output_proto_type_methods}
}}
}}
fn index(&self) -> u8 {{
*self as u8
}}
}}
"#,
name = service.name,
descriptor_name = descriptor_name,
server_name = server_name,
client_name = client_name,
method_descriptor_name = method_descriptor_name,
proto_name = service.proto_name,
package = service.package,
trait_methods = trait_methods,
enum_methods = enum_methods,
list_enum_methods = list_enum_methods,
client_own_methods = client_own_methods,
client_methods = client_methods,
match_name_methods = match_name_methods,
match_proto_name_methods = match_proto_name_methods,
match_input_type_methods = match_input_type_methods,
match_input_proto_type_methods = match_input_proto_type_methods,
match_output_type_methods = match_output_type_methods,
match_output_proto_type_methods = match_output_proto_type_methods,
match_handle_methods = match_handle_methods,
namespace = NAMESPACE,
).unwrap();
}
}
impl ServiceGenerator {
fn write_comments<W>(
mut write: W,
indent: usize,
comments: &prost_build::Comments,
) -> fmt::Result
where
W: fmt::Write,
{
for comment in &comments.leading {
for line in comment.lines().filter(|s| !s.is_empty()) {
writeln!(write, "{}///{}", " ".repeat(indent), line)?;
}
}
Ok(())
}
}
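For every service, the generator emits a `*MethodDescriptor` enum whose discriminants start at 1, together with a `TryFrom<u8>` that rejects unknown indices (the `match_method_try_from` arms above). A hand-written sketch of what that generated code roughly looks like for a hypothetical two-method service:

```rust
use std::convert::TryFrom;

// Hypothetical output for a service with methods SyncRouteInfo and GetIpList.
// The real generator uses its own error type; a String stands in here.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
#[repr(u8)]
enum ExampleMethodDescriptor {
    SyncRouteInfo = 1,
    GetIpList = 2,
}

impl TryFrom<u8> for ExampleMethodDescriptor {
    type Error = String;
    fn try_from(value: u8) -> Result<Self, Self::Error> {
        match value {
            1 => Ok(ExampleMethodDescriptor::SyncRouteInfo),
            2 => Ok(ExampleMethodDescriptor::GetIpList),
            _ => Err(format!("invalid method index {} for ExampleService", value)),
        }
    }
}

fn main() {
    assert_eq!(
        ExampleMethodDescriptor::try_from(1),
        Ok(ExampleMethodDescriptor::SyncRouteInfo)
    );
    assert!(ExampleMethodDescriptor::try_from(0).is_err());
    assert_eq!(ExampleMethodDescriptor::SyncRouteInfo as u8, 1);
}
```

Starting at 1 leaves 0 free as an always-invalid index, so a zeroed `method_index` field on the wire can never silently select a method.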


@@ -0,0 +1,240 @@
use std::marker::PhantomData;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use bytes::Bytes;
use dashmap::DashMap;
use prost::Message;
use tokio::sync::mpsc;
use tokio::task::JoinSet;
use tokio::time::timeout;
use tokio_stream::StreamExt;
use crate::common::PeerId;
use crate::defer;
use crate::proto::common::{RpcDescriptor, RpcPacket, RpcRequest, RpcResponse};
use crate::proto::rpc_impl::packet::build_rpc_packet;
use crate::proto::rpc_types::controller::Controller;
use crate::proto::rpc_types::descriptor::MethodDescriptor;
use crate::proto::rpc_types::{
__rt::RpcClientFactory, descriptor::ServiceDescriptor, handler::Handler,
};
use crate::proto::rpc_types::error::Result;
use crate::tunnel::mpsc::{MpscTunnel, MpscTunnelSender};
use crate::tunnel::packet_def::ZCPacket;
use crate::tunnel::ring::create_ring_tunnel_pair;
use crate::tunnel::{Tunnel, TunnelError, ZCPacketStream};
use super::packet::PacketMerger;
use super::{RpcTransactId, Transport};
static CUR_TID: once_cell::sync::Lazy<atomic_shim::AtomicI64> =
once_cell::sync::Lazy::new(|| atomic_shim::AtomicI64::new(rand::random()));
type RpcPacketSender = mpsc::UnboundedSender<RpcPacket>;
type RpcPacketReceiver = mpsc::UnboundedReceiver<RpcPacket>;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct InflightRequestKey {
from_peer_id: PeerId,
to_peer_id: PeerId,
transaction_id: RpcTransactId,
}
struct InflightRequest {
sender: RpcPacketSender,
merger: PacketMerger,
start_time: std::time::Instant,
}
type InflightRequestTable = Arc<DashMap<InflightRequestKey, InflightRequest>>;
pub struct Client {
mpsc: Mutex<MpscTunnel<Box<dyn Tunnel>>>,
transport: Mutex<Transport>,
inflight_requests: InflightRequestTable,
tasks: Arc<Mutex<JoinSet<()>>>,
}
impl Client {
pub fn new() -> Self {
let (ring_a, ring_b) = create_ring_tunnel_pair();
Self {
mpsc: Mutex::new(MpscTunnel::new(ring_a)),
transport: Mutex::new(MpscTunnel::new(ring_b)),
inflight_requests: Arc::new(DashMap::new()),
tasks: Arc::new(Mutex::new(JoinSet::new())),
}
}
pub fn get_transport_sink(&self) -> MpscTunnelSender {
self.transport.lock().unwrap().get_sink()
}
pub fn get_transport_stream(&self) -> Pin<Box<dyn ZCPacketStream>> {
self.transport.lock().unwrap().get_stream()
}
pub fn run(&self) {
let mut tasks = self.tasks.lock().unwrap();
let mut rx = self.mpsc.lock().unwrap().get_stream();
let inflight_requests = self.inflight_requests.clone();
tasks.spawn(async move {
while let Some(packet) = rx.next().await {
if let Err(err) = packet {
tracing::error!(?err, "Failed to receive packet");
continue;
}
let packet = match RpcPacket::decode(packet.unwrap().payload()) {
Err(err) => {
tracing::error!(?err, "Failed to decode packet");
continue;
}
Ok(packet) => packet,
};
if packet.is_request {
tracing::warn!(?packet, "Received non-response packet");
continue;
}
let key = InflightRequestKey {
from_peer_id: packet.to_peer,
to_peer_id: packet.from_peer,
transaction_id: packet.transaction_id,
};
let Some(mut inflight_request) = inflight_requests.get_mut(&key) else {
tracing::warn!(?key, "No inflight request found for key");
continue;
};
let ret = inflight_request.merger.feed(packet);
match ret {
Ok(Some(rpc_packet)) => {
// the waiting request may have timed out and dropped its
// receiver; don't panic the recv loop in that case
if let Err(err) = inflight_request.sender.send(rpc_packet) {
tracing::warn!(?err, "rpc response receiver dropped");
}
}
Ok(None) => {}
Err(err) => {
tracing::error!(?err, "Failed to feed packet to merger");
}
}
}
});
}
pub fn scoped_client<F: RpcClientFactory>(
&self,
from_peer_id: PeerId,
to_peer_id: PeerId,
domain_name: String,
) -> F::ClientImpl {
#[derive(Clone)]
struct HandlerImpl<F> {
domain_name: String,
from_peer_id: PeerId,
to_peer_id: PeerId,
zc_packet_sender: MpscTunnelSender,
inflight_requests: InflightRequestTable,
_phan: PhantomData<F>,
}
impl<F: RpcClientFactory> HandlerImpl<F> {
async fn do_rpc(
&self,
packets: Vec<ZCPacket>,
rx: &mut RpcPacketReceiver,
) -> Result<RpcPacket> {
for packet in packets {
self.zc_packet_sender.send(packet).await?;
}
Ok(rx.recv().await.ok_or(TunnelError::Shutdown)?)
}
}
#[async_trait::async_trait]
impl<F: RpcClientFactory> Handler for HandlerImpl<F> {
type Descriptor = F::Descriptor;
type Controller = F::Controller;
async fn call(
&self,
ctrl: Self::Controller,
method: <Self::Descriptor as ServiceDescriptor>::Method,
input: bytes::Bytes,
) -> Result<bytes::Bytes> {
let transaction_id = CUR_TID.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
let (tx, mut rx) = mpsc::unbounded_channel();
let key = InflightRequestKey {
from_peer_id: self.from_peer_id,
to_peer_id: self.to_peer_id,
transaction_id,
};
defer!(self.inflight_requests.remove(&key););
self.inflight_requests.insert(
key.clone(),
InflightRequest {
sender: tx,
merger: PacketMerger::new(),
start_time: std::time::Instant::now(),
},
);
let desc = self.service_descriptor();
let rpc_desc = RpcDescriptor {
domain_name: self.domain_name.clone(),
proto_name: desc.proto_name().to_string(),
service_name: desc.name().to_string(),
method_index: method.index() as u32,
};
let rpc_req = RpcRequest {
descriptor: Some(rpc_desc.clone()),
request: input.into(),
timeout_ms: ctrl.timeout_ms(),
};
let packets = build_rpc_packet(
self.from_peer_id,
self.to_peer_id,
rpc_desc,
transaction_id,
true,
&rpc_req.encode_to_vec(),
ctrl.trace_id(),
);
let timeout_dur = std::time::Duration::from_millis(ctrl.timeout_ms() as u64);
let rpc_packet = timeout(timeout_dur, self.do_rpc(packets, &mut rx)).await??;
assert_eq!(rpc_packet.transaction_id, transaction_id);
let rpc_resp = RpcResponse::decode(Bytes::from(rpc_packet.body))?;
if let Some(err) = &rpc_resp.error {
return Err(err.into());
}
Ok(bytes::Bytes::from(rpc_resp.response))
}
}
F::new(HandlerImpl::<F> {
domain_name: domain_name.to_string(),
from_peer_id,
to_peer_id,
zc_packet_sender: self.mpsc.lock().unwrap().get_sink(),
inflight_requests: self.inflight_requests.clone(),
_phan: PhantomData,
})
}
pub fn inflight_count(&self) -> usize {
self.inflight_requests.len()
}
}
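The client above looks up a response by rebuilding the inflight key with the peer ids swapped: a request recorded under `(from, to, tid)` comes back as a packet whose `from_peer` is the responder and whose `to_peer` is the original requester. A minimal self-contained sketch of that swap (simplified stand-ins for the crate's `InflightRequestKey` and `RpcPacket`):

```rust
// Simplified stand-ins, not the crate's types.
#[derive(PartialEq, Eq, Hash, Clone, Debug)]
struct InflightRequestKey {
    from_peer_id: u32,
    to_peer_id: u32,
    transaction_id: i64,
}

struct RpcPacketHdr {
    from_peer: u32,
    to_peer: u32,
    transaction_id: i64,
}

// Key under which the client records an outgoing request.
fn request_key(from: u32, to: u32, tid: i64) -> InflightRequestKey {
    InflightRequestKey {
        from_peer_id: from,
        to_peer_id: to,
        transaction_id: tid,
    }
}

// Key rebuilt from a response: the response's destination is the
// original requester, so from/to are swapped relative to the packet.
fn response_key(p: &RpcPacketHdr) -> InflightRequestKey {
    InflightRequestKey {
        from_peer_id: p.to_peer,
        to_peer_id: p.from_peer,
        transaction_id: p.transaction_id,
    }
}
```

With this swap, a response from peer 2 to peer 1 hashes to the same key the requester stored when it sent to peer 2.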

View File

@@ -0,0 +1,12 @@
use crate::tunnel::{mpsc::MpscTunnel, Tunnel};
pub type RpcController = super::rpc_types::controller::BaseController;
pub mod client;
pub mod packet;
pub mod server;
pub mod service_registry;
pub mod standalone;
pub type Transport = MpscTunnel<Box<dyn Tunnel>>;
pub type RpcTransactId = i64;

View File

@@ -0,0 +1,161 @@
use prost::Message as _;
use crate::{
common::PeerId,
proto::{
common::{RpcDescriptor, RpcPacket},
rpc_types::error::Error,
},
tunnel::packet_def::{PacketType, ZCPacket},
};
use super::RpcTransactId;
const RPC_PACKET_CONTENT_MTU: usize = 1300;
pub struct PacketMerger {
first_piece: Option<RpcPacket>,
pieces: Vec<RpcPacket>,
last_updated: std::time::Instant,
}
impl PacketMerger {
pub fn new() -> Self {
Self {
first_piece: None,
pieces: Vec::new(),
last_updated: std::time::Instant::now(),
}
}
fn try_merge_pieces(&self) -> Option<RpcPacket> {
if self.first_piece.is_none() || self.pieces.is_empty() {
return None;
}
for p in &self.pieces {
// some piece is missing
if p.total_pieces == 0 {
return None;
}
}
// all pieces are received
let mut body = Vec::new();
for p in &self.pieces {
body.extend_from_slice(&p.body);
}
let mut tmpl_packet = self.first_piece.as_ref().unwrap().clone();
tmpl_packet.total_pieces = 1;
tmpl_packet.piece_idx = 0;
tmpl_packet.body = body;
Some(tmpl_packet)
}
pub fn feed(&mut self, rpc_packet: RpcPacket) -> Result<Option<RpcPacket>, Error> {
let total_pieces = rpc_packet.total_pieces;
let piece_idx = rpc_packet.piece_idx;
if rpc_packet.descriptor.is_none() {
return Err(Error::MalformatRpcPacket(
"descriptor is missing".to_owned(),
));
}
// for compatibility with old version
if total_pieces == 0 && piece_idx == 0 {
return Ok(Some(rpc_packet));
}
// cap piece count: 32 * 1024 pieces of ~1300 bytes is roughly a 40MB payload
if total_pieces > 32 * 1024 || total_pieces == 0 {
return Err(Error::MalformatRpcPacket(format!(
"total_pieces is invalid: {}",
total_pieces
)));
}
if piece_idx >= total_pieces {
return Err(Error::MalformatRpcPacket(
"piece_idx >= total_pieces".to_owned(),
));
}
if self.first_piece.is_none()
|| self.first_piece.as_ref().unwrap().transaction_id != rpc_packet.transaction_id
|| self.first_piece.as_ref().unwrap().from_peer != rpc_packet.from_peer
{
self.first_piece = Some(rpc_packet.clone());
self.pieces.clear();
}
self.pieces
.resize(total_pieces as usize, Default::default());
self.pieces[piece_idx as usize] = rpc_packet;
self.last_updated = std::time::Instant::now();
Ok(self.try_merge_pieces())
}
pub fn last_updated(&self) -> std::time::Instant {
self.last_updated
}
}
pub fn build_rpc_packet(
from_peer: PeerId,
to_peer: PeerId,
rpc_desc: RpcDescriptor,
transaction_id: RpcTransactId,
is_req: bool,
content: &Vec<u8>,
trace_id: i32,
) -> Vec<ZCPacket> {
let mut ret = Vec::new();
let content_mtu = RPC_PACKET_CONTENT_MTU;
let total_pieces = (content.len() + content_mtu - 1) / content_mtu;
let mut cur_offset = 0;
while cur_offset < content.len() || content.len() == 0 {
let mut cur_len = content_mtu;
if cur_offset + cur_len > content.len() {
cur_len = content.len() - cur_offset;
}
let mut cur_content = Vec::new();
cur_content.extend_from_slice(&content[cur_offset..cur_offset + cur_len]);
let cur_packet = RpcPacket {
from_peer,
to_peer,
descriptor: Some(rpc_desc.clone()),
is_request: is_req,
total_pieces: total_pieces as u32,
piece_idx: (cur_offset / content_mtu) as u32,
transaction_id,
body: cur_content,
trace_id,
};
cur_offset += cur_len;
let packet_type = if is_req {
PacketType::RpcReq
} else {
PacketType::RpcResp
};
let mut buf = Vec::new();
cur_packet.encode(&mut buf).unwrap();
let mut zc_packet = ZCPacket::new_with_payload(&buf);
zc_packet.fill_peer_manager_hdr(from_peer, to_peer, packet_type as u8);
ret.push(zc_packet);
if content.len() == 0 {
break;
}
}
ret
}
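The split/merge round trip implemented by `build_rpc_packet` and `PacketMerger` can be sketched end-to-end. This is a simplified stand-in (plain `(total_pieces, piece_idx, body)` tuples instead of `RpcPacket`, no descriptor or transaction bookkeeping), not the crate's API:

```rust
const MTU: usize = 1300; // mirrors RPC_PACKET_CONTENT_MTU

// Split a payload into MTU-sized pieces tagged with (total, index, body).
fn split(content: &[u8]) -> Vec<(u32, u32, Vec<u8>)> {
    if content.is_empty() {
        // legacy encoding: a single piece with total_pieces == 0
        return vec![(0, 0, Vec::new())];
    }
    let total = (content.len() + MTU - 1) / MTU; // ceiling division
    content
        .chunks(MTU)
        .enumerate()
        .map(|(i, c)| (total as u32, i as u32, c.to_vec()))
        .collect()
}

// Reassemble once every piece has arrived; None means still incomplete.
fn merge(mut pieces: Vec<(u32, u32, Vec<u8>)>) -> Option<Vec<u8>> {
    let total = pieces.first()?.0 as usize;
    if total == 0 {
        return Some(pieces.swap_remove(0).2); // legacy single-piece packet
    }
    if pieces.len() != total {
        return None; // some piece is still missing
    }
    pieces.sort_by_key(|p| p.1);
    Some(pieces.into_iter().flat_map(|p| p.2).collect())
}
```

A 3000-byte payload splits into three pieces (1300 + 1300 + 400) and merges back byte-for-byte.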

View File

@@ -0,0 +1,207 @@
use std::{
pin::Pin,
sync::{Arc, Mutex},
};
use bytes::Bytes;
use dashmap::DashMap;
use prost::Message;
use tokio::{task::JoinSet, time::timeout};
use tokio_stream::StreamExt;
use crate::{
common::{join_joinset_background, PeerId},
proto::{
common::{self, RpcDescriptor, RpcPacket, RpcRequest, RpcResponse},
rpc_types::error::Result,
},
tunnel::{
mpsc::{MpscTunnel, MpscTunnelSender},
ring::create_ring_tunnel_pair,
Tunnel, ZCPacketStream,
},
};
use super::{
packet::{build_rpc_packet, PacketMerger},
service_registry::ServiceRegistry,
RpcController, Transport,
};
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct PacketMergerKey {
from_peer_id: PeerId,
rpc_desc: RpcDescriptor,
transaction_id: i64,
}
pub struct Server {
registry: Arc<ServiceRegistry>,
mpsc: Mutex<Option<MpscTunnel<Box<dyn Tunnel>>>>,
transport: Mutex<Transport>,
tasks: Arc<Mutex<JoinSet<()>>>,
packet_mergers: Arc<DashMap<PacketMergerKey, PacketMerger>>,
}
impl Server {
pub fn new() -> Self {
Server::new_with_registry(Arc::new(ServiceRegistry::new()))
}
pub fn new_with_registry(registry: Arc<ServiceRegistry>) -> Self {
let (ring_a, ring_b) = create_ring_tunnel_pair();
Self {
registry,
mpsc: Mutex::new(Some(MpscTunnel::new(ring_a))),
transport: Mutex::new(MpscTunnel::new(ring_b)),
tasks: Arc::new(Mutex::new(JoinSet::new())),
packet_mergers: Arc::new(DashMap::new()),
}
}
pub fn registry(&self) -> &ServiceRegistry {
&self.registry
}
pub fn get_transport_sink(&self) -> MpscTunnelSender {
self.transport.lock().unwrap().get_sink()
}
pub fn get_transport_stream(&self) -> Pin<Box<dyn ZCPacketStream>> {
self.transport.lock().unwrap().get_stream()
}
pub fn run(&self) {
let tasks = self.tasks.clone();
join_joinset_background(tasks.clone(), "rpc server".to_string());
let mpsc = self.mpsc.lock().unwrap().take().unwrap();
let packet_merges = self.packet_mergers.clone();
let reg = self.registry.clone();
let t = tasks.clone();
tasks.lock().unwrap().spawn(async move {
let mut mpsc = mpsc;
let mut rx = mpsc.get_stream();
while let Some(packet) = rx.next().await {
if let Err(err) = packet {
tracing::error!(?err, "Failed to receive packet");
continue;
}
let packet = match common::RpcPacket::decode(packet.unwrap().payload()) {
Err(err) => {
tracing::error!(?err, "Failed to decode packet");
continue;
}
Ok(packet) => packet,
};
if !packet.is_request {
tracing::warn!(?packet, "Received non-request packet");
continue;
}
let key = PacketMergerKey {
from_peer_id: packet.from_peer,
rpc_desc: packet.descriptor.clone().unwrap_or_default(),
transaction_id: packet.transaction_id,
};
let ret = packet_merges
.entry(key.clone())
.or_insert_with(PacketMerger::new)
.feed(packet);
match ret {
Ok(Some(packet)) => {
packet_merges.remove(&key);
t.lock().unwrap().spawn(Self::handle_rpc(
mpsc.get_sink(),
packet,
reg.clone(),
));
}
Ok(None) => {}
Err(err) => {
tracing::error!("Failed to feed packet to merger, {}", err.to_string());
}
}
}
});
let packet_mergers = self.packet_mergers.clone();
tasks.lock().unwrap().spawn(async move {
loop {
tokio::time::sleep(tokio::time::Duration::from_secs(5)).await;
packet_mergers.retain(|_, v| v.last_updated().elapsed().as_secs() < 10);
}
});
}
async fn handle_rpc_request(packet: RpcPacket, reg: Arc<ServiceRegistry>) -> Result<Bytes> {
let rpc_request = RpcRequest::decode(Bytes::from(packet.body))?;
let timeout_duration = std::time::Duration::from_millis(rpc_request.timeout_ms as u64);
let ctrl = RpcController {};
Ok(timeout(
timeout_duration,
reg.call_method(
packet.descriptor.unwrap(),
ctrl,
Bytes::from(rpc_request.request),
),
)
.await??)
}
async fn handle_rpc(sender: MpscTunnelSender, packet: RpcPacket, reg: Arc<ServiceRegistry>) {
let from_peer = packet.from_peer;
let to_peer = packet.to_peer;
let transaction_id = packet.transaction_id;
let trace_id = packet.trace_id;
let desc = packet.descriptor.clone().unwrap();
let mut resp_msg = RpcResponse::default();
let now = std::time::Instant::now();
let resp_bytes = Self::handle_rpc_request(packet, reg).await;
match &resp_bytes {
Ok(r) => {
resp_msg.response = r.clone().into();
}
Err(err) => {
resp_msg.error = Some(err.into());
}
};
resp_msg.runtime_us = now.elapsed().as_micros() as u64;
let packets = build_rpc_packet(
to_peer,
from_peer,
desc,
transaction_id,
false,
&resp_msg.encode_to_vec(),
trace_id,
);
for packet in packets {
if let Err(err) = sender.send(packet).await {
tracing::error!(?err, "Failed to send response packet");
}
}
}
pub fn inflight_count(&self) -> usize {
self.packet_mergers.len()
}
pub fn close(&self) {
self.transport.lock().unwrap().close();
}
}

View File

@@ -0,0 +1,109 @@
use std::sync::Arc;
use dashmap::DashMap;
use crate::proto::common::RpcDescriptor;
use crate::proto::rpc_types;
use crate::proto::rpc_types::descriptor::ServiceDescriptor;
use crate::proto::rpc_types::handler::{Handler, HandlerExt};
use super::RpcController;
#[derive(Clone, PartialEq, Eq, Debug, Hash)]
pub struct ServiceKey {
pub domain_name: String,
pub service_name: String,
pub proto_name: String,
}
impl From<&RpcDescriptor> for ServiceKey {
fn from(desc: &RpcDescriptor) -> Self {
Self {
domain_name: desc.domain_name.to_string(),
service_name: desc.service_name.to_string(),
proto_name: desc.proto_name.to_string(),
}
}
}
#[derive(Clone)]
struct ServiceEntry {
service: Arc<Box<dyn HandlerExt<Controller = RpcController>>>,
}
impl ServiceEntry {
fn new<H: Handler<Controller = RpcController>>(h: H) -> Self {
Self {
service: Arc::new(Box::new(h)),
}
}
async fn call_method(
&self,
ctrl: RpcController,
method_index: u8,
input: bytes::Bytes,
) -> rpc_types::error::Result<bytes::Bytes> {
self.service.call_method(ctrl, method_index, input).await
}
}
pub struct ServiceRegistry {
table: DashMap<ServiceKey, ServiceEntry>,
}
impl ServiceRegistry {
pub fn new() -> Self {
Self {
table: DashMap::new(),
}
}
pub fn register<H: Handler<Controller = RpcController>>(&self, h: H, domain_name: &str) {
let desc = h.service_descriptor();
let key = ServiceKey {
domain_name: domain_name.to_string(),
service_name: desc.name().to_string(),
proto_name: desc.proto_name().to_string(),
};
let entry = ServiceEntry::new(h);
self.table.insert(key, entry);
}
pub fn unregister<H: Handler<Controller = RpcController>>(
&self,
h: H,
domain_name: &str,
) -> Option<()> {
let desc = h.service_descriptor();
let key = ServiceKey {
domain_name: domain_name.to_string(),
service_name: desc.name().to_string(),
proto_name: desc.proto_name().to_string(),
};
self.table.remove(&key).map(|_| ())
}
pub fn unregister_by_domain(&self, domain_name: &str) {
self.table.retain(|k, _| k.domain_name != domain_name);
}
pub async fn call_method(
&self,
rpc_desc: RpcDescriptor,
ctrl: RpcController,
input: bytes::Bytes,
) -> rpc_types::error::Result<bytes::Bytes> {
let service_key = ServiceKey::from(&rpc_desc);
let method_index = rpc_desc.method_index as u8;
let entry = self
.table
.get(&service_key)
.ok_or(rpc_types::error::Error::InvalidServiceKey(
service_key.service_name.clone(),
service_key.proto_name.clone(),
))?
.clone();
entry.call_method(ctrl, method_index, input).await
}
}
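The registry above routes a call by hashing the descriptor into a `ServiceKey`. A minimal sketch of the same routing idea, using plain `fn` pointers instead of boxed async handlers (simplified stand-in, not the crate's API):

```rust
use std::collections::HashMap;

// Simplified two-field key; the real ServiceKey also carries proto_name.
#[derive(PartialEq, Eq, Hash)]
struct Key {
    domain_name: String,
    service_name: String,
}

struct Registry {
    table: HashMap<Key, fn(&[u8]) -> Vec<u8>>,
}

impl Registry {
    fn new() -> Self {
        Registry { table: HashMap::new() }
    }
    fn register(&mut self, domain: &str, service: &str, h: fn(&[u8]) -> Vec<u8>) {
        let key = Key { domain_name: domain.to_string(), service_name: service.to_string() };
        self.table.insert(key, h);
    }
    fn call(&self, domain: &str, service: &str, input: &[u8]) -> Option<Vec<u8>> {
        let key = Key { domain_name: domain.to_string(), service_name: service.to_string() };
        self.table.get(&key).map(|h| h(input))
    }
    // mirrors unregister_by_domain above: drop every service in a domain
    fn unregister_by_domain(&mut self, domain: &str) {
        self.table.retain(|k, _| k.domain_name != domain);
    }
}
```

An unknown key yields `None`, which corresponds to the `InvalidServiceKey` error path in the real `call_method`.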

View File

@@ -0,0 +1,245 @@
use std::{
sync::{atomic::AtomicU32, Arc, Mutex},
time::Duration,
};
use anyhow::Context as _;
use futures::{SinkExt as _, StreamExt};
use tokio::task::JoinSet;
use crate::{
common::join_joinset_background,
proto::rpc_types::{__rt::RpcClientFactory, error::Error},
tunnel::{Tunnel, TunnelConnector, TunnelListener},
};
use super::{client::Client, server::Server, service_registry::ServiceRegistry};
struct StandAloneServerOneTunnel {
tunnel: Box<dyn Tunnel>,
rpc_server: Server,
}
impl StandAloneServerOneTunnel {
pub fn new(tunnel: Box<dyn Tunnel>, registry: Arc<ServiceRegistry>) -> Self {
let rpc_server = Server::new_with_registry(registry);
StandAloneServerOneTunnel { tunnel, rpc_server }
}
pub async fn run(self) {
use tokio_stream::StreamExt as _;
let (tunnel_rx, tunnel_tx) = self.tunnel.split();
let (rpc_rx, rpc_tx) = (
self.rpc_server.get_transport_stream(),
self.rpc_server.get_transport_sink(),
);
let mut tasks = JoinSet::new();
tasks.spawn(async move {
let ret = tunnel_rx.timeout(Duration::from_secs(60));
tokio::pin!(ret);
while let Ok(Some(Ok(p))) = ret.try_next().await {
if let Err(e) = rpc_tx.send(p).await {
tracing::error!("tunnel_rx send to rpc_tx error: {:?}", e);
break;
}
}
tracing::info!("forward tunnel_rx to rpc_tx done");
});
tasks.spawn(async move {
let ret = rpc_rx.forward(tunnel_tx).await;
tracing::info!("rpc_rx forward tunnel_tx done: {:?}", ret);
});
self.rpc_server.run();
while let Some(ret) = tasks.join_next().await {
self.rpc_server.close();
tracing::info!("task done: {:?}", ret);
}
tracing::info!("all tasks done");
}
}
pub struct StandAloneServer<L> {
registry: Arc<ServiceRegistry>,
listener: Option<L>,
inflight_server: Arc<AtomicU32>,
tasks: Arc<Mutex<JoinSet<()>>>,
}
impl<L: TunnelListener + 'static> StandAloneServer<L> {
pub fn new(listener: L) -> Self {
StandAloneServer {
registry: Arc::new(ServiceRegistry::new()),
listener: Some(listener),
inflight_server: Arc::new(AtomicU32::new(0)),
tasks: Arc::new(Mutex::new(JoinSet::new())),
}
}
pub fn registry(&self) -> &ServiceRegistry {
&self.registry
}
pub async fn serve(&mut self) -> Result<(), Error> {
let tasks = self.tasks.clone();
let mut listener = self.listener.take().unwrap();
let registry = self.registry.clone();
join_joinset_background(tasks.clone(), "standalone server tasks".to_string());
listener
.listen()
.await
.with_context(|| "failed to listen")?;
let inflight_server = self.inflight_server.clone();
self.tasks.lock().unwrap().spawn(async move {
while let Ok(tunnel) = listener.accept().await {
let server = StandAloneServerOneTunnel::new(tunnel, registry.clone());
let inflight_server = inflight_server.clone();
inflight_server.fetch_add(1, std::sync::atomic::Ordering::Relaxed);
tasks.lock().unwrap().spawn(async move {
server.run().await;
inflight_server.fetch_sub(1, std::sync::atomic::Ordering::Relaxed);
});
}
panic!("standalone server listener exit");
});
Ok(())
}
pub fn inflight_server(&self) -> u32 {
self.inflight_server
.load(std::sync::atomic::Ordering::Relaxed)
}
}
struct StandAloneClientOneTunnel {
rpc_client: Client,
tasks: Arc<Mutex<JoinSet<()>>>,
error: Arc<Mutex<Option<Error>>>,
}
impl StandAloneClientOneTunnel {
pub fn new(tunnel: Box<dyn Tunnel>) -> Self {
let rpc_client = Client::new();
let (mut rpc_rx, rpc_tx) = (
rpc_client.get_transport_stream(),
rpc_client.get_transport_sink(),
);
let tasks = Arc::new(Mutex::new(JoinSet::new()));
let (mut tunnel_rx, mut tunnel_tx) = tunnel.split();
let error_store = Arc::new(Mutex::new(None));
let error = error_store.clone();
tasks.lock().unwrap().spawn(async move {
while let Some(p) = rpc_rx.next().await {
match p {
Ok(p) => {
if let Err(e) = tunnel_tx
.send(p)
.await
.with_context(|| "failed to send packet")
{
*error.lock().unwrap() = Some(e.into());
}
}
Err(e) => {
*error.lock().unwrap() = Some(anyhow::Error::from(e).into());
}
}
}
*error.lock().unwrap() = Some(anyhow::anyhow!("rpc_rx next exit").into());
});
let error = error_store.clone();
tasks.lock().unwrap().spawn(async move {
while let Some(p) = tunnel_rx.next().await {
match p {
Ok(p) => {
if let Err(e) = rpc_tx
.send(p)
.await
.with_context(|| "failed to send packet")
{
*error.lock().unwrap() = Some(e.into());
}
}
Err(e) => {
*error.lock().unwrap() = Some(anyhow::Error::from(e).into());
}
}
}
*error.lock().unwrap() = Some(anyhow::anyhow!("tunnel_rx next exit").into());
});
rpc_client.run();
StandAloneClientOneTunnel {
rpc_client,
tasks,
error: error_store,
}
}
pub fn take_error(&self) -> Option<Error> {
self.error.lock().unwrap().take()
}
}
pub struct StandAloneClient<C: TunnelConnector> {
connector: C,
client: Option<StandAloneClientOneTunnel>,
}
impl<C: TunnelConnector> StandAloneClient<C> {
pub fn new(connector: C) -> Self {
StandAloneClient {
connector,
client: None,
}
}
async fn connect(&mut self) -> Result<Box<dyn Tunnel>, Error> {
Ok(self.connector.connect().await.with_context(|| {
format!(
"failed to connect to server: {:?}",
self.connector.remote_url()
)
})?)
}
pub async fn scoped_client<F: RpcClientFactory>(
&mut self,
domain_name: String,
) -> Result<F::ClientImpl, Error> {
let mut c = self.client.take();
let error = c.as_ref().and_then(|c| c.take_error());
if c.is_none() || error.is_some() {
tracing::info!("reconnect due to error: {:?}", error);
let tunnel = self.connect().await?;
c = Some(StandAloneClientOneTunnel::new(tunnel));
}
self.client = c;
Ok(self
.client
.as_ref()
.unwrap()
.rpc_client
.scoped_client::<F>(1, 1, domain_name))
}
}

View File

@@ -0,0 +1,57 @@
//! Utility functions used by generated code; this is *not* part of the crate's public API!
use bytes;
use prost;
use super::controller;
use super::descriptor;
use super::descriptor::ServiceDescriptor;
use super::error;
use super::handler;
use super::handler::Handler;
/// Efficiently decode a particular message type from a byte buffer.
pub fn decode<M>(buf: bytes::Bytes) -> error::Result<M>
where
M: prost::Message + Default,
{
let message = prost::Message::decode(buf)?;
Ok(message)
}
/// Efficiently encode a particular message into a byte buffer.
pub fn encode<M>(message: M) -> error::Result<bytes::Bytes>
where
M: prost::Message,
{
let len = prost::Message::encoded_len(&message);
let mut buf = ::bytes::BytesMut::with_capacity(len);
prost::Message::encode(&message, &mut buf)?;
Ok(buf.freeze())
}
pub async fn call_method<H, I, O>(
handler: H,
ctrl: H::Controller,
method: <H::Descriptor as descriptor::ServiceDescriptor>::Method,
input: I,
) -> super::error::Result<O>
where
H: handler::Handler,
I: prost::Message,
O: prost::Message + Default,
{
type Error = super::error::Error;
let input_bytes = encode(input)?;
let ret_msg = handler.call(ctrl, method, input_bytes).await?;
decode(ret_msg)
}
pub trait RpcClientFactory: Clone + Send + Sync + 'static {
type Descriptor: ServiceDescriptor + Default;
type ClientImpl;
type Controller: controller::Controller;
fn new(
handler: impl Handler<Descriptor = Self::Descriptor, Controller = Self::Controller>,
) -> Self::ClientImpl;
}

View File

@@ -0,0 +1,18 @@
pub trait Controller: Send + Sync + 'static {
fn timeout_ms(&self) -> i32 {
5000
}
fn set_timeout_ms(&mut self, _timeout_ms: i32) {}
fn set_trace_id(&mut self, _trace_id: i32) {}
fn trace_id(&self) -> i32 {
0
}
}
#[derive(Debug)]
pub struct BaseController {}
impl Controller for BaseController {}
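`BaseController` accepts all the defaults; a caller that wants a per-call timeout or trace id overrides them. A sketch of such an implementor (the trait is reproduced so the example compiles standalone; `TimeoutController` is hypothetical, not a type from the crate):

```rust
// Reproduced from the module above so this sketch is self-contained.
pub trait Controller: Send + Sync + 'static {
    fn timeout_ms(&self) -> i32 {
        5000
    }
    fn set_timeout_ms(&mut self, _timeout_ms: i32) {}
    fn set_trace_id(&mut self, _trace_id: i32) {}
    fn trace_id(&self) -> i32 {
        0
    }
}

// Hypothetical controller carrying mutable per-call settings.
pub struct TimeoutController {
    timeout_ms: i32,
    trace_id: i32,
}

impl TimeoutController {
    pub fn new() -> Self {
        Self { timeout_ms: 5000, trace_id: 0 }
    }
}

impl Controller for TimeoutController {
    fn timeout_ms(&self) -> i32 {
        self.timeout_ms
    }
    fn set_timeout_ms(&mut self, t: i32) {
        self.timeout_ms = t;
    }
    fn trace_id(&self) -> i32 {
        self.trace_id
    }
    fn set_trace_id(&mut self, t: i32) {
        self.trace_id = t;
    }
}
```

The client's `call` reads `ctrl.timeout_ms()` to bound the RPC and `ctrl.trace_id()` to tag the packets, so overriding these is how a caller tunes an individual request.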

View File

@@ -0,0 +1,50 @@
//! Traits for defining generic service descriptor definitions.
//!
//! These traits are built on the assumption that some form of code generation is being used (e.g.
//! using only `&'static str`s) but it's of course possible to implement these traits manually.
use std::any;
use std::fmt;
/// A descriptor for an available RPC service.
pub trait ServiceDescriptor: Clone + fmt::Debug + Send + Sync {
/// The associated type of method descriptors.
type Method: MethodDescriptor + fmt::Debug + TryFrom<u8>;
/// The name of the service, used in Rust code and perhaps for human readability.
fn name(&self) -> &'static str;
/// The raw protobuf name of the service.
fn proto_name(&self) -> &'static str;
/// The package name of the service.
fn package(&self) -> &'static str {
""
}
/// All of the available methods on the service.
fn methods(&self) -> &'static [Self::Method];
}
/// A descriptor for a method available on an RPC service.
pub trait MethodDescriptor: Clone + Copy + fmt::Debug + Send + Sync {
/// The name of the method, used in Rust code and perhaps for human readability.
fn name(&self) -> &'static str;
/// The raw protobuf name of the method.
fn proto_name(&self) -> &'static str;
/// The Rust `TypeId` for the input that this method accepts.
fn input_type(&self) -> any::TypeId;
/// The raw protobuf name for the input type that this method accepts.
fn input_proto_type(&self) -> &'static str;
/// The Rust `TypeId` for the output that this method produces.
fn output_type(&self) -> any::TypeId;
/// The raw protobuf name for the output type that this method produces.
fn output_proto_type(&self) -> &'static str;
/// The index of the method in the service descriptor.
fn index(&self) -> u8;
}
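The `Method: TryFrom<u8>` bound is what lets a wire-level method index be turned back into a typed method (this is how `get_method_from_index` dispatches). A hand-written sketch of such a method enum (`GreetMethod` and its variants are illustrative, not from the crate):

```rust
// Hypothetical method enum; the discriminant doubles as the wire index.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum GreetMethod {
    SayHello = 0,
    SayBye = 1,
}

impl GreetMethod {
    // Forward direction: typed method -> u8 index for the RpcDescriptor.
    pub fn index(&self) -> u8 {
        *self as u8
    }
}

impl TryFrom<u8> for GreetMethod {
    type Error = u8;
    // Reverse direction: wire index -> typed method, rejecting unknowns
    // (the failure maps to Error::InvalidMethodIndex in the real code).
    fn try_from(v: u8) -> Result<Self, u8> {
        match v {
            0 => Ok(GreetMethod::SayHello),
            1 => Ok(GreetMethod::SayBye),
            other => Err(other),
        }
    }
}
```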

View File

@@ -0,0 +1,34 @@
//! Error type definitions for errors that can occur during RPC interactions.
use std::result;
use prost;
use thiserror;
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Execution error: {0}")]
ExecutionError(#[from] anyhow::Error),
#[error("Decode error: {0}")]
DecodeError(#[from] prost::DecodeError),
#[error("Encode error: {0}")]
EncodeError(#[from] prost::EncodeError),
#[error("Invalid method index: {0}, service: {1}")]
InvalidMethodIndex(u8, String),
#[error("Invalid service name: {0}, proto name: {1}")]
InvalidServiceKey(String, String),
#[error("Invalid packet: {0}")]
MalformatRpcPacket(String),
#[error("Timeout: {0}")]
Timeout(#[from] tokio::time::error::Elapsed),
#[error("Tunnel error: {0}")]
TunnelError(#[from] crate::tunnel::TunnelError),
}
pub type Result<T> = result::Result<T, Error>;

View File

@@ -0,0 +1,67 @@
//! Traits for defining generic RPC handlers.
use super::{
controller::Controller,
descriptor::{self, ServiceDescriptor},
};
use bytes;
/// An implementation of a specific RPC handler.
///
/// This can be an actual implementation of a service, or something that will send a request over
/// a network to fulfill a request.
#[async_trait::async_trait]
pub trait Handler: Clone + Send + Sync + 'static {
/// The service descriptor for the service whose requests this handler can handle.
type Descriptor: descriptor::ServiceDescriptor + Default;
type Controller: super::controller::Controller;
/// Perform a raw call to the specified service and method.
async fn call(
&self,
ctrl: Self::Controller,
method: <Self::Descriptor as descriptor::ServiceDescriptor>::Method,
input: bytes::Bytes,
) -> super::error::Result<bytes::Bytes>;
fn service_descriptor(&self) -> Self::Descriptor {
Self::Descriptor::default()
}
fn get_method_from_index(
&self,
index: u8,
) -> super::error::Result<<Self::Descriptor as descriptor::ServiceDescriptor>::Method> {
let desc = self.service_descriptor();
<Self::Descriptor as descriptor::ServiceDescriptor>::Method::try_from(index)
.map_err(|_| super::error::Error::InvalidMethodIndex(index, desc.name().to_string()))
}
}
#[async_trait::async_trait]
pub trait HandlerExt: Send + Sync + 'static {
type Controller;
async fn call_method(
&self,
ctrl: Self::Controller,
method_index: u8,
input: bytes::Bytes,
) -> super::error::Result<bytes::Bytes>;
}
#[async_trait::async_trait]
impl<C: Controller, T: Handler<Controller = C>> HandlerExt for T {
type Controller = C;
async fn call_method(
&self,
ctrl: Self::Controller,
method_index: u8,
input: bytes::Bytes,
) -> super::error::Result<bytes::Bytes> {
let method = self.get_method_from_index(method_index)?;
self.call(ctrl, method, input).await
}
}

View File

@@ -0,0 +1,5 @@
pub mod __rt;
pub mod controller;
pub mod descriptor;
pub mod error;
pub mod handler;

Some files were not shown because too many files have changed in this diff.