This project page is still under migration. Not all blogs are here.
Please go to https://huakunshen.super.site/ to see a full list of my blogs.
Let's say you have a local network with subnet 192.168.0.0/24, the router is on 192.168.0.1, and you have a VM running on 192.168.0.2 (the exact address doesn't matter here).
I want to give a teammate access to the VM without allowing the VM to reach any other device on the same subnet.
A VLAN is one way to do it, but that is too complicated and requires hardware and software support.
The easiest way is to rely on the VM's own firewall; just don't give the guest user sudo access, so they can't change the rules.
sudo ufw enable
sudo ufw allow out to 192.168.0.1
sudo ufw deny out to 192.168.0.0/24
ufw allow out to 192.168.0.1 allows traffic to the router; without it, the VM cannot reach any external network.
The ufw deny out to 192.168.0.0/24 rule must be added after ufw allow out to 192.168.0.1, because ufw rules are evaluated in order, like a chain in iptables: if the deny rule came first, traffic to the router would be blocked and the allow rule would never be reached.
The prepend keyword can be used to move a rule to the front of the chain, e.g. ufw prepend deny out to 192.168.0.0/24.
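To sanity-check the ordering, ufw can list rules in the order they are matched. A minimal check, assuming the three commands above were run on the VM:
# rules are matched top to bottom; the allow rule for 192.168.0.1
# must appear before the deny rule for 192.168.0.0/24
sudo ufw status numbered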
The following script detects all submodules, cds into each of them, and commits all changes.
It is good for note-taking submodules that don't require a very meaningful commit message, and saves the trouble of committing every submodule separately.
Save the script as a shell script file update-all.sh and run it before every commit to the main repo.
#!/bin/bash
cwd=$PWD
# list the path of every registered submodule
submodulePaths=$(git submodule | awk '{$1=$1;print}' | cut -d ' ' -f 2)

# commit and push only if the working tree has changes
function git_add_commit_update {
    nothing=$(git status | grep 'nothing to commit')
    if [[ -z $nothing ]]; then
        git add .
        git commit -m "Auto (Update Module)"
        git push
    fi
}

for path in $submodulePaths; do
    cd "$path" || continue
    detached_head=$(git branch | grep 'HEAD detached')
    echo "$path"
    if [[ -z $detached_head ]]; then
        git_add_commit_update
    else
        # detached HEAD: carry local changes over to master first
        git stash
        git checkout master
        git stash pop
        git_add_commit_update
    fi
    cd "$cwd" || exit
done
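A minimal usage sketch, assuming update-all.sh sits at the root of the main repo (the commit message is just a placeholder):
chmod +x update-all.sh   # only needed once
./update-all.sh          # commit and push every dirty submodule
git add .                # stage the updated submodule pointers in the main repo
git commit -m "Update submodules"
git push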
Bundler Artifacts has a sample CI and config script.
Cross-Platform Compilation has a sample GitHub Actions CI script for cross-platform compilation (Windows, macOS, and Linux). Compiled files are stored as artifacts in a draft GitHub release, and the release assets will be read by the updater server for auto-update.
For the updater to work, a public key is required in the updater section of tauri.conf.json:
"updater": {
"active": true,
"endpoints": [
"https://releases.myapp.com/{{target}}/{{current_version}}"
],
"dialog": true,
"pubkey": "YOUR_UPDATER_SIGNATURE_PUBKEY_HERE"
}
A pair of keys can be generated with tauri signer generate -w ~/.tauri/ezup.key.
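For reference, a sketch of what that command produces (the .pub suffix is Tauri's convention for the generated public key; double-check your own output):
# writes the private key to ~/.tauri/ezup.key and the public key to
# ~/.tauri/ezup.key.pub; paste the .pub contents into "pubkey" above
tauri signer generate -w ~/.tauri/ezup.key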
If the updater is configured, then the private key and password environment variables must be set when building.
The following script automatically loads the private key as an environment variable, assuming the password is an empty string.
#!/usr/bin/env bash
PRIVATE_KEY_PATH="$HOME/.tauri/ezup.key";
if test -f "$PRIVATE_KEY_PATH"; then
export TAURI_PRIVATE_KEY=$(cat ~/.tauri/ezup.key); # if the private key is stored on disk
export TAURI_KEY_PASSWORD="";
else
echo "Warning: Private Key File Not Found";
fi
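Because the script uses export, it has to be sourced into the current shell rather than executed; load-key.sh below is a placeholder name for wherever you saved it:
source ./load-key.sh                   # 'bash ./load-key.sh' would export into a subshell only
echo ${TAURI_PRIVATE_KEY:+key loaded}  # quick check that the variable is set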
In GitHub Actions, environment variables can be set like this at the top level of the YAML file.
env:
TAURI_PRIVATE_KEY: ${{ secrets.TAURI_PRIVATE_KEY }}
TAURI_KEY_PASSWORD: ${{ secrets.TAURI_KEY_PASSWORD }}
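The two secrets can be added in the repository settings UI, or with the GitHub CLI; a sketch assuming the key file generated above and an empty password:
gh secret set TAURI_PRIVATE_KEY < ~/.tauri/ezup.key
gh secret set TAURI_KEY_PASSWORD --body ""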
I encountered an error during compilation on Ubuntu:
Error: thread '<unnamed>' panicked at 'Can't detect any appindicator library', src/build.rs:326:17
I found a solution in this issue: install libayatana-appindicator3-1-dev with apt.
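On Ubuntu that is a one-liner (package name from the linked issue):
sudo apt install libayatana-appindicator3-1-dev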
vercel/hazel is an updater server for Electron that can be deployed in a few clicks on Vercel.
lemarier/tauri-update-server forks vercel/hazel for Tauri.
I forked lemarier/tauri-update-server as HuakunShen/tauri-update-server.
The reason I made a fork is that new updates were made in vercel/hazel, and I merged the new commits into lemarier/tauri-update-server.
With one click, an update server can be deployed on Vercel.
See EzUp and HuakunShen/tauri-ezup-updater for an example.
The former is the actual Tauri app; the latter is the corresponding update server.
I encountered a weird problem with macOS storage.
I wanted to install a macOS VM with Parallels Desktop, but was unable to. The error message said I didn't have enough space on disk (at least 49.13GB needed), yet storage displayed in System Preferences told me I had more than 500GB of space, and I had just cleaned the storage.
Although System Preferences said I had space, other places gave different results.
df -h . said my disk usage was 97%.
In Disk Utility, my disk usage was displayed as 942.28/994.66GB.
Right click on the disk -> Get Info: the purgeable space section takes ~500GB.
Usually macOS should handle this itself; purgeable space is supposed to be freed when needed, but not here.
macOS doesn't seem to provide an option to manually purge space, so I had to use third-party software.
How stupid is this?
With DaisyDisk, you can find the purgeable space visually, drag it to the button at the bottom, and delete it. Then the purgeable space is actually purged.
First of all, I have to say I am very disappointed with AWS's documentation. They do have a lot of documentation and sample code, but I was still unable to find what I was looking for (easily).
I was working on a project that requires using Rust to upload files to AWS S3. I wanted to use the REST API, but could not find enough information in the documentation. There is no sample code, nor anything like a Postman API doc that lets you generate client code from a REST API definition.
For example, in the API doc on PutObject (https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html), the header Authorization: authorization string doesn't mean anything to me. I have an access key and secret, so there must be a way to derive this authorization string from them.
I am pretty sure it exists and is documented somewhere; I just couldn't find it. Putting a link in the documentation isn't hard, and it saves people from combing through your entire documentation.
The purpose of the World Wide Web is to link things together, not to make readers look things up separately and assemble them in their heads.
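As an aside, that authorization string is an AWS Signature Version 4 of the request, and recent curl (7.75+) can compute it for you if you just want to exercise the REST API. A minimal sketch, assuming a hypothetical bucket my-bucket in us-east-1 and credentials in the usual environment variables:
# curl derives the Authorization header (SigV4) from the key pair,
# region, and service, and hashes the payload for x-amz-content-sha256
curl -T ./local-file \
  --user "$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY" \
  --aws-sigv4 "aws:amz:us-east-1:s3" \
  "https://my-bucket.s3.us-east-1.amazonaws.com/remote-name"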
Then I switched to the Rust SDK. It has plenty of documentation and sample code, but I got stuck on one problem for a long time.
Again, authorization. The documentation and sample code always assume you have the same scenario as they do: a ~/.aws/credentials file with your access key id and secret.
Sample code always loads credentials automatically from default locations or environment variables, which is fine for a server application.
For client-side software, this doesn't hold: I need to explicitly pass credentials to a function to generate a client.
This is possible and documented in both the Python and Node.js versions of the docs, but not for Rust.
I had to go through so much documentation and sample code to figure out this naive thing: a function from another Rust crate, aws-types, has to be used.
Basically, there are many different ways to produce credentials and a client, but for someone without prior knowledge of this nasty design, there is no way to know which package contains the method you need. If you decide to put things in different packages, then at least provide an obvious link somewhere to indicate "you have the option to do X, read the docs here".
Reading the AWS docs for Rust is like browsing scattered information everywhere and trying to assemble it in your head. Without enough prior knowledge, it's not easy to get things done quickly.
The hardcoded-credentials feature of aws-types needs to be enabled, which means enabling it in Cargo.toml:
aws-types = { version = "0.52.0", features = ["hardcoded-credentials"] }
A Cargo.toml feature, please. Features can be exposed through anything (environment variables, parameters, configuration files), but here it is a Cargo.toml feature, and nothing obvious points you to it.
When I google "AWS s3 python client credential loading", the first link gives me what I need: Passing credentials as parameters. It took me 10 seconds to find the answer.
For Node.js, it took me ~10 minutes of hunting through docs and examples everywhere before I eventually found the solution: the S3 client accepts a credentials object with accessKeyId, secretAccessKey, sessionToken, and expiration.
This is no different from browsing source code. It's important that developers are able to read source code and API docs, but that doesn't mean the docs provider doesn't need to provide easy access to the most basic functionality.
At least I could figure out the Node.js solution within 20 minutes; it took me a few hours to figure out the Rust one.
The information is scattered across aws.amazon.com, github.com, and external websites like S3 Client - AWS SDK for JavaScript v3 (and it's different for every language).
I ran into a problem related to SSH and got stuck for a few days. On a freshly installed Windows 11 PC, I tried to configure OpenSSH Server, which I have done a thousand times on Linux.
I could ssh into the Windows PC, but only with a password; key-based (i.e. passwordless) auth didn't work no matter what I did.
I added the authorized_keys file to C:\Users\username\.ssh, checked the file permissions, and even created another user account to compare.
None of this fixed the problem, and I couldn't find a good solution via Google or ChatGPT. BTW, ChatGPT kept giving me Linux-related solutions.
The solution is actually quite simple. I read the official doc by Microsoft: Key-based authentication (Administrative user).
It turns out Windows treats admin accounts differently. On Linux, regular accounts and sudoer accounts both use $HOME/.ssh/authorized_keys to store trusted public keys.
The reason for my failure was that I was trying to ssh into an admin account.
On Windows, an administrators_authorized_keys file should be placed under C:\ProgramData\ssh\ instead. The content format is exactly the same.
Solution verified to work.
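A quick way to confirm from the client that the key itself is accepted, with password fallback disabled so it can't mask a key failure (user@windows-host is a placeholder):
ssh -o PreferredAuthentications=publickey -o PasswordAuthentication=no user@windows-host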
Good Luck Hacking!
This is a very weird bug. I will describe the scenario first, then the hacky fix.
I built a new PC with a Gigabyte Z690 UD AX DDR4 motherboard. The WiFi 6 wireless adapter works fine and reaches 800+Mbps on my gigabit network, but the ethernet connection can only reach ~10Mbps.
This is super weird: the adapter is a "Realtek Gaming 2.5GbE Family Controller", so it should support 2.5G, and there is nothing wrong with the router.
From this debugging process, we know the hardware should be fine. The problem most likely comes from Windows, or a compatibility issue between Windows and the network adapter's driver.
The first time I got it working was by opening Device Manager, uninstalling the network adapter, and scanning for hardware changes.
After the scan, the network adapter is back and the network speed is fixed (900+Mbps).
But this process has to be repeated every time the computer boots.
The following PowerShell script automates the previous solution.
It requires Administrator permission to run.
Running it manually every time is no better than the previous solution, so it has to be auto-run at system start, e.g. with a Task Scheduler task that runs powershell <script-path> at logon with highest privileges.
The commented-out lines are for debugging, to check whether the script has actually run.
Edit the following script to fit your scenario, e.g. update deviceName and the log path.
# echo "start" | Out-File -Append C:\Users\user\Desktop\debug.log
# date | Out-File -Append C:\Users\user\Desktop\debug.log
# Start-Sleep -Seconds 5
$deviceName="Realtek Gaming 2.5GbE Family Controller"
foreach ($dev in (Get-PnpDevice | Where-Object{$_.Name -eq $deviceName})) {
&"pnputil" /remove-device $dev.InstanceId
}
pnputil /scan-devices
# echo "finish" | Out-File -Append C:\Users\user\Desktop\debug.log
powershell <script-path>
Restart the computer; the speed should now be fixed.
If it isn't, you may need to debug further.
Microsoft, Realtek, and Gigabyte: I don't know which of you is responsible for this bug, but it needs to be fixed.
A gaming motherboard can't game at 10Mbps!
I personally believe Microsoft/Windows should take the blame: everything works fine on Ubuntu, so the driver itself should be fine, and the fact that reinstalling the device fixes the problem suggests a wrong driver is being picked up (overriding the correct one).
This solution is hacky and inelegant, but I can't find a better one online.
Come on, Microsoft.