This project page is still under migration. Not all blogs are here.
Please go to https://huakunshen.super.site/ to see a full list of my blogs.
Cloud Run is a good serverless solution for deploying Docker containers. It's easy to deploy and scale. Normally, if you can run a Docker container in your local dev environment, you can deploy it to Cloud Run directly without much extra configuration.
However, in my experience there is one thing that is easy to waste a lot of time on: mounting a secret file into the container.
Cloud Run lets you add environment variables, but only one at a time. That's inconvenient if you have many environment variables, or if you need to change them often.
My solution is to add the content of the .env file to Secret Manager in GCP, mount the secret file into the container, and then load the .env file in the source code. This way I can update all env vars at once by creating a new version of the secret.
With a Docker volume, we can mount a single file easily, like this: docker run -v ./secret.env:/app/.env image-name.
However, in Cloud Run it's not that easy. If you try to configure the volume the same way Docker does, your container will fail to start.
Here is the process to mount a secret file to Cloud Run: create a secret containing the .env file, then in the service's Container(s) tab, under VOLUME MOUNTS, add a volume mount. The mount path points to the folder where the secret file is mounted, but that folder has to be empty or non-existent in your source code. Cloud Run doesn't allow mounting a single file; the mounted folder will replace the folder in your source code, because the mounted folder is protected by GCP.
If your source code is in /xc-server and the mount path is set to /xc-server with the mounted file at /xc-server/.env, then the /xc-server folder will be completely wiped and contain only the .env file.
What I do is mount the secret at /xc-server/env/.env, then in the source code load the .env file from /xc-server/env/.env.
When I tried to run Prisma inside a Docker container, I got this error:
Error opening a TLS connection: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:../ssl/statem/statem_clnt.c:1919: (unable to get local issuer certificate)
The Docker image I used was oven/bun, but I believe this error can happen with any Docker image that doesn't install ca-certificates.
The solution is simple: add the following to the Dockerfile.
RUN apt update && apt install -y ca-certificates
https://packages.debian.org/sid/ca-certificates
Contains the certificate authorities shipped with Mozilla's browser to allow SSL-based applications to check for the authenticity of SSL connections.
Browsers like Chrome or Firefox ship with built-in trusted certificate authorities, which lets them verify the authenticity of SSL connections. When your Prisma connection requires SSL, you likewise have to install ca-certificates so that SSL-based applications can verify the authenticity of their connections.
My project CrossCopy has been using the following tech stack.
Recently I spent a few days migrating from this stack to the following stack.
This makes the project much easier to develop and maintain. This blog is copied from our dev docs and may be useful for people interested in using Bun in their projects.
This migration introduces huge breaking changes that improve development experience and performance.
This blog is a must-read: there are many places to be aware of, otherwise you may not be able to run the project.
Here are the PRs in our two main monorepo repositories for server and clients:
Many changes were made in the @crosscopy/core
and @crosscopy/graphql-schema
repos, but for simplicity they don't have separate PRs. Check the associated commits in the two PRs above if you really want the details.
After tons of research and experimentation, I decided to make this huge refactoring and toolchain migration. Here are the main changes:
- Import TypeScript directly with Bun. Our @crosscopy/core and @crosscopy/graphql packages previously had to be built from TypeScript into JavaScript before they could be imported by another project. This is complicated during development, especially since the two libraries are only used by ourselves. Bun allows us to import TypeScript directly using relative paths, so we no longer need to build them. Before, after making any change in @crosscopy/core, I had to manually compile it again before the server that imports it could see the change. Now I see changes immediately: a dev server started with bun --watch picks up changes from its dependencies automatically, even though they live in another package, with no server restart needed.
- Drop dotenv. We used the dotenv package to programmatically load environment variables from a .env file. Since Bun loads the .env file natively, we no longer need the dotenv package. See https://bun.sh/guides/runtime/set-env and https://bun.sh/guides/runtime/read-env.
- Automatic .env loading in scripts. When you run bun run <script> or bunx vitest, Bun automatically loads the .env file in the current directory, making testing and development easier. This is why the test script in package.json for many packages has been changed to something like bunx vitest or bun run test. Sometimes npm run test won't work because it doesn't load the .env file.
- pnpm for package management. bun is not only a runtime but also a package manager, and it's the fastest I've ever seen: faster than npm, yarn, and pnpm. During the migration to the Bun runtime I initially used Bun to install packages and do package management. I changed my plan when I started migrating the clients' CI/CD to Bun, because Bun currently only works on macOS and Linux, not Windows. Our future Windows client will have to be built on Windows in CI/CD, and our development environment should support Windows, although I personally use macOS and Linux all the time.
- Using npm for package management is no longer possible, because Bun workspaces use slightly different syntax from npm workspaces in package.json. npm workspaces use the "@crosscopy/core": "*" syntax, while Bun workspaces use "@crosscopy/core": "workspace:*". The extra workspace: prefix is required and prevents npm workspaces from working with Bun workspaces; i.e. npm package management and the Bun runtime simply won't work together in the same monorepo.
- This is where pnpm comes to the rescue. pnpm ranks first among monorepo tools in the 2022 State of JS survey (https://2022.stateofjs.com/en-US/libraries/monorepo-tools/); Turborepo is second, and we use them together for our monorepo management. pnpm workspaces use the same workspace:* prefix as Bun workspaces for within-monorepo dependencies, making it possible to use pnpm for package management and Bun as the runtime in the same monorepo. Perfect, problem solved.
- Migrated the crosscopy-dev repo from Nx to Turborepo. I simply had more bugs and issues with Nx. The crosscopy-clients repo uses Turborepo and has a better experience, so I decided to migrate the crosscopy-dev repo to Turborepo as well. Turborepo also ranks higher than Nx among monorepo tools in the 2022 State of JS survey, with 10% higher retention and 14% higher interest.
- Commands. bun dev starts the development server, bun run test runs tests, bun run build builds TypeScript into JavaScript, and bun run runs any script in package.json.
- Running pnpm run build and pnpm run test in the root of the monorepo uses Turborepo to build all packages in the monorepo; I've configured everything to work. If you need to run tests in subpackages, use bun run test, bunx vitest, or bunx jest, as I didn't write code to load the .env file; using a Bun command does that for us even if the test still uses Node.js under the hood. As long as Bun is the initial command, .env is loaded.
- Check .github/workflows to see which commands CI uses to build the repo. If CI works, your local environment should work too, if configured correctly.

Here is a drag-and-drop wrapper component I wrote for Svelte.
It's important to note that, in order for the drop event to fire properly, we also need to call preventDefault() on the dragover event.
<script lang="ts">
import { createEventDispatcher } from 'svelte';
const dispatch = createEventDispatcher();
function onDrop(e: DragEvent) {
e.preventDefault();
e.stopPropagation();
dispatch('drop', e);
}
function onDragOver(e: DragEvent) {
e.preventDefault();
dispatch('dragover', e);
}
</script>
<span
class={$$props.class}
on:dragenter
on:drop={onDrop}
on:dragleave
on:dragover={onDragOver}
on:dragend
on:click
role="none"
>
<slot />
</span>
Here is an example of using the component. The UI requires Tailwind and the Skeleton library to render properly. Just read the code to get the idea; it's quite simple to use. DragNDrop
is a wrapper: you set its size and styles, and its region becomes file-droppable.
<script lang="ts">
import DragNDrop from './drag-n-drop.svelte';
function onDrop(e: CustomEvent<DragEvent>) {
console.log(e.detail.dataTransfer?.files);
}
let isOver = false;
let fileInputEle: HTMLInputElement;
</script>
<DragNDrop
class="flex flex-col justify-center items-center space-y-6 card {$$props.class} cursor-pointer"
on:click={() => {
fileInputEle.click();
}}
on:drop={onDrop}
on:dragleave={() => {
isOver = false;
}}
on:dragenter={() => {
isOver = true;
}}
>
<iconify-icon class="text-6xl" icon="ic:round-upload" />
{#if isOver}
<span class="text-lg"><strong>Drop the file</strong></span>
{:else}
<span class="text-lg"><strong>Upload a file</strong> or drag and drop</span>
{/if}
<input
type="file"
class="hidden"
multiple
bind:this={fileInputEle}
on:change={(e) => {
// console.log(e.target);
}}
/>
</DragNDrop>
Let's say you have a local network with subnet 192.168.0.0/24
, the router is at 192.168.0.1
, and you have a VM running at 192.168.0.2
(the exact address doesn't matter here).
I want to give a teammate access to the VM without allowing the VM to access any other devices on the same subnet.
A VLAN is one way to do it, but it's too complicated and requires extra hardware and software.
The easiest way is to rely on the VM's firewall; just don't give the guest user sudo access.
sudo ufw enable
sudo ufw allow out to 192.168.0.1
sudo ufw deny out to 192.168.0.0/24
ufw allow out to 192.168.0.1
allows traffic to the router; without it the VM would not be able to reach the external network.
The ufw deny out to 192.168.0.0/24
rule must be added after ufw allow out to 192.168.0.1
: as in iptables
, the rules form a chain, and if the deny
rule comes first, traffic to the router is blocked and the allow
rule is never reached.
The prepend
keyword can be used to raise a rule's priority: ufw prepend deny out to 192.168.0.0/24
.
The following script detects all submodules, cds
into each one, and commits all changes.
It's good for note-taking submodules that don't require a very meaningful commit message,
and saves the trouble of committing every submodule separately.
Save the script as a shell script file update-all.sh
and run it before every commit to the main repo.
#!/bin/bash
cwd=$PWD
# List the paths of all submodules
submodulePaths=$(git submodule | awk '{$1=$1;print}' | cut -d ' ' -f 2)
function git_add_commit_update {
    # Only commit and push when there is something to commit
    nothing=$(git status | grep 'nothing to commit')
    if [[ -z $nothing ]]; then
        git add .
        git commit -m "Auto (Update Module)"
        git push
    fi
}
for path in $submodulePaths; do
    cd "$path" || continue
    echo "$path"
    # A submodule in detached HEAD state has to be put back on a
    # branch before its changes can be committed
    detached_head=$(git branch | grep 'HEAD detached')
    if [[ -z $detached_head ]]; then
        git_add_commit_update
    else
        git stash
        git checkout master
        git stash pop
        git_add_commit_update
    fi
    cd "$cwd" || exit
done
Bundler Artifacts has a sample CI and config script.
Cross-Platform Compilation has a sample GitHub Actions CI script for cross-platform compilation (Windows, macOS, and Linux). Compiled files are stored as artifacts in a draft GitHub release, and the release assets will be read by the updater server for auto-update.
For updater to work, a public key is required.
"updater": {
"active": true,
"endpoints": [
"https://releases.myapp.com/{{target}}/{{current_version}}"
],
"dialog": true,
"pubkey": "YOUR_UPDATER_SIGNATURE_PUBKEY_HERE"
}
A pair of keys can be generated with tauri signer generate -w ~/.tauri/ezup.key
.
If the updater is configured, then the private key and password environment variables must be set.
The following script automatically loads the private key into an environment variable, assuming the password is an empty string.
#!/usr/bin/env bash
PRIVATE_KEY_PATH="$HOME/.tauri/ezup.key";
if test -f "$PRIVATE_KEY_PATH"; then
export TAURI_PRIVATE_KEY=$(cat ~/.tauri/ezup.key); # if the private key is stored on disk
export TAURI_KEY_PASSWORD="";
else
echo "Warning: Private Key File Not Found";
fi
In GitHub Actions, environment variables can be set like this at the top level of the yml file.
env:
TAURI_PRIVATE_KEY: ${{ secrets.TAURI_PRIVATE_KEY }}
TAURI_KEY_PASSWORD: ${{ secrets.TAURI_KEY_PASSWORD }}
I encountered an error during compilation on Ubuntu.
Error: thread '<unnamed>' panicked at 'Can't detect any appindicator library', src/build.rs:326:17
I found a solution in this issue: install libayatana-appindicator3-1-dev
with apt on Ubuntu.
vercel/hazel is an update server for Electron that can be deployed on Vercel in a few clicks.
lemarier/tauri-update-server forks vercel/hazel for Tauri.
I forked lemarier/tauri-update-server as HuakunShen/tauri-update-server.
The reason I made a fork is that new updates had been made in vercel/hazel, and I merged the new commits into lemarier/tauri-update-server.
With one click, an update server can be deployed on Vercel.
See EzUp and HuakunShen/tauri-ezup-updater for an example.
The former is the actual Tauri app; the latter is the corresponding update server.
I encountered a weird problem with macOS storage.
I wanted to install a macOS VM with Parallels Desktop, but was unable to. The error message said I didn't have enough space on disk (at least 49.13 GB needed), but the storage panel in System Preferences told me I had more than 500 GB of space, and I had just cleaned the storage.
Although Preferences says I have space, other places give different results.
df -h .
says my disk usage is 97%.
In Disk Utility, my disk usage is displayed as 942.28/994.66 GB.
Right click on the disk -> Get Info
: the purgeable space section takes ~500 GB.
Usually macOS should handle this: space should be freed when needed, but not here.
macOS doesn't seem to provide an option to manually purge space, so I had to use third-party software.
How stupid is this?
With DaisyDisk, you can find the purgeable space visually. Drag it to the button at the bottom and delete; the purgeable space will then be purged.
First of all, I have to say I am very disappointed with AWS's documentation. They do have a lot of documentation and sample code, but I was still unable to find what I was looking for (easily).
I was working on a project that requires using Rust to upload files to AWS S3. I wanted to use the REST API, but could not find enough information in the documentation. There is no sample code, or something like a Postman API doc that lets you generate client code from a REST API.
For example, in this API doc on PutObject, https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html:
Authorization: authorization string
doesn't mean anything to me. I have an access key and secret, and there must be a way to derive this authorization string.
I am pretty sure it exists, and must be somewhere in the docs; I just couldn't find it. Putting a link in the documentation isn't hard,
and it saves people from searching through your entire documentation.
The purpose of the World Wide Web is to link things together, not to make people look things up separately and assemble them in their heads.
Then I switched to the Rust SDK. It has plenty of documentation and sample code, but I got stuck on one problem for a long time.
Again, authorization. The documentation and sample code always assume you have the same scenario as they do.
They assume you have a ~/.aws/credentials
file with your access key id and secret.
Sample code always loads credentials automatically from default locations or environment variables, which is fine for a server application.
For client-side software, this doesn't hold. I need to explicitly pass credentials to a function to generate a client.
This is possible and documented for both the Python and Node.js versions of the docs, but not for Rust.
I had to go through so much documentation and sample code to figure out this naive thing: a function from another Rust crate (package), aws-types, has to be used.
Basically, there are many different ways to produce credentials and a client; but for someone without prior knowledge of this nasty design, there is no way to know in which package to find the needed method. If you decide to put things in different packages, then at least provide an obvious link somewhere to indicate "you have the option to do such-and-such; read the docs here".
Reading the AWS docs (Rust) feels like browsing information scattered everywhere and trying to assemble it in my head. Without enough prior knowledge, it's not easy to get things done quickly.
The hardcoded-credentials
feature needs to be enabled, which means enabling it in Cargo.toml
: aws-types = { version = "0.52.0", features = ["hardcoded-credentials"] }
. Say "in Cargo.toml
", please: a feature could be enabled by anything (environment variables, parameters, configuration files), and nothing tells you this is a Cargo.toml
feature. When I google "AWS s3 python client credential loading", the first link gives me what I need: Passing credentials as parameters. It took me 10 seconds to find the answer.
For Node.js, it took me ~10 minutes of finding docs and examples everywhere. This is how I eventually found the solution: pass a credentials object with
accessKeyId, expiration, secretAccessKey, sessionToken
. This is no different from browsing source code. It's important that developers have the ability to read source code and API docs, but that doesn't mean docs providers don't need to provide easy access to the most basic functionality.
At least I could figure out the Node.js solution within 20 minutes. It took me a few hours to figure out the Rust solution.
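For reference, here is the kind of snippet I wish the docs had linked up front: explicitly passing credentials when constructing a client with the AWS SDK for JavaScript v3. The region, keys, and bucket name below are placeholders; a real application would read them from its own secure configuration rather than hardcoding them.

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Construct a client with explicit credentials instead of relying on
// ~/.aws/credentials or environment variables.
const client = new S3Client({
  region: "us-east-1", // placeholder region
  credentials: {
    accessKeyId: "YOUR_ACCESS_KEY_ID", // placeholder
    secretAccessKey: "YOUR_SECRET_ACCESS_KEY", // placeholder
    // sessionToken is optional, only needed for temporary credentials
  },
});

// Uploading then works the same as with implicitly-loaded credentials.
await client.send(
  new PutObjectCommand({
    Bucket: "my-bucket", // placeholder bucket name
    Key: "hello.txt",
    Body: "hello world",
  }),
);
```

The Rust SDK's equivalent (Credentials built in code, behind the hardcoded-credentials feature) is what the rest of this post was chasing.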
The relevant docs are scattered across aws.amazon.com
, github.com
, and external websites like S3 Client - AWS SDK for JavaScript v3 (and it's different for every language).