
· One min read
caution

This project page is still under migration. Not all blogs are here.

Please go to https://huakunshen.super.site/ to see a full list of my blogs.

· 2 min read
Huakun Shen

Cloud Run is a good serverless solution for deploying Docker containers. It's easy to deploy and scale. Normally, if you can run a Docker container in your local dev environment, you can deploy it to Cloud Run directly without much extra configuration.

However, in my experience there is one thing that's easy to waste a lot of time on: mounting a secret file into the container.

Volumes

Cloud Run lets you add environment variables, but only one at a time. That's inconvenient if you have many environment variables or need to change them often.

My solution is to add the contents of the .env file to Secret Manager in GCP, mount the secret file into the container, and load the .env file in the source code. This way I can update all env vars at once by creating a new version of the secret.

With Docker volumes, we can mount a single file easily, like this: docker run -v ./secret.env:/app/.env image-name.

However, in Cloud Run it's not that easy. If you try to configure the volume the same way Docker does it, your container will fail to start.

Here is the process to mount a secret file in Cloud Run:

  • Under the Volumes tab, add a volume of the secret type and choose a secret from Secret Manager.
  • The mount path can be a filename, such as .env.
  • Then go to the Container(s) tab; under VOLUME MOUNTS, add a volume mount.

The mount path points to the folder where the secret file is mounted, and that folder has to be empty or non-existent in your source code. Cloud Run doesn't allow mounting a single file: the mounted folder replaces the folder in your source code, because the mounted folder is managed (protected) by GCP.

For example, if your source code is in /xc-server and the mount path is set to /xc-server with the mounted file at /xc-server/.env, then the /xc-server folder will be completely replaced and contain only the .env file.

What I do instead is mount the secret at /xc-server/env/.env (so the mount path is the /xc-server/env folder, which doesn't exist in the image), then load the .env file from /xc-server/env/.env in the source code.
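To sketch the startup side of this, here is a minimal loader that reads the mounted file and merges it into the process environment. The path /xc-server/env/.env comes from the setup above; the hand-rolled parser is illustrative, and a real app would likely just use the dotenv package with a custom path option.

```typescript
// Minimal sketch: load env vars from the Cloud Run secret mount at startup.
// The parser below is simplified (no quoting/escaping support).
import * as fs from "fs";

function loadEnvFile(filePath: string): Record<string, string> {
  const vars: Record<string, string> = {};
  if (!fs.existsSync(filePath)) return vars; // tolerate local dev without the mount
  for (const line of fs.readFileSync(filePath, "utf-8").split("\n")) {
    const trimmed = line.trim();
    if (!trimmed || trimmed.startsWith("#")) continue; // skip blanks and comments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue; // skip malformed lines
    vars[trimmed.slice(0, eq)] = trimmed.slice(eq + 1);
  }
  return vars;
}

// Merge the mounted secret file into process.env without overriding
// variables that are already set.
for (const [k, v] of Object.entries(loadEnvFile("/xc-server/env/.env"))) {
  if (!(k in process.env)) process.env[k] = v;
}
```

Because the loader tolerates a missing file, the same code runs unchanged locally (where you might keep a plain .env elsewhere) and on Cloud Run.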

· One min read
Huakun Shen

When I tried to run Prisma inside a Docker container, I got this error:

Error opening a TLS connection: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:../ssl/statem/statem_clnt.c:1919: (unable to get local issuer certificate)

The Docker image I used was oven/bun, but I believe this error can happen with any Docker image that doesn't install ca-certificates.

The solution is simple: add the following to the Dockerfile.

RUN apt update && apt install -y ca-certificates

What is ca-certificates?

https://packages.debian.org/sid/ca-certificates

Contains the certificate authorities shipped with Mozilla's browser to allow SSL-based applications to check for the authenticity of SSL connections.

Browsers like Chrome or Firefox ship with built-in trusted certificate authorities, which they use to verify the authenticity of SSL connections. When your Prisma connection requires SSL, you have to install ca-certificates so that SSL-based applications inside the container can do the same verification.

· 9 min read
Huakun Shen

My project CrossCopy has been using the following tech stack

  • npm for package management
  • nodejs for JS runtime
  • Server: expressjs + nodejs
  • Web: Svelte + nodejs
  • Monorepo: nx monorepo + turborepo + npm workspaces
  • JS Module: CJS
  • API: GraphQL
  • Sync: GraphQL subscription

Recently I spent a few days migrating from this stack to the following stack:

  • pnpm + bun for package management
  • bun for JS runtime
  • Server: Nestjs + bun
  • Web: Svelte + bun
  • Monorepo: turborepo + (pnpm workspaces + bun workspaces)
    • bun and pnpm workspaces are pretty much the same
  • JS Module: ESM
  • API: GraphQL
  • Sync: SocketIO

This makes the project much easier to develop and maintain. This blog is copied from our dev docs and may be useful for people who are interested in using bun for their projects.

There are huge breaking changes in this migration, made to improve development experience and performance.

This blog is a must-read: there are many things to be aware of, otherwise you may not be able to run the project.

Here are the PRs in our two main monorepo repositories for server and clients:

Many changes were made in the @crosscopy/core and @crosscopy/graphql-schema repos, but for simplicity they don't have separate PRs. Check the associated commits in the 2 PRs above if you really want the details.

After tons of research and experimentation, I decided to make this huge refactoring and toolchain migration.

  • Migrate nodejs runtime to bun runtime (https://bun.sh/)
    • Bun is much, much faster than nodejs
    • For now I removed the subpath exports in package.json for @crosscopy/core and @crosscopy/graphql, which required building TypeScript into JavaScript first and then importing it in another project. That is complicated during development, especially since the 2 libraries are only used by ourselves. Bun allows us to import TypeScript directly using relative paths, so we no longer need to build them. Before, after making any change in @crosscopy/core, I had to recompile it manually before I could see the change in the server that imports it. Now I see the changes immediately. A dev server run with bun --watch picks up changes from its dependencies automatically, even though they live in another package; there is no need to restart the server, and the separate package is barely noticeable.
    • Bun runs TypeScript directly; there is no need to build TypeScript into JavaScript first, which makes everything much faster.
    • Environment Variables
      • With nodejs and ts-node, we had to use the dotenv package to programmatically load environment variables from a .env file.
      • Bun has built-in support for .env files, so we no longer need the dotenv package. See https://bun.sh/guides/runtime/set-env and https://bun.sh/guides/runtime/read-env
      • When running any command with bun, such as bun run <script> or bunx vitest, bun automatically loads the .env file in the current directory, making testing and development easier. This is why the test script in package.json for many packages was changed to something like bunx vitest or bun run test. Sometimes npm run test won't work because it doesn't load the .env file.
  • Migrate package manager to pnpm
    • bun is not only a runtime, but also a package manager. It's the fastest I've ever seen: faster than npm, yarn and pnpm. During the migration to the bun runtime I always used bun to install packages and do package management. I changed my plan when I started migrating the clients' CICD to bun, because bun currently only works on MacOS and Linux, not Windows. Our future Windows client will have to be built on Windows in CICD, and our development environment should support Windows, although I personally use MacOS and Linux all the time.
    • Using npm for package management is no longer possible, because bun workspaces use slightly different syntax from npm workspaces.
      • Using a package from the same monorepo as a dependency requires adding the package name to package.json. npm workspaces use the "@crosscopy/core": "*" syntax, while bun workspaces use "@crosscopy/core": "workspace:*". The extra workspace: prefix is required and prevents npm workspaces from working with bun workspaces; i.e. npm package management and the bun runtime simply won't work together in the same mono project.
    • pnpm comes to the rescue. pnpm ranks first in the 2022 State of JS survey for monorepo tools (https://2022.stateofjs.com/en-US/libraries/monorepo-tools/). Turborepo is second, and we use them together for our monorepo management. pnpm workspaces use the same workspace:* syntax as bun workspaces for within-monorepo dependencies, making it possible to use pnpm for package management and bun for the runtime in the same monorepo. Perfect! Problem solved.
  • Migration to ESM from CJS.
    • CommonJS is the legacy module system of the JavaScript ecosystem. We used to use CJS for compatibility with most packages. Some packages have recently become ESM-only, causing problems. ESM has more features and is easier to use; for example, top-level await is only available in ESM.
    • We have now migrated every package to ESM except for the new server written with Nest.js; I will say more about that below.
  • Migrate express to nestjs (https://nestjs.com/)
    • We used to use Express.js as our backend server framework. It's undoubtedly the most popular framework in JS. However, it's not designed for large projects: it's hard to maintain and scale. Nest.js is a framework designed for large projects. It's built on top of Express.js but much more powerful, and it's written in TypeScript, which is a big plus.
    • Express.js is a barebones framework: with no template, developers have to design and write everything from scratch. That flexibility is great, but a bad initial design can lead to unmaintainable code in the future.
      • Our previous server worked fine, but after the project got bigger and a few refactors, I realized that the initial design was not good enough for a project that keeps growing.
    • Nest.js has a lot of built-in features and templates. It organizes everything in an OOP way, just like Spring Boot: many, many files, but easier to read and maintain. It has lots of built-in features that work out of the box, like auth with JWT, a rate-limit throttler, GraphQL, WebSocket with SocketIO, middleware, interceptors and much more. I don't need to set everything up from scratch, connecting components manually and making the code ugly and hard to maintain.
      • The testing framework is more mature, and it's easier to write tests. Everything is separated into modules, and it's easier to mock dependencies.
    • Problem with Nest.js.
      • I rewrote the server in Nest.js, a fantastic experience, and I can expect better development and testing with it. However, a limitation of Nest.js is that it's completely in CommonJS; migrating it to ESM is not possible. Our dependency packages (core and graphql-schema) have been migrated to the newer ESM standard, and to work with Nest they have to be compiled to CJS JavaScript before they can be imported into the Nest.js server, which gives up the freedom of importing TypeScript freely from any location.
      • Another problem with Nest.js + bun is that GraphQL subscription doesn't work with bun.
        • This is not actually a problem with Nest.js, but a problem with bun + Apollo Server. bun's developers have worked hard to make bun a seamless drop-in replacement for the nodejs runtime by implementing most of the nodejs APIs, and most of the time it is one. Bun works with Nest.js WebSocket, but not with Apollo Server subscriptions. I don't know the exact reason, probably some missing APIs; no error is shown. After hours of debugging, I found that bun simply won't work with Apollo Server, even without Nest.js. So it's not a Nest.js problem, but a problem between bun and Apollo Server.
        • Luckily, GraphQL queries and mutations still work with the bun runtime, as they are simply HTTP requests under the hood. And since we had already decided to use SocketIO for realtime synchronization, as it's more flexible and powerful than GraphQL subscriptions (SocketIO is two-way while subscriptions are one-way only), we don't need GraphQL subscriptions anymore. So this is not a problem for us. Later, if bun supports Apollo Server, we can use subscriptions again for simpler use cases that don't require two-way communication.
  • Migrate the crosscopy-dev repo from nx to turborepo. I simply had more bugs and issues with nx. The crosscopy-clients repo uses turborepo with a better experience, so I decided to migrate crosscopy-dev to turborepo as well. turborepo also ranks higher than nx in the 2022 State of JS survey for monorepo tools (https://2022.stateofjs.com/en-US/libraries/monorepo-tools/), with 10% higher retention and 14% more interest.
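For reference, the within-monorepo dependency syntax that pnpm and bun workspaces share looks like this in a consuming package's package.json (the consuming package name here is made up for illustration; with npm workspaces the version would be just "*", which is exactly the incompatibility described above):

```json
{
  "name": "@crosscopy/server",
  "dependencies": {
    "@crosscopy/core": "workspace:*"
  }
}
```

The workspace:* protocol tells pnpm (and bun) to always link the package from the local workspace instead of fetching it from the registry.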

Note

  • Install bun and pnpm. bun works similarly to nodejs; pnpm works similarly to npm.
  • In most packages, bun dev is used to start development server, bun run test is used to run tests, bun run build is used to build TypeScript into JavaScript. bun run is used to run any script in package.json.
  • pnpm run build and pnpm run test in the root of a monorepo use turborepo to build all packages in the monorepo. I've configured everything to work. If you need to run tests in subpackages, use bun run test, bunx vitest or bunx jest, as I didn't write code to load the .env file; using a bun command does that for us even if the test still uses nodejs under the hood. As long as bun is the initial command, .env is loaded.
  • If you are unsure about which command to use, look at .github/workflows to see what commands CI uses to build the repo. If CI works, your local environment should work too, if configured correctly.
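As a sketch of how the root-level pnpm run build / pnpm run test fan out to every package, a minimal turbo.json pipeline could look like this (task names match the scripts mentioned above; the exact config and outputs in the repo may differ):

```json
{
  "$schema": "https://turbo.build/schema.json",
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "test": {
      "dependsOn": ["build"]
    }
  }
}
```

Here ^build means "build the dependencies of a package before the package itself", which is what lets turborepo run the whole monorepo in the right order.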

· 2 min read
Huakun Shen

Here is a Drag and Drop wrapper component I wrote for Svelte. It's important to note that, in order for the drop event to fire properly, we also need to call preventDefault() on the dragover event.

<script lang="ts">
  import { createEventDispatcher } from 'svelte';

  const dispatch = createEventDispatcher();

  function onDrop(e: DragEvent) {
    e.preventDefault();
    e.stopPropagation();
    dispatch('drop', e);
  }

  function onDragOver(e: DragEvent) {
    e.preventDefault();
    dispatch('dragover', e);
  }
</script>

<span
  class={$$props.class}
  on:dragenter
  on:drop={onDrop}
  on:dragleave
  on:dragover={onDragOver}
  on:dragend
  on:click
  role="none"
>
  <slot />
</span>

To use the component, here is an example. The UI requires the tailwind and skeleton libraries to render properly. Just read the code to get the idea; it's quite simple to use. DragNDrop is a wrapper: you need to set its size and styles, and then its region will accept dropped files.

<script lang="ts">
  import DragNDrop from './drag-n-drop.svelte';

  function onDrop(e: CustomEvent<DragEvent>) {
    console.log(e.detail.dataTransfer?.files);
  }

  let isOver = false;
  let fileInputEle: HTMLInputElement;
</script>

<DragNDrop
  class="flex flex-col justify-center items-center space-y-6 card {$$props.class} cursor-pointer"
  on:click={() => {
    fileInputEle.click();
  }}
  on:drop={onDrop}
  on:dragleave={() => {
    isOver = false;
  }}
  on:dragenter={() => {
    isOver = true;
  }}
>
  <iconify-icon class="text-6xl" icon="ic:round-upload" />
  {#if isOver}
    <span class="text-lg"><strong>Drop the file</strong></span>
  {:else}
    <span class="text-lg"><strong>Upload a file</strong> or drag and drop</span>
  {/if}
  <input
    type="file"
    class="hidden"
    multiple
    bind:this={fileInputEle}
    on:change={(e) => {
      // console.log(e.target);
    }}
  />
</DragNDrop>

· One min read
Huakun Shen

Let's say you have a local network with subnet 192.168.0.0/24; the router is at 192.168.0.1, and you have a VM running at 192.168.0.2 (the exact address doesn't matter here).

I want to give a teammate access to the VM without allowing the VM to access any other devices on the same subnet.

A VLAN is one way to do it, but that's too complicated and requires extra hardware and software.

The easiest way is to rely on the VM's firewall; just don't give sudo access to the guest user, so they can't change the rules.

sudo ufw enable
sudo ufw allow out to 192.168.0.1
sudo ufw deny out to 192.168.0.0/24

ufw allow out to 192.168.0.1 allows traffic to the router; without it, the VM cannot reach the external network.

The ufw deny out to 192.168.0.0/24 rule must be added after ufw allow out to 192.168.0.1: the rules work like a chain in iptables, so if the deny rule comes first, traffic to the router is blocked and the allow rule is never reached.

The prepend keyword can be used to raise a rule's priority: ufw prepend deny out to 192.168.0.0/24.

· One min read
Huakun Shen

The following script detects all submodules, cds into each one and commits all changes.

Good for note-taking submodules that don't require a very meaningful commit message.

It saves the trouble of committing every submodule separately.

Save it as a shell script file update-all.sh and run it before every commit to the main repo.

#!/bin/bash
cwd=$PWD

# List the paths of all registered submodules
submodulePaths=$(git submodule | awk '{$1=$1;print}' | cut -d ' ' -f 2)

function git_add_commit_update {
  nothing=$(git status | grep 'nothing to commit')
  if [[ -z $nothing ]]; then
    git add .
    git commit -m "Auto (Update Module)"
    git push
  fi
}

for path in $submodulePaths; do
  cd "$path" || continue
  detached_head=$(git branch | grep 'HEAD detached')
  echo "$path"
  if [[ -z $detached_head ]]; then
    git_add_commit_update
  else
    # Re-attach to master before committing, carrying changes over via stash
    git stash
    git checkout master
    git stash pop
    git_add_commit_update
  fi
  cd "$cwd" || exit
done

· 2 min read
Huakun Shen

Docs

Tauri Updater

Bundler Artifacts has sample CI and config script.

Cross-Platform Compilation has a sample GitHub Action CI script for cross-platform compilation (Windows, MacOS and Linux). Compiled files are stored as artifacts in a draft GitHub release. The release assets will be read by updater server for auto-update.

Sample tauri.config.json

Building

For the updater to work, a public key is required.

tauri.config.json
"updater": {
  "active": true,
  "endpoints": [
    "https://releases.myapp.com/{{target}}/{{current_version}}"
  ],
  "dialog": true,
  "pubkey": "YOUR_UPDATER_SIGNATURE_PUBKEY_HERE"
}

A pair of keys can be generated with tauri signer generate -w ~/.tauri/ezup.key.

If the updater is configured, then the private key and password environment variables must be set.

The following script automatically loads the private key as an environment variable, assuming the password is an empty string.

#!/usr/bin/env bash
PRIVATE_KEY_PATH="$HOME/.tauri/ezup.key";
if test -f "$PRIVATE_KEY_PATH"; then
  export TAURI_PRIVATE_KEY=$(cat ~/.tauri/ezup.key); # if the private key is stored on disk
  export TAURI_KEY_PASSWORD="";
else
  echo "Warning: Private Key File Not Found";
fi

CICD (GitHub Action)

In GitHub Actions, environment variables can be set like this at the top level of the yml file.

env:
  TAURI_PRIVATE_KEY: ${{ secrets.TAURI_PRIVATE_KEY }}
  TAURI_KEY_PASSWORD: ${{ secrets.TAURI_KEY_PASSWORD }}

I encountered an error during compilation on the Ubuntu platform.

Error: thread '<unnamed>' panicked at 'Can't detect any appindicator library', src/build.rs:326:17

I found a solution in this issue.

Install libayatana-appindicator3-1-dev with apt on Ubuntu.

Updater Server

vercel/hazel is an update server for electron that can be deployed in a few clicks on Vercel.

lemarier/tauri-update-server forks vercel/hazel.

I forked lemarier/tauri-update-server to be HuakunShen/tauri-update-server.

The reason I made a fork is that new updates were made in vercel/hazel, and I merged the new commits into lemarier/tauri-update-server.

With one click, an update server can be deployed on Vercel.

See EzUp and HuakunShen/tauri-ezup-updater for an example.

The former is the actual Tauri app; the latter is the corresponding update server.

· One min read
Huakun Shen

Problem Description

I encountered a weird problem with MacOS storage.

I wanted to install a MacOS VM with Parallels Desktop, but was unable to. The error message said I didn't have enough space on disk (at least 49.13G needed), yet the storage panel in System Preferences told me I had more than 500GB of space, and I had just cleaned the storage.

Although System Preferences says I have space, other places give a different result.

df -h . says my disk usage is 97%.

In Disk Utility, my disk usage is displayed as 942.28/994.66GB.

Right-click on the disk -> Get Info: the purgeable space section takes ~500GB.

Usually, Mac should handle this itself: space should be freed when needed, but not here.

MacOS doesn't seem to provide an option to manually purge space, so I had to use third-party software.

How stupid is this?

Readings and Solutions

With DaisyDisk, you can find the purgeable space visually. Drag it to the button at the bottom and delete; the purgeable space will then be purged.

Result

· 5 min read
Huakun Shen

First of all, I have to say I am very disappointed with AWS's documentation. They have a lot of documentation and sample code, but I was still unable to (easily) find what I was looking for.

Intro

I was working on a project that requires using Rust to upload files to AWS S3. I wanted to use the REST API, but could not find enough information in the documentation. There is no sample code, nor anything like a Postman API doc that lets you generate client code from a REST API.

For example, in this API doc on PutObject, https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html.

Authorization: authorization string doesn't mean anything to me. I have an access key and secret, and there must be a way to derive this authorization string. I'm pretty sure it exists somewhere in the docs; I just couldn't find it. Putting a link in the documentation isn't hard, and it saves people from combing through your entire documentation. The purpose of the World Wide Web is to link things together, not to make people look things up separately and assemble them in their heads.

Then I switched to the Rust SDK. There is plenty of documentation and sample code, but I got stuck on one problem for a long time. Again: authorization. The documentation and sample code always assume you have the same scenario as they do: a ~/.aws/credentials file with your access key id and secret. Sample code always loads credentials automatically from default locations or environment variables, which is fine for a server application. For client-side software, this doesn't hold: I need to explicitly pass credentials to a function to build a client. This is possible and documented in both the Python and Nodejs versions of the docs, but not for Rust.

I had to go through a lot of documentation and sample code to figure out this naive thing. A function from another Rust crate (package), aws_types, has to be used.

Basically, there are many different ways to produce credentials and a client; but for someone without prior knowledge of this nasty design, there is no way to know in which package to find the method needed. If you decide to put things in different packages, then at least provide an obvious link somewhere to indicate "You have the option to do such-and-such; read the docs here".

Reading the AWS docs (Rust) feels like browsing information scattered everywhere and trying to assemble it in my head. Without enough prior knowledge, it's not easy to get things done quickly.

Details

Comparison

When I google "AWS s3 python client credential loading", the first link gives me what I need: Passing credentials as parameters. It took me 10 seconds to find the answer.

For Nodejs, it took me ~10 minutes of hunting through docs and examples everywhere. This is how I eventually found the solution:

  1. Google "aws s3 create nodejs client with credentials"
  2. Found S3 Client - AWS SDK for JavaScript v3, the JS package API docs
  3. S3Client class API
  4. Constructor API
  5. S3ClientConfig Interface API
  6. Properties -> Credentials Type
  7. AwsCredentialIdentity (Type of credentials property)
  8. Finally found that this is where to pass in accessKeyId, expiration, secretAccessKey, sessionToken.

This is no different from browsing source code. It's important for developers to be able to read source code and API docs; that doesn't mean docs providers don't need to provide easy access to the most basic functionality.

At least I could figure out the Nodejs solution within 20 minutes. The Rust solution took me a few hours.
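For the record, the Nodejs answer from step 8 boils down to passing a credentials object when constructing the client. Here is a sketch of the shape; the interfaces are hand-written to stay self-contained, while real code would import S3Client from @aws-sdk/client-s3 and pass the same kind of config object.

```typescript
// Shape of the credentials object AWS SDK for JavaScript v3 expects
// (AwsCredentialIdentity). Interfaces are re-declared here for illustration.
interface AwsCredentialIdentity {
  accessKeyId: string;
  secretAccessKey: string;
  sessionToken?: string; // only needed for temporary credentials
  expiration?: Date;
}

interface S3ClientConfigSketch {
  region: string;
  credentials: AwsCredentialIdentity;
}

// Build an explicit client config instead of relying on ~/.aws/credentials
// or environment variables being loaded automatically.
function makeClientConfig(
  accessKeyId: string,
  secretAccessKey: string,
  region = "us-east-1"
): S3ClientConfigSketch {
  return { region, credentials: { accessKeyId, secretAccessKey } };
}

// Real usage would then be: new S3Client(makeClientConfig(id, secret))
```

The Rust SDK needs the equivalent explicit step via the aws_types crate mentioned above, which is exactly the part the docs made hard to find.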

Final Words

  • Also, why are documentation and examples scattered everywhere? aws.amazon.com, github.com, and external websites like S3 Client - AWS SDK for JavaScript v3 (different for every language).
  • It's OK to have external API docs, as each language has its own platform, like rust docs for rust crates.
  • But there should be a central place with links to everything and an easy-to-use search utility.
  • Could you put everything in one place and provide a search utility that searches everything?
  • Like what you have in the JS API docs.
  • If your examples are on GitHub, it's not that easy to search through the source code.