
Read Safari Bookmark

· One min read

On Mac, Safari bookmarks are stored in ~/Library/Safari/Bookmarks.plist.

Permission

It's not readable directly. To read the file, we need to open a dialog and let the user select the folder ~/Library/Safari/; then we can read the file.

This is only the first step: bypassing the permission restriction.

Then we need to parse the plist file.

Parse plist file

You may need to copy the plist file to another folder first so that other processes can read it.

There are Rust crates that can parse plist files.

But in my scenario, I can't use them directly: I am in a browser environment (Tauri).

On Mac, the built-in /usr/bin/python3 comes with plistlib.

To get the full code in Python, ask ChatGPT; it can easily give you the parser code.
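
For reference, here is a minimal sketch using Python's built-in plistlib (the key names are based on Safari's bookmark structure and may vary):

import plistlib

# Read the plist copied out of ~/Library/Safari/
with open("Bookmarks.plist", "rb") as f:
    bookmarks = plistlib.load(f)  # returns a nested dict

# Top-level folders usually live under the "Children" key;
# leaf bookmarks carry "URLString", folders carry "Title".
for child in bookmarks.get("Children", []):
    print(child.get("Title"), child.get("URLString"))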

Another option is to use macOS's built-in plutil command.

The available formats are xml1, binary1, json, swift, objc.

The json format doesn't seem to work (likely because the plist contains binary data, which JSON can't represent); the xml1 format works.

After converting to xml1 format, it's easy to convert it to JSON in any language.

plutil -convert xml1 -o - ./Bookmarks.plist
plutil -convert xml1 -o output.xml ./Bookmarks.plist
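
In my Tauri scenario, one option is to shell out to plutil from the webview. A minimal sketch, assuming Tauri v1's shell API with a scope entry named "plutil" allowed in tauri.conf.json:

import { Command } from "@tauri-apps/api/shell";

// Convert the copied Bookmarks.plist to XML and capture stdout.
// Assumes tauri.conf.json > allowlist > shell > scope has an entry
// named "plutil" that permits these arguments.
async function bookmarksAsXml(plistPath: string): Promise<string> {
  const cmd = new Command("plutil", ["-convert", "xml1", "-o", "-", plistPath]);
  const output = await cmd.execute();
  if (output.code !== 0) {
    throw new Error(`plutil failed: ${output.stderr}`);
  }
  return output.stdout;
}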

Rust Sqlite Cipher (Windows)

· 2 min read

To use sqlite with encryption enabled, rusqlite is a popular option. Just enable its bundled-sqlcipher feature.
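
A minimal sketch of the setup (the version number, file name, and key are placeholders):

use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
    // Cargo.toml: rusqlite = { version = "0.31", features = ["bundled-sqlcipher"] }
    let conn = Connection::open("encrypted.db")?;
    // Set the encryption key before running any other statement
    conn.pragma_update(None, "key", "my-secret-key")?;
    conn.execute("CREATE TABLE IF NOT EXISTS note (body TEXT)", [])?;
    Ok(())
}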

It's super simple on Mac and Linux. On Windows, I had to configure the OpenSSL library, which is a bit harder.

First, download OpenSSL from https://slproweb.com/products/Win32OpenSSL.html

OpenSSL is a very important library used everywhere. This website looks a bit old and I don't know if I can trust it. Why doesn't Microsoft include OpenSSL in Windows or provide an official way to install it?

Download the latest full installer, not the light one.

The file I downloaded was Win64OpenSSL-3_4_0.msi. Install it.

Before installing it, here is the error I got:

note: To improve backtraces for build dependencies, set the CARGO_PROFILE_DEV_BUILD_OVERRIDE_DEBUG=true environment variable to enable debug information generation.

Caused by:
process didn't exit successfully: `C:\Users\shenh\Desktop\sqlcipher\target\debug\build\libsqlite3-sys-458bb895065b5297\build-script-build` (exit code: 101)
--- stdout
cargo:rerun-if-env-changed=LIBSQLITE3_SYS_USE_PKG_CONFIG
cargo:include=C:\Users\shenh\.cargo\registry\src\index.crates.io-6f17d22bba15001f\libsqlite3-sys-0.30.1/sqlcipher
cargo:rerun-if-changed=sqlcipher/sqlite3.c
cargo:rerun-if-changed=sqlite3/wasm32-wasi-vfs.c

--- stderr
thread 'main' panicked at C:\Users\shenh\.cargo\registry\src\index.crates.io-6f17d22bba15001f\libsqlite3-sys-0.30.1\build.rs:164:29:
Missing environment variable OPENSSL_DIR or OPENSSL_DIR is not set

OPENSSL_DIR is missing.

My install path is C:\Program Files\OpenSSL-Win64.

Set OPENSSL_DIR to C:\Program Files\OpenSSL-Win64.

Then remove the target folder and build again. This time I got a new error:

error: linking with `link.exe` failed: exit code: 1181
...
= note: LINK : fatal error LNK1181: cannot open input file 'libcrypto.lib'

libcrypto.lib is not found.

For my installation, I found the file under C:\Program Files\OpenSSL-Win64\lib\VC\x64\MDd.

I needed to set the environment variable OPENSSL_LIB_DIR to C:\Program Files\OpenSSL-Win64\lib\VC\x64\MDd.

Then it works, and the DB is encrypted. VSCode has a SQLite viewer extension; try opening the encrypted db file with it, and it won't open because the file is encrypted.

Following instructions online, I initially set OPENSSL_LIB_DIR to C:\Program Files\OpenSSL-Win64\lib, which doesn't work.

You may also want to set OPENSSL_INCLUDE_DIR to C:\Program Files\OpenSSL-Win64\include\

Also, you have to restart your code editor every time an environment variable like PATH is changed.

As a Mac user and Linux fan, I find Windows too hard for developers. Even deleting files can be a big problem because they are being used by some process. I rarely had these problems on Mac and Linux.

Also, configuring environment variables on Mac and Linux is simply a matter of editing files.

Asymmetric Encryption and TLS in Rust

· 5 min read

In this article, I will briefly discuss applications of asymmetric encryption (ed25519) and TLS in Rust.

  • SSL/TLS uses asymmetric encryption during the handshake phase to exchange keys securely and ensure authenticity.
  • Asymmetric encryption ensures that communication is secure by using public-private key pairs. Algorithms like RSA, ECDSA, or ed25519 are used to verify identities and sometimes exchange keys.
  • ed25519 is a specific algorithm used primarily for signing in asymmetric cryptography, providing a fast and secure way to verify that data (or certificates) are from a trusted source.

Data Signing

Signing Data with ed25519

ed25519 is often used for SSH authentication. You store the public key on the server and the private key on the client. Then you can ssh to the server with the client's private key, without needing to enter a password.

SSH (Secure Shell) uses ed25519 keys for authentication in the following way:

  1. Key Generation: The user generates an ed25519 key pair. The private key is kept securely on the client machine, while the public key is placed on the server.

  2. Connection Initiation: When a client attempts to connect to an SSH server, the server sends a challenge to the client.

  3. Signing the Challenge: The client uses its private key to sign the challenge, creating a signature.

  4. Signature Verification: The client sends this signature back to the server. The server then uses the stored public key to verify the signature.

  5. Authentication: If the signature is valid, it proves that the client possesses the corresponding private key, and the server grants access.

This process ensures secure authentication without transmitting the private key, leveraging the strength and efficiency of the ed25519 algorithm.

Let's use the popular ed25519 algorithm, which is usually used for ssh keys.

use ring::rand::SystemRandom;
use ring::signature::{Ed25519KeyPair, KeyPair, Signature, UnparsedPublicKey, ED25519};

fn main() {
    // Generate key pair
    let rng = SystemRandom::new();
    let private_key = Ed25519KeyPair::generate_pkcs8(&rng).unwrap();
    let key_pair = Ed25519KeyPair::from_pkcs8(private_key.as_ref()).unwrap();

    // Data to sign
    let message = b"Hello, sign this data!";

    // Sign the message
    let signature: Signature = key_pair.sign(message);

    // Verify the signature using the public key
    let public_key = key_pair.public_key();
    let peer_public_key_bytes = public_key.as_ref();

    let public_key_for_verification = UnparsedPublicKey::new(&ED25519, peer_public_key_bytes);
    match public_key_for_verification.verify(message, signature.as_ref()) {
        Ok(_) => println!("Signature verified successfully!"),
        Err(_) => println!("Signature verification failed!"),
    }
}

RSA Encryption

Asymmetric encryption is used for public key encryption, where the public key is used to encrypt data, and the private key is used to decrypt data.

ed25519 is not used for public key encryption, but RSA is.

use rsa::{Pkcs1v15Encrypt, RsaPrivateKey, RsaPublicKey};

fn main() {
    let mut rng = rand::thread_rng();
    let bits = 2048;
    let priv_key = RsaPrivateKey::new(&mut rng, bits).expect("failed to generate a key");
    let pub_key = RsaPublicKey::from(&priv_key);

    // Encrypt
    let data = b"hello world";
    let enc_data = pub_key
        .encrypt(&mut rng, Pkcs1v15Encrypt, &data[..])
        .expect("failed to encrypt");
    assert_ne!(&data[..], &enc_data[..]);

    // Decrypt
    let dec_data = priv_key
        .decrypt(Pkcs1v15Encrypt, &enc_data)
        .expect("failed to decrypt");
    assert_eq!(&data[..], &dec_data[..]);
}

SSL Certificate for HTTPS Server

TLS/SSL is more complicated: it involves asymmetric encryption, symmetric encryption, hash functions, and more.

Here we only show how to use a self-signed certificate for local use.

Let's generate a self-signed certificate and serve it with an Axum Rust server. Self-signed certificates are usually used on a local network, between devices, where some level of security is still required.

Generate SSL Certificate

// rcgen = "0.13.1"
use rcgen::generate_simple_self_signed;
use std::fs::File;
use std::io::Write;
use std::path::PathBuf;

fn main() {
    let subject_alt_names = vec!["localhost".to_string()];
    let cert_key = generate_simple_self_signed(subject_alt_names).unwrap();
    let cert = cert_key.cert;
    let key = cert_key.key_pair;
    println!("Certificate generated successfully");
    let pem = cert.pem();
    let key = key.serialize_pem();

    println!("{}", pem);
    println!("{}", key);
    let pem_path = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("cert.pem");
    let key_path = PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("key.pem");

    let mut pem_file = File::create(pem_path).unwrap();
    let mut key_file = File::create(key_path).unwrap();

    pem_file.write_all(pem.as_bytes()).unwrap();
    key_file.write_all(key.as_bytes()).unwrap();
}

Simple Server with TLS

use axum::{routing::get, Router};
use axum_server::tls_rustls::RustlsConfig;
use std::{net::SocketAddr, path::PathBuf};

#[tokio::main]
async fn main() {
    let https_port = 8080;
    rustls::crypto::ring::default_provider()
        .install_default()
        .expect("Failed to install default CryptoProvider");
    let handle = axum_server::Handle::new();
    let config = RustlsConfig::from_pem_file(
        PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("cert.pem"),
        PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("key.pem"),
    )
    .await
    .unwrap();
    let app = Router::new().route("/", get(handler));
    let addr = SocketAddr::from(([127, 0, 0, 1], https_port));
    tracing::debug!("listening on {addr}");
    axum_server::bind_rustls(addr, config)
        .handle(handle)
        .serve(app.into_make_service())
        .await
        .unwrap();
}

async fn handler() -> &'static str {
    "Hello, World!"
}

For full examples with HTTP redirect and graceful shutdown, see Axum's GitHub repo, which contains TLS examples.

Query HTTPS

When querying an HTTPS server with a self-signed cert, you will usually get certificate errors.

To do it in Rust with reqwest, set .danger_accept_invalid_certs(true).

use reqwest::Client;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Create a custom client
    let client = Client::builder()
        .danger_accept_invalid_certs(true)
        .build()?;

    // Make the request
    let response = client.get("https://localhost:3000").send().await?;

    println!("Status: {}", response.status());
    println!("Body: {}", response.text().await?);

    Ok(())
}

In Node.js and Bun, you need to set the environment variable NODE_TLS_REJECT_UNAUTHORIZED=0, but this disables the security check for the entire program, not just a single request.
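
For example, a minimal sketch (the URL is a placeholder):

// Disables TLS certificate verification process-wide; avoid this outside local testing.
process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";

const response = await fetch("https://localhost:8080");
console.log(response.status, await response.text());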

Problem with Deno + DOM + NPM Package

· One min read

I get TypeScript type errors when I try to use Deno + DOM in an npm package.

Normally, a Deno package is a standalone package with a deno.json file, without package.json or tsconfig.json. That's the point of using Deno.

However, I have a package with all kinds of code, divided into many subpackages: some run in the browser, some in Deno, some in Node.

This setup can produce many type errors.

Missing Deno

When the Deno VSCode extension is disabled, the Deno global variable is missing.

Create a deno.d.ts in the project root:

deno types > deno.d.ts

Missing DOM

If your code contains DOM operations, like using document or KeyboardEvent, adding deno.d.ts will cause DOM and other types to go missing.

This is because the following lines are added to deno.d.ts:

/// <reference no-default-lib="true" />
/// <reference lib="esnext" />
/// <reference lib="deno.net" />

Simply remove the first line, which contains no-default-lib.

Then other libs specified in tsconfig.json will be loaded.
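
For example, a minimal tsconfig.json sketch (the exact lib list depends on your project):

{
  "compilerOptions": {
    "lib": ["ESNext", "DOM", "DOM.Iterable"]
  }
}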

This works for me, but may not be the best solution for other use cases.

Deno and DOM packages should be separated into multiple packages when feasible.

A JS Way of Hot Reloading Go Server (PocketBase)

· 2 min read
Huakun Shen
Website Owner

I was working on a Go project, writing PocketBase Go extension code. I wanted a hot reload feature for the Go server. There are many ways to achieve this, such as using air, fresh, realize, etc.

But I wanted to try a different way by using JavaScript/TypeScript.

This gives me more control over the reload process.

Deno Dev Script

deno-dev.ts
let child: Deno.ChildProcess | null = null;

async function startGoServer() {
  if (child) {
    console.log("Killing previous Go server...");
    child.kill("SIGTERM"); // Send SIGTERM to the process
    await child.status; // Wait for the process to terminate
  }

  const buildCmd = new Deno.Command("go", {
    args: ["build", "-o", "pocketbase", "main.go"],
  });
  await buildCmd.output();
  const cmd = new Deno.Command("./pocketbase", { args: ["serve"] });
  child = cmd.spawn();
  console.log("Go server started.");
}

startGoServer();

for await (const _event of Deno.watchFs("main.go")) {
  console.log("File change detected, restarting server...");
  await startGoServer();
}

Bun Dev Script

bun-dev.ts
import { $, type Subprocess } from "bun";
import { watch } from "fs";

let child: Subprocess | null = null;

async function startGoServer() {
  console.log("Starting Go server...");
  if (child) {
    console.log(`Killing previous Go server with PID: ${child.pid}`);
    child.kill(9);
    // Poll every second until the process is actually killed
    while (child.killed === false) {
      await new Promise((resolve) => setTimeout(resolve, 1000));
    }
    console.log(`Killed ${child.pid}: ${child.killed}`);
  }
  await $`go build -o pocketbase main.go`;
  child = Bun.spawn(["./pocketbase", "serve"], {
    stdio: ["inherit", "inherit", "inherit"],
  });
  console.log(`Go server started with PID: ${child.pid}`);
}

startGoServer();
watch("./main.go", { recursive: true }, async (_event, _filename) => {
  await startGoServer();
});

IIFE

· 3 min read

Huakun Shen
Website Owner


https://developer.mozilla.org/en-US/docs/Glossary/IIFE

An IIFE (Immediately Invoked Function Expression) is a JavaScript function that runs as soon as it is defined.

The Immediately Invoked Function Expression (IIFE) is a common design pattern in JavaScript: an anonymous function expression that executes as soon as it is defined. An IIFE is typically used to create an isolated scope and avoid polluting the global scope.

(function () {
// …
})();

(() => {
// …
})();

(async () => {
// …
})();

IIFE is usually used to create a separate scope to avoid polluting the global scope. In this blog I will instead talk about the use of IIFE in hosting web pages.

Traditionally, static websites are compiled into an index.html file as an entrypoint. CSS and JS files are linked in the head and body tags.

Here is an example of a simple index.html file:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Vite + Vue + TS</title>
    <script type="module" crossorigin src="/assets/index-D7F47PqG.js"></script>
    <link rel="stylesheet" crossorigin href="/assets/index-DRBiz0Jz.css" />
  </head>
  <body>
    <div id="app"></div>
  </body>
</html>

You can see that in such a client-side rendered web page, the HTML file doesn't contain any content. The content is rendered by the JavaScript file linked in the head tag.

The index.html is only used as a container for the JavaScript file. The JavaScript file is responsible for rendering the content of the web page.

Theoretically, we can ship a single JavaScript file that contains all the logic and content of the web page, including styles and images (PNG doesn't work, but SVG does).

Vite Config

For example, this is the original vite.config.ts file for a vue project:

import { defineConfig } from "vite";
import vue from "@vitejs/plugin-vue";

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [vue()],
});

Output dist folder:

dist
├── assets
│   ├── Vue.js_Logo_2.svg-BtIZHRhy.png
│   ├── index-CjgLCVzZ.css
│   └── index-CwnRthTM.js
├── index.html
└── vite.svg

Next, set the output format to iife:

import { defineConfig } from "vite";
import vue from "@vitejs/plugin-vue";
import path from "path";

// https://vitejs.dev/config/
export default defineConfig({
  plugins: [vue()],
  build: {
    emptyOutDir: false,
    rollupOptions: {
      input: path.resolve(__dirname, "./src/main.ts"),
      output: {
        format: "iife",
        dir: path.resolve(__dirname, "./dist"),
        entryFileNames: "web.js",
      },
    },
  },
});

Output dist folder:

dist
├── assets
│   └── Vue.js_Logo_2.svg-BtIZHRhy.png
├── vite.svg
└── web.js

Now, how do we use web.js without an index.html file?

Let's serve the dist folder with a simple http server:

serve dist --cors

The web.js is at http://localhost:3000/web.js

In an HTML file,

<!DOCTYPE html>
<html lang="en">
  <body>
    <iframe></iframe>
    <script>
      fetch("http://localhost:3000/web.js", {
        method: "GET",
      })
        .then((res) => res.text())
        .then((data) => {
          document
            .querySelector("iframe")
            .contentDocument.write(
              "<div id='app'/><script>".concat(data, "<\/script>")
            );
        });
    </script>
  </body>
</html>

Here we are rendering the content of web.js in an iframe. We can also render the page directly, as long as there is a <div id="app"></div>.

<!DOCTYPE html>
<html lang="en">
  <body>
    <script>
      fetch("http://localhost:3000/web.js", {
        method: "GET",
      })
        .then((res) => res.text())
        .then((data) => {
          document.write(
            "<div id='app'></div><script>".concat(data, "<\/script>")
          );
        });
    </script>
  </body>
</html>

This works because main.ts looks like this:

import { createApp } from "vue";
import "./style.css";
import App from "./App.vue";

createApp(App).mount("#app");

So what is the use case for this?

  • The key benefit is that the website content doesn't have to be hosted on a server. It can be saved in a database and fetched by the client, like a function.

Tauri Plugin System Design

· 8 min read
Huakun Shen
Website Owner

In Raycast Analysis and uTools Analysis I discussed two successful app launchers and their plugin system designs. But both of them have big limitations. Raycast is Mac-only. uTools is cross-platform (almost perfect), but it is built with Electron, which means a large bundle size and high memory consumption.

Tauri is a new framework for building cross-platform desktop apps with Rust and web technologies, with a much smaller bundle size and lower memory consumption. It's a good choice for building a cross-platform app launcher.

Requirements

  • Plugins can be built with JS frontend frameworks, so it's easier for developers to build them
  • UI can be controlled by the plugin
  • A sandbox is preferred: never trust plugins not developed by the official team, since community plugins could be malicious. None of Raycast, Alfred, or uTools uses a sandbox, so we can discuss this as well.

Solution

Plugins will be developed as regular single page application. They will be saved in a directory like the following.

plugins/
├── plugin-a/
│   └── dist/
│       ├── index.html
│       └── ...
└── plugin-b/
    └── dist/
        ├── index.html
        └── ...

Optionally, use symbolic links to build the following structure (link each plugin's dist folder to the plugin name). You will see why this is helpful later.

plugins-link/
├── plugin-a/
│   ├── index.html
│   └── ...
└── plugin-b/
    ├── index.html
    └── ...

When a plugin is triggered, the main Tauri core process will start a new process running an http server that serves the entire plugins or plugins-link folder as static assets. The http server can be actix-web.
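
A minimal sketch of such a static server with actix-web (the port and folder are placeholders, and actix-files is assumed for the static file service):

use actix_files::Files;
use actix_web::{App, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        // Serve plugins-link/ so that /plugin-a/ resolves to plugin-a/index.html
        App::new().service(Files::new("/", "./plugins-link").index_file("index.html"))
    })
    .bind(("127.0.0.1", 8000))?
    .run()
    .await
}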

Then open a new WebView process

const w = new WebviewWindow('plugin-a', {
  url: 'http://localhost:8000/plugin-a'
});

If we didn’t do the dist folder symlink step, the url would be http://localhost:8000/plugin-a/dist

Doing the linking avoids some problems.

One problem is the base URL. A single page application (React, Vue, etc.) does routing with the URL, and the base URL is / by default, i.e. the index page is loaded at http://localhost:8000. If the plugin redirects to /login, the browser goes to http://localhost:8000/login instead of http://localhost:8000/plugin-a/login.

In this case, the base URL can be configured in the vite config (https://vitejs.dev/guide/build#public-base-path, https://vite-plugin-ssr.com/base-url), as sketched below.
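
A minimal sketch, reusing the vite + vue setup from the IIFE post (the plugin name is a placeholder):

// vite.config.ts
import { defineConfig } from "vite";
import vue from "@vitejs/plugin-vue";

export default defineConfig({
  plugins: [vue()],
  // Make asset URLs and SPA routes resolve under /plugin-a/
  base: "/plugin-a/",
});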

Another solution is to use a proxy in the http server, like proxy_pass in an nginx config.

API

Now the plugin’s page can be loaded in a WebView window.

However, a plugin not only needs to display a UI, it also needs to interact with system APIs to implement more features, such as the file system. This is where IPC comes in.

By default, Tauri won't allow a WebView loaded from another source to run commands or call Tauri APIs.

See this security config dangerousRemoteDomainIpcAccess

https://tauri.app/v1/api/config/#securityconfig.dangerousremotedomainipcaccess

"security": {
"csp": null,
"dangerousRemoteDomainIpcAccess": [
{
"domain": "localhost:8000",
"enableTauriAPI": true,
"windows": ["plugin-a"],
"plugins": []
}
]
},

enableTauriAPI determines whether the plugin has access to the Tauri APIs. If you don't want the plugin to have the same level of permission as the main app, set it to false.

This works not only with localhost-hosted plugins: a plugin could also be hosted on the public web (though you won't be able to access it without internet). That would be very dangerous, as a compromised plugin on the public web would affect all users; it's also less stable. A local plugin is always safer.

There is another plugins attribute, used to control which Tauri plugins' commands the plugin can call (plugin here means a Rust plugin for the Tauri framework, not our app plugin).

https://tauri.app/v1/api/config/#remotedomainaccessscope.plugins

plugins is the list of plugins that are allowed in this scope. The names should be given without the tauri-plugin- prefix, for example "store" for tauri-plugin-store.

For example, Raycast has a list of APIs exposed to extensions (https://developers.raycast.com/api-reference/clipboard)

Raycast uses a NodeJS runtime to run plugins, so plugins can access the file system and more. This is dangerous. From their blog https://www.raycast.com/blog/how-raycast-api-extensions-work, their solution is to open source all plugins and let the community verify them.

This gives plugins more freedom but introduces more risk. In our approach with Tauri, we can provide a Tauri plugin that exposes all the APIs app plugins need: getting the list of installed applications, storage, clipboard, shell, and more. File system access can also be checked and limited to certain folders (configurable by users with a whitelist/blacklist). Just don't give plugins access to Tauri's FS API, only to our provided, limited, and censored API plugin.
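
As an illustration, a minimal sketch of one such censored FS command (the command name and whitelist are made up for this example):

use std::fs;
use std::path::PathBuf;

// Folders the user has whitelisted for plugin reads (hypothetical policy)
const ALLOWED_DIRS: &[&str] = &["/Users/me/plugin-data"];

#[tauri::command]
fn read_file_checked(path: String) -> Result<String, String> {
    let canonical: PathBuf = fs::canonicalize(&path).map_err(|e| e.to_string())?;
    // Reject anything outside the whitelist
    if !ALLOWED_DIRS.iter().any(|dir| canonical.starts_with(dir)) {
        return Err(format!("access denied: {}", canonical.display()));
    }
    fs::read_to_string(&canonical).map_err(|e| e.to_string())
}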

How to give plugins full access to OS and FS?

Unlike Raycast, where the plugin runs directly in NodeJS and renders its UI by turning React into Swift AppKit native components, the Tauri approach has its UI in the browser. There is no way for the UI plugin to access OS APIs (like FS) directly. The advantage of this approach is that the UI can take any shape, while Raycast's UI is limited by its predefined UI components.

If a plugin needs to run some binary like ffmpeg to convert/compress files, the sandboxed API method above with a custom Tauri plugin won't work. This scenario is more complicated. Here are some immature thoughts:

  • The non-UI part of the plugin will need a JS runtime if written in JS, like NodeJS or Bun
  • Include plugin scripts written in Python, Lua, JS, etc., and have the UI plugin run them through a shell API (like calling a CLI command)
  • If the plugin needs a long-running backend, a separate process must be run, but how can the UI plugin communicate with the backend plugin? The backend will probably need to be an HTTP or TCP server.
    • And how do we stop this long-running process?

Implementation Design

User Interface

Raycast supports multiple user interfaces, such as list, detail, form.

To implement this in Jarvis, there are 2 options.

  1. The extension returns a JSON list, and Jarvis renders it as a list view.
  2. Let the extension handle everything, including list rendering.

Option 1

This could be difficult in our case, as we need to call a JS function to get data. That requires either importing the JS from the Tauri WebView, or running the JS script with a JS runtime and getting a JSON response.

To do this, we need a common API contract in JSON format on how to render the response.

  1. Write a command script cmd1.js

  2. Jarvis will call bun cmd1.js with argv and get the response

    Example of a list view

    {
      "view": "list",
      "data": [
        {
          "title": "Title 1",
          "description": "Description 1"
        },
        {
          "title": "Title 2",
          "description": "Description 2"
        },
        {
          "title": "Title 3",
          "description": "Description 3"
        }
      ]
    }

This method requires shipping the app with a bun runtime (or downloading the runtime when the app is first launched).

After some thinking, I believe this is similar to script commands. Any other language can be supported this way. One difference: a “script command” relies on the user's local dependencies, and custom libraries must be installed for special tasks, e.g. pandas in Python. That's fine for script commands, because their users are coders who know what they are doing. For plugins, we don't expect users to know programming or install libraries, so shipping a built JS dist with all dependencies is a better idea: e.g. bun build index.ts --target=node > index.js, then bun index.js runs it without installing node_modules.

https://bun.sh/docs/bundler

In the plugin’s package.json, list all commands available and their entrypoints (e.g. dist/cmd1.js, dist/cmd2.js).

{
  "commands": [
    {
      "name": "list-translators",
      "title": "List all translators",
      "description": "List all available translators",
      "mode": "cmd"
    }
  ]
}

Option 2

If we let the extension handle everything, it's more difficult to develop, but there is less UI for us to worry about.

e.g. type translate input1, press enter, open the extension window, and pass input1 to the WebView.

By default, dist/index.html is loaded as the plugin's UI. There is only one entrypoint to the plugin UI, but a single plugin can have multiple sub-commands distinguished by URL path, e.g. http://localhost:8080/plugin-a/command1.

i.e. routes in a single page app.

All available sub-commands can be specified in package.json

{
  "commands": [
    {
      "name": "list-translators",
      "title": "List all translators",
      "description": "List all available translators",
      "mode": "cmd"
    },
    {
      "name": "google-translate",
      "title": "Google Translate",
      "description": "Translate a text to another language with Google",
      "mode": "view"
    },
    {
      "name": "bing-translate",
      "title": "Bing Translate",
      "description": "Translate a text to another language with Bing",
      "mode": "view"
    }
  ]
}

If mode is view, render it. For example, bing-translate will try to load http://localhost:8080/translate-plugin/bing-translate

If mode is cmd, it will try to run bun dist/list-translators and render the response.

mode cmd can have an optional language field, to allow using Python or other languages.
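
For example, a hypothetical manifest entry (the language field is this design's own convention, not an existing spec):

{
  "name": "list-translators",
  "title": "List all translators",
  "description": "List all available translators",
  "mode": "cmd",
  "language": "python"
}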

Script Command

Script Command from Raycast is a simple way to implement a plugin. A script file is created, and can be run when triggered. The stdout is sent back to the main app process.

The languages supported by Raycast script commands are:

  • Bash
  • Apple Script
  • Swift
  • Python
  • Ruby
  • Node.js

In fact, if you let users specify an interpreter, any script can be run, even executable binaries.

Alfred has a similar feature in workflows. The difference is that Raycast saves the code in a separate file, while Alfred saves the code within the workflow/plugin (in fact also in a file, in some hidden folder).


Tauri Universal Build for Mac (Solve SSL Problem)

· 3 min read
Huakun Shen
Website Owner

I had a problem building a universal Tauri app for Mac (M1 Pro).

rustup target add x86_64-apple-darwin
rustup target add aarch64-apple-darwin

npm run tauri build -- --target universal-apple-darwin

The problem was with OpenSSL. The error message was:

    Finished `release` profile [optimized] target(s) in 4.50s
Compiling openssl-sys v0.9.102
Compiling cssparser v0.27.2
Compiling walkdir v2.5.0
Compiling alloc-stdlib v0.2.2
Compiling markup5ever v0.11.0
Compiling uuid v1.8.0
Compiling fxhash v0.2.1
Compiling crossbeam-epoch v0.9.18
Compiling selectors v0.22.0
Compiling html5ever v0.26.0
Compiling indexmap v1.9.3
Compiling tracing-core v0.1.32
error: failed to run custom build command for `openssl-sys v0.9.102`

Caused by:
process didn't exit successfully: `/Users/hacker/Dev/projects/devclean/devclean-ui/src-tauri/target/release/build/openssl-sys-2efafcc1e9e30675/build-script-main` (exit status: 101)
--- stdout
cargo:rerun-if-env-changed=X86_64_APPLE_DARWIN_OPENSSL_LIB_DIR
X86_64_APPLE_DARWIN_OPENSSL_LIB_DIR unset
cargo:rerun-if-env-changed=OPENSSL_LIB_DIR
OPENSSL_LIB_DIR unset
cargo:rerun-if-env-changed=X86_64_APPLE_DARWIN_OPENSSL_INCLUDE_DIR
X86_64_APPLE_DARWIN_OPENSSL_INCLUDE_DIR unset
cargo:rerun-if-env-changed=OPENSSL_INCLUDE_DIR
OPENSSL_INCLUDE_DIR unset
cargo:rerun-if-env-changed=X86_64_APPLE_DARWIN_OPENSSL_DIR
X86_64_APPLE_DARWIN_OPENSSL_DIR unset
cargo:rerun-if-env-changed=OPENSSL_DIR
OPENSSL_DIR unset
cargo:rerun-if-env-changed=OPENSSL_NO_PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG_ALLOW_CROSS_x86_64-apple-darwin
cargo:rerun-if-env-changed=PKG_CONFIG_ALLOW_CROSS_x86_64_apple_darwin
cargo:rerun-if-env-changed=TARGET_PKG_CONFIG_ALLOW_CROSS
cargo:rerun-if-env-changed=PKG_CONFIG_ALLOW_CROSS
cargo:rerun-if-env-changed=PKG_CONFIG_x86_64-apple-darwin
cargo:rerun-if-env-changed=PKG_CONFIG_x86_64_apple_darwin
cargo:rerun-if-env-changed=TARGET_PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64-apple-darwin
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64_apple_darwin
cargo:rerun-if-env-changed=TARGET_PKG_CONFIG_SYSROOT_DIR
cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR
run pkg_config fail: pkg-config has not been configured to support cross-compilation.

Install a sysroot for the target platform and configure it via
PKG_CONFIG_SYSROOT_DIR and PKG_CONFIG_PATH, or install a
cross-compiling wrapper for pkg-config and set it via
PKG_CONFIG environment variable.

--- stderr
thread 'main' panicked at /Users/hacker/.cargo/registry/src/index.crates.io-6f17d22bba15001f/openssl-sys-0.9.102/build/find_normal.rs:190:5:


Could not find directory of OpenSSL installation, and this `-sys` crate cannot
proceed without this knowledge. If OpenSSL is installed and this crate had
trouble finding it, you can set the `OPENSSL_DIR` environment variable for the
compilation process.

Make sure you also have the development packages of openssl installed.
For example, `libssl-dev` on Ubuntu or `openssl-devel` on Fedora.

If you're in a situation where you think the directory *should* be found
automatically, please open a bug at https://github.com/sfackler/rust-openssl
and include information about your system as well as this message.

$HOST = aarch64-apple-darwin
$TARGET = x86_64-apple-darwin
openssl-sys = 0.9.102


note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
Error failed to build x86_64-apple-darwin binary: failed to build app

Solution

Install OpenSSL with brew

brew install openssl
export OPENSSL_DIR=$(brew --prefix openssl)

This problem was not fully solved. I had to run GitHub Actions on macos-13 runners (Intel CPU). The build passed, but the resulting app wouldn't run on x86_64 Macs; it kept saying the OpenSSL lib could not be loaded. I will update this post when I find a solution.

Read more here: https://github.com/tauri-apps/tauri/issues/9684#event-12728702751

The real source of the problem was actually git2's dependency openssl-sys (https://crates.io/crates/git2/0.18.3/dependencies). Removing git2 from my app fixed all the problems. Running on a macos-14 runner (M1 Pro) then worked fine.

openssl-sys provides OpenSSL bindings for Rust, so it doesn't include the actual OpenSSL library; you need to install OpenSSL on your system.

My guess is that during the build process on GitHub Actions, the OpenSSL library location is different from the one on my local machine, and that path is burned into the binary, so the binary won't run on other machines. This is just a guess. There must be some solution; I will update this post when I find it.

NestJS + Neo4j + GraphQL Setup

· 5 min read
Huakun Shen
Website Owner

GitHub Repo: https://github.com/HuakunShen/nestjs-neo4j-graphql-demo

I haven't found a good up-to-date example of using Neo4j with NestJS and GraphQL, so I decided to write one myself.

Neo4j's GraphQL library has updated its API, and some examples I found online were outdated (https://neo4j.com/developer-blog/creating-api-in-nestjs-with-graphql-neo4j-and-aws-cognito/). This demo uses v5.x.x.

GraphQL Schema

type Mutation {
  signUp(username: String!, password: String!): String
  signIn(username: String!, password: String!): String
}

# Only authenticated users can access this type
type Movie @authentication {
  title: String
  actors: [Actor!]! @relationship(type: "ACTED_IN", direction: IN)
}

# Anyone can access this type
type Actor {
  name: String
  movies: [Movie!]! @relationship(type: "ACTED_IN", direction: OUT)
}

# Only authenticated users can access this type
type User @authentication {
  id: ID! @id
  username: String!
  # this is just an example of how to use @authorization to restrict access to a field
  # If you list all users without the plaintextPassword field, you will see all users
  # If you list all users with the plaintextPassword field, you will only see the user whose id matches the jwt.sub (which is the id of the authenticated user)
  # in reality, never store plaintext passwords in the database
  plaintextPassword: String!
    @authorization(filter: [{ where: { node: { id: "$jwt.sub" } } }])
  password: String! @private
}

NestJS Server Configuration

GraphQL Module

A GraphQL module can be generated with bunx nest g mo graphql.

Here is the configuration. In new Neo4jGraphQL(), the authorization key is provided for JWT auth. Queries can be restricted by adding @authentication or @authorization to a type.

One important thing to note is the custom auth resolvers. Neo4jGraphQL auto-generates type, query, and mutation implementations for the types in the schema to provide basic CRUD operations, but custom functions like sign-in and sign-up must be implemented separately: either as regular REST endpoints in other modules, or as custom resolvers provided to the Neo4jGraphQL instance.

Usually in NestJS you would add resolvers to the providers list of the module, but in this case the resolvers must be added to the Neo4jGraphQL instance. Otherwise, you will see the custom queries defined in the schema in the playground, but they will always return null.

// imports omitted; see the repo for the full file
@Module({
  imports: [
    GraphQLModule.forRootAsync<ApolloDriverConfig>({
      driver: ApolloDriver,
      useFactory: async () => {
        const { NEO4J_URI, NEO4J_USERNAME, NEO4J_PASSWORD } =
          envSchema.parse(process.env);
        const neo4jDriver = neo4j.driver(
          NEO4J_URI,
          neo4j.auth.basic(NEO4J_USERNAME, NEO4J_PASSWORD)
        );

        const typedefPath = path.join(RootDir, "src/graphql/schema.gql");
        const typeDefs = fs.readFileSync(typedefPath).toString();

        const neoSchema = new Neo4jGraphQL({
          typeDefs: typeDefs,
          driver: neo4jDriver,
          resolvers: authResolvers, // custom resolvers must be added to Neo4jGraphQL instead of the providers list of the NestJS module
          features: {
            authorization: {
              key: "huakun",
            },
          },
        });

        const schema = await neoSchema.getSchema();
        return {
          schema,
          plugins: [ApolloServerPluginLandingPageLocalDefault()],
          playground: false,
          context: ({ req }) => ({
            token: req.headers.authorization,
          }),
        };
      },
    }),
  ],
  providers: [],
})
export class GraphqlModule {}

The resolvers must be provided to the Neo4jGraphQL constructor, and they must be a plain object, so NestJS's class-based resolvers won't work.

You must provide regular Apollo-style resolvers. See https://neo4j.com/docs/graphql/current/ogm/installation/ for a similar example.

export const authResolvers = {
  Mutation: {
    signUp: async (_source, { username, password }) => {
      ...
      return createJWT({ sub: users[0].id });
    },
    signIn: async (_source, { username, password }) => {
      ...
      return createJWT({ sub: user.id });
    },
  },
};

Read the README.md of this repo for more details. Run the code and read the source code to understand how it works. It's a minimal example.

Codegen

https://the-guild.dev/graphql/codegen is used to generate TypeScript types and more from the GraphQL schema.

Usually you point the codegen tool at a GraphQL schema file, but in this demo the schema is written for Neo4j and is not recognized by the codegen tool.

You need to let Neo4jGraphQL generate the schema and deploy it to a server first, then provide the server's endpoint to the codegen tool. The codegen tool will then introspect the schema from the server and generate the types.
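
A minimal codegen config sketch (the endpoint and paths are assumptions based on this demo's layout):

// codegen.ts
import type { CodegenConfig } from "@graphql-codegen/cli";

const config: CodegenConfig = {
  // Introspect the running NestJS server instead of reading schema.gql
  schema: "http://localhost:3000/graphql",
  documents: ["operations/**/*.graphql"],
  generates: {
    "./src/gql/": {
      preset: "client",
    },
  },
};

export default config;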

Make sure the server is running before running codegen:

cd packages/codegen
pnpm codegen

The generated files are in the packages/codegen/src/gql folder.

Sample operations can be added to packages/codegen/operations. Types and callers for the operations will also be generated.

Read the documentation of codegen for more details.

Examples are provided in the packages/codegen folder.

This is roughly how the generated code works:

You get full type safety when calling the operations. The operation documents are predefined in a central place rather than scattered in the code, which is useful in a large project with many operations. Modifying one operation updates all its callers, and if a type is no longer valid, the compiler will tell you.

The input variables are also protected by TypeScript: you won't need to guess what the input variables are, because the compiler will tell you.

import { ApolloClient, InMemoryCache } from "@apollo/client";
import { CreateMoviesDocument } from "./src/gql/graphql";

async function main() {
  const client = new ApolloClient({
    uri: "http://localhost:3000/graphql",
    cache: new InMemoryCache(),
  });
  client
    .mutate({
      mutation: CreateMoviesDocument,
      variables: {
        input: [
          {
            actors: {
              create: [
                {
                  node: {
                    name: "jacky",
                  },
                },
              ],
            },
            title: "fallout",
          },
        ],
      },
    })
    .then((res) => {
      console.log(res);
    });
}

main();
}