
Tech stack

The below may be outdated; it shows the general gist of the tech direction used to solve the problems we have.

Desktop app (macOS/Windows/Linux)

Built with

  • Tauri (Rust) for the native shell
  • Solid for the WebView UI
  • SQLite for local persistence

On startup

  • Loads data from SQLite and passes it to the WebView.
    • Uses the rusqlite crate for all SQLite queries.
    • Data sent from Tauri into the WebView is stored in Solid stores and can be used globally across all components of the app.
  • If the user is authenticated and has an account, a GraphQL query is sent to the server to get all the latest changes.
    • Changes coming from the server update the state of SQLite and thus the UI too.
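
The startup sync can be sketched as a last-write-wins merge of server changes into local state, keyed by topic id. The `Topic` shape, field names, and timestamp scheme below are illustrative stand-ins for the real SQLite schema, not the actual types:

```rust
use std::collections::HashMap;

// Hypothetical local row shape; in the real app this lives in SQLite via rusqlite.
#[derive(Clone, Debug, PartialEq)]
struct Topic {
    id: u64,
    content: String,
    updated_at: u64, // unix seconds
}

// Apply changes fetched from the server to local state, keeping whichever
// version was updated most recently (last-write-wins).
fn apply_server_changes(local: &mut HashMap<u64, Topic>, changes: Vec<Topic>) {
    for incoming in changes {
        let keep_local = local
            .get(&incoming.id)
            .map_or(false, |existing| existing.updated_at >= incoming.updated_at);
        if !keep_local {
            local.insert(incoming.id, incoming);
        }
    }
}

fn main() {
    let mut local = HashMap::new();
    local.insert(1, Topic { id: 1, content: "local edit".into(), updated_at: 200 });
    local.insert(2, Topic { id: 2, content: "stale".into(), updated_at: 50 });

    apply_server_changes(
        &mut local,
        vec![
            Topic { id: 1, content: "older server copy".into(), updated_at: 100 },
            Topic { id: 2, content: "fresh from server".into(), updated_at: 150 },
        ],
    );

    assert_eq!(local[&1].content, "local edit");        // local was newer, kept
    assert_eq!(local[&2].content, "fresh from server"); // server was newer, applied
}
```

In the real app the merged rows would be written back through rusqlite, and the Solid stores would pick the changes up from there.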

Running app

  • User actions made in the app (WebView) send messages to Tauri, which writes everything to SQLite and reads from SQLite wherever needed.
  • Tauri/Rust connects to a local folder in the user's OS file system. All topics are persisted as markdown files in the folder.
    • The markdown-rs crate is used to convert .md file content to a Topic and vice versa.
    • A file watcher listens for any changes made to the folder. If any of the files get modified or deleted, it updates SQLite accordingly. If a file was deleted, it soft deletes the Topic in SQLite, so users can revert .md file deletions.
  • If the user is authenticated and has an account, the app will optionally sync or publish to the server.
    • If sync is set up, any change made to SQLite is also sent to the server as a GraphQL request.
    • See this talk. The setup described in the talk uses the TinyBase tool, but our setup is essentially the same, just in Rust.
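
The soft-delete flow the watcher drives can be sketched with an in-memory stand-in for the SQLite table (in practice this would be e.g. a `deleted_at` column on the Topic row flipped by the watcher; all names here are hypothetical):

```rust
use std::collections::HashMap;

// Stand-in for a row in the topics table; in the real app this lives in SQLite.
#[derive(Debug)]
struct TopicRow {
    content: String,
    deleted: bool, // soft-delete flag instead of dropping the row
}

struct TopicStore {
    rows: HashMap<String, TopicRow>, // keyed by .md file path
}

impl TopicStore {
    // Called by the file watcher when a .md file disappears from the folder.
    fn on_file_deleted(&mut self, path: &str) {
        if let Some(row) = self.rows.get_mut(path) {
            row.deleted = true; // keep the data so the delete can be reverted
        }
    }

    // Reverting a delete restores the row; the app can then rewrite the .md file.
    fn revert_delete(&mut self, path: &str) -> Option<&str> {
        let row = self.rows.get_mut(path)?;
        row.deleted = false;
        Some(row.content.as_str())
    }
}

fn main() {
    let mut store = TopicStore { rows: HashMap::new() };
    store.rows.insert(
        "notes/rust.md".into(),
        TopicRow { content: "# Rust".into(), deleted: false },
    );

    store.on_file_deleted("notes/rust.md");
    assert!(store.rows["notes/rust.md"].deleted);

    // The user reverts; the content was never lost.
    assert_eq!(store.revert_delete("notes/rust.md"), Some("# Rust"));
    assert!(!store.rows["notes/rust.md"].deleted);
}
```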

Local language model inference

  • The Rust/Tauri binary will embed either a 7B or a 13B LLaMA model. Users are given the choice to download the app with or without the language model embedded.
    • is used to embed the language model into the binary and provide inference
  • Potentially the language model is provided as a separate download, with inference then served via an HTTP server.
    • This would also allow sharing LLaMA model inference with other apps. TextSynth Server can potentially be used.
  • llm-chain is used to connect to the local LLaMA model and to make and chain prompts
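
At its core, chaining prompts (what llm-chain provides) means feeding each step's completion into the next step's prompt. A minimal sketch with a mocked model call, since the real inference backend depends on how the model is embedded; the `{input}` template convention here is illustrative, not llm-chain's actual API:

```rust
// A chain step is a prompt template with an `{input}` placeholder.
// Each completion becomes the next step's input.
fn run_chain(model: impl Fn(&str) -> String, steps: &[&str], input: &str) -> String {
    steps.iter().fold(input.to_string(), |acc, template| {
        let prompt = template.replace("{input}", &acc);
        model(&prompt)
    })
}

fn main() {
    // Mock model: echoes the prompt in brackets so the chaining is visible.
    let model = |prompt: &str| format!("[{prompt}]");
    let out = run_chain(
        model,
        &["Summarise: {input}", "Translate to French: {input}"],
        "topic notes",
    );
    assert_eq!(out, "[Translate to French: [Summarise: topic notes]]");
}
```

With a real backend, `model` would be the call into the embedded LLaMA (or the local HTTP inference server) instead of a closure.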

Local language model fine tuning

  • The model is continuously fine-tuned on user data
    • TODO:

Website

Built with

  • Front end is built with Solid
  • All useful global state is persisted to local storage with TinyBase
  • Hanko is used for user authentication
    • On successful auth, a cookie gets saved and is then used to make authorized GraphQL queries
  • GraphQL Mobius is used to make typed GraphQL queries
    • All data received back is saved to Solid stores, which then update the UI

On startup

  • Depends on the page, but mostly it will:
    • load data from local storage via TinyBase into Solid stores; the UI updates instantly
    • send GraphQL requests to load fresh and missing data
      • load it into Solid stores and update the UI

Running website

  • Users do actions on the website, which update local Solid stores
    • on each Solid store update, a GraphQL request gets sent to persist the changes to the server
    • Server-Sent Events are set up to live-update the stores with data from the server (if there is any)
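
On the wire, each Server-Sent Event is just a block of `event:`/`data:` lines terminated by a blank line. A sketch of the framing such store-update events would use (the event name and payload are made up):

```rust
// Format one SSE frame. Multi-line payloads must be split across
// multiple `data:` lines, per the SSE wire format.
fn sse_frame(event: &str, data: &str) -> String {
    let mut frame = format!("event: {event}\n");
    for line in data.lines() {
        frame.push_str(&format!("data: {line}\n"));
    }
    frame.push('\n'); // blank line terminates the event
    frame
}

fn main() {
    let frame = sse_frame("store-update", "{\"topic\":\"rust\"}");
    assert_eq!(frame, "event: store-update\ndata: {\"topic\":\"rust\"}\n\n");
}
```

On the client side, an `EventSource` listener for that event name would feed the payload straight into the TinyBase/Solid stores.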

Server Database

  • EdgeDB is used to store all data
  • Upstash or Grafbase Cache is used for all caching

Server API layer

  • Grafbase is used to provide a GraphQL access layer to the whole API.
  • The Grafbase API is set up to do all CRUD operations on top of EdgeDB (creating, reading, updating, deleting data)
  • Grafbase is also set up to run any logic that needs to run on the server, such as creating Stripe checkout sessions, processing payments, etc.

In Go

  • A lot of server code will be written in Go and exposed as GraphQL too.
    • Stitched together with Grafbase to provide one GraphQL interface to everything
  • Go code is deployed as a Docker container to Google Cloud with proper logging and observability set up

Mobile apps (iOS/Android)


Scraper

  • A Google-like scraper is built in Go using Colly
    • it watches over many websites and ingests the data into the system
    • the data then gets processed and added to the database

Global language model inference


Payments

  • Payments can be done via Stripe or Solana
  • Any content can be paywalled; the user sets the price

Solid UI

Text editor

  • Monaco Editor for pure text editing
  • TipTap for interactive text editing
  • Allow switching between Monaco and TipTap easily
    • everything serialises to the same structure
    • interactive parts can be presented as Solid JSX components
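
One way that shared structure could look: a small node tree that serialises to plain markdown for Monaco, while TipTap (and the Solid components) render the same nodes richly. The node shapes here are hypothetical:

```rust
// Hypothetical shared document structure both editors serialise to.
enum Node {
    Heading(u8, String),
    Paragraph(String),
    // Interactive parts (rendered as Solid JSX components in TipTap)
    // round-trip through the same tree.
    Interactive { component: String },
}

// Monaco works on plain text, so the tree flattens to markdown.
fn to_markdown(nodes: &[Node]) -> String {
    nodes
        .iter()
        .map(|n| match n {
            Node::Heading(level, text) => format!("{} {text}", "#".repeat(*level as usize)),
            Node::Paragraph(text) => text.clone(),
            Node::Interactive { component } => format!("<{component} />"),
        })
        .collect::<Vec<_>>()
        .join("\n\n")
}

fn main() {
    let doc = vec![
        Node::Heading(1, "Rust".into()),
        Node::Paragraph("Systems language.".into()),
        Node::Interactive { component: "Quiz".into() },
    ];
    assert_eq!(to_markdown(&doc), "# Rust\n\nSystems language.\n\n<Quiz />");
}
```

Switching editors then only means re-rendering the same tree, never converting between two editor-specific formats.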


Build system

Analytics / Observability

  • Most logs are collected and sent to Tinybird

Images / media files

LLM processing

  • llm-chain locally, or in Grafbase WASM resolvers with Rust
    • can also use LangChain in Python/TS in some services, depending on the use case


Website deploy

  • Cloudflare for DNS, website analytics, web asset serving and more.



Data reporting


Translations

  • Inlang
    • for both Solid and React Native

Browser extension

  • TODO:

VSCode extension

  • TODO:


  • most likely in either Go or Rust

Potentially useful

  • Inngest for queues, background jobs
    • what problems can it solve?
  • vite-plugin-ssr
    • can be used to improve on SolidStart, or potentially to add Houdini Solid support for GraphQL with it