<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" version="2.0">
  <channel>
    <title>Toasty&apos;s Technical Posts</title>
    <link>https://wegmueller.it/</link>
    <atom:link href="https://wegmueller.it/feed.xml" rel="self" type="application/rss+xml"/>
    <description>Unix systems engineering, Rust development, and infrastructure insights by Till Wegmüller (@toasterson)</description>
    <lastBuildDate>Sun, 29 Mar 2026 21:03:18 GMT</lastBuildDate>
    <language>en</language>
    <generator>Lume v2.0.1</generator>
    <item>
      <title>What if SPARC&apos;s Throughput Philosophy Got a Second Chance on RISC-V?</title>
      <link>https://wegmueller.it/blog/neo-sparc-many-core-uma/</link>
      <guid isPermaLink="false">https://wegmueller.it/blog/neo-sparc-many-core-uma/</guid>
      <description>An AI-assisted deep research session into why Sun&apos;s UltraSPARC T-series design philosophy — many simple cores, hardware multithreading, unified memory — deserves a comeback on open RISC-V silicon.</description>
      <content:encoded>
        <![CDATA[<h1>What if SPARC's Throughput Philosophy Got a Second Chance on RISC-V?</h1>
        <p>Some backstory: I have been using Claude Code to plan out and research whether Sun's <a href="https://en.wikipedia.org/wiki/Sun_Ray">SunRay</a> thin-client architecture could be replicated as a modern Wayland compositor. During that process I kept getting this gut feeling that SPARC's throughput computing design could handle this kind of workload — and my neuro-symbolic AI experiment (<a href="https://github.com/Toasterson/akh-medu">akh-medu</a>) — more efficiently than what we have today. But I also knew that with all the improvements in modern CPUs and GPUs, a direct comparison was not really fair anymore. The old hardware is gone, the ISA is dead, and the ecosystem moved on.</p>
        <p>So I posed a side research question to Claude: What from SPARC's design philosophy is actually worth salvaging? And what does RISC-V already have that could carry those ideas forward? This post is the result of that research session. Claude helped me survey the RISC-V many-core landscape, pull together specs, summarize architectural details, and do the math on theoretical performance. I then wrote up what I found interesting and where I think the real opportunity is. The research legwork is AI-assisted, the opinions and the architectural instinct are mine.</p>
        <p>This is going to be a long one. Grab a coffee.</p>
        <h2>The Throughput Computing Thesis</h2>
        <p>In 2005, while Intel was chasing single-thread performance with ever-deeper out-of-order pipelines, Sun went the opposite direction. Their bet: <strong>most server workloads are memory-latency-bound, not compute-bound.</strong> Instead of making one thread fast, make many threads <em>always busy</em>.</p>
        <p>The T1 had 8 cores with 4 hardware threads each (32 total). Each core was deliberately simple — a 6-stage in-order pipeline. When thread A stalls on a cache miss, the core instantly switches to thread B. Zero-cycle context switch, because all thread state lives in dedicated hardware registers.</p>
        <p>The T2 doubled down: 8 cores, 8 threads each, 64 hardware threads. Added per-core FPUs, integrated crypto engines (AES, SHA, RSA in hardware), and dual 10GbE MACs on-die.</p>
        <p>Here is the thing that blew my mind when I actually looked at the scheduling logic: this is <strong>conceptually identical</strong> to how GPU warp scheduling works. An NVIDIA SM has 32-thread warps; when one warp stalls on memory, it switches to another. SPARC T-series was doing the same thing, except each thread ran a fully independent instruction stream (MIMD) rather than lockstep SIMT. Every thread could branch independently, chase pointers through graphs, run different code. No warp divergence penalty.</p>
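        <p>To make the latency-hiding effect concrete, here is a toy model I find useful. It is not the real hardware — just a sketch where every thread alternates one issue cycle with a fixed memory stall, and the core issues from any ready thread each cycle, T1-style:</p>

```rust
// Toy model of fine-grained hardware multithreading (not real hardware):
// each thread issues one instruction, then stalls `stall` cycles on memory.
// The core picks any ready thread every cycle — the "zero-cycle switch".
fn issued(threads: usize, stall: u64, cycles: u64) -> u64 {
    let mut ready_at = vec![0u64; threads]; // cycle when each thread can issue again
    let mut count = 0u64;
    for cycle in 0..cycles {
        if let Some(t) = (0..threads).find(|&t| ready_at[t] <= cycle) {
            count += 1;
            ready_at[t] = cycle + 1 + stall; // one issue cycle, then stall again
        }
    }
    count
}

fn main() {
    println!("1 thread:  {} instructions in 2100 cycles", issued(1, 20, 2100));
    println!("4 threads: {} instructions in 2100 cycles", issued(4, 20, 2100));
    println!("8 threads: {} instructions in 800 cycles (short stall)", issued(8, 7, 800));
}
```

        <p>With one thread and a 20-cycle miss, the core sits idle 20 of every 21 cycles; with four threads it retires four times as much work, and with enough threads relative to the stall it never idles at all. The real T-series scheduler is subtler (it tracks long-latency ops, not just misses), but the shape of the win is the same.</p>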
        <h2>Why SPARC Lost (And Why It Doesn't Matter)</h2>
        <p>SPARC did not lose on architecture. It lost on ecosystem lock-in (x86 had all the software), single-thread expectations (most 2000s workloads were poorly parallelized), Oracle's mismanagement after the Sun acquisition, and economies of scale.</p>
        <p>But what has changed since then:</p>
        <ul>
        <li><strong>Parallelism is mandatory now.</strong> Games need 8+ threads. AI inference is embarrassingly parallel. The &quot;most workloads are single-threaded&quot; argument is dead.</li>
        <li><strong>ISA matters less.</strong> Everything compiles from portable source. Apple proved you can switch architectures and the world follows. Rust does not care what ISA you target.</li>
        <li><strong>Memory bandwidth is THE bottleneck.</strong> LLM inference is almost entirely memory-bandwidth-limited. This is exactly where SPARC's latency-hiding design excels.</li>
        <li><strong>The CPU-GPU split is painful.</strong> PCIe transfers, synchronization, separate memory spaces, different programming models. The industry is spending enormous effort papering over a fundamentally broken architecture.</li>
        </ul>
        <h2>The Unified Memory Argument</h2>
        <p>This is where things get really interesting for practical workloads. Every time data crosses the PCIe bus between CPU and discrete GPU, you pay a bandwidth tax (PCIe 5.0 x16 gives ~50 GB/s vs HBM3E's 3.35 TB/s — a 67x cliff), a latency tax (1-10 microseconds with DMA vs 30-100 nanoseconds for local memory), a synchronization tax, and a programming model tax (two languages, two toolchains, two sets of bugs).</p>
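        <p>The per-frame cost is easy to ballpark from those numbers. A back-of-envelope sketch (assuming an uncompressed 4K RGBA frame and ignoring protocol overhead):</p>

```rust
// Back-of-envelope: time to move one uncompressed 4K RGBA frame across
// PCIe 5.0 x16 (~50 GB/s, per the text) vs local HBM3E (3.35 TB/s).
fn copy_micros(bytes: f64, bytes_per_sec: f64) -> f64 {
    bytes / bytes_per_sec * 1e6
}

fn main() {
    let frame = 3840.0 * 2160.0 * 4.0;      // ~33.2 MB per frame
    let pcie = copy_micros(frame, 50e9);    // roughly 660 µs
    let hbm = copy_micros(frame, 3.35e12);  // roughly 10 µs
    println!("PCIe: {pcie:.0} µs, HBM: {hbm:.1} µs per frame copy");
}
```

        <p>At 60 fps you have ~16.7 ms per frame, so two PCIe crossings burn close to 8% of the frame budget before any compositing or encoding happens.</p>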
        <h3>Where This Hurts: Thin-Client Compositors</h3>
        <p>Consider a thin-client compositor pipeline — something like a modern Wayland-based successor to SunRay that renders, encodes, compresses and streams frames to remote clients. This is the kind of thing I have been researching with Claude Code. Every frame does something like:</p>
        <pre><code>Wayland clients submit buffers
        → Compositor composites (CPU or GPU)
        → Frame capture (if GPU rendered: GPU→CPU copy!)
        → H.264/AV1 encode (if GPU encode: CPU→GPU copy!)
        → Compressed bitstream
        → zstd compress (CPU)
        → QUIC packetize + TLS encrypt (CPU)
        → Network transmit
        </code></pre>
        <p>With a discrete GPU, the render-to-encode path crosses PCIe <strong>twice</strong> per frame unless you carefully keep everything on the GPU side — which means your compositor logic also needs to run on the GPU, in a different programming model.</p>
        <p>With true UMA, this becomes: everything reads and writes the same memory. Zero copies. One programming model. One address space. The entire pipeline is just functions passing references.</p>
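        <p>Here is what I mean, as a deliberately naive Rust sketch. The stage names and bodies are stand-ins, not a real compositor API — the point is that every stage is just a function over the same buffers, with no transport in between:</p>

```rust
// Naive UMA pipeline sketch: every stage is a plain function over shared
// memory. No DMA, no staging buffers, no second programming model.
// Stage names and bodies are illustrative, not a real compositor.
fn composite(client_buffers: &[&[u8]]) -> Vec<u8> {
    // "composite" by XOR-merging buffers — a stand-in for real blending
    let len = client_buffers.iter().map(|b| b.len()).max().unwrap_or(0);
    let mut frame = vec![0u8; len];
    for buf in client_buffers {
        for (dst, src) in frame.iter_mut().zip(buf.iter()) {
            *dst ^= src;
        }
    }
    frame
}

fn encode(frame: &[u8]) -> Vec<u8> {
    // stand-in for H.264/AV1: trivial run-length "encoding" of the frame
    let mut out = Vec::new();
    let mut iter = frame.iter().peekable();
    while let Some(&byte) = iter.next() {
        let mut run = 1u8;
        while run < u8::MAX && iter.peek() == Some(&&byte) {
            iter.next();
            run += 1;
        }
        out.push(run);
        out.push(byte);
    }
    out
}

fn packetize(bitstream: &[u8], mtu: usize) -> Vec<&[u8]> {
    bitstream.chunks(mtu).collect() // borrows the bitstream — still zero copy
}

fn main() {
    let (a, b) = ([0u8; 4096], [255u8; 4096]);
    let frame = composite(&[&a, &b]);
    let bitstream = encode(&frame); // reads the frame in place
    let packets = packetize(&bitstream, 1200);
    println!("{} bytes encoded into {} packets", bitstream.len(), packets.len());
}
```

        <p>Every arrow in the pipeline diagram above collapses into a function call passing a reference. That is the whole argument.</p>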
        <h3>Where This Hurts: Neuro-Symbolic AI</h3>
        <p>The other workload that got me thinking about this is neuro-symbolic AI. I have been experimenting with <a href="https://github.com/Toasterson/akh-medu">akh-medu</a>, a system that combines LLM inference with knowledge graph traversal and symbolic reasoning. The compute profile of this kind of workload is exactly where the CPU-GPU split hurts the most:</p>
        <table>
        <thead>
        <tr>
        <th>Component</th>
        <th>Compute Profile</th>
        <th>GPU Fitness</th>
        <th>Many-Core UMA Fitness</th>
        </tr>
        </thead>
        <tbody>
        <tr>
        <td>LLM inference (attention)</td>
        <td>Matrix multiply, bandwidth-bound</td>
        <td>Excellent</td>
        <td>Good (vector units + HBM)</td>
        </tr>
        <tr>
        <td>Knowledge graph traversal</td>
        <td>Pointer-chasing, irregular</td>
        <td><strong>Terrible</strong></td>
        <td><strong>Excellent</strong></td>
        </tr>
        <tr>
        <td>SPARQL query execution</td>
        <td>Branch-heavy, integer</td>
        <td><strong>Terrible</strong></td>
        <td><strong>Excellent</strong></td>
        </tr>
        <tr>
        <td>Symbolic unification</td>
        <td>Pattern matching, recursive</td>
        <td><strong>Terrible</strong></td>
        <td><strong>Excellent</strong></td>
        </tr>
        <tr>
        <td>Result composition</td>
        <td>Mix of all above</td>
        <td>Requires PCIe round-trips</td>
        <td><strong>Native — zero copy</strong></td>
        </tr>
        </tbody>
        </table>
        <p>On a discrete GPU, a neuro-symbolic query involves <strong>four PCIe round-trips</strong> — embedding on GPU, back to CPU for graph traversal (GPUs handle irregular access poorly), back to GPU for language generation, back to CPU for the final answer. At 5-10 microseconds each with DMA overhead, that is 20-40+ microseconds of pure waste per query.</p>
        <p>On a many-core UMA chip, you just partition cores by task. LLM cores, graph cores, reasoning cores, all sharing the same HBM. Zero copies. Zero synchronization overhead. Functions call each other. It is just Rust.</p>
        <p>The graph traversal is particularly interesting here. Graph databases are notoriously GPU-hostile because memory access patterns are irregular (follow pointer, read node, follow next pointer), each step depends on the previous one, and nodes are scattered across memory. These are exactly the patterns that hardware multithreading was designed for. When a core's thread 0 chases a pointer and misses cache, thread 1 immediately runs. Eight threads per core means 8 concurrent graph walks in flight, hiding all those cache misses.</p>
        <h3>What Apple and NVIDIA Already Proved</h3>
        <p>Apple's M-series is the strongest existence proof that UMA works. M4 Ultra runs ~819 GB/s unified memory shared by CPU, GPU, Neural Engine, and media engines. Final Cut Pro on M-series beats discrete-GPU workstations costing 3x more precisely because the render-encode pipeline has zero copies.</p>
        <p>NVIDIA conceded the point too. Grace Hopper connects CPU and GPU via NVLink-C2C at 900 GB/s with cache coherency. AMD's MI300A fuses CPU and GPU dies sharing the same HBM pool.</p>
        <p>The industry is converging. The question is whether you can leapfrog to a cleaner architecture.</p>
        <h2>The 2K-Core Architecture Sketch</h2>
        <p>So what would this actually look like? Here is where I let myself dream a little:</p>
        <pre><code>2,048 RISC-V cores @ 1.5-2.0 GHz
        8 hardware threads per core = 16,384 hardware threads
        Per-core: 256-bit RVV vector FPU (8x FP32/cycle)
        Per-core: 8KB L1I + 8KB L1D (intentionally small)
        
        Clustered into 128 tiles of 16 cores each
        Per-tile: 512KB shared L2
        
        Tiles connected via 2D mesh Network-on-Chip
        Shared L3: 256MB (on-package HBM-backed)
        
        Memory: 4-8 HBM3E stacks
        Bandwidth: 2-4 TB/s
        Capacity: 64-128 GB
        UMA: every core sees the same physical address space
        </code></pre>
        <p>Theoretical FP32 throughput with 256-bit RVV and FMA: <strong>~65 TFLOPS</strong>. For reference, an NVIDIA H100 SXM does ~60 TFLOPS FP32 (but behind PCIe for CPU workloads), and Apple M4 Ultra GPU does ~27 TFLOPS (but with UMA). Our hypothetical gets competitive TFLOPS <em>with</em> UMA <em>and</em> fully general-purpose cores.</p>
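        <p>The arithmetic behind that number, for the skeptical (counting an FMA as two FLOPs, as vendor specs do):</p>

```rust
// Peak FP32 for the sketch: cores × vector lanes × 2 (FMA) × clock.
fn peak_tflops(cores: f64, fp32_lanes: f64, ghz: f64) -> f64 {
    cores * fp32_lanes * 2.0 * ghz * 1e9 / 1e12
}

fn main() {
    // 256-bit RVV = 8 FP32 lanes per core; 2,048 cores at 2.0 GHz
    println!("{:.1} TFLOPS peak", peak_tflops(2048.0, 8.0, 2.0));
    // at the low end of the clock range, 1.5 GHz:
    println!("{:.1} TFLOPS peak", peak_tflops(2048.0, 8.0, 1.5));
}
```

        <p>So the honest range for the sketch is ~49-65 TFLOPS depending on where the clock lands. Peak numbers, of course — sustained throughput depends on feeding those lanes from HBM.</p>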
        <p>The 16,384 hardware threads are all fully independent MIMD — no divergence penalty. For irregular workloads (graph traversal, symbolic reasoning, protocol processing), effective thread utilization could be 3-5x higher than GPU despite fewer threads.</p>
        <p>Why in-order cores? They are ~0.1mm² in 5nm versus ~1-2mm² for out-of-order. You fit 10-20x more per die. Power per thread is 5-10x lower. Latency is deterministic (important for real-time compositing and protocol deadlines). And the 8 hardware threads per core hide memory latency the same way GPU warps do.</p>
        <h2>What to Take from SPARC, What to Leave</h2>
        <p>The valuable innovations from SPARC T-series are <strong>microarchitectural, not ISA-level.</strong> Fine-grained hardware multithreading, crossbar memory interconnect, zero-cycle thread switching — none of these are ISA-dependent. They port cleanly to RISC-V.</p>
        <p>What to drop: Register windows (waste die area, no benefit with modern L1 caches), the SPARC ISA itself (RISC-V has better toolchains), SPARC VIS vectors (RVV 1.0 is strictly superior), and the GPL license (RISC-V's BSD license enables both FLOSS and commercial use).</p>
        <p>The OpenSPARC T1 source is unfortunately no longer easily available online (thanks Oracle). But the architectural ideas live on in <a href="https://github.com/PrincetonUniversity/openpiton">OpenPiton</a>, which started with OpenSPARC T1 cores and now supports RISC-V. If you want to study how SPARC's thread scheduling and crossbar design worked and see it applied to modern silicon, that is your starting point.</p>
        <h2>What You Can Hack On Today</h2>
        <p>This is not purely theoretical. The components exist in open source.</p>
        <p><strong>FPGA Proof-of-Concept (~$2K-$10K):</strong> Take a Xilinx Alveo U280 or U55C (HBM-equipped FPGA), pick an open RISC-V core (<a href="https://github.com/openhwgroup/cva6">CVA6</a> from ETH Zurich, or <a href="https://github.com/pulp-platform/snitch_cluster">Snitch</a>, which was designed for many-core), and add hardware multithreading by duplicating the register file and PC 4-8 times behind a thread scheduler. Then tile and replicate using <a href="https://github.com/PrincetonUniversity/openpiton">OpenPiton</a>. OpenPiton is the real gem here: it provides the mesh interconnect, cache coherence, and memory controller integration in one framework. Boot Linux or illumos on 16-64 cores and measure scaling behavior.</p>
        <p><strong>Simulation at Scale:</strong> <a href="https://www.gem5.org/">gem5</a> can model thousands of RISC-V cores. <a href="https://fires.im/">FireSim</a> from UC Berkeley runs on AWS F1 FPGA instances and can simulate hundreds of cores at near-real-time speeds.</p>
        <p><strong>Ride Existing Silicon:</strong> <a href="https://www.esperanto.ai/">Esperanto Technologies</a> built a 1,088 RISC-V core chip (ET-SoC-1 on TSMC 7nm), founded by Dave Ditzel — former Sun chief SPARC architect. This is literally the spiritual successor to SPARC Niagara rebuilt on RISC-V. <a href="https://tenstorrent.com/">Tenstorrent</a> under Jim Keller is building RISC-V many-core with mesh NoC and matrix units. <a href="https://github.com/pulp-platform/occamy">ETH Zurich's Occamy</a> taped out 432 Snitch RISC-V cores in GlobalFoundries 12nm with open-source RTL.</p>
        <p><strong>illumos Thread Scheduler:</strong> illumos already has sophisticated thread scheduling (processor sets, lgroups, FSS scheduler) inherited from Solaris, which was tuned for SPARC T-series hardware threads. If someone builds RISC-V hardware with SPARC-style multithreading, illumos is arguably the best OS to run on it. The <code>cmt.c</code> and <code>pg.c</code> code already distinguishes hardware threads from cores from sockets. That is a head start nobody else has.</p>
        <h2>So How Crazy Am I?</h2>
        <p>Look, I am not going to pretend I am going to tape out a 2,048-core chip in my garage. But I also do not think this is pure fantasy. The core ideas — throughput computing, hardware multithreading, unified memory — are validated. SPARC T-series proved the architecture works. Apple proved UMA wins for mixed workloads. Esperanto and ETH Zurich proved you can build many-core RISC-V and it actually boots. The pieces exist.</p>
        <p>What I honestly do not know yet is whether the software side is tractable. Getting compilers to do the right thing for thousands of in-order cores is not a solved problem. OS scheduler support needs real work — though illumos has a head start there because Solaris was tuned for exactly this kind of hardware. And developer tooling for debugging 16,000 hardware threads... yeah, that is going to be interesting.</p>
        <p>If someone wanted to actually pursue this, a realistic path could look like: study the OpenSPARC design decisions through OpenPiton, prototype some hardware multithreading on a RISC-V core on an FPGA, and benchmark workloads like the SunRay-style compositor pipeline or neuro-symbolic queries against a discrete GPU setup. See if the numbers back up the gut feeling. And honestly, keep watching what Esperanto and Tenstorrent ship. If their silicon lands and performs well, you might not need custom hardware at all. Just adapt the software stack.</p>
        <p>But even if nobody builds this exact chip, I think the design direction matters. We keep bolting more heterogeneous accelerators onto systems and then spending half our engineering time on the glue between them. At some point it is worth asking: what if the cores were just good enough at everything that you did not need the split? Not the fastest at any one thing. But fast enough at all of them, in one address space, with one programming model.</p>
        <p>Not a GPU killer. Not a CPU killer. Something in between that makes the distinction irrelevant for the workloads that actually need both.</p>
        <p>If any of this makes you want to dig into <a href="https://github.com/PrincetonUniversity/openpiton">OpenPiton</a> or start tinkering with <a href="https://github.com/pulp-platform/snitch_cluster">Snitch clusters</a> — I would love to hear about it. And if you know people at Esperanto or Tenstorrent who want to talk about workload profiles for this kind of architecture, point them my way.</p>
        <p><em>A note on process: The workload analysis, detailed writeups, performance numbers, spec comparisons, and ecosystem survey in this post were compiled with help from Claude Code. I find AI is genuinely good for this kind of legwork — pulling together scattered specs, summarizing source code, doing back-of-envelope math, and writing up the detailed technical sections. The opinions and the conviction that this design philosophy deserves a second look are mine. As always, verify the numbers before betting hardware budgets on a blog post.</em></p>
        <p>I hope to hear from folks on social media or by email.</p>
        <p>-- Toasty</p>
        ]]>
      </content:encoded>
      <pubDate>Sun, 29 Mar 2026 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Gemini DeepResearch document about an illumos Virtiofs driver in Rust</title>
      <link>https://wegmueller.it/blog/Implementing Virtiofs Driver in Illumos Rust/</link>
      <guid isPermaLink="false">https://wegmueller.it/blog/Implementing Virtiofs Driver in Illumos Rust/</guid>
      <description>A Gemini AI-generated research document about how to build an illumos virtiofs driver in Rust</description>
      <content:encoded>
        <![CDATA[<h1>Prefix</h1>
        <p>Quite often, when I have a few specific questions about a topic, I find that nobody has written the very specific guide I am looking for. This was one of those cases: how would you actually build a virtiofs driver for illumos in Rust? Gemini's DeepResearch mode has often produced exactly this kind of article for me, and I have enjoyed reading them. Since I enjoyed the read, I figured others might as well.</p>
        <h1><strong>An Architectural and Implementation Blueprint for a virtiofs Kernel Driver in Rust on illumos</strong></h1>
        <h2><strong>Architectural Foundations: The virtiofs Device and FUSE Protocol</strong></h2>
        <p>This report provides a detailed architectural blueprint for the implementation of a VIRTIO Filesystem (virtiofs) kernel driver for the illumos operating system. The implementation is predicated on the use of the Rust programming language, introducing specific challenges and design patterns related to kernel-space programming, Foreign Function Interface (FFI) management, and operating system-specific integration.</p>
        <h3><strong>Analysis of the VIRTIO v1.2 Specification</strong></h3>
        <p>The virtiofs device is a paravirtualized filesystem interface formally defined within the OASIS VIRTIO (Virtual I/O Device) 1.2 specification.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-1">1</a></sup> The VIRTIO standard, in general, provides a "straightforward, efficient, standard and extensible mechanism for virtual devices".<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-4">4</a></sup> The virtiofs device, specifically, is designed to provide high-performance, local filesystem semantics for guest virtual machines to access a directory tree on the host.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-6">6</a></sup><br>
        The virtiofs device is identified by VIRTIO Device ID 26.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-9">9</a></sup> Like all VIRTIO devices, it is transport-agnostic and can be exposed to the guest operating system over a virtual PCI bus<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-5">5</a></sup> or via a Memory-Mapped I/O (MMIO) interface.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-10">10</a></sup><br>
        A critical architectural prerequisite for this project is the existing VIRTIO nexus framework within the illumos kernel, virtio(4D).<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-11">11</a></sup> This generic driver is not a device driver itself, but a nexus (bus) driver. Its documented purpose is to "provide a framework for other device drivers" and to manage "feature negotiation, virtqueue management, used and available rings, interrupts, and more".<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-11">11</a></sup><br>
        Analysis of the illumos-gate source repository<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-12">12</a></sup> confirms this structure; existing VIRTIO client drivers, such as vioblk.c for block devices<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-12">12</a></sup> and vioif.c for network devices<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-12">12</a></sup>, are implemented as clients of this virtio(4D) framework.<br>
        Therefore, the Rust virtiofs driver must not attempt to implement the raw PCI or MMIO transport-level logic. To do so would be redundant and architecturally incorrect, bypassing the established kernel framework. The correct design is to implement the Rust driver as a client of the virtio(4D) nexus. This significantly simplifies the project, abstracting away the low-level hardware interaction. The driver's FFI bridge will target the internal C-language API exposed by the virtio(4D) framework, not the lower-level DDI/DKI functions for PCI device management.</p>
        <h3><strong>The FUSE-in-VIRTIO Protocol</strong></h3>
        <p>The fundamental design of virtiofs is the tunneling of the Linux FUSE (Filesystem in Userspace) protocol over the VIRTIO transport mechanism.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-7">7</a></sup> This design dictates a specific set of roles:</p>
        <ul>
        <li><strong>Guest Driver (Our Rust Code):</strong> Acts as the <strong>FUSE client</strong>.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-10">10</a></sup></li>
        <li><strong>Host-side Components (e.g., QEMU + virtiofsd):</strong> Act as the <strong>FUSE file system daemon</strong>.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-6">6</a></sup></li>
        </ul>
        <p>The filesystem session is initiated by the driver, after VIRTIO initialization, by sending a FUSE_INIT request message on one of the standard request virtqueues.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-10">10</a></sup><br>
        This architecture represents a "security inversion" from the traditional FUSE model. In a standard FUSE implementation (e.g., on Linux or FreeBSD)<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-17">17</a></sup>, the kernel driver is the trusted entity, and the userspace daemon that implements the filesystem logic is untrusted.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-6">6</a></sup> In the virtiofs model, this is reversed: the guest kernel (and our driver) is the <em>untrusted client</em>.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-6">6</a></sup> The host-side virtiofsd daemon is a hardened implementation that <em>must not trust the client</em>.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-6">6</a></sup><br>
        This security inversion has profound and beneficial implications for this project. The illumos driver is not responsible for complex filesystem logic, data integrity, or permission checks. That entire logical burden is offloaded to the host virtiofsd daemon.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-6">6</a></sup> The driver's sole responsibility is to be a correct <em>protocol translator</em>, converting illumos VFS (Virtual File System) requests into FUSE protocol opcodes<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-20">20</a></sup> and VIRTIO transport messages. This constrained responsibility and minimal attack surface make virtiofs an ideal candidate for a novel implementation in Rust, as the most complex and potentially unsafe logic is handled externally by the host.</p>
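        <p>Since the driver is "just" a protocol translator, the wire format is where precision matters. As a taste, here is the request header every FUSE message begins with, plus the FUSE_INIT body, in <code>#[repr(C)]</code> Rust. This follows the classic FUSE ABI layout (the init body shown is the old pre-7.36 16-byte form, before <code>flags2</code> was added), so treat it as a sketch to be checked against the headers, not gospel:</p>

```rust
// FUSE wire structures the driver would serialize into virtqueue buffers.
// Field order follows the classic FUSE ABI; #[repr(C)] keeps Rust from
// reordering fields. FUSE_INIT is opcode 26 in the FUSE protocol.
const FUSE_INIT: u32 = 26;

#[repr(C)]
struct FuseInHeader {
    len: u32,     // total request length, header included
    opcode: u32,  // e.g. FUSE_INIT
    unique: u64,  // request id, echoed back in the reply
    nodeid: u64,
    uid: u32,
    gid: u32,
    pid: u32,
    padding: u32,
}

#[repr(C)]
struct FuseInitIn {
    // pre-7.36 layout; newer protocol versions extend this struct
    major: u32,
    minor: u32,
    max_readahead: u32,
    flags: u32,
}

fn main() {
    // A FUSE_INIT request is the header immediately followed by the init body.
    let total = core::mem::size_of::<FuseInHeader>() + core::mem::size_of::<FuseInitIn>();
    println!("FUSE_INIT request: {total} bytes on the wire (opcode {FUSE_INIT})");
}
```

        <p>Getting these layouts byte-exact is most of the "protocol translator" job; everything semantic lives on the host side.</p>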
        <h3><strong>Virtqueue Structure and Operation</strong></h3>
        <p>All communication between the driver (client) and the device (daemon) occurs via virtqueues.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-1">1</a></sup> This is a standard VIRTIO mechanism based on shared memory rings. The driver places descriptors in an "available" ring for the device to consume and asynchronously processes "used" descriptors that the device writes back after completing a request.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-10">10</a></sup><br>
        The virtiofs specification defines a specific set of virtqueues<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-9">9</a></sup>:</p>
        <ul>
        <li><strong>Virtqueue 0:</strong> hiprio (a high-priority request queue)</li>
        <li><strong>Virtqueue 1:</strong> notification queue</li>
        <li><strong>Virtqueues 2..n:</strong> request queues (standard FUSE requests)</li>
        </ul>
        <p>The notification queue (vq 1) exists only if the device offers, and the driver negotiates, the VIRTIO_FS_F_NOTIFICATION feature bit.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-10">10</a></sup> If this feature is active, the driver <em>must</em> populate this queue with empty buffers to receive asynchronous FUSE notify messages from the host, such as a &quot;file changed&quot; event.</p>
        <h3><strong>The High-Priority Queue (hiprio)</strong></h3>
        <p>The hiprio queue is an essential mechanism designed to solve a fundamental semantic mismatch between FUSE and VIRTIO.</p>
        <ul>
        <li><strong>The Problem:</strong> The standard FUSE interface (via /dev/fuse) allows a client to select which request to transfer, enabling prioritization of critical requests.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-8">8</a></sup> VIRTIO virtqueues, however, are strictly FIFO (First-In, First-Out) structures.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-25">25</a></sup></li>
        <li><strong>The Deadlock:</strong> This mismatch can lead to a kernel deadlock. For example, if a standard request queue (e.g., vq 2) is full of large, slow FUSE_READ operations, the VFS may need to reclaim a vnode. To do this, it must send a FUSE_FORGET message. If this message is enqueued on the <em>same</em> full request queue, it will be stuck behind the data operations. The vnode will never be reclaimed, and the system will deadlock. A similar priority-inversion problem occurs with FUSE_INTERRUPT (e.g., from a user pressing Ctrl+C).</li>
        <li><strong>The Solution:</strong> The hiprio queue (vq 0) provides an out-of-band, high-priority channel specifically for these non-data-path critical messages.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-8">8</a></sup> The driver <em>MUST</em> submit FUSE_INTERRUPT, FUSE_FORGET, and FUSE_BATCH_FORGET requests <em>only</em> on the hiprio queue.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-21">21</a></sup></li>
        </ul>
        <p>The Linux kernel's virtiofs driver implements its hiprio queue with a polling mechanism for responses, rather than relying on interrupts. This is justified by the specification, which notes these messages are "high importance and low bandwidth".<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-21">21</a></sup> This suggests that a complex, interrupt-driven state machine for this queue is unnecessary.<br>
        Consequently, the Rust driver should be designed with two distinct request-processing pipelines:</p>
        <ol>
        <li>An asynchronous, interrupt-driven, high-throughput pipeline for the main request queues (vq 2+).</li>
        <li>A simpler, potentially synchronous or polling-based pipeline for the hiprio queue (vq 0). This isolates the critical cache-management and cancellation logic from the bulk data path, simplifying the implementation of both.</li>
        </ol>
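        <p>The queue-selection rule falls out of this directly. A sketch of the dispatch the driver would do on submission (the opcode values are from the FUSE protocol; the queue layout is the one listed above):</p>

```rust
// Route FUSE requests to virtqueues per the virtiofs spec:
// FORGET / BATCH_FORGET / INTERRUPT must go to hiprio (vq 0);
// everything else round-robins over the request queues (vq 2..n).
const FUSE_FORGET: u32 = 2;
const FUSE_INTERRUPT: u32 = 36;
const FUSE_BATCH_FORGET: u32 = 42;

const HIPRIO_VQ: u16 = 0;
const FIRST_REQUEST_VQ: u16 = 2;

fn select_queue(opcode: u32, seq: u64, num_request_vqs: u16) -> u16 {
    match opcode {
        FUSE_FORGET | FUSE_BATCH_FORGET | FUSE_INTERRUPT => HIPRIO_VQ,
        // spread bulk requests across vq 2..(2 + num_request_vqs)
        _ => FIRST_REQUEST_VQ + (seq % num_request_vqs as u64) as u16,
    }
}

fn main() {
    const FUSE_READ: u32 = 15;
    println!("FORGET -> vq {}", select_queue(FUSE_FORGET, 0, 4));
    println!("READ   -> vq {}", select_queue(FUSE_READ, 7, 4));
}
```

        <p>Keeping this dispatch in one small function makes the deadlock argument above auditable: no cache-management opcode can ever land behind a bulk read.</p>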
        <h2><strong>The Target Environment: illumos DDI/DKI and VFS</strong></h2>
        <p>The Rust driver must integrate natively with the illumos kernel, satisfying the interfaces defined by the DDI/DKI (Device Driver Interface / Driver-Kernel Interface) and the VFS.</p>
        <h3><strong>The illumos Driver Lifecycle (DDI/DKI)</strong></h3>
        <p>The illumos DDI/DKI provides a set of stable, long-term kernel ABIs.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-26">26</a></sup> This stability is what makes out-of-tree driver development feasible, as a driver written against the DDI/DKI will continue to function across kernel updates.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-29">29</a></sup><br>
        As a loadable kernel module, the driver must export three standard C-linkage symbols<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-31">31</a></sup>:</p>
        <ol>
        <li>_init(void): Called when the module is loaded. Its primary responsibility is to call mod_install(9F).<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-33">33</a></sup></li>
        <li>_fini(void): Called when the module is to be unloaded. It must call mod_remove(9F).<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-33">33</a></sup></li>
        <li>_info(struct modinfo *): Called by the system to retrieve information about the module.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-31">31</a></sup></li>
        </ol>
        <p>These functions are bundled into a struct modlinkage.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-34">34</a></sup> For a device driver, this modlinkage structure points to a struct modldrv, which in turn contains a struct dev_ops holding the driver's autoconfiguration entry points.</p>
        <h3><strong>Device Discovery and Attachment</strong></h3>
        <p>As established, the virtiofs driver will be a client of the virtio(4D) nexus. The dev_ops structure will define the driver's implementation of the DDI autoconfiguration routines, most importantly the attach(9E) entry point.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-35">35</a></sup><br>
        When the virtio(4D) nexus driver probes a VIRTIO device and identifies it as a virtiofs device (ID 26), it will invoke this driver's attach function. This function receives a dev_info_t *dip (device info pointer).<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-35">35</a></sup> This dip is the opaque handle that the Rust FFI layer will use for all subsequent interactions with the DDI, and specifically with the virtio(4D) framework.<br>
        The implementation will require careful analysis of the attach routines in the C-based vioblk.c<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-13">13</a></sup> and vioif.c<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-14">14</a></sup> drivers within the illumos-gate repository. This analysis will reveal the <em>exact</em> internal virtio(4D) API functions (e.g., virtio_queue_setup, virtio_feature_negotiate) that must be bound via FFI to initialize the device.</p>
        <h3><strong>VFS Integration</strong></h3>
        <p>In parallel with its role as a device driver, the module must also register itself as a new filesystem type with the VFS.</p>
        <ul>
        <li><strong>vfs_switch Table:</strong> The driver must define a static struct vfssw (VFS switch table).<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-36">36</a></sup> This C structure contains function pointers for VFS-level operations, most notably vsw_mount, and a string name for the filesystem type (e.g., &quot;virtiofs&quot;).</li>
        <li><strong>Registration:</strong> During the _init routine, after mod_install succeeds, the driver must call vfs_add(9F)<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-36">36</a></sup>, passing a pointer to its vfssw struct. This kernel-wide registration is what enables the mount(2) system call, and thus the mount(8) command, to recognize "virtiofs" as a valid filesystem type.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-15">15</a></sup></li>
        </ul>
        <h3><strong>The vnodeops Interface: VFS-to-FUSE Translation</strong></h3>
        <p>This is the logical core of the driver. The VFS performs all operations on files and directories through vnode (virtual node) objects.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-37">37</a></sup> Every vnode contains a v_op function vector, which points to a struct vnodeops.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-39">39</a></sup> The virtiofs driver must provide a complete implementation of this vnodeops structure.<br>
        This implementation will consist of a table of Rust functions exposed with a C ABI. Each function will be a translator: it will receive VFS arguments (e.g., a uio(9S) structure for a read), translate them into the corresponding FUSE request message, dispatch that message on a request virtqueue, and then block until a response is received from the host.<br>
        This VFS-to-FUSE translation is a solved problem and should not be re-engineered from first principles. The implementation must draw from two primary reference sources:</p>
        <ol>
        <li><strong>illumos-fusefs:</strong> This existing FUSE driver for illumos<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-42">42</a></sup> contains the C-language fuse_vfs.c and fuse_vnode.c.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-44">44</a></sup> These files provide the <em>exact</em> logic for translating illumos vnodeops into FUSE opcodes. This is the primary reference for the illumos VFS side of the bridge.</li>
        <li><strong>Linux virtiofs Driver:</strong> The Linux kernel's virtiofs driver<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-8">8</a></sup> and its source code (e.g., fs/fuse/virtio_fs.c<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-47">47</a></sup>) provide the reference model for integrating FUSE client logic with the VIRTIO transport.</li>
        </ol>
        <p>The following table defines the primary translation work-list, mapping illumos vnodeops to their virtiofs FUSE opcode counterparts.</p>
        <table>
        <thead>
        <tr>
        <th style="text-align:left">illumos vnodeops (VOP_*)</th>
        <th style="text-align:left">FUSE Opcode</th>
        <th style="text-align:left">Description &amp; Implementation Notes</th>
        </tr>
        </thead>
        <tbody>
        <tr>
        <td style="text-align:left">VOP_LOOKUP</td>
        <td style="text-align:left">FUSE_LOOKUP</td>
        <td style="text-align:left">Translate a pathname component to a vnode. Core path-walking operation.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_CREATE</td>
        <td style="text-align:left">FUSE_CREATE</td>
        <td style="text-align:left">Create a new file.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_OPEN</td>
        <td style="text-align:left">FUSE_OPEN</td>
        <td style="text-align:left">Open a file. Must receive a file handle (fh) from the host for use in subsequent I/O.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_CLOSE</td>
        <td style="text-align:left">FUSE_RELEASE</td>
        <td style="text-align:left">Close a file. This operation precedes VOP_INACTIVE.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_READ</td>
        <td style="text-align:left">FUSE_READ</td>
        <td style="text-align:left">Read data. Translates the uio(9S) structure into a FUSE_READ request.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_WRITE</td>
        <td style="text-align:left">FUSE_WRITE</td>
        <td style="text-align:left">Write data. Translates the uio(9S) structure into a FUSE_WRITE request.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_GETATTR</td>
        <td style="text-align:left">FUSE_GETATTR</td>
        <td style="text-align:left">Get file attributes (e.g., for stat(2)).</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_SETATTR</td>
        <td style="text-align:left">FUSE_SETATTR</td>
        <td style="text-align:left">Set file attributes (e.g., for chmod(2), chown(2)).</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_READDIR</td>
        <td style="text-align:left">FUSE_READDIR</td>
        <td style="text-align:left">Read directory entries.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_MKDIR</td>
        <td style="text-align:left">FUSE_MKDIR</td>
        <td style="text-align:left">Create a new directory.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_RMDIR</td>
        <td style="text-align:left">FUSE_RMDIR</td>
        <td style="text-align:left">Remove a directory.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_REMOVE</td>
        <td style="text-align:left">FUSE_UNLINK</td>
        <td style="text-align:left">Remove (unlink) a file.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_RENAME</td>
        <td style="text-align:left">FUSE_RENAME</td>
        <td style="text-align:left">Rename a file or directory.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_FSYNC</td>
        <td style="text-align:left">FUSE_FSYNC</td>
        <td style="text-align:left">Synchronize file data and metadata.</td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_INACTIVE</td>
        <td style="text-align:left">FUSE_FORGET</td>
        <td style="text-align:left">Called by VFS when a vnode's reference count drops to zero. This <em>must</em> send a FUSE_FORGET on the <strong>hiprio</strong> queue.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-21">21</a></sup></td>
        </tr>
        <tr>
        <td style="text-align:left">VOP_MAP / segmap(9E)</td>
        <td style="text-align:left">FUSE_SETUPMAPPING</td>
        <td style="text-align:left">(Advanced) Implements mmap(2) for DAX support.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-48">48</a></sup><sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-49">49</a></sup> See the DAX implementation section below.</td>
        </tr>
        </tbody>
        </table>
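        <p>Whatever the opcode, every translated request begins with the common FUSE request header, whose layout is fixed by the Linux uapi &lt;linux/fuse.h&gt;. A minimal #[repr(C)] rendering of the classic 40-byte header follows (recent kernels split the trailing padding into total_extlen/padding; the overall size is unchanged). The struct name and serialization helper are illustrative:</p>

```rust
// The common FUSE request header. #[repr(C)] keeps field order and
// padding identical to the C definition in <linux/fuse.h>.
#[repr(C)]
#[derive(Debug, Clone, Copy)]
pub struct FuseInHeader {
    pub len: u32,     // total request length, header included
    pub opcode: u32,  // e.g. FUSE_READ
    pub unique: u64,  // request ID, echoed back in the response header
    pub nodeid: u64,  // inode the operation targets
    pub uid: u32,
    pub gid: u32,
    pub pid: u32,
    pub padding: u32,
}

pub const FUSE_READ: u32 = 15; // opcode value from <linux/fuse.h>

impl FuseInHeader {
    // View the header as raw bytes for placement in a virtqueue
    // descriptor. VIRTIO fields are little-endian, which matches the
    // host here; a portable kernel implementation would convert each
    // field explicitly with to_le().
    pub fn as_bytes(&self) -> &[u8] {
        unsafe {
            core::slice::from_raw_parts(
                self as *const Self as *const u8,
                core::mem::size_of::<Self>(),
            )
        }
    }
}
```

        <p>The unique field is exactly the 64-bit request ID that the request-response engine in the implementation blueprint keys its waiter map on.</p>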
        <h2><strong>The Rust-in-Kernel Bridge: FFI and no_std Implementation</strong></h2>
        <p>The primary challenge of this project is the use of Rust for kernel-space development. This requires a no_std environment and a meticulously crafted FFI bridge.</p>
        <h3><strong>The no_std Kernel Environment</strong></h3>
        <p>The Rust standard library (std) cannot be used in kernel space, as it relies on OS-provided services (e.g., userspace memory allocation, threads) that are not available.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-50">50</a></sup></p>
        <ul>
        <li><strong>Crate Structure:</strong> The driver will be a Rust crate built with the #![no_std] attribute and a crate-type of [&quot;staticlib&quot;], producing an archive that can be linked into the final kernel module (as described in the build strategy below). It will depend on the core library (for Rust primitives) and the alloc library (for dynamic memory structures like Box and Vec).</li>
        <li><strong>Global Allocator:</strong> The alloc crate requires a global allocator to be defined. The driver must provide an implementation of the GlobalAlloc trait that wraps the illumos kernel memory allocator: kmem_alloc(9F).<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-53">53</a></sup></li>
        </ul>
        <p>This allocator presents a significant and subtle safety challenge. The kmem_alloc(9F) API is <em>context-dependent</em>, controlled by its flag parameter.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-53">53</a></sup></p>
        <ul>
        <li>KM_SLEEP: The call is allowed to block (sleep) if memory is not immediately available. This is safe to use from kernel thread context (e.g., a vnodeop entry point) but will panic the system if called from interrupt context.</li>
        <li>KM_NOSLEEP: The call must not block. It will return NULL if memory is not available. This is the <em>only</em> flag allowed in high-level interrupt context.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-53">53</a></sup></li>
        </ul>
        <p>The Rust GlobalAlloc trait is synchronous and has no concept of execution context. A naive implementation (e.g., one that <em>always</em> calls kmem_alloc with KM_SLEEP) will be unsafe and will panic the system if <em>any</em> part of the driver (e.g., an interrupt-handler callback) attempts to allocate memory. Conversely, a KM_NOSLEEP-only allocator would be highly inefficient and prone to failure.<br>
        This means a naive, global allocator is unusable. The driver's Rust code must be <em>explicitly</em> aware of its execution context. Standard Rust collections (e.g., Box, Vec, String) which rely on the GlobalAlloc can only be safely used from &quot;thread context&quot; (e.g., vnodeops, attach, taskq workers). Code running in &quot;interrupt context&quot; (e.g., the virtqueue &quot;bottom half&quot; handler) must be written to <em>not allocate</em> or to use a special, non-default KM_NOSLEEP-based allocator. This is a fundamental design constraint for ensuring memory safety.</p>
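        <p>The contract can be made concrete with a small userspace model. Every name below is illustrative; only the rule the match encodes comes from the kmem_alloc(9F) documentation:</p>

```rust
// Userspace model of the kmem_alloc(9F) context contract: KM_SLEEP may
// block and is therefore forbidden in interrupt context; KM_NOSLEEP may
// fail and its NULL return must be handled by the caller.
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum KmFlag { Sleep, NoSleep }

#[derive(Clone, Copy, PartialEq, Debug)]
pub enum ExecContext { KernelThread, Interrupt }

pub fn kmem_alloc_model(
    size: usize,
    flag: KmFlag,
    ctx: ExecContext,
) -> Result<Vec<u8>, &'static str> {
    match (flag, ctx) {
        // The combination a naive GlobalAlloc would eventually hit from
        // an interrupt handler; in the real kernel this panics the system.
        (KmFlag::Sleep, ExecContext::Interrupt) => {
            Err("KM_SLEEP allocation from interrupt context")
        }
        // KM_NOSLEEP may also legitimately return NULL under memory
        // pressure; this model ignores that failure mode for brevity.
        _ => Ok(vec![0u8; size]),
    }
}
```

        <p>A context-aware allocator is essentially this match, with the execution context tracked per thread rather than passed explicitly.</p>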
        <h3><strong>Creating the DDI/DKI FFI Layer</strong></h3>
        <p>A safe, idiomatic Rust driver must be built upon a set of unsafe FFI bindings to the illumos kernel.</p>
        <ul>
        <li><strong>bindgen:</strong> The rust-bindgen tool<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-54">54</a></sup> will be used to generate raw Rust bindings from the primary DDI/DKI C headers: &lt;sys/ddi.h&gt;<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-56">56</a></sup>, &lt;sys/sunddi.h&gt;, &lt;sys/modctl.h&gt;<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-33">33</a></sup>, &lt;sys/kmem.h&gt;<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-53">53</a></sup>, &lt;sys/vnode.h&gt;<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-57">57</a></sup>, &lt;sys/vfs.h&gt;, and the internal virtio.h header from the illumos-gate source.</li>
        <li><strong>#[repr(C)] Structs:</strong> All C structs that are passed across the FFI boundary must be mirrored in Rust using the #[repr(C)] attribute.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-58">58</a></sup> This includes the top-level modlinkage<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-34">34</a></sup>, cb_ops<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-60">60</a></sup>, vfssw<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-36">36</a></sup>, and vnodeops<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-41">41</a></sup> structures. This requirement is recursive; any nested struct must also be #[repr(C)].<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-61">61</a></sup></li>
        </ul>
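        <p>Two idioms from this list are worth illustrating: the recursive #[repr(C)] requirement, and Option&lt;extern &quot;C&quot; fn&gt; for nullable C function pointers (the nullable-pointer optimization guarantees it is exactly pointer-sized, with None as NULL). The struct names below are cut-down illustrations, not the real DDI definitions:</p>

```rust
// Cut-down mirror of a C operations table. If CbOpsMirror were left at
// the default Rust layout, the compiler could reorder its fields and the
// pointer handed to the kernel would be misinterpreted; #[repr(C)] must
// therefore be applied to every struct reachable across the boundary.
#[repr(C)]
pub struct DevOpsMirror {
    pub devo_rev: i32,
    pub devo_refcnt: i32,
    // Nested pointer: the pointee must itself be #[repr(C)].
    pub devo_cb_ops: *const CbOpsMirror,
}

#[repr(C)]
pub struct CbOpsMirror {
    // Option<extern "C" fn> is the idiomatic Rust spelling of a nullable
    // C function pointer: None maps to NULL, Some(f) to the address of f.
    pub cb_open: Option<extern "C" fn() -> i32>,
    pub cb_close: Option<extern "C" fn() -> i32>,
    pub cb_flag: i32,
}
```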
        <p>It is important to note that the existing illumos/rust-illumos crate<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-62">62</a></sup> is intended for <em>userspace</em> development. Its dependent crates, such as doors<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-64">64</a></sup>, kstat-rs<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-65">65</a></sup>, and zone<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-66">66</a></sup>, wrap userspace libraries (libdoor, libkstat, etc.), not kernel-space DDI/DKI functions.<br>
        The correct precedent for this project is the work done by Oxide Computer.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-30">30</a></sup> Oxide's propolis VMM<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-67">67</a></sup> contains internal crates, bhyve-api and viona-api<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-68">68</a></sup>, which are <em>exactly</em> what this project requires: no_std Rust FFI bindings for illumos kernel ioctls and C data structures. The FFI layer for the virtiofs driver should be architected following this pattern.</p>
        <h3><strong>The FFI Binding Crate Plan</strong></h3>
        <p>To manage complexity and promote safety, the raw unsafe bindings should be organized into logical Rust modules, which will then expose a safer, more idiomatic API to the main driver logic.</p>
        <table>
        <thead>
        <tr>
        <th style="text-align:left">Internal FFI Module</th>
        <th style="text-align:left">C Headers to Bind</th>
        <th style="text-align:left">Key Functions/Structs to Wrap</th>
        <th style="text-align:left">Purpose</th>
        </tr>
        </thead>
        <tbody>
        <tr>
        <td style="text-align:left">illumos_ffi::kmem</td>
        <td style="text-align:left">&lt;sys/kmem.h&gt;</td>
        <td style="text-align:left">kmem_alloc, kmem_free<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-53">53</a></sup></td>
        <td style="text-align:left">Foundation for the context-aware GlobalAlloc implementation.</td>
        </tr>
        <tr>
        <td style="text-align:left">illumos_ffi::mod</td>
        <td style="text-align:left">&lt;sys/modctl.h&gt;</td>
        <td style="text-align:left">mod_install, mod_remove, mod_info, modlinkage, modldrv<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-33">33</a></sup></td>
        <td style="text-align:left">Module lifecycle (_init, _fini) and DDI registration.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-31">31</a></sup></td>
        </tr>
        <tr>
        <td style="text-align:left">illumos_ffi::ddi</td>
        <td style="text-align:left">&lt;sys/ddi.h&gt;, &lt;sys/sunddi.h&gt;</td>
        <td style="text-align:left">dev_info_t, dev_ops, cb_ops<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-60">60</a></sup>, attach(9E)<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-35">35</a></sup>, segmap(9E)<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-49">49</a></sup>, ddi_devmap_segmap<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-69">69</a></sup></td>
        <td style="text-align:left">Core DDI/DKI framework for device attachment and resource mapping.</td>
        </tr>
        <tr>
        <td style="text-align:left">illumos_ffi::virtio</td>
        <td style="text-align:left">uts/common/io/virtio/virtio.h (from gate)</td>
        <td style="text-align:left">virtio_devi_t, virtio_queue_setup, virtio_queue_notify, virtio_dma_alloc</td>
        <td style="text-align:left">The <em>essential</em> API from the virtio(4D) nexus driver.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-11">11</a></sup></td>
        </tr>
        <tr>
        <td style="text-align:left">illumos_ffi::vfs</td>
        <td style="text-align:left">&lt;sys/vfs.h&gt;, &lt;sys/vnode.h&gt;</td>
        <td style="text-align:left">vfs_add, vfssw<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-36">36</a></sup>, vnodeops<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-41">41</a></sup>, vnode_t, VOP_* macros</td>
        <td style="text-align:left">The VFS/vnode subsystem interface for filesystem registration.</td>
        </tr>
        <tr>
        <td style="text-align:left">illumos_ffi::sync</td>
        <td style="text-align:left">&lt;sys/mutex.h&gt;, &lt;sys/condvar.h&gt;</td>
        <td style="text-align:left">mutex_t, kcondvar_t, mutex_enter, cv_wait, cv_signal</td>
        <td style="text-align:left">Kernel synchronization primitives for blocking and I/O coordination.</td>
        </tr>
        <tr>
        <td style="text-align:left">illumos_ffi::taskq</td>
        <td style="text-align:left">&lt;sys/taskq.h&gt;</td>
        <td style="text-align:left">taskq_t, taskq_dispatch</td>
        <td style="text-align:left">illumos-native workqueue for deferring work from interrupt context.</td>
        </tr>
        </tbody>
        </table>
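        <p>The kind of safe abstraction the driver logic would consume from these modules can be sketched for illumos_ffi::sync: an RAII guard guarantees that every mutex_enter(9F) is paired with a mutex_exit(9F), even on early return. This userspace model (names hypothetical) replaces the kernel mutex with an atomic spin flag purely so the sketch runs:</p>

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Userspace stand-in for kmutex_t; the real wrapper would hold the
// kernel object and call mutex_enter/mutex_exit via FFI.
pub struct KMutex {
    locked: AtomicBool,
}

impl KMutex {
    pub const fn new() -> Self {
        KMutex { locked: AtomicBool::new(false) }
    }

    // The safe API hands out an RAII guard, so driver code can never
    // leave a vnodeop with the kernel mutex still held -- the main
    // class of bug this layer is meant to rule out.
    pub fn enter(&self) -> KMutexGuard<'_> {
        while self.locked.swap(true, Ordering::Acquire) {
            std::hint::spin_loop(); // mutex_enter(9F) would block instead
        }
        KMutexGuard { m: self }
    }
}

pub struct KMutexGuard<'a> {
    m: &'a KMutex,
}

impl Drop for KMutexGuard<'_> {
    fn drop(&mut self) {
        // mutex_exit(9F) happens automatically when the guard is dropped.
        self.m.locked.store(false, Ordering::Release);
    }
}
```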
        <h2><strong>Implementation Blueprint: A virtiofs Driver in Rust</strong></h2>
        <p>This implementation plan synthesizes the VIRTIO contract, the illumos DDI/VFS interfaces, and the Rust FFI bridge into a cohesive, phased driver implementation.</p>
        <h3><strong>Phase 1: Driver Loading and VIRTIO Initialization (Rust attach)</strong></h3>
        <ol>
        <li><strong>_init:</strong> The driver's lib.rs will export a #[no_mangle] pub extern &quot;C&quot; function named _init.</li>
        <li><strong>mod_install:</strong> This _init function will, via FFI, call mod_install with a pointer to a #[repr(C)] static modldrv struct. This struct will point to the driver's dev_ops.</li>
        <li><strong>vfs_add:</strong> The _init function will also call vfs_add with a pointer to a #[repr(C)] static vfssw struct, registering &quot;virtiofs&quot; as a filesystem.</li>
        <li>attach: The kernel will invoke the attach function (also #[no_mangle] pub extern &quot;C&quot;) defined in the dev_ops. This Rust function will:<br>
        a. Receive the dev_info_t* dip.<br>
        b. Use this dip to call into the illumos_ffi::virtio wrapper.<br>
        c. The wrapper's unsafe Rust code will call the C virtio(4D) framework functions to perform feature negotiation. It will assert that the host offers VIRTIO_F_VERSION_1 and request critical features like VIRTIO_FS_F_NOTIFICATION.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-10">10</a></sup><br>
        d. It will then initialize all required virtqueues (hiprio, notification, and one or more request queues) using the virtio_queue_setup FFI function.<br>
        e. Upon success, it will instantiate the primary VirtioFsDriver Rust struct, populate it with the virtqueue handles and device state, and store it as the driver's private data.</li>
        </ol>
        <h3><strong>Phase 2: VFS Mounting and FUSE Session (Rust vsw_mount)</strong></h3>
        <ol>
        <li>A user process executes mount -t virtiofs....<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-15">15</a></sup></li>
        <li>The kernel VFS layer follows the vfssw pointer and invokes the driver's vsw_mount Rust function.</li>
        <li>This function is responsible for initiating the FUSE session by sending the FUSE_INIT message.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-10">10</a></sup></li>
        <li>It constructs the FUSE_INIT request, allocates a DMA-capable buffer using a wrapped virtio_dma_alloc or similar function from the virtio(4D) API, and places it on request vq 2.</li>
        <li>It notifies the device via virtio_queue_notify and then <em>blocks</em>.</li>
        <li>The block will be implemented by an FFI call to cv_wait on a kcondvar_t specific to this FUSE_INIT request.</li>
        <li>The interrupt handler (Phase 3) will process the FUSE_INIT response and call cv_signal to wake this thread.</li>
        <li>The vsw_mount function wakes up, validates the FUSE_INIT reply, creates the root vnode for the new filesystem, and returns success to the VFS.</li>
        </ol>
        <h3><strong>Phase 3: The vnodeops Request-Response Engine (Rust VOP_READ)</strong></h3>
        <p>This is the steady-state operation of the driver. VOP_READ is used as the canonical example for all I/O vnodeops.</p>
        <ol>
        <li>A user process calls read(2) on a virtiofs file. The kernel dispatches the call through the vnode's v_op vector to the driver's vop_read Rust function.</li>
        <li>Request State: The vop_read function (running in kernel thread context) executes:<br>
        a. It generates a unique, 64-bit request ID.<br>
        b. It creates a &quot;waiter&quot; struct. This struct contains a kcondvar_t, a pointer to the response buffer, and a status field.<br>
        c. It inserts this waiter struct into a global, concurrent hash map keyed by the request ID (e.g., a hashbrown HashMap guarded by a kernel mutex; std containers and crates such as DashMap require std and are unavailable in the no_std kernel crate).<br>
        d. It serializes the FUSE_READ request, acquires a virtqueue descriptor, and enqueues the request on a request virtqueue.<br>
        e. It calls virtio_queue_notify.<br>
        f. It calls cv_wait (via FFI) on the kcondvar_t inside its waiter struct, putting the user thread to sleep.</li>
        <li>The Interrupt Handler (Bottom Half):<br>
        a. The host virtiofsd processes the request, places the response (the file data) in the used ring, and raises a virtual interrupt.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-22">22</a></sup><br>
        b. The illumos virtio(4D) framework routes this to the driver's registered interrupt handler—a Rust function.<br>
        c. This handler executes in high-priority interrupt context. It must not sleep or perform KM_SLEEP allocations.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-53">53</a></sup></li>
        <li>The Interrupt-to-Taskq Pipeline:<br>
        a. The interrupt handler's only job is to iterate the &quot;used&quot; ring, find the response descriptor, and extract the request ID from it.<br>
        b. It must not perform the hash map lookup or cv_signal itself: the lookup requires acquiring the map's mutex, and mutex acquisition can block, which is not permitted in high-level interrupt context.<br>
        c. Instead, the handler dispatches a &quot;work item&quot; (containing the response buffer and its request ID) to a kernel taskq(9F). This is done via a taskq_dispatch FFI call, which safely defers the work to a lower-priority kernel thread.</li>
        <li>The taskq Worker (Middle Half):<br>
        a. A generic kernel worker thread (in a safe, sleep-able context) picks up the work item.<br>
        b. This Rust function (the taskq callback) performs the hash map lookup using the request ID.<br>
        c. It finds the corresponding &quot;waiter&quot; struct, copies the response data into the waiter's buffer, and sets the status to &quot;success.&quot;<br>
        d. It calls cv_signal (via FFI) on the waiter's kcondvar_t.</li>
        <li>VOP_READ Completion:<br>
        a. Back in the vop_read function, the cv_wait call returns. The thread is now awake.<br>
        b. It sees the status is &quot;success&quot; and its response buffer is populated.<br>
        c. It uses an FFI call to uiomove to copy the data from the kernel response buffer into the user-space uio_t provided by the VFS.<br>
        d. It removes the waiter from the map and returns 0 (success) to the VFS.</li>
        </ol>
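        <p>The heart of this engine — a waiter registered under a request ID, parked on a condition variable, and woken by a completion running on another thread — can be modeled in userspace with std types standing in for kmutex_t/kcondvar_t. All names below are hypothetical; the real driver would use the kernel primitives via FFI:</p>

```rust
use std::collections::HashMap;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

// Per-request waiter: the response buffer stays None until the "taskq
// worker" delivers it and signals the condvar.
struct Waiter {
    state: Mutex<Option<Vec<u8>>>,
    cv: Condvar,
}

type WaiterMap = Arc<Mutex<HashMap<u64, Arc<Waiter>>>>;

// vop_read side: register a waiter under a unique request ID, then block.
fn send_and_wait(map: &WaiterMap, id: u64) -> Vec<u8> {
    let w = Arc::new(Waiter { state: Mutex::new(None), cv: Condvar::new() });
    map.lock().unwrap().insert(id, w.clone());
    // ...the real driver would now serialize the FUSE request, enqueue it
    // on a request virtqueue, and call virtio_queue_notify...
    let resp = {
        let mut st = w.state.lock().unwrap();
        while st.is_none() {
            st = w.cv.wait(st).unwrap(); // cv_wait(9F) equivalent
        }
        st.take().unwrap()
    };
    // The state lock is released before touching the map, avoiding a
    // lock-order inversion with complete(), which takes map then state.
    map.lock().unwrap().remove(&id);
    resp
}

// taskq side: look up the waiter by request ID, deliver, and wake it.
fn complete(map: &WaiterMap, id: u64, response: Vec<u8>) {
    if let Some(w) = map.lock().unwrap().get(&id).cloned() {
        *w.state.lock().unwrap() = Some(response);
        w.cv.notify_one(); // cv_signal(9F) equivalent
    }
}
```

        <p>The while-loop around the wait matters: condition variables permit spurious wakeups, so the woken thread must re-check that the response has actually arrived.</p>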
        <h2><strong>Advanced Feature Implementation and Strategic Recommendations</strong></h2>
        <h3><strong>Implementing DAX: The segmap Entry Point</strong></h3>
        <p>The virtiofs Direct Access (DAX) feature is its primary performance advantage.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-6">6</a></sup> It allows a guest to mmap(2) a file and map its contents <em>directly</em> from the host's page cache, eliminating the guest page cache and all FUSE_READ message overhead.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-71">71</a></sup><br>
        This requires a highly specific implementation in illumos, bridging the VFS mmap path with the DDI device-mapping path.</p>
        <ol>
        <li><strong>The DAX Window:</strong> The virtiofs device exposes the DAX-capable memory as a shared memory region, typically a PCI BAR.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-6">6</a></sup></li>
        <li><strong>The illumos API:</strong> The DDI/DKI entry point for mapping device memory (like a PCI BAR) into a user process's address space is segmap(9E).<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-49">49</a></sup></li>
        <li><strong>The Protocol:</strong> The Linux virtiofs driver (which pioneered DAX) extended the FUSE protocol with FUSE_SETUPMAPPING and FUSE_REMOVEMAPPING opcodes.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-48">48</a></sup> The illumos driver must implement this part of the protocol.</li>
        </ol>
        <p><strong>Implementation Blueprint:</strong></p>
        <ol>
        <li>The driver's dev_ops structure will point to a cb_ops (char/block ops) structure<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-60">60</a></sup>, which will provide a function pointer for the segmap entry point.</li>
        <li>A user process calls mmap(2)<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-75">75</a></sup> on a virtiofs file.</li>
        <li>The VFS determines this is a device-mapped file and calls the driver's segmap Rust function.</li>
        <li>This segmap function (like vop_read) sends a FUSE_SETUPMAPPING request on a request virtqueue and blocks, awaiting the response.</li>
        <li>The host virtiofsd performs the mapping on its side and returns a reply containing the <em>offset</em> and <em>length</em> of the file's data <em>within the DAX Window (PCI BAR)</em>.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-6">6</a></sup></li>
        <li>The segmap function's interrupt/taskq handler populates the response. The segmap thread wakes up.</li>
        <li>Now holding the device-relative offset and length, the segmap function makes a final FFI call to ddi_devmap_segmap(9F)<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-69">69</a></sup> or devmap_setup(9F).<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-73">73</a></sup></li>
        <li>This DDI function performs the final, critical step: it takes the physical address (PCI BAR base + DAX offset) and instructs the kernel to map those physical pages directly into the user process's address space (struct as*).</li>
        <li>The mmap(2) call returns a pointer to the user. Subsequent memory access to that pointer goes directly to the host's memory, requiring zero driver intervention or VM exits.</li>
        </ol>
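        <p>Two concrete pieces of this blueprint can be shown directly: the FUSE_SETUPMAPPING request body as defined in the Linux uapi &lt;linux/fuse.h&gt;, and the address arithmetic of steps 7-8, trivial but critical. The helper name is illustrative:</p>

```rust
// The FUSE_SETUPMAPPING request body, layout per the Linux uapi
// <linux/fuse.h>; field meanings follow the virtiofs DAX design.
#[repr(C)]
pub struct FuseSetupmappingIn {
    pub fh: u64,      // open file handle obtained from FUSE_OPEN
    pub foffset: u64, // offset within the file to map
    pub len: u64,     // length of the mapping
    pub flags: u64,   // read/write mapping flags
    pub moffset: u64, // offset within the DAX window where it lands
}

// Steps 7-8 of the blueprint: combine the PCI BAR base with the
// host-returned window offset to get the physical range handed to
// ddi_devmap_segmap(9F).
pub fn dax_phys_range(bar_base: u64, moffset: u64, len: u64) -> (u64, u64) {
    (bar_base + moffset, len)
}
```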
        <h3><strong>Build and Test Strategy</strong></h3>
        <p>This driver cannot be built with a simple cargo build. It must be integrated into the illumos-gate build system.<br>
        The only proven model for building Rust kernel modules for illumos is the one pioneered by Oxide Computer.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-30">30</a></sup> cargo is not the primary build tool; it is a <em>subprocess</em> invoked by make.</p>
        <ol>
        <li><strong>Build System:</strong> The illumos-gate Makefile system<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-78">78</a></sup> will be modified.</li>
        <li><strong>Rust Compilation:</strong> A Makefile rule will be added to invoke cargo build --release on the Rust crate, configured to produce a <em>static library</em> (e.g., libvirtiofs.a).</li>
        <li><strong>C Stub:</strong> A minimal C stub file (virtiofs_mod.c) will be created. This file will contain <em>only</em> the modlinkage boilerplate<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-34">34</a></sup> and will reference the external Rust functions (e.g., virtiofs_attach, virtiofs_vop_read, etc.).</li>
        <li><strong>Linking:</strong> The illumos build system's linker<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-79">79</a></sup> will be invoked to link the compiled C stub object (virtiofs_mod.o) against the Rust static library (libvirtiofs.a) to produce the final virtiofs loadable kernel module.</li>
        </ol>
        <p><strong>Testing and Debugging:</strong></p>
        <ol>
        <li><strong>Host Environment:</strong> A QEMU<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-80">80</a></sup> or bhyve<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-67">67</a></sup> VM must be configured to run an illumos guest. The hypervisor must be configured to pass through a virtio-fs device and run the virtiofsd daemon on the host, sharing a directory.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-7">7</a></sup></li>
        <li><strong>Guest Environment:</strong> The compiled virtiofs module must be loaded into the guest kernel, e.g., via add_drv(1M).<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-83">83</a></sup></li>
        <li><strong>Debugging:</strong> The primary illumos debugging tools, mdb(1) (Modular Debugger)<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-84">84</a></sup> and DTrace, will be essential. mdb -K<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-85">85</a></sup> allows for live debugging of the kernel. For initial driver loading, QEMU's GDB stub (-s -S)<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-86">86</a></sup> can be used to connect GDB at boot and step through the _init and attach routines.</li>
        </ol>
        <h2><strong>Conclusions and Recommendations</strong></h2>
        <p>The implementation of a virtiofs kernel driver in Rust for illumos is a complex but highly feasible and valuable project. The core complexity is not in the virtiofs protocol itself (which is a straightforward translation task), but rather in the creation of a safe, robust, and context-aware FFI bridge between the Rust no_std environment and the illumos DDI/DKI.<br>
        <strong>Key Recommendations:</strong></p>
        <ol>
        <li><strong>Adopt the Oxide Precedent:</strong> The project <em>must</em> use the work from Oxide Computer (e.g., propolis, helios, and their networking stack drivers) as the primary and foundational reference.<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-30">30</a></sup> Their solutions for FFI, no_std builds, and C-stub-plus-Rust-archive linking<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-30">30</a></sup> are the only proven models for this environment.</li>
        <li><strong>Build a HAL, Not Just a Driver:</strong> The project should be architecturally partitioned. The primary deliverable is not just the virtiofs driver, but a reusable &quot;illumos-hal&quot; crate.
        <ul>
        <li><strong>illumos_ffi crate:</strong> An unsafe crate containing raw, bindgen-generated bindings.</li>
        <li><strong>illumos_hal crate:</strong> A safe, idiomatic, no_std Rust crate that wraps the FFI layer. It would provide safe abstractions for kernel-space Mutex, Condvar, Taskq, and a context-aware kmem_alloc API.</li>
        <li><strong>virtiofs_driver crate:</strong> The virtiofs driver itself, written in pure, safe Rust against the illumos_hal API.</li>
        </ul>
        </li>
        <li><strong>Prioritize FFI and Concurrency:</strong> The most difficult design challenges are the context-aware kmem_alloc wrapper and the interrupt-to-taskq request-response pipeline, both analyzed above. These concurrency and safety primitives must be designed and validated <em>before</em> any FUSE logic is written.</li>
        <li><strong>Reference C Implementations:</strong> The logic for the VFS-to-FUSE translation <em>must</em> be ported from the existing illumos-fusefs C implementation<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-42">42</a></sup> to ensure illumos-specific VFS semantics are preserved. The logic for the FUSE-to-VIRTIO transport <em>must</em> be ported from the Linux kernel virtiofs driver<sup><a href="https://wegmueller.it/blog/Implementing%20Virtiofs%20Driver%20in%20Illumos%20Rust/#ref-47">47</a></sup> to ensure protocol-level correctness.</li>
        </ol>
        <p>By correctly leveraging the existing virtio(4D) nexus and the security-inversion model of FUSE, the driver's logic is simplified to that of a protocol translator. This bounded scope, combined with the memory safety guarantees of Rust, makes this an ideal project for demonstrating and hardening the use of Rust for kernel-space development on illumos.</p>
        <h4><strong>Works cited</strong></h4>
        <ol>
        <li id="ref-1">Virtio 1.2 is Coming! - Alibaba Cloud Community, accessed on November 4, 2025, <a href="https://www.alibabacloud.com/blog/virtio-1-2-is-coming_599615">https://www.alibabacloud.com/blog/virtio-1-2-is-coming_599615</a></li>
        <li id="ref-2">VIRTIO 1.2 is out! · KVM, QEMU, and more - Red Hat People, accessed on November 4, 2025, <a href="https://people.redhat.com/~cohuck/2022/07/18/virtio-12-is-out.html">https://people.redhat.com/~cohuck/2022/07/18/virtio-12-is-out.html</a></li>
        <li id="ref-3">What's coming in VIRTIO 1.2 - Stefan Hajnoczi, accessed on November 4, 2025, <a href="https://vmsplice.net/~stefan/stefanha-fosdem-2022.pdf">https://vmsplice.net/~stefan/stefanha-fosdem-2022.pdf</a></li>
        <li id="ref-4">oasis-tcs/virtio-docs: OASIS Virtual I/O Device TC: Development of formatted documents for the VIRTIO (Virtual I/O) Specification maintained by the OASIS VIRTIO Technical Committee - GitHub, accessed on November 4, 2025, <a href="https://github.com/oasis-tcs/virtio-docs">https://github.com/oasis-tcs/virtio-docs</a></li>
        <li id="ref-5">oasis-tcs/virtio-spec: OASIS Virtual I/O Device TC - GitHub, accessed on November 4, 2025, <a href="https://github.com/oasis-tcs/virtio-spec">https://github.com/oasis-tcs/virtio-spec</a></li>
        <li id="ref-6">Virtiofs Design Document, accessed on November 4, 2025, <a href="https://virtio-fs.gitlab.io/design.html">https://virtio-fs.gitlab.io/design.html</a></li>
        <li id="ref-7">Virtio-FS, accessed on November 4, 2025, <a href="https://virtio-fs.gitlab.io/">https://virtio-fs.gitlab.io/</a></li>
        <li id="ref-8">virtiofs: virtio-fs host&lt;-&gt;guest shared file system - The Linux Kernel documentation, accessed on November 4, 2025, <a href="https://docs.kernel.org/filesystems/virtiofs.html">https://docs.kernel.org/filesystems/virtiofs.html</a></li>
        <li id="ref-9">Virtual I/O Device (VIRTIO) Version 1.2 - OASIS Open, accessed on November 4, 2025, <a href="https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html">https://docs.oasis-open.org/virtio/virtio/v1.2/csd01/virtio-v1.2-csd01.html</a></li>
        <li id="ref-10">Virtual I/O Device (VIRTIO) Version 1.2 - Index of / - OASIS Open, accessed on November 4, 2025, <a href="https://docs.oasis-open.org/virtio/virtio/v1.2/cs01/virtio-v1.2-cs01.pdf">https://docs.oasis-open.org/virtio/virtio/v1.2/cs01/virtio-v1.2-cs01.pdf</a></li>
        <li id="ref-11">illumos: manual page: virtio.4d - SmartOS, accessed on November 4, 2025, <a href="https://smartos.org/man/4D/virtio">https://smartos.org/man/4D/virtio</a></li>
        <li id="ref-12">Illumos-gate - GitHub, accessed on November 4, 2025, <a href="https://github.com/illumos/illumos-gate">https://github.com/illumos/illumos-gate</a></li>
        <li id="ref-13">illumos-gate - Niksula, accessed on November 4, 2025, <a href="https://www.niksula.hut.fi/~ltirkkon/webrev/4330/">https://www.niksula.hut.fi/~ltirkkon/webrev/4330/</a></li>
        <li id="ref-14">Package Catalogue - OmniOS, accessed on November 4, 2025, <a href="https://pkg.omniosce.org/r151022/core/en/catalog.shtml">https://pkg.omniosce.org/r151022/core/en/catalog.shtml</a></li>
        <li id="ref-15">virtio-fs - Stefan Hajnoczi, accessed on November 4, 2025, <a href="https://vmsplice.net/~stefan/virtio-fs_%20A%20Shared%20File%20System%20for%20Virtual%20Machines%20%28FOSDEM%29.pdf">https://vmsplice.net/~stefan/virtio-fs_%20A%20Shared%20File%20System%20for%20Virtual%20Machines%20%28FOSDEM%29.pdf</a></li>
        <li id="ref-16">virtiofs: virtio-fs host&lt;-&gt;guest shared file system - The Linux Kernel Archives, accessed on November 4, 2025, <a href="https://www.kernel.org/doc/html/v5.14/filesystems/virtiofs.html">https://www.kernel.org/doc/html/v5.14/filesystems/virtiofs.html</a></li>
        <li id="ref-17">libfuse/libfuse: The reference implementation of the Linux FUSE (Filesystem in Userspace) interface - GitHub, accessed on November 4, 2025, <a href="https://github.com/libfuse/libfuse">https://github.com/libfuse/libfuse</a></li>
        <li id="ref-18">Filesystem in Userspace - Wikipedia, accessed on November 4, 2025, <a href="https://en.wikipedia.org/wiki/Filesystem_in_Userspace">https://en.wikipedia.org/wiki/Filesystem_in_Userspace</a></li>
        <li id="ref-19">FUSE — The Linux Kernel documentation, accessed on November 4, 2025, <a href="https://www.kernel.org/doc/html/next/filesystems/fuse.html">https://www.kernel.org/doc/html/next/filesystems/fuse.html</a></li>
        <li id="ref-20">DOCA SNAP Virtio-fs Application Guide - NVIDIA Docs, accessed on November 4, 2025, <a href="https://docs.nvidia.com/doca/sdk/doca+snap+virtio-fs+application+guide/index.html">https://docs.nvidia.com/doca/sdk/doca+snap+virtio-fs+application+guide/index.html</a></li>
        <li id="ref-21">Virtual I/O Device (VIRTIO) Version 1.1 - GitHub Pages, accessed on November 4, 2025, <a href="https://stefanha.github.io/virtio/virtio-fs.html">https://stefanha.github.io/virtio/virtio-fs.html</a></li>
        <li id="ref-22">Virtqueues and virtio ring: How the data travels - Red Hat, accessed on November 4, 2025, <a href="https://www.redhat.com/en/blog/virtqueues-and-virtio-ring-how-data-travels">https://www.redhat.com/en/blog/virtqueues-and-virtio-ring-how-data-travels</a></li>
        <li id="ref-23">Improved Linux filesystem sharing for simulated devices with extended Virtio support in Renode, accessed on November 4, 2025, <a href="https://renode.io/news/improved-filesystem-sharing-with-virtiofs-support-in-renode/">https://renode.io/news/improved-filesystem-sharing-with-virtiofs-support-in-renode/</a></li>
        <li id="ref-24">DOCA DevEmu Virtio-FS - NVIDIA Docs Hub, accessed on November 4, 2025, <a href="https://docs.nvidia.com/doca/sdk/doca+devemu+virtio-fs/index.html">https://docs.nvidia.com/doca/sdk/doca+devemu+virtio-fs/index.html</a></li>
        <li id="ref-25">virtiofs: virtio-fs host&lt;-&gt;guest shared file system — The Linux Kernel ..., accessed on November 4, 2025, <a href="https://www.kernel.org/doc/html/v5.9/filesystems/virtiofs.html">https://www.kernel.org/doc/html/v5.9/filesystems/virtiofs.html</a></li>
        <li id="ref-26">Development Titles - OpenIndiana Docs, accessed on November 4, 2025, <a href="https://docs.openindiana.org/books/develop/">https://docs.openindiana.org/books/develop/</a></li>
        <li id="ref-27">illumos | Univrs, accessed on November 4, 2025, <a href="https://book.univrs.io/">https://book.univrs.io/</a></li>
        <li id="ref-28">illumos: manual page: intro.9e - SmartOS, accessed on November 4, 2025, <a href="https://smartos.org/man/9e/intro">https://smartos.org/man/9e/intro</a></li>
        <li id="ref-29">Multi-Kernel Drifting - Hacker News, accessed on November 4, 2025, <a href="https://news.ycombinator.com/item?id=33337086">https://news.ycombinator.com/item?id=33337086</a></li>
        <li id="ref-30">Rust in illumos - Hacker News, accessed on November 4, 2025, <a href="https://news.ycombinator.com/item?id=41505665">https://news.ycombinator.com/item?id=41505665</a></li>
        <li id="ref-31">Device Driver Entry Points, accessed on November 4, 2025, <a href="https://docs.oracle.com/cd/E23824_01/html/819-3196/eqbqy.html">https://docs.oracle.com/cd/E23824_01/html/819-3196/eqbqy.html</a></li>
        <li id="ref-32">Driver Module Entry Points (Writing Device Drivers), accessed on November 4, 2025, <a href="https://docs.oracle.com/cd/E19683-01/806-5222/drvovr-fig-20/index.html">https://docs.oracle.com/cd/E19683-01/806-5222/drvovr-fig-20/index.html</a></li>
        <li id="ref-33">_fini - man pages section 9: DDI and DKI Driver Entry Points, accessed on November 4, 2025, <a href="https://docs.oracle.com/cd/E88353_01/html/E37854/u-fini-9e.html">https://docs.oracle.com/cd/E88353_01/html/E37854/u-fini-9e.html</a></li>
        <li id="ref-34">how to make loadable kernel module on solaris? no linux - Stack Overflow, accessed on November 4, 2025, <a href="https://stackoverflow.com/questions/50733459/how-to-make-loadable-kernel-module-on-solaris-no-linux">https://stackoverflow.com/questions/50733459/how-to-make-loadable-kernel-module-on-solaris-no-linux</a></li>
        <li id="ref-35">ATTACH(9E) - OmniOS, accessed on November 4, 2025, <a href="https://man.omnios.org/man9e/attach">https://man.omnios.org/man9e/attach</a></li>
        <li id="ref-36">Steps in developing your own File system in Solaris 10 - Stack Overflow, accessed on November 4, 2025, <a href="https://stackoverflow.com/questions/41115181/steps-in-developing-your-own-file-system-in-solaris-10">https://stackoverflow.com/questions/41115181/steps-in-developing-your-own-file-system-in-solaris-10</a></li>
        <li id="ref-37">VFS - OSDev Wiki, accessed on November 4, 2025, <a href="http://wiki.osdev.org/VFS">http://wiki.osdev.org/VFS</a></li>
        <li id="ref-38">Virtual File Systems - IBM, accessed on November 4, 2025, <a href="https://www.ibm.com/docs/en/aix/7.2.0?topic=concepts-virtual-file-systems">https://www.ibm.com/docs/en/aix/7.2.0?topic=concepts-virtual-file-systems</a></li>
        <li id="ref-39">vnode(9) - OpenBSD manual pages, accessed on November 4, 2025, <a href="https://man.openbsd.org/vnode.9">https://man.openbsd.org/vnode.9</a></li>
        <li id="ref-40">vnode(9), accessed on November 4, 2025, <a href="https://www.daemon-systems.org/man/vnode.9.html">https://www.daemon-systems.org/man/vnode.9.html</a></li>
        <li id="ref-41">vnodeops(9) - NetBSD Manual Pages, accessed on November 4, 2025, <a href="https://man.netbsd.org/vnodeops.9">https://man.netbsd.org/vnodeops.9</a></li>
        <li id="ref-42">alhazred/illumos-sshfs: illumos FUSE driver and library + ... - GitHub, accessed on November 4, 2025, <a href="https://github.com/alhazred/illumos-sshfs">https://github.com/alhazred/illumos-sshfs</a></li>
        <li id="ref-43">[OmniOS-discuss] FW: Re: Mount NTFS USB under OmniOS, accessed on November 4, 2025, <a href="https://omnios.org/ml-archive/2013-January/000405.html">https://omnios.org/ml-archive/2013-January/000405.html</a></li>
        <li id="ref-44"><a href="https://github.com/jurikm/illumos-fusefs">https://github.com/jurikm/illumos-fusefs</a></li>
        <li id="ref-45"><a href="https://github.com/alhazred/illumos-sshfs/tree/master/kernel">https://github.com/alhazred/illumos-sshfs/tree/master/kernel</a></li>
        <li id="ref-46">[RFC] virtio-fs: shared file system for virtual machines - LWN.net, accessed on November 4, 2025, <a href="https://lwn.net/Articles/774495/">https://lwn.net/Articles/774495/</a></li>
        <li id="ref-47">fs/fuse/virtio_fs.c - kernel/common - Git at Google - Android GoogleSource, accessed on November 4, 2025, <a href="https://android.googlesource.com/kernel/common/+/refs/heads/android12-5.4/fs/fuse/virtio_fs.c">https://android.googlesource.com/kernel/common/+/refs/heads/android12-5.4/fs/fuse/virtio_fs.c</a></li>
        <li id="ref-48">virtiofs: Add DAX support - LWN.net, accessed on November 4, 2025, <a href="https://lwn.net/Articles/813807/">https://lwn.net/Articles/813807/</a></li>
        <li id="ref-49">illumos: manual page: segmap.9e - SmartOS, accessed on November 4, 2025, <a href="https://www.smartos.org/man/9E/segmap">https://www.smartos.org/man/9E/segmap</a></li>
        <li id="ref-50">Rust Without the Standard Library A Deep Dive into no_std Development | Leapcell, accessed on November 4, 2025, <a href="https://leapcell.io/blog/rust-without-the-standard-library-a-deep-dive-into-no-std-development">https://leapcell.io/blog/rust-without-the-standard-library-a-deep-dive-into-no-std-development</a></li>
        <li id="ref-51">Use cases for `no_std` on tier 1 targets - libs - Rust Internals, accessed on November 4, 2025, <a href="https://internals.rust-lang.org/t/use-cases-for-no-std-on-tier-1-targets/20592">https://internals.rust-lang.org/t/use-cases-for-no-std-on-tier-1-targets/20592</a></li>
        <li id="ref-52">Writing FreeBSD Kernel Modules in Rust | NCC Group, accessed on November 4, 2025, <a href="https://www.nccgroup.com/research-blog/writing-freebsd-kernel-modules-in-rust/">https://www.nccgroup.com/research-blog/writing-freebsd-kernel-modules-in-rust/</a></li>
        <li id="ref-53">illumos: manual page: kmem_alloc.9f - SmartOS, accessed on November 4, 2025, <a href="https://www.smartos.org/man/9F/kmem_alloc">https://www.smartos.org/man/9F/kmem_alloc</a></li>
        <li id="ref-54">Binding a Linux API library : r/rust - Reddit, accessed on November 4, 2025, <a href="https://www.reddit.com/r/rust/comments/gt0j0q/binding_a_linux_api_library/">https://www.reddit.com/r/rust/comments/gt0j0q/binding_a_linux_api_library/</a></li>
        <li id="ref-55">Bridging Rust and C Generating C Bindings and Headers with Cbindgen and Cargo-c, accessed on November 4, 2025, <a href="https://leapcell.io/blog/bridging-rust-and-c-generating-c-bindings-and-headers-with-cbindgen-and-cargo-c">https://leapcell.io/blog/bridging-rust-and-c-generating-c-bindings-and-headers-with-cbindgen-and-cargo-c</a></li>
        <li id="ref-56">illumos: manual page: ddi_flsll.9f - SmartOS, accessed on November 4, 2025, <a href="https://www.smartos.org/man/9f/ddi_flsll">https://www.smartos.org/man/9f/ddi_flsll</a></li>
        <li id="ref-57">NOTES.txt - Z IN ASCII, accessed on November 4, 2025, <a href="https://zinascii.com/pub/illumos/gate/1017/NOTES.txt">https://zinascii.com/pub/illumos/gate/1017/NOTES.txt</a></li>
        <li id="ref-58">Other reprs - The Rustonomicon - Rust Documentation, accessed on November 4, 2025, <a href="https://doc.rust-lang.org/nomicon/other-reprs.html">https://doc.rust-lang.org/nomicon/other-reprs.html</a></li>
        <li id="ref-59">question on `repr(C)` guarantees : r/rust - Reddit, accessed on November 4, 2025, <a href="https://www.reddit.com/r/rust/comments/1ap47sj/question_on_reprc_guarantees/">https://www.reddit.com/r/rust/comments/1ap47sj/question_on_reprc_guarantees/</a></li>
        <li id="ref-60">CB_OPS(9S) - OmniOS, accessed on November 4, 2025, <a href="https://man.omnios.org/man9s/cb_ops">https://man.omnios.org/man9s/cb_ops</a></li>
        <li id="ref-61">`#[repr(C)]` on nested structs - help - The Rust Programming Language Forum, accessed on November 4, 2025, <a href="https://users.rust-lang.org/t/repr-c-on-nested-structs/110654">https://users.rust-lang.org/t/repr-c-on-nested-structs/110654</a></li>
        <li id="ref-62">Rust wrappers for various illumos-specific system libaries - GitHub, accessed on November 4, 2025, <a href="https://github.com/illumos/rust-illumos">https://github.com/illumos/rust-illumos</a></li>
        <li id="ref-63">illumos repositories - GitHub, accessed on November 4, 2025, <a href="https://github.com/orgs/illumos/repositories">https://github.com/orgs/illumos/repositories</a></li>
        <li id="ref-64">doors - Rust - Docs.rs, accessed on November 4, 2025, <a href="https://docs.rs/doors">https://docs.rs/doors</a></li>
        <li id="ref-65">kstat_rs - Rust - Docs.rs, accessed on November 4, 2025, <a href="https://docs.rs/kstat-rs">https://docs.rs/kstat-rs</a></li>
        <li id="ref-66">zone - Rust - Docs.rs, accessed on November 4, 2025, <a href="https://docs.rs/zone">https://docs.rs/zone</a></li>
        <li id="ref-67">Operating System and Virtualization Engineer - Oxide Computer, accessed on November 4, 2025, <a href="https://oxide.computer/careers/sw-host-virt">https://oxide.computer/careers/sw-host-virt</a></li>
        <li id="ref-68">oxidecomputer/propolis: VMM userspace for illumos bhyve - GitHub, accessed on November 4, 2025, <a href="https://github.com/oxidecomputer/propolis">https://github.com/oxidecomputer/propolis</a></li>
        <li id="ref-69">illumos: manual page: ddi_segmap_setup.9f - SmartOS, accessed on November 4, 2025, <a href="https://www.smartos.org/man/9f/ddi_segmap_setup">https://www.smartos.org/man/9f/ddi_segmap_setup</a></li>
        <li id="ref-70">Implementing a virtio-blk driver in my own operating system - Stephen Brennan, accessed on November 4, 2025, <a href="https://brennan.io/2020/03/22/sos-block-device/">https://brennan.io/2020/03/22/sos-block-device/</a></li>
        <li id="ref-71">virtio-fs - Stefan Hajnoczi, accessed on November 4, 2025, <a href="https://vmsplice.net/~stefan/virtio-fs_%20A%20Shared%20File%20System%20for%20Virtual%20Machines.pdf">https://vmsplice.net/~stefan/virtio-fs_%20A%20Shared%20File%20System%20for%20Virtual%20Machines.pdf</a></li>
        <li id="ref-72">Using virtio-fs on a unikernel - QEMU, accessed on November 4, 2025, <a href="https://www.qemu.org/2020/11/04/osv-virtio-fs/">https://www.qemu.org/2020/11/04/osv-virtio-fs/</a></li>
        <li id="ref-73">Mapping Device Memory (Writing Device Drivers), accessed on November 4, 2025, <a href="https://docs.oracle.com/cd/E19683-01/806-5222/character-27110/index.html">https://docs.oracle.com/cd/E19683-01/806-5222/character-27110/index.html</a></li>
        <li id="ref-74">Mapping Device Memory - Writing Device Drivers, accessed on November 4, 2025, <a href="https://docs.oracle.com/cd/E18752_01/html/816-4854/character-16543.html">https://docs.oracle.com/cd/E18752_01/html/816-4854/character-16543.html</a></li>
        <li id="ref-75">manual page: mmap.2 - illumos - SmartOS, accessed on November 4, 2025, <a href="https://www.smartos.org/man/2/mmap">https://www.smartos.org/man/2/mmap</a></li>
        <li id="ref-76">I'm part of the illumos core team and I'm quite keen to use Rust in the base of ... - Hacker News, accessed on November 4, 2025, <a href="https://news.ycombinator.com/item?id=41506892">https://news.ycombinator.com/item?id=41506892</a></li>
        <li id="ref-77">oxidecomputer/helios: Helios: Or, a Vision in a Dream. A Fragment. - GitHub, accessed on November 4, 2025, <a href="https://github.com/oxidecomputer/helios">https://github.com/oxidecomputer/helios</a></li>
        <li id="ref-78">new/usr/src/cmd/intrd/Makefile - illumos - code review, accessed on November 4, 2025, <a href="https://cr.illumos.org/~webrev/0xffea/intrd-kernel-01/illumos-gate.pdf">https://cr.illumos.org/~webrev/0xffea/intrd-kernel-01/illumos-gate.pdf</a></li>
        <li id="ref-79">So you want to cross compile illumos, accessed on November 4, 2025, <a href="https://artemis.sh/2023/02/21/so-you-want-to-cross-compile-illumos.html">https://artemis.sh/2023/02/21/so-you-want-to-cross-compile-illumos.html</a></li>
        <li id="ref-80">virtiofs - shared file system for virtual machines / Standalone usage - GitLab, accessed on November 4, 2025, <a href="https://virtio-fs.gitlab.io/howto-qemu.html">https://virtio-fs.gitlab.io/howto-qemu.html</a></li>
        <li id="ref-81">Host Operating System &amp; Hypervisor / RFD / Oxide: 26, accessed on November 4, 2025, <a href="https://26.rfd.oxide.computer/">https://26.rfd.oxide.computer/</a></li>
        <li id="ref-82">Virtio-fs is amazing! (plus how I set it up) : r/VFIO - Reddit, accessed on November 4, 2025, <a href="https://www.reddit.com/r/VFIO/comments/i12uyn/virtiofs_is_amazing_plus_how_i_set_it_up/">https://www.reddit.com/r/VFIO/comments/i12uyn/virtiofs_is_amazing_plus_how_i_set_it_up/</a></li>
        <li id="ref-83">Device Driver Tutorial - filibeto.org, accessed on November 4, 2025, <a href="https://www.filibeto.org/aduritz/truetrue/solaris10/device-driver-819-3159.pdf">https://www.filibeto.org/aduritz/truetrue/solaris10/device-driver-819-3159.pdf</a></li>
        <li id="ref-84">illumos tools for observing processes - Dave Pacheco's Blog, accessed on November 4, 2025, <a href="https://www.davepacheco.net/blog/post/2012-08-04-illumos-tools-for-observing-processes/">https://www.davepacheco.net/blog/post/2012-08-04-illumos-tools-for-observing-processes/</a></li>
        <li id="ref-85">Illumos: Getting Started with MDB | Johann 'Myrkraverk' Oskarsson, accessed on November 4, 2025, <a href="http://www.myrkraverk.com/blog/2014/04/illumos-getting-started-with-mdb/">http://www.myrkraverk.com/blog/2014/04/illumos-getting-started-with-mdb/</a></li>
        <li id="ref-86">How to debug the Linux kernel with GDB and QEMU? - Stack Overflow, accessed on November 4, 2025, <a href="https://stackoverflow.com/questions/11408041/how-to-debug-the-linux-kernel-with-gdb-and-qemu">https://stackoverflow.com/questions/11408041/how-to-debug-the-linux-kernel-with-gdb-and-qemu</a></li>
        </ol>
        ]]>
      </content:encoded>
      <pubDate>Tue, 04 Nov 2025 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Notes on how pkgdepend works</title>
      <link>https://wegmueller.it/blog/pkgdepend-dependency-resolution/</link>
      <guid isPermaLink="false">https://wegmueller.it/blog/pkgdepend-dependency-resolution/</guid>
      <description>Some notes about how pkgdepend works.</description>
      <content:encoded>
        <![CDATA[<h1>Notes on how pkgdepend works</h1>
        <p>pkgdepend dependency resolution overview (ELF, Python, JAR)</p>
        <p>This document describes how pkgdepend analyzes files to infer package dependencies, based on the current source code in the pkg(5) repository. It is intended to guide a reimplementation of equivalent checks in Rust.</p>
        <h2>High-level Flow</h2>
        <ul>
        <li>File classification: <code>src/modules/portable/os_sunos.py:get_file_type()</code> reads
        the first bytes of each payload and classifies it as one of:
        <ul>
        <li>ELF for ELF objects (magic 0x7F 'ELF').</li>
        <li>EXEC for text files starting with a shebang (#!).</li>
        <li>SMF_MANIFEST for XML files recognized as SMF manifests.</li>
        <li>UNFOUND or unknown for other cases. There is no specific JAR type.</li>
        </ul>
        </li>
        <li>Dispatch: <code>src/modules/publish/dependencies.py:list_implicit_deps_for_manifest()</code>
        maps file types to analyzers:
        <ul>
        <li>ELF -&gt; <code>pkg.flavor.elf.process_elf_dependencies</code></li>
        <li>EXEC -&gt; <code>pkg.flavor.script.process_script_deps</code></li>
        <li>SMF_MANIFEST -&gt; <code>pkg.flavor.smf_manifest.process_smf_manifest_deps</code></li>
        </ul>
        Unknown types are recorded in a &quot;missing&quot; map but not analyzed.
        </li>
        <li>The analyzers return a list of <code>PublishingDependency</code> objects (see
        <code>src/modules/flavor/base.py</code>) and a list of analysis errors. These are later
        resolved to package-level <code>DependencyAction</code> objects.</li>
        <li>Bypass rules: If <code>pkg.depend.bypass-generate</code> is set (manifest or action),
        dependency generation can be skipped or filtered (details below).</li>
        <li>Internal pruning: After file-level dependencies are generated, pkgdepend can
        drop dependencies that are satisfied by files delivered by the same package.</li>
        <li>Resolution to packages: Finally, dependencies on files are mapped to package
        FMRIs by locating which packages (delivered or already installed) provide
        the target files, following links where necessary.</li>
        </ul>
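        <p>To make the classification step concrete, here is a minimal Python sketch of the magic-byte dispatch described above. It is illustrative only: the real <code>get_file_type()</code> also validates the XML content before declaring an SMF manifest, and returns dedicated constants rather than plain strings.</p>

```python
def classify(payload: bytes) -> str:
    """Simplified sketch of the file-type dispatch in get_file_type()."""
    if payload[:4] == b"\x7fELF":
        return "elf"            # ELF magic: 0x7F 'E' 'L' 'F'
    if payload[:2] == b"#!":
        return "exec"           # shebang script
    if payload.lstrip()[:5] == b"<?xml":
        # the real code additionally checks that the XML is an SMF manifest
        return "smf_manifest"
    return "unknown"            # recorded in the "missing" map, not analyzed
```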
        <h2>Controlling Run Paths and Bypass</h2>
        <ul>
        <li><code>pkg.depend.runpath</code> (<code>portable.PD_RUN_PATH</code>): A colon-separated string.
        <ul>
        <li>May be set at manifest level (applies to all actions) and/or per action.</li>
        <li>Verified by <code>__verify_run_path()</code>: must be a single string and not empty.</li>
        <li>Per-action value overrides manifest-level value for that action.</li>
        <li>For ELF analysis, the provided runpath interacts with defaults via the
        PD_DEFAULT_RUNPATH token (see below).</li>
        </ul>
        </li>
        <li><code>pkg.depend.bypass-generate</code> (<code>portable.PD_BYPASS_GENERATE</code>): a string or list of
        strings controlling path patterns to ignore when generating dependencies.
        <ul>
        <li>In <code>list_implicit_deps_for_manifest()</code>:
        <ul>
        <li>If bypass contains a match-all pattern <code>.*</code> or <code>^.*$</code>, analysis for that action is skipped entirely. A debug attribute is recorded: <code>pkg.debug.depend.bypassed=&quot;&lt;action path&gt;:.*&quot;</code>.</li>
        <li>Otherwise, <code>__bypass_deps()</code> filters out any matching file paths from the generated dependencies. Patterns are treated as regex; bare filenames are expanded to <code>.*/&lt;name&gt;</code> and patterns are anchored with <code>^...$</code>. Matching paths are recorded in <code>pkg.debug.depend.bypassed</code>; dependencies are updated to only contain the remaining full paths.</li>
        </ul>
        </li>
        </ul>
        </li>
        </ul>
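        <p>The bypass pattern handling can be sketched as follows. This is a simplified model of <code>__bypass_deps()</code>: bare filenames are expanded to <code>.*/&lt;name&gt;</code> and every pattern is anchored; the match-all short-circuit and the <code>pkg.debug.depend.bypassed</code> bookkeeping are omitted.</p>

```python
import re

def normalize_bypass_patterns(patterns):
    """Expand bypass patterns roughly the way __bypass_deps() does:
    a bare filename (no '/') becomes '.*/<name>', then everything is
    anchored with ^...$."""
    compiled = []
    for pat in patterns:
        if "/" not in pat:
            pat = ".*/" + pat
        compiled.append(re.compile("^" + pat + "$"))
    return compiled

def is_bypassed(path, compiled):
    # A generated file dependency is dropped when any pattern matches it.
    return any(r.match(path) for r in compiled)
```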
        <h2>ELF Analysis (<code>pkg.flavor.elf</code>)</h2>
        <p>Reference: <code>src/modules/flavor/elf.py</code></p>
        <h3>Inputs</h3>
        <ul>
        <li>Action (file) with attributes:
        <ul>
        <li><code>path</code>: installed path (no leading slash in manifests; code often prepends &quot;/&quot;).</li>
        <li><code>portable.PD_LOCAL_PATH</code>: proto/build file to read.</li>
        <li><code>portable.PD_PROTO_DIR</code>: base dir of the proto area.</li>
        </ul>
        </li>
        <li><code>pkg_vars</code>: package variant template (propagated to dependencies).</li>
        <li><code>dyn_tok_conv</code>: map of dynamic tokens to expansion lists (e.g. <code>$PLATFORM</code>).</li>
        <li><code>run_paths</code>: optional run path list from <code>pkg.depend.runpath</code> (colon-split).</li>
        </ul>
        <h3>Steps</h3>
        <ol>
        <li>Verify file exists and is an ELF object (<code>pkg.elf.is_elf_object</code>). If not, return no deps.</li>
        <li>Parse headers and dynamic info:
        <ul>
        <li><code>elf.get_info(proto_file)</code> -&gt; bits (32/64), arch (<code>i386</code>/<code>sparc</code>).</li>
        <li><code>elf.get_dynamic(proto_file)</code> -&gt;
        <ul>
        <li>deps: list of <code>DT_NEEDED</code> entries; code uses <code>[d[0] for d in deps]</code>.</li>
        <li>runpath: <code>DT_RUNPATH</code> string (may be empty).</li>
        </ul>
        </li>
        </ul>
        </li>
        <li>Build default search path <code>rp</code>:
        <ul>
        <li>Start with <code>DT_RUNPATH</code> split by <code>:</code>. Empty string becomes <code>[]</code>.</li>
        <li><code>dyn_tok_conv[&quot;$ORIGIN&quot;]</code> is set to <code>&quot;/&quot; + dirname(installed_path)</code> so <code>$ORIGIN</code> can be expanded in paths.</li>
        <li>Kernel modules (installed_path under <code>kernel/</code>, <code>usr/kernel</code>, or <code>platform/&lt;platform&gt;/kernel</code>):
        <ul>
        <li>If runpath is set to anything except the specific <code>/usr/gcc/&lt;n&gt;/lib</code> case, raise <code>RuntimeError</code>. Otherwise runpath for kernel modules is derived as:
        <ul>
        <li>For platform paths, append <code>/platform/&lt;platform&gt;/kernel</code>; otherwise for each <code>$PLATFORM</code> in <code>dyn_tok_conv</code> append <code>/platform/&lt;plat&gt;/kernel</code>.</li>
        <li>Append default kernel paths: <code>/kernel</code> and <code>/usr/kernel</code>.</li>
        <li>If 64-bit, a <code>kernel64</code> subdir is used to assemble candidate paths when constructing dependencies: arch -&gt; <code>i386</code> =&gt; <code>amd64</code>; <code>sparc</code> =&gt; <code>sparcv9</code>.</li>
        </ul>
        </li>
        </ul>
        </li>
        <li>Non-kernel ELF:
        <ul>
        <li>Ensure <code>/lib</code> and <code>/usr/lib</code> are present; for 64-bit also add <code>/lib/64</code> and <code>/usr/lib/64</code>.</li>
        </ul>
        </li>
        </ul>
        </li>
        <li>Merge caller-provided <code>run_paths</code>:
        <ul>
        <li>If <code>run_paths</code> is provided, <code>base.insert_default_runpath(rp, run_paths)</code> is used. This replaces any <code>PD_DEFAULT_RUNPATH</code> token in <code>run_paths</code> with the default <code>rp</code>. If the token is absent, the provided <code>run_paths</code> fully override <code>rp</code>. Multiple <code>PD_DEFAULT_RUNPATH</code> tokens raise an error.</li>
        </ul>
        </li>
        <li>Expand dynamic tokens in <code>rp</code>:
        <ul>
        <li><code>expand_variables()</code> recursively replaces <code>$TOKENS</code> using <code>dyn_tok_conv</code>.</li>
        <li>Unknown tokens produce <code>UnsupportedDynamicToken</code> errors (non-fatal) which are returned in the error list.</li>
        </ul>
        </li>
        <li>For each <code>DT_NEEDED</code> library name <code>d</code>:
        <ul>
        <li>For each expanded run path <code>p</code>, form a candidate directory by joining <code>p</code> and <code>d</code>; for kernel64 cases, insert <code>amd64</code>/<code>sparcv9</code> as appropriate; drop the final filename to retain only directories (run_paths for this dependency).</li>
        <li>Create an <code>ElfDependency(action, base_name=basename(d), run_paths=dirs, pkg_vars, proto_dir)</code>.</li>
        </ul>
        </li>
        </ol>
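        <p>Step 4, the merge of <code>pkg.depend.runpath</code> with the computed defaults, can be sketched like this. The token name here is a stand-in for <code>base.PD_DEFAULT_RUNPATH</code>, and the real <code>insert_default_runpath()</code> differs in details.</p>

```python
DEFAULT_TOKEN = "$PKGDEPEND_RUNPATH"  # stand-in for base.PD_DEFAULT_RUNPATH

def merge_run_paths(default_rp, user_rp):
    """Sketch of insert_default_runpath(): one token splices the default
    list in place, no token means the user list overrides entirely, and
    more than one token is an error."""
    if user_rp is None:
        return list(default_rp)
    hits = [i for i, p in enumerate(user_rp) if p == DEFAULT_TOKEN]
    if len(hits) > 1:
        raise ValueError("multiple default-runpath tokens")
    if not hits:
        return list(user_rp)
    i = hits[0]
    return user_rp[:i] + list(default_rp) + user_rp[i + 1:]
```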
        <h3>Semantics of <code>ElfDependency</code></h3>
        <ul>
        <li>Inherits PublishingDependency (see below). It resolves against delivered files
        by joining each run_path with base_name to form candidates.</li>
        <li><code>resolve_internal()</code> is overridden: when no candidate path resolves but
        a file with the same base name is delivered by this package, the failure is downgraded
        from an ERROR to a WARNING (on the assumption that an external runpath will make it
        available). This sets <code>pkg.debug.depend.*.severity=warning</code> and marks variants accordingly.</li>
        </ul>
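        <p>The severity logic above can be modeled in a few lines. This is a hedged approximation of <code>ElfDependency</code> resolution, ignoring variants and the debug attributes it records.</p>

```python
def resolve_elf_dep(run_paths, base_name, delivered):
    """Approximate ElfDependency severity selection (sketch).
    'delivered' is the set of file paths this package ships."""
    norm = {d.lstrip("/") for d in delivered}
    candidates = [p.strip("/") + "/" + base_name for p in run_paths]
    if any(c in norm for c in candidates):
        return "satisfied"
    if any(d.rsplit("/", 1)[-1] == base_name for d in norm):
        return "warning"  # same basename delivered; assume external runpath
    return "error"
```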
        <h2>Python and Script Analysis (<code>pkg.flavor.script</code> + <code>pkg.flavor.python</code>)</h2>
        <h3>References</h3>
        <ul>
        <li><code>src/modules/flavor/script.py</code></li>
        <li><code>src/modules/flavor/python.py</code></li>
        </ul>
        <h3>Shebang handling (<code>script.py</code>)</h3>
        <ul>
        <li>For any file with a shebang (#!) and the executable bit set:
        <ul>
        <li>Extract the interpreter path (the first token after <code>#!</code>). If it is not absolute, record
        a <code>ScriptNonAbsPath</code> error.</li>
        <li>Normalize <code>/bin/...</code> to <code>/usr/bin/...</code> and add a <code>ScriptDependency</code> on that
        interpreter path (base_name = last component; run_paths = its directory).</li>
        </ul>
        </li>
        <li>If the shebang line contains the substring &quot;python&quot; (e.g. <code>#!/usr/bin/python3.9</code>),
        python-specific analysis is triggered by calling
        <code>python.process_python_dependencies(action, pkg_vars, script_path, run_paths)</code>,
        where script_path is the full shebang line and run_paths is the effective
        pkg.depend.runpath for the action.</li>
        </ul>
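        <p>A minimal sketch of the shebang handling, under the simplifying assumption that the caller passes only the first line of the file; the error strings are illustrative, not the analyzer's actual messages.</p>

```python
def parse_shebang(first_line):
    """Sketch of script.py's interpreter extraction.
    Returns (interpreter_path, error); exactly one is non-None."""
    parts = first_line[2:].strip().split()
    if not parts:
        return None, "empty shebang line"
    interp = parts[0]
    if not interp.startswith("/"):
        return None, "non-absolute interpreter path: " + interp
    if interp.startswith("/bin/"):
        interp = "/usr" + interp  # /bin/... is normalized to /usr/bin/...
    return interp, None
```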
        <h3>Python dependency discovery (<code>python.py</code>)</h3>
        <ul>
        <li>Version inference:
        <ul>
        <li>Installed path starting with <code>usr/lib/python&lt;MAJOR&gt;.&lt;MINOR&gt;/</code> implies a
        version (dir_major/dir_minor).</li>
        <li>Shebang matching <code>^#!/usr/bin/(&lt;subdir&gt;/)?python&lt;MAJOR&gt;.&lt;MINOR&gt;</code> implies a
        version (file_major/file_minor).</li>
        <li>If the file is executable and both imply versions that disagree, record a
        PythonMismatchedVersion error and use the directory version for analysis.</li>
        <li>Analysis version selection:
        <ul>
        <li>If installed path implies version, use that.</li>
        <li>Else if shebang implies version, use that.</li>
        <li>Else if executable but no specific version (e.g. <code>#!/usr/bin/python</code>),
        record PythonUnspecifiedVersion and skip analysis.</li>
        <li>Else if not executable but installed under <code>usr/lib/pythonX.Y</code>, analyze
        with that version.</li>
        </ul>
        </li>
        </ul>
        </li>
        <li>Performing analysis:
        <ul>
        <li>If the selected version equals the currently running interpreter
        (sys.version_info), use in-process analysis:
        <ul>
        <li>Construct DepthLimitedModuleFinder with the install directory as the
        base and pass through run_paths (pkg.depend.runpath). The finder executes
        the local proto file (action.attrs[PD_LOCAL_PATH]) to discover imports.</li>
        <li>For each loaded module, obtain the list of file names (basenames of the
        modules) and the directories searched (m.dirs). Create
        PythonDependency(action, base_names=module file names, run_paths=dirs,...).</li>
        <li>Any missing imports are reported as PythonModuleMissingPath errors.</li>
        <li>Syntax errors are reported as PythonSyntaxError.</li>
        </ul>
        </li>
        <li>If the selected version differs from the running interpreter:
        <ul>
        <li>Spawn a subprocess: &quot;python&lt;MAJOR&gt;.&lt;MINOR&gt; depthlimitedmf.py &lt;install_dir&gt;
        &lt;local_file&gt; [run_paths ...]&quot;.</li>
        <li>Parse stdout lines:
        <ul>
        <li>&quot;DEP &lt;repr((names, dirs))&gt;&quot; -&gt; add PythonDependency for those.</li>
        <li>&quot;ERR &lt;module_name&gt;&quot; -&gt; record PythonModuleMissingPath.</li>
        <li>Anything else -&gt; PythonSubprocessBadLine.</li>
        </ul>
        </li>
        <li>Nonzero exit -&gt; PythonSubprocessError with return code and stderr.</li>
        </ul>
        </li>
        </ul>
        </li>
        </ul>
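<p>The version-selection rules above condense into a small decision function (a hedged sketch; <code>select_version</code> is an invented name, and error reporting is reduced to string tokens where the real <code>python.py</code> records error objects):</p>
<pre><code class="language-python">
import re

def select_version(installed_path, shebang, executable):
    """Pick the Python version to analyze with, mirroring the rules above.

    Returns (major, minor), an error token, or None when nothing applies.
    """
    dir_m = re.match(r"usr/lib/python(\d+)\.(\d+)/", installed_path or "")
    she_m = re.match(r"#!/usr/bin/(?:\S+/)?python(\d+)\.(\d+)", shebang or "")
    dir_ver = (int(dir_m.group(1)), int(dir_m.group(2))) if dir_m else None
    she_ver = (int(she_m.group(1)), int(she_m.group(2))) if she_m else None

    if executable and dir_ver and she_ver and dir_ver != she_ver:
        # PythonMismatchedVersion: reported, but the directory version wins.
        return dir_ver
    if dir_ver:
        return dir_ver
    if she_ver:
        return she_ver
    if executable:
        # e.g. "#!/usr/bin/python" with no explicit version: skip analysis.
        return "PythonUnspecifiedVersion"
    return None  # not executable, not under usr/lib/pythonX.Y
</code></pre>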
        <h2>JAR Archives</h2>
        <ul>
        <li>There is no special handling of JAR files in the current implementation.
        <ul>
        <li>get_file_type() does not classify JARs and there is no flavor/jar module.</li>
        <li>The historical doc/elf-jar-handling.txt mentions the idea of tasting JARs,
        but this has not been implemented in pkgdepend.</li>
        </ul>
        </li>
        <li>Consequently, pkgdepend does not extract dependencies from .jar manifests or
        classpaths. Any Java/JAR dependency tracking must be handled out-of-band
        (e.g., manual packaging dependencies or future tooling).</li>
        </ul>
        <h2>PublishingDependency Mechanics (<code>flavor/base.py</code>)</h2>
        <ul>
        <li>A PublishingDependency represents a dependency on one or more files located
        via a list of run_paths and base_names, or via an explicit full_paths list.</li>
        <li>It stores debug attributes under the pkg.debug.depend.* namespace:
        <ul>
        <li>.file (base names), .path (run paths) or .fullpath (explicit paths)</li>
        <li>.type (elf/python/script/smf/link), .reason, .via-links, .bypassed, etc.</li>
        </ul>
        </li>
        <li>possibly_delivered():
        <ul>
        <li>For each candidate path (join of run_path and base_name, or each full_path),
        calls resolve_links() to account for symlinks and hardlinks and to find
        real provided paths.</li>
        <li>If a path resolves and the resulting path is among delivered files, the
        dependency is considered satisfied under the relevant variant combination.</li>
        </ul>
        </li>
        <li>resolve_internal():
        <ul>
        <li>Checks if another file delivered by the same package satisfies the
        dependency (via possibly_delivered against the package’s own files/links).</li>
        <li>If so, the dependency is pruned. Otherwise, the error is recorded, subject
        to ELF’s special warning downgrade noted above.</li>
        </ul>
        </li>
        </ul>
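<p>The <code>possibly_delivered()</code> candidate check can be modeled as follows (a simplified sketch: link resolution via <code>resolve_links()</code> and variant tracking are omitted, and the function mirrors the original only loosely):</p>
<pre><code class="language-python">
import os

def possibly_delivered(base_names, run_paths, full_paths, delivered):
    """Return the candidate paths satisfied by the delivered file set.

    Candidates are the explicit full_paths if given, otherwise every
    join of a run_path with a base_name, as described above.
    """
    if full_paths:
        candidates = list(full_paths)
    else:
        candidates = [
            os.path.join(rp, bn).lstrip("/")
            for rp in run_paths
            for bn in base_names
        ]
    return [c for c in candidates if c in delivered]

delivered = {"usr/lib/libfoo.so.1", "usr/bin/tool"}
print(possibly_delivered(["libfoo.so.1"], ["usr/lib", "lib"], None, delivered))
# ['usr/lib/libfoo.so.1']
</code></pre>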
        <h2>Resolving Dependencies to Packages (<code>dependencies.py</code>)</h2>
        <ul>
        <li>add_fmri_path_mapping(): builds maps from paths to (PFMRI, variant
        combinations) for both the currently delivered manifests and the installed
        image (if used).</li>
        <li>resolve_links(path, files_dict, links, path_vars, attrs):
        <ul>
        <li>Recursively follows link chains to real paths, accumulating variant
        constraints along the way and generating conditional dependencies when a
        link from one package points to a file delivered by another.</li>
        </ul>
        </li>
        <li>find_package_using_delivered_files():
        <ul>
        <li>For each dependency, computes all candidate paths (make_paths()), resolves
        them through links (resolve_links), groups results by variant combinations,
        and then constructs either:
        <ul>
        <li>type=require if exactly one provider package resolves the dependency, or</li>
        <li>type=require-any if multiple packages could satisfy it.</li>
        </ul>
        </li>
        <li>Debug attributes include:
        <ul>
        <li>pkg.debug.depend.file/path/fullpath</li>
        <li>pkg.debug.depend.via-links (colon-separated link chain per resolution)</li>
        <li>pkg.debug.depend.path-id (a stable id grouping related path attempts)</li>
        </ul>
        </li>
        <li>Link-derived conditional dependencies (type=conditional) are emitted to
        encode that a dependency is only needed when a particular link provider is
        present.</li>
        </ul>
        </li>
        <li>find_package(): tries delivered files first; if not fully satisfied and
        allowed, tries files installed in the current image.</li>
        <li>combine(), __collapse_conditionals(), __remove_unneeded_require_and_require_any():
        <ul>
        <li>Perform simplification and deduplication of the emitted dependencies and
        collapse conditional groups where possible.</li>
        </ul>
        </li>
        </ul>
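<p>The core require/require-any choice made by find_package_using_delivered_files() reduces to counting distinct provider packages per variant combination (an illustrative sketch; <code>dependency_type</code> is an invented helper name):</p>
<pre><code class="language-python">
def dependency_type(provider_fmris):
    """Choose the dependency action type for one resolved dependency
    within a single variant combination, as described above."""
    providers = sorted(set(provider_fmris))
    if not providers:
        return None  # unresolved; surfaces as an error, not an action
    if len(providers) == 1:
        return ("require", providers[0])
    return ("require-any", providers)

print(dependency_type(["pkg:/library/foo"]))
# ('require', 'pkg:/library/foo')
print(dependency_type(["pkg:/a", "pkg:/b", "pkg:/a"]))
# ('require-any', ['pkg:/a', 'pkg:/b'])
</code></pre>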
        <h2>Variants and Conversion to Actions</h2>
        <ul>
        <li>Each dependency carries variant constraints (VariantCombinations). After
        generation and internal pruning, convert_to_standard_dep_actions() splits
        dependencies by unsatisfied variant combinations, producing standard
        actions.depend.DependencyAction instances ready for output.</li>
        </ul>
        <h2>Run Path Insertion Rule (<code>PD_DEFAULT_RUNPATH</code>)</h2>
        <ul>
        <li>base.insert_default_runpath(default_runpath, run_paths) merges default
        analyzer-detected search paths with user-provided run_paths:
        <ul>
        <li>If run_paths includes the PD_DEFAULT_RUNPATH token, the default_runpath is
        spliced at that position.</li>
        <li>If the token is absent, run_paths replaces the default entirely.</li>
        <li>Multiple tokens raise MultipleDefaultRunpaths.</li>
        </ul>
        </li>
        </ul>
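<p>The splice rule can be written out directly (a sketch; the token literal shown is the one documented for pkg.depend.runpath in pkgdepend(1), but treat it as illustrative):</p>
<pre><code class="language-python">
class MultipleDefaultRunpaths(Exception):
    pass

# Token documented for pkg.depend.runpath; illustrative here.
PD_DEFAULT_RUNPATH = "$PKGDEPEND_RUNPATH"

def insert_default_runpath(default_runpath, run_paths):
    """Merge analyzer defaults with user-supplied run paths: splice the
    defaults in at the token, replace them entirely if the token is
    absent, and reject multiple tokens."""
    if not run_paths:
        return list(default_runpath)
    hits = run_paths.count(PD_DEFAULT_RUNPATH)
    if hits > 1:
        raise MultipleDefaultRunpaths(run_paths)
    if hits == 0:
        return list(run_paths)  # user paths replace the default entirely
    i = run_paths.index(PD_DEFAULT_RUNPATH)
    return run_paths[:i] + list(default_runpath) + run_paths[i + 1:]

print(insert_default_runpath(["lib", "usr/lib"],
                             ["opt/app/lib", "$PKGDEPEND_RUNPATH"]))
# ['opt/app/lib', 'lib', 'usr/lib']
</code></pre>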
        <h2>Notes for a Rust Implementation</h2>
        <ul>
        <li>ELF:
        <ul>
        <li>Parse DT_NEEDED and DT_RUNPATH. Handle $ORIGIN (directory of installed
        path) and $PLATFORM expansion. Implement kernel module path rules and
        64-bit subdir logic. Merge user run paths via PD_DEFAULT_RUNPATH rules.</li>
        <li>Build dependencies keyed by base name with a directory search list.</li>
        <li>When pruning internal deps, downgrade to warning if base name is delivered
        by the same package but no path matches.</li>
        </ul>
        </li>
        <li>Python:
        <ul>
        <li>Determine Python version from installed path or shebang. Flag mismatches.</li>
        <li>Execute import discovery with a depth-limited module finder; if the target
        version differs, spawn the matching interpreter to run a helper script and
        parse outputs. Include run_paths in module search.</li>
        </ul>
        </li>
        <li>JAR:
        <ul>
        <li>No current implementation. Decide whether to add support or retain current
        behavior (no automatic JAR dependency extraction).</li>
        </ul>
        </li>
        <li>General:
        <ul>
        <li>Implement bypass rules and debug attributes to aid diagnostics.</li>
        <li>Implement link resolution and conditional dependency emission.</li>
        <li>Respect variant tracking and final conversion to concrete dependency
        actions.</li>
        </ul>
        </li>
        </ul>
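<p>The $ORIGIN/$PLATFORM expansion mentioned in the ELF notes can be sketched as a single-entry expander (illustrative; <code>expand_runpath_entry</code> is an invented name, and the platform string is assumed to be supplied by the caller):</p>
<pre><code class="language-python">
import os

def expand_runpath_entry(entry, installed_path, platform):
    """Expand $ORIGIN (the directory of the installed file) and
    $PLATFORM in one DT_RUNPATH entry, then normalize the result."""
    origin = os.path.dirname("/" + installed_path.lstrip("/"))
    entry = entry.replace("$ORIGIN", origin)
    entry = entry.replace("$PLATFORM", platform)
    return os.path.normpath(entry)

print(expand_runpath_entry("$ORIGIN/../lib", "usr/bin/tool", "i86pc"))
# /usr/lib
</code></pre>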
        <h2>Cross-reference</h2>
        <ul>
        <li>Historical note in doc/elf-jar-handling.txt discusses possible JAR handling,
        but the current codebase does not implement JAR dependency analysis.</li>
        </ul>
        ]]>
      </content:encoded>
      <pubDate>Sat, 30 Aug 2025 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Rust on illumos</title>
      <link>https://wegmueller.it/blog/rust-on-illumos/</link>
      <guid isPermaLink="false">https://wegmueller.it/blog/rust-on-illumos/</guid>
      <description>Talking about the Rust-in-C-code discussion, some illumos insight, and an invite to participate in the Linux Rust work</description>
      <content:encoded>
        <![CDATA[<h1>Rust in illumos</h1>
        <p>With the recent Rust-in-Linux events of the last couple of days, it's a good time to write up the state of Rust in illumos. The goal is both to spread the word a bit and to set expectations on both sides (Rust and illumos/OpenIndiana devs) about what is currently possible and what work would need to be invested to make things smooth, and also to let the Rust community know what illumos people have been talking about.</p>
        <p>Most of the current discussion is about technical details, but we must not leave the social aspects out of it. Software distributions are not made by lone wolves but by groups of people. Bringing in a new language means facilitating change, and that means there are more topics to discuss than just API design. We are talking about impacts on the whole software lifecycle.</p>
        <h2>Linux DRM API design</h2>
        <p>Looking at the things people like Asahi Lina want to address inside the Linux kernel with the Rust bindings, and how she describes the issues with locking, I get the feeling that something about DRM is not consistent. Looking into our code and our <a href="https://illumos.org/books/wdd/mt-17026.html#mt-17026">docs</a> on this topic, I can already see that locking is more complex across the whole kernel than just &quot;do X&quot;. We have some general recommendations, but it is a case-by-case question when looking over the whole kernel sources. Looking at the illumos DRM fork, I can see that a lot of X11 code seems to have wandered into the kernel, and not many files were created by the same people, so I am not surprised that this has gotten messy. The illumos kernel docs describe multiple cases in which data can be accessed, each with different locking needs. I assume the Linux kernel has similar cases; hence, at a minimum, which locks the driver must take and which the DRM API takes itself needs to be documented.</p>
        <h2>Rust in the illumos Kernel</h2>
        <p>The development model of illumos is different from Linux, and thus there are no Rust drivers in upstream illumos yet. But that is to be expected for new things. In our model, we take the time to mature new tech in a fork, and for Rust, the Oxide fork has taken that role. In it, we have several drivers for the Oxide networking stack that are written in Rust, and some experience has been gained from that. The current state is that building things in Rust takes more time than C for a trained developer. Bindgen has an overhead to learn and use, and people need language training to become productive. It's one thing to understand the language, but becoming productive usually requires quite a bit more practice on top of that. So far, userland tools have proven small enough to reach a working result within a reasonable time. OPTE and fast-path networking exist, but they still need integration into the MAC network framework, so more work needs to happen on that front. Smaller drivers are also a possibility, though I am currently unaware of anyone who has expressed interest.</p>
        <h2>The lack of Systems package manager support in rust</h2>
        <p>Ever since npm appeared, the packaging ecosystem has changed drastically in focus. Where package managers originally installed software as part of a system, they have shifted to installing software the same way across different systems. This also changed how responsibilities are handled and how people develop software. Distributions have become an afterthought and are no longer of interest to many developers. That leads to a couple of interesting issues, because at the end of the day you need a distribution to start using a computer, and the people who make distributions also need to earn some income for that work. Software developers, meanwhile, only need to focus on their own software for things to work. Compiling from source has become trivial, but only if you follow the software developer's workflow and know the tools. For people who need to read up on how cargo works, some quirks make sense for software-development workflows but are a hindrance for system-development workflows. Keep in mind these are different requirements from the ones the kernel has. Systems distributors usually do the following:</p>
        <ul>
        <li>Download the sources and make an archive with all the patches they want to bring</li>
        <li>patch the sources from the patch files</li>
        <li>build the binaries</li>
        <li>pack them into an archive for distribution (depends on the package manager)</li>
        </ul>
        <p>Several of cargo's features are therefore counterintuitive to systems packagers, and those are the ones I find people criticising. Cargo wants to verify that the vendor folder has not been modified, and there is no central vendor folder. Rust software can easily have upwards of 100 dependencies, including micro-dependencies, so auditing software becomes a lot of work. I don't mind: I have been in this industry long enough to have lived through the Java dependency problems on Linux systems, and cargo improves upon that situation. Several other OSes believe there needs to be a clear differentiation between system and third-party software, and I agree. The Debian approach of putting everything into one system and allowing only one version of each dependency is not feasible for a huge international community of people who develop together but never meet; most of the time the devs do not talk to each other at all. I am personally of the opinion that most Rust builds and the Rust tier system work, and I am happy to rely on that and not just my own tools. As a side note: none of this requires harsh words. Systems packagers and software developers (especially the folks in Debian and traditional Linux distros) have very different ways of thinking, and cargo is simply not a tool for system packagers. It would be nice if it grew some support for that, but that does not require harsh words. System source repos like illumos-gate can easily vendor their dependencies and produce binaries from a small dependency list, delivering the binaries via a package manager. The tools work.</p>
        <h2>Missing support for shared libraries</h2>
        <p>Shared libraries are a feature not wanted by software developers who target multiple platforms, but for systems packagers they are a required feature. Shared libraries delimit the boundary between two responsibilities, and if the people on each side coordinate, it works well: it becomes a system. There are limits to where this can happen, so I don't know what the perfect solution is, whether shared libraries are needed at all, or whether the feature can fade out. But it is worth a try to build systems and to give people the possibility to do so. So I wish Rust and cargo gained shared-library support so that we can build such componentized systems easily.</p>
        <h2>An invite</h2>
        <p>With all this said, I would love to have some more Rust folks in the illumos community, and I know this wish has been expressed by others as well. Userland tools are easy to write in Rust, and I for one would love to have people help me with the new <a href="https://github.com/Toasterson/illumos-installer">installer</a> and with the package <a href="https://github.com/toasterson/forge">Forge</a>. We have gained Rust crates for our unique APIs, such as <a href="https://github.com/illumos/libcontract-sys">libcontract</a>; our new image builder for ISOs, <a href="https://github.com/illumos/image-builder">image-builder</a>, is written in Rust; and we are always looking for driver developers. Check out the <a href="https://github.com/orgs/illumos/repositories?type=all">illumos organization</a> for all the repos, including a config manager. Want to write complete WiFi kernel components? A driver for a small serial adapter you have lying around? Want to integrate with an existing kernel without rewriting it? If any of that makes you want to head over to https://illumos.org/books/wdd/preface.html#preface and https://illumos.org/books/dev/, or simply interests you, then we would love to have your contribution.</p>
        <p>I hope to talk to some of you on socials and via email.</p>
        <p>-- Toasty</p>
        ]]>
      </content:encoded>
      <pubDate>Mon, 02 Sep 2024 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>One year of streaming</title>
      <link>https://wegmueller.it/blog/one-year-of-streaming/</link>
      <guid isPermaLink="false">https://wegmueller.it/blog/one-year-of-streaming/</guid>
      <description>One year of streaming!!!</description>
      <content:encoded>
        <![CDATA[<h1>One year of streaming</h1>
        <p>Last week, when I wanted to plan the stream and the code projects I am interested in, I realized I have already been doing this for one year!!!! Woot. Actually one year and one month according to Twitch, but close enough.</p>
        <p>Since I am not quite happy with how streaming is going, it's a good time to make some changes :)
        So, first of all, what did I want from streaming? Originally I wanted to try it out and see what I could do with it. When you can put a lot of work into it, it works quite well, but now that I have a part-time job it gets less priority than before. Now that I have some statistics, I can also make some observations about what people like and what is actually worth the effort of making. And when I look at the YouTube statistics for the VODs, it's very clear what is going on.</p>
        <p>So some learnings from those statistics:</p>
        <ul>
        <li>My stream is too early :) According to Twitch, the number of English-language viewers peaks at 20:00, and that's when I stop or have already finished. So if I want to reach more viewers I definitely need to change times.</li>
        <li>YouTube access comes mostly from the algorithm, and it does not like the long-form stream VODs at all. I think I even got delisted for the last one. However, the short demos and sneak peeks do very well.</li>
        <li>ActivityPub in Rust is a hot Google search topic :)</li>
        <li>Tutorials do well on YouTube. So maybe I should actually take some of my learnings and make some tutorials for people.</li>
        <li>During events it's easy to get a couple of people to drop by and have some event fun.</li>
        <li>SolARM streaming does well. Although I really need a Raspberry Pi 4 for that, or I should start bringing up another single-board computer with an ARM chip.</li>
        </ul>
        <p>More important for me, however, is: what are the kinds of things I like to do with streaming and YouTube?</p>
        <ul>
        <li>Collaborate and talk to people while coding</li>
        <li>Share useful information with people</li>
        <li>Share my progress on my projects and generally talk about the projects</li>
        </ul>
        <p>Looking through the community of people I follow, I can see that a lot of people still use blog posts, as opposed to video or audio. But I like audio and video, so I'll see how I can use them to make some devlogs. However, I want to use streaming differently than I am using it now. Currently I have a fixed schedule and try to work on one topic to appease the algorithm. This is not my preferred style of working, as I like having multiple projects at once and switching between them when I get inspired to do so or when my thoughts dwell on one. Thus I will be reducing the streams to special event or purpose streams on topics of interest, mostly topics I want to explore and document but not actually turn into a complete project. It also allows me to talk about things people find too fringe for conferences. Maybe I'll find some people who want to stream some gaming sessions, but that will probably mostly be in Swiss German, as that is my native language. So even for my German-speaking audience that will be a challenge to understand. But I am curious whether people would like to have some Swiss gaming content.</p>
        <p>So to summarize:</p>
        <ul>
        <li>No more fixed stream schedule</li>
        <li>Dedicated content for videos (and thus, hopefully, starting fewer projects)</li>
        <li>More vlogs/blogs</li>
        <li>More project showcases</li>
        <li>Less working on things on stream</li>
        <li>Meh Schwitzerdütsch?</li>
        </ul>
        <p>--- Toasty</p>
        ]]>
      </content:encoded>
      <pubDate>Mon, 11 Sep 2023 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Stream changes</title>
      <link>https://wegmueller.it/blog/stream-changes/</link>
      <guid isPermaLink="false">https://wegmueller.it/blog/stream-changes/</guid>
      <description>Moving the Stream to Tuesday and some changes to format</description>
      <content:encoded>
        <![CDATA[<h1>Stream changes</h1>
        <p>People may have already noticed, but here it is in long form so I can link this to all Platforms.</p>
        <p>tl;dr: the Saturday stream is moving to Tuesday, 16:00 to 20:00 CET (that's a 07:00 PT or 10:00 ET start) due to more scheduling conflicts.</p>
        <p>Since I moved back to Switzerland at the end of December, the stream got a bit quieter and I needed to reschedule things. As I changed both location and
        timezone to CET, it also dropped viewers; since discovery on Twitch is now based on CET and not any of the US times, that makes a lot of sense.
        So I looked at my schedule and noticed the slot was now at a very inconvenient time for me anyway, and since the stream is not bound to that time I decided to move it.
        Since not many people watch it repeatedly, it was also a good opportunity to move it now, before people make it a fixed calendar entry.</p>
        <p>With this change I am also making a format change. Previously it was a stream where I talked and others joined in and maybe asked some questions in chat. I want to
        strengthen the join-in aspect a bit more, so I will be using <a href="https://discord.gg/3JK95B62">my Discord</a> to host a room for people to come in and chat.
        I want this to be more like Twitter Spaces than Twitch, but I will be using Twitch for broadcasting. On screen, I'll be looking at a couple of smaller
        things, like learning Blender, looking things up, or whatever is fun at the time. I wanted to depart from the old approach since I am already developing things
        outside of the stream, and I have a style of working that does not exactly fit that format. For small things, tutorials, and the like it's a nice format,
        but for the forge and ARM work I am doing now it is a bad fit: I need to have a focus on stream and not be in an exploratory phase.</p>
        <p>For the next stream I will be working in Blender. I have a project that I want to make, and this gives me a slice of time to look at something like Blender next to
        all the other things going on. The project is to use Blender as a CAD tool to make an amateur digitalisation of one of the houses my mother designed
        the plans for (my mother is a trained architect). This house currently belongs to my stepdad and will go up for sale sometime mid-year, as he is getting too old to finish it.
        Blender has a nice constraint-based toolkit called <a href="https://blender-archipack.org/">Archipack</a>, which allows one to make fast and pretty beautiful visualisations.</p>
        <p>I hope you will all join the hangout.</p>
        <p>--- Toasty</p>
        ]]>
      </content:encoded>
      <pubDate>Fri, 27 Jan 2023 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>New Website with SvelteKit</title>
      <link>https://wegmueller.it/blog/hello-world/</link>
      <guid isPermaLink="false">https://wegmueller.it/blog/hello-world/</guid>
      <description>Read some details about the new site</description>
      <content:encoded>
        <![CDATA[<h1>New Website with SvelteKit</h1>
        <p>Hello everyone. So I recently had the desire to write a new blog post, since I now write with Helix, but Hugo did not like me.</p>
        <p>So I did what all time conscious people do, I redid my blog from scratch...</p>
        <p>As I finally want to get more into webdev, I decided to give one of the new server-based
        frameworks a try. I chose <a href="https://kit.svelte.dev/">SvelteKit</a> so I have the possibility to later add nice features, a portfolio, and other
        parts to the site. I have copied all my blog posts and formatted them with the new Svelte markdown extension <a href="https://mdsvex.com/">MDsvex</a>.
        The theme is still experimental, so feedback is welcome. But I already like that I can include Svelte components and render them like
        this:</p>
        <p><em>[Note: Interactive component would go here in the original Svelte version]</em></p>
        <p>So when I make a fancy code highlighter I can include it directly in my blog posts. I also plan to add a slides page
        to the site to display the talks I have given.</p>
        <p>So for now enjoy this little update and more to come soon :)</p>
        <p>--- Toasty</p>
        ]]>
      </content:encoded>
      <pubDate>Tue, 29 Nov 2022 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>Devlog #1</title>
      <link>https://wegmueller.it/blog/devlog-1/</link>
      <guid isPermaLink="false">https://wegmueller.it/blog/devlog-1/</guid>
      <description>First devlog</description>
      <content:encoded>
        <![CDATA[<h1>Devlog #1</h1>
        <h2>Preamble</h2>
        <p>Welcome to my first devlog. In the tech sector, public speaking has always been a big part of how we share knowledge and
        communicate. I personally have talked to people and given some talks, but I never really got started with blogging.
        There were multiple attempts, but all of them ended with me not liking the format or not wanting to spend the time
        needed for it to serve a good purpose. Recently I got more into indie game development and the practices of that community.
        One of them is that the game you make in your free time is the fun you have making it, so it's less about the final product
        and more about the process. The same is true for my open-source work. Many times I worked on a project for a long time
        just to realize I do not like the language (C++) or that in the end it would be better to start another project.
        And every time I do that without writing about the process, I bury the knowledge about the things I learnt in the code.
        That is useful neither for me nor for others. So I decided to just write up some things I find important about the work of the week.
        Without further ado, here is my first devlog. Leave me some comments on my social media or via mail. I hope you like it.</p>
        <h2>OpenIndiana</h2>
        <h3>Cloney</h3>
        <p>Starting with a bit of the boring but necessary. This week I made a change to Cloney, the OpenIndiana helper script that copies downloaded package sources to the build
        directory. To do that it uses symlinks, but several packages, like Go 1.18, use file-type detection during the build;
        assembler embedding is one function that was added specifically in 1.18. So the build fails,
        because the symlinks are not followed. The simple fix was to switch to hardlinks, but now some other tools require symlinks...
        So this week I will be introducing a <code>CLONEY_MODE</code> variable, so we can easily switch the build between symlinks, hardlinks, and
        recursive copy. For now we have Go 1.18 packaged and available; it is now the default when installing the golang
        meta-package.</p>
        <h3>(Automated) Installer for x86</h3>
        <p>In more interesting news, I have started work on the installer part of the system install and configuration utilities.
        All the utilities and supporting libraries can be found on <a href="https://github.com/Toasterson/illumos-installer">GitHub</a>.
        Design-wise I made a couple of decisions that can be useful later. All the main plumbing and instruction handling is
        built as libraries; we can use those to easily build tools to configure and install systems, or for other use cases
        we come up with. In any case, it is a comprehensive collection of configuration and setup tasks. I don't know yet how
        most people will want to use the automated installer, but that is decidable at a later stage. For now, the use case I think
        will be the most useful one is that people can freely define a step-by-step instruction file. This file, together with
        the templates of files to be rendered or copied into place, makes up an installation bundle. One can then use git or a webserver to
        store that install bundle and the tar archive of the image to be installed. I later want to support imgapi from SmartOS
        as storage as well, which is why I started with the <code>libimgapi</code> crate in the installer workspace. At the moment it can only
        parse the manifests returned by https://images.smartos.org, but that will be enough for a start.
        At the moment I liberally clone code from @jclulow's image builder, mostly because the steps he used to build the image
        also fit the installer use case. Installed images are, after all, also just images in the illumos IPS world.</p>
        <h2>Aurora OpenCloud</h2>
        <p>A friend of mine recently shared the Garage S3 server with me. Based on that, together with some services like
        Nextcloud and <a href="https://docs.vyos.io/en/equuleus/">VyOS</a>, I think I have the start of a capable cloud backplane and some
        initial services people like. ownCloud's OCIS is also nice to have, as it allows providers to offer services within their ecosystem.</p>
        <p>As this cloud software will be heavily based on the CLI, I had a basic idea of how it should be separated.
        The workflow I am aiming for is <code>configure -&gt; commit -&gt; deploy</code>. With that, one can easily manage configuration versions
        but also use GitOps-style workflows, as it's simply a commit without needing a review. I want to wrap that into three CLIs:</p>
        <ul>
        <li>cloudcfg (or cldcfg) will handle writing configurations and applying them to the cloud. The config files will
        live in the cloud and be modified via a gRPC API. I don't know yet how I want to handle the storage, but most things will have Postgres
        as the database backend for the currently active configuration. I want something simple, though: nothing that computes
        state or does more than the operator explicitly says.</li>
        <li>cloudadm (or cldadm) will be responsible for administering and maintaining resources, say rebooting a service that you are managing
        inside a tenant or switching off a machine.</li>
        <li>cloudsh (or cldsh) will be the main shell for reading data from the cloud and processing it. I want to base it on
        <a href="https://www.nushell.sh/">nushell</a>. This will allow admins to do fancy data analysis on their management VMs or local
        computers without falling back to text parsing. Monitored alerts and data can be printed to the data channel by
        plugins, and the built-in commands and filters from nushell do the heavy filtering, making development easier and giving
        the most flexibility one can get from any tool that outputs data.</li>
        </ul>
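        <p>The <code>configure -&gt; commit -&gt; deploy</code> split above can be sketched as a minimal versioned config store. This is purely illustrative (the planned backend is Postgres behind a gRPC API, and none of these class or method names exist yet), but it shows the core idea: commits are immutable versions, and deploy only ever applies a committed version:</p>

```python
# Illustrative sketch of the configure -> commit -> deploy workflow:
# every commit freezes the working copy into an immutable version,
# and deploy only applies a previously committed version.

class ConfigStore:
    def __init__(self):
        self._working = {}    # mutable working copy ("configure")
        self._versions = []   # immutable committed versions
        self.deployed = None  # index of the currently deployed version

    def configure(self, key, value):
        self._working[key] = value

    def commit(self):
        """Freeze the working copy; returns the new version number."""
        self._versions.append(dict(self._working))
        return len(self._versions) - 1

    def deploy(self, version):
        """Apply a committed version; nothing is computed implicitly."""
        if not 0 <= version < len(self._versions):
            raise ValueError(f"unknown version: {version}")
        self.deployed = version
        return self._versions[version]

store = ConfigStore()
store.configure("dns", "9.9.9.9")
v0 = store.commit()
store.configure("dns", "1.1.1.1")
v1 = store.commit()
print(store.deploy(v0))  # rolling back is just deploying an older version
```

        <p>Rollback falls out for free: deploying an older version number restores that configuration, without the store computing any state on its own.</p>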
        <h2>Indiedev</h2>
        <p>In personal news: as I am getting rid of all my maker tools, I have limited myself to two hobbies, working on illumos and
        indie game development. I will be doing the latter with two friends, so today we sat down for a virtual coffee and brainstormed
        what we would like to do.</p>
        <p>We ended up with the following synthesized list:</p>
        <ul>
        <li>Sci-fi Ancient Greece / Gothic styles (to start)</li>
        <li>Vermintide-like mass enemy action</li>
        <li>Buildup of Cities and Trade</li>
        <li>Different worlds accessible via Portals</li>
        <li>Snappy Battle system</li>
        <li>Fancy sound with a wild mix of genres (my friends want to play with their synthesizers and instruments and see what they can make)</li>
        <li>Modding support</li>
        </ul>
        <p>More on this will come in the next weeks as we see how these ideas develop. For now it gives me a very elegant excuse to
        have a look into <a href="https://godotengine.org/article/multiplayer-changes-godot-4-0-report-1">Godot 4's multiplayer support</a>.</p>
        <p>I hope you liked this devlog; leave me a note on the socials about what you would like to read about in more detail.</p>
        <p>So long
        -Toasty</p>
        ]]>
      </content:encoded>
      <pubDate>Sun, 05 Jun 2022 00:00:00 GMT</pubDate>
    </item>
    <item>
      <title>New Blog</title>
      <link>https://wegmueller.it/blog/new-blog/</link>
      <guid isPermaLink="false">https://wegmueller.it/blog/new-blog/</guid>
      <description>The original new-blog post, about a now-old Hugo blog</description>
      <content:encoded>
        <![CDATA[<h1>New Blog Site</h1>
        <p>Hello everyone! As I haven't written anything in quite a while,
        it became apparent that I either have to switch my site
        to something more useful or start writing again.</p>
        <p>As I am always doing something but never talking about it,
        I figured this is as good an opportunity as any to start writing.
        And I actually would like to do that.</p>
        <p>So to grab that opportunity and have some fun, I decided
        to restart my blog and make it basically my status-update site:
        writing about what software I am working on, why I am writing it,
        and what I think about things.</p>
        <p>For one, this gives some insight to others who do
        packaging and development work. On the other hand,
        I think about and design quite a few things I am passionate about and
        would like to share more of. But since writing themed articles takes
        a lot of time, I want this to be more freestyle.</p>
        <p>If you like this more freestyle kind of content, enjoy the reading :)
        And let me know on the socials what you think.</p>
        ]]>
      </content:encoded>
      <pubDate>Mon, 30 May 2022 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>