Toastys Rambles


This blog post documents how to run a Linux VM inside QEMU inside a KVM branded zone. While illumos can run almost any application these days, some more exotic runtimes like Mono have dropped Solaris and thus illumos support. To run applications based on these frameworks you will need either an lx branded zone or a Linux VM. As lx branded zones still have not made it upstream into the illumos sources (hint, @illumos-developers), I will use a VM to host some pre-made Docker containers.


We will need an OpenIndiana host with the following installed:

pkg install system/qemu/kvm system/zones/brand/kvm

Additionally you will need the ISO of your Linux distro of choice; mine is my custom Arch Linux ISO with ZFS pre-installed. Once I have that image finished to my liking I will blog about it too. I will use a zvol as the disk backend to leverage all ZFS features. File-based images are also available, although they add no additional features compared to zvols.

Zone creation

First let's create a zone for the VM, which will install from the ISO. In this step of the set-up we will create:

* a virtual NIC connected to the host's network interface (rge0)
* a ZFS block device (volume) to host the VM's disk
* the zone that will host our VM

### Variables
ZONE_NAME="arch" # Limited to 65 Characters, Alphanumeric
VNIC_NAME="${ZONE_NAME}0" # Limited to 16 Characters, Alphanumeric. Must end in a Number
PARENT_NIC="rge0" # The Host's physical Network interface
VM_IP_CIDR="10.0.0.10/24" # The VM's IP address in CIDR notation; adjust to your Network
ISO_PATH="/path/to/install.iso" # Path to the installer ISO; adjust to yours
ZFS_ZONES_PATH="rpool/zones" # ZFS Dataset under which all zones will be created
ZONES_PATH="$(zfs get -Hp -o value mountpoint ${ZFS_ZONES_PATH})" # Path in the VFS where all child Datasets will be located
VM_DISK_NAME="disk0" # Limited to Alphanumeric Characters
#### END Variables

dladm create-vnic -l ${PARENT_NIC} ${VNIC_NAME}
zfs create "${ZFS_ZONES_PATH}/${ZONE_NAME}"
zfs create -V 20G "${ZFS_ZONES_PATH}/${ZONE_NAME}/${VM_DISK_NAME}" # The VM's disk; size it to your needs
zonecfg -z ${ZONE_NAME} <<EOF
create -b
set brand=kvm
set zonepath=${ZONES_PATH}/${ZONE_NAME}
set ip-type=exclusive
add net
    set allowed-address=${VM_IP_CIDR}
    set physical=${VNIC_NAME}
end
add device
    set match=/dev/zvol/rdsk/${ZFS_ZONES_PATH}/${ZONE_NAME}/${VM_DISK_NAME}
end
add attr
    set name=bootdisk
    set type=string
    set value=${ZFS_ZONES_PATH}/${ZONE_NAME}/${VM_DISK_NAME}
end
add attr
    set name=vnc
    set type=string
    set value=on
end
add fs
    set dir=${ISO_PATH}
    set special=${ISO_PATH}
    set type=lofs
    add options ro
    add options nodevices
end
add attr
    set name=cdrom
    set type=string
    set value=${ISO_PATH}
end
commit
EOF

zoneadm -z ${ZONE_NAME} install
zoneadm -z ${ZONE_NAME} boot
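The naming limits from the variables section can be sanity-checked up front. This is a small sketch; the helper names are mine and the limits are the ones quoted in the comments above.

```shell
#!/bin/sh
# Hypothetical helpers: check the naming rules quoted above
# (zone name: alphanumeric; vnic name: alphanumeric, max 16
# characters, must end in a number).
valid_zone_name() {
  case "$1" in
    ""|*[!A-Za-z0-9]*) return 1 ;;   # empty or non-alphanumeric
  esac
  [ "${#1}" -le 65 ]                 # length limit from the text
}

valid_vnic_name() {
  case "$1" in
    ""|*[!A-Za-z0-9]*) return 1 ;;   # empty or non-alphanumeric
    *[0-9]) [ "${#1}" -le 16 ] ;;    # ends in a digit, max 16 chars
    *) return 1 ;;                   # does not end in a digit
  esac
}

valid_zone_name "arch" && valid_vnic_name "arch0" && echo "names OK"
```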

Now you have a VM booting from the ISO in a zone. To connect to the VM's serial console, use

zlogin -C ${ZONE_NAME}

VNC access

To access the VNC console you will need to forward the socket located under /tmp/vm.vnc inside the zone to wherever you want to access it. I am using the host's network interface so that I can access it via my local network/VPN. The socat utility is not provided by any package in OpenIndiana, but there is a simplified version bundled with the zone brand.

/usr/lib/brand/kvm/socat "${ZONES_PATH}/${ZONE_NAME}/root/tmp/vm.vnc"
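If you would rather reach the console over a plain TCP port, a full socat (not the brand's simplified one) can expose the UNIX socket as a regular VNC display. This is a sketch with assumed paths; the forwarding line itself is shown commented out since it blocks and serves connections.

```shell
#!/bin/sh
# Sketch, assuming the zone layout from above: expose the VM's VNC
# UNIX socket on TCP port 5900 (VNC display :0) so a normal VNC
# client can connect. Requires a full socat build.
ZONES_PATH="/rpool/zones"   # mountpoint of the zones dataset
ZONE_NAME="arch"
VNC_SOCK="${ZONES_PATH}/${ZONE_NAME}/root/tmp/vm.vnc"
echo "forwarding ${VNC_SOCK} -> tcp/5900"
# socat TCP-LISTEN:5900,reuseaddr,fork "UNIX-CONNECT:${VNC_SOCK}"
```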

And there you go. The VM will boot first from the hard disk and, if nothing is installed there, from the CD-ROM. You can also modify other settings like memory allocation. Documentation is thanks to the people behind the OmniOS distribution. Full attribute docs here

Somebody asked for help compiling traefik on illumos today, so I figured I'd write up a quick guide.

You will need the following packages:

* npm
* nodejs
* go 1.13+

I am using pkgsrc to be compatible with most illumos distros.

Critical: npm must be run as an unprivileged user and /opt/local/bin must be in PATH.

# Ensure npm works
export PATH=/usr/gnu/bin:$GOPATH/bin:$PATH:/opt/local/bin
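The two preconditions above can be checked in the shell before running anything. A sketch; the helper name is mine.

```shell
#!/bin/sh
# Sketch: verify the npm preconditions from above.
PATH=/usr/gnu/bin:$PATH:/opt/local/bin
export PATH

# is pkgsrc's bin directory actually on PATH?
have_pkgsrc_path() {
  case ":$PATH:" in
    *":/opt/local/bin:"*) return 0 ;;
    *) return 1 ;;
  esac
}

have_pkgsrc_path && echo "PATH includes /opt/local/bin"
# and run npm only as an unprivileged user:
#   [ "$(id -u)" -ne 0 ] && npm install
```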

# On Openindiana install go 1.13
pkg install golang-113

# Install required packages
pkgin in nodejs-10.16.3 npm

# Get the sources
git clone
cd traefik
git checkout v2.0.5

# based on the Dockerfiles get the module dependencies
go mod download

# Compile the WebUI
# note that I had to do this on a Linux host as node-sass did not want to compile for me
pushd webui
npm install
npm run build
mv dist/pwa/* ../static/
popd

# Here you will need to patch docker's client code a bit for traefik to build
# add "solaris illumos" to $GOPATH/pkg/mod/
# see
# a pull request has been opened for this with the Docker folks

GO111MODULE=off go get


And that gets you a working traefik binary.

EDIT: thanks to Wonko on IRC we now have a patch that takes Docker support out of traefik as well.

Works with 2.1.3, untested with any other version:

diff --git a/pkg/config/static/static_config.go b/pkg/config/static/static_config.go
index af3eb8e7..0c5cc342 100644
--- a/pkg/config/static/static_config.go
+++ b/pkg/config/static/static_config.go
@@ -10,7 +10,7 @@ import (
        acmeprovider ""
-       ""
+       // ""
@@ -155,7 +155,7 @@ func (t *Tracing) SetDefaults() {
 // Providers contains providers configuration
 type Providers struct {
        ProvidersThrottleDuration types.Duration          `description:"Backends throttle duration: minimum duration between 2 events from providers before applying a new configuration. It avoids unnecessary reloads if multiples events are sent in a short amount of time." json:"providersThrottleDuration,omitempty" toml:"providersThrottleDuration,omitempty" yaml:"providersThrottleDuration,omitempty" export:"true"`
-       Docker                    *docker.Provider        `description:"Enable Docker backend with default settings." json:"docker,omitempty" toml:"docker,omitempty" yaml:"docker,omitempty" export:"true" label:"allowEmpty"`
+       // Docker                    *docker.Provider        `description:"Enable Docker backend with default settings." json:"docker,omitempty" toml:"docker,omitempty" yaml:"docker,omitempty" export:"true" label:"allowEmpty"`
        File                      *file.Provider          `description:"Enable File backend with default settings." json:"file,omitempty" toml:"file,omitempty" yaml:"file,omitempty" export:"true"`
        Marathon                  *marathon.Provider      `description:"Enable Marathon backend with default settings." json:"marathon,omitempty" toml:"marathon,omitempty" yaml:"marathon,omitempty" export:"true" label:"allowEmpty"`
        KubernetesIngress         *ingress.Provider       `description:"Enable Kubernetes backend with default settings." json:"kubernetesIngress,omitempty" toml:"kubernetesIngress,omitempty" yaml:"kubernetesIngress,omitempty" export:"true" label:"allowEmpty"`
@@ -187,11 +187,11 @@ func (c *Configuration) SetEffectiveConfiguration() {
-       if c.Providers.Docker != nil {
-               if c.Providers.Docker.SwarmModeRefreshSeconds <= 0 {
-                       c.Providers.Docker.SwarmModeRefreshSeconds = types.Duration(15 * time.Second)
-               }
-       }
+//     if c.Providers.Docker != nil {
+//             if c.Providers.Docker.SwarmModeRefreshSeconds <= 0 {
+//                     c.Providers.Docker.SwarmModeRefreshSeconds = types.Duration(15 * time.Second)
+//             }
+//     }
        if c.Providers.Rancher != nil {
                if c.Providers.Rancher.RefreshSeconds <= 0 {
diff --git a/pkg/provider/aggregator/aggregator.go b/pkg/provider/aggregator/aggregator.go
index 7bcdee70..6e7c796f 100644
--- a/pkg/provider/aggregator/aggregator.go
+++ b/pkg/provider/aggregator/aggregator.go
@@ -25,9 +25,9 @@ func NewProviderAggregator(conf static.Providers) ProviderAggregator {
-       if conf.Docker != nil {
-               p.quietAddProvider(conf.Docker)
-       }
+//     if conf.Docker != nil {
+//             p.quietAddProvider(conf.Docker)
+//     }
        if conf.Marathon != nil {

In order to get me to write more and to simplify my blogging experience, I have a new Blog.

This blog is now powered by WriteFreely.

Follow me on the Fediverse for updates.

When one of your servers has a ton of storage and the others have all the RAM, it is time to share that storage over the network.

NFS is the first choice in most cases, but as a backing store for zones it has some drawbacks compared to ZFS. Unfortunately ZFS is not network-distributed like Ceph. But no problem: ZFS is just a filesystem and volume manager, and it can sit on top of any block storage. Thankfully, exporting block storage is easy in illumos with the SCSI Target Mode Framework (STMF). With it you can export any block storage, including of course our beloved ZFS volumes, over the network. STMF also supports iSCSI. As iSCSI on OpenIndiana lacks guides, I figured I would write up this post to guide people through the process of setting up STMF with iSCSI.


We will have two nodes: a target (host/server) with all the storage we could need and an initiator (client) where we want to run the zones.

Target Setup

Step 1 Create a LUN (Exported Disk)

First we need block storage to export. Create a ZFS volume for that. Please note that we have to sync block sizes between stmf and ZFS. ZFS volumes default to 8K for performance, but stmf can only handle up to 4K, so we need to specify -o volblocksize=4K manually here. Otherwise you will get poor performance. I am planning on putting zones on the LUN, so compression and deduplication here will count for the whole pool on the consumer/initiator side. Compression and dedup are up to you, however. Use what you need. Only volblocksize is important.

zfs create -V 1000G -o volblocksize=4K -o compression=lz4 -o dedup=on rpool/zonelun0
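The block-size constraint can be checked up front. In this sketch the 4K upper limit for stmf comes from the text; the 512-byte lower bound and the power-of-two requirement are my assumptions based on standard zvol rules.

```shell
#!/bin/sh
# Sketch: validate a candidate volblocksize (in bytes) before
# creating the volume: power of two, between 512 and 4096.
blk_ok() {
  b=$1
  [ "$b" -ge 512 ] && [ "$b" -le 4096 ] && [ $(( b & (b - 1) )) -eq 0 ]
}
blk_ok 4096 && echo "volblocksize 4096 OK for stmf"
```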

This will create a volume of one terabyte on the root pool with the name zonelun0. Now we need to let stmf know about the block storage. Note the blk argument: it must be equal to the volblocksize of the ZFS volume.

stmfadm create-lu -p blk=4096 /dev/zvol/dsk/rpool/zonelun0

Now we can have a look at what stmf sees with list-lu. We should see something like the following.

~# stmfadm list-lu -v
LU Name: 600144F0AD85453600005B9188B40001
    Operational Status: Offline
    Provider Name     : sbd
    Alias             : /dev/zvol/dsk/rpool/zonelun0
    View Entry Count  : 0
    Data File         : /dev/zvol/dsk/rpool/zonelun0
    Meta File         : not set
    Size              : 1073741824000
    Block Size        : 512
    Management URL    : not set
    Vendor ID         : SUN     
    Product ID        : COMSTAR         
    Serial Num        : not set
    Write Protect     : Disabled
    Writeback Cache   : Disabled
    Access State      : Active

Note that our new LUN has a name of 600144F0AD85453600005B9188B40001. This will be used later on.
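As a quick cross-check, the Size field in the output above is in bytes, and 1000G (GiB) works out exactly to the value shown:

```shell
#!/bin/sh
# 1000 GiB in bytes should match the Size line from list-lu above.
echo $(( 1000 * 1024 * 1024 * 1024 ))   # 1073741824000
```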

Step 2 Make LUN visible (View)

Now that we have a LUN, we need to define who can see it. As we have no security needs at the moment, we make it visible to everybody.

stmfadm add-view 600144F0AD85453600005B9188B40001 #Note the LUN name from step 1

~# stmfadm list-view -l 600144F0AD85453600005B9188B40001
View Entry: 0
    Host group   : All
    Target group : All
    LUN          : 0

Step 3 Create the Target

Now that we have the basic LUN set up, we can create the iSCSI target itself.

First check if STMF service is running:

~# svcs -l stmf
fmri         svc:/system/stmf:default
name         STMF
enabled      true
state        online
next_state   none
state_time   September  7, 2018 at 12:09:11 AM CEST
logfile      /var/svc/log/system-stmf:default.log
restarter    svc:/system/svc/restarter:default
dependency   require_all/none svc:/system/filesystem/local:default (online)

If not, enable it:

svcadm enable stmf

Install and enable the iSCSI Target Service

pkg install network/iscsi/target

svcadm enable -r svc:/network/iscsi/target:default

Create a new target:

itadm create-target
~# itadm list-target -v
TARGET NAME                                                  STATE    SESSIONS
                                                             online   0
        alias:                  -
        auth:                   none (defaults)
        targetchapuser:         -
        targetchapsecret:       unset
        tpg-tags:               default

Create a target portal group (TPG) and an stmf target group:

itadm create-tpg iscsi01 #Note use your ip here

stmfadm create-tg iscsi-tg01

svcadm disable stmf
stmfadm add-tg-member -g iscsi-tg01
svcadm enable stmf

Initiator Setup

Step 1

Now that we have the target, we just need to log in to it and get the LUN mapped.

iscsiadm add discovery-address

iscsiadm modify discovery -t enable

Now we have a list of targets exposed by our server:

~# iscsiadm list target
        Alias: -
        TPGT: 1
        ISID: 4000002a0000
        Connections: 1

And we see our LUN

~# iscsiadm list target -S -v
        Alias: -
        TPGT: 1
        ISID: 4000002a0000
        Connections: 1
                CID: 0
                  IP address (Local):
                  IP address (Peer):
                  Discovery Method: SendTargets 
                  Login Parameters (Negotiated):
                        Data Sequence In Order: yes
                        Data PDU In Order: yes
                        Default Time To Retain: 20
                        Default Time To Wait: 2
                        Error Recovery Level: 0
                        First Burst Length: 65536
                        Immediate Data: yes
                        Initial Ready To Transfer (R2T): yes
                        Max Burst Length: 262144
                        Max Outstanding R2T: 1
                        Max Receive Data Segment Length: 32768
                        Max Connections: 1
                        Header Digest: NONE
                        Data Digest: NONE

        LUN: 0
             Vendor:  SUN     
             Product: COMSTAR         
             OS Device Name: /dev/rdsk/c0t600144F0AD85453600005B9188B40001d0s2

Step 2 Format and Profit

zpool create zonespool c0t600144F0AD85453600005B9188B40001d0
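The device name passed to zpool create is not arbitrary: it embeds the LU GUID that stmf assigned on the target in Step 1 (the c0 controller number can differ on your system). A small sketch:

```shell
#!/bin/sh
# Derive the initiator-side disk name from the LU name shown by
# 'stmfadm list-lu' on the target: c<N>t<GUID>d0.
LU_NAME="600144F0AD85453600005B9188B40001"
DISK="c0t${LU_NAME}d0"
echo "$DISK"   # c0t600144F0AD85453600005B9188B40001d0
```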

And now have fun with your iSCSI-backed ZFS pool.

This is an initial post to see if the theme renders properly and to say that I have started blogging.