A12 Directory

This page summarises the completed tasks and the current round of tasks within the NLnet grant for the A12 Directory improvements.

The overall purpose is to get the A12 protocol, reference tooling and implementation to a point where the visions of a. "Many Devices, One Desktop" and b. "Your Desktop, reaching out" become a practical reality. This means that a set of devices, ranging from full-blown desktops, to home servers, to single-board computers, should be able to work in unison, sharing load and individual capabilities. With that, it should also open up for extended collaboration, letting you compartmentalise work and invite others to participate in a secure and accessible manner.

Milestone 1

Status: Completed

Linking local and remote development

Before being able to let proof-of-concept applications drive feature selection and design details, we needed some basic building blocks. First was to make sure that the script collection running on a user facing device (appl) could communicate with others within your device network, particularly the home server used for coordination (directory).

This was implemented by extending the namespace facility of the 'open_nonblock' function call. The namespace facility was previously configured through the arcan_db tool:

arcan_db add_appl_kv arcan ns_myns Home:rw:/home/someuser

The scripts could then call list_namespaces and see 'Home' as the user presentable label and 'myns' as the reference for the namespace, which could then be used with functions like open_nonblock("myns:/something"). The extension made here is a reserved 'a12' namespace, with a '.' prefix for special files. To list files exposed by a controller (a directory server side set of scripts with a name matching the local appl), one can open_nonblock("a12:/.index"), read out a listing and then use further calls to open_nonblock("a12:/somefileorhash") to stream a file from the network of directory servers.
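
A minimal sketch of the appl-side pattern, assuming the line based :read() form of the nonblocking I/O table and ignoring that data may arrive after the open rather than immediately:

-- enumerate what the controller exposes
local index = open_nonblock("a12:/.index")
local line = index:read()
while line do
    print("controller exposes:", line)
    line = index:read()
end

-- stream one of the listed files from the directory network
local item = open_nonblock("a12:/somefileorhash")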

There are also user-private files, available by using the reference handle from net_open("@stdin"):

a = net_open("@stdin")
nbio = open_nonblock(a, ".index")

This would retrieve the index for the private store tied to the authentication key used when running from the directory, e.g.

arcan-net --put-file mydirectory@ some.file
arcan-net mydirectory@ myappl

Inside the script of myappl, using net_open as above: open_nonblock(a, "some.file")

Modify protocol and reference implementation to support linking directory servers together

This was implemented as an extension to the 'config.lua' script one would use to configure the directory server, e.g. arcan-net -c config.lua. We added an entrypoint called ready, invoked when the server has finished configuring, along with a link_directory function.

function ready()
    link_directory("dd", function(source, status) end)
end

Where 'dd' was previously defined in the keystore (e.g. arcan-net dd arcan.divergent-desktop.org). This requires the remote end to permit someone to make a link to it: config.permissions.link = 'sometag'. The handler function provides feedback on link status (if a connection couldn't be made, or was dropped) so that the configuration script can react accordingly.
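
A sketch of a handler that reacts to the link being lost; the status fields used here are assumptions rather than the exact handler API:

function ready()
    link_directory("dd",
        function(source, status)
            -- hypothetical status fields: react when the link fails or drops
            if status.kind == "failed" or status.kind == "terminated" then
                print("link to dd lost")
            end
        end
    )
end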

We ended up with two types of links, a referential one and a unified one. The referential form requires less permission as it simply lets one directory server route traffic to another:

arcan-net myserv@ dd/someappl

This would connect through 'myserv', access the 'dd' link and run 'someappl' from there. The unified form doesn't expose that there is a link at all; all file access and appl messaging is handled transparently.

Configuration and tool modification to permit or revoke access for specific directory server links

After several failed prototypes, we settled on exposing admin functions via another entrypoint in the config script, admin_command, so that the server administrator has one access interface for all current and future administration features.

config.lua:
function admin_command(client, command)
    -- expect commands on the form "link <tag>", e.g. "link dd"
    if string.sub(command, 1, 5) == "link " then
        link_directory(string.sub(command, 6), some_handler)
        client:write("ok\n")
    end
end

If the authentication key has config.permissions.admin, the arcan-net tool can be used to route commands there:

arcan-net --admin-ctrl myserv@

Input on stdin is then routed to the admin_command handler and any written results are sent back to stdout. This can be used to modify permissions, or to assign tags to or remove tags from an active client (available via the register and register_unknown entrypoints).
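
As an illustration only, and with the register entrypoint signature being an assumption, the config script could track active clients and let an admin query them through the same interface:

local clients = {}

function register(client)
    -- signature is an assumption: remember authenticated clients
    table.insert(clients, client)
end

function admin_command(client, command)
    if command == "count" then
        client:write(tostring(#clients) .. "\n")
    end
end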

Milestone 2

Status: Completed

Extend protocol and reference implementation with support for signed file and state store

The 'REKEY' facility in the protocol, used for stepping the ratchet that provides forward secrecy, got a mode where the client can assign a signing key identity to complement the authentication one. This is done by using the signing key to sign a challenge that the server provided after authentication.

The tooling side of this looks simple:

arcan-net --sign-tag sometag --push-appl myappl mydir@

The sign-tag argument will first complete the REKEY part to prove ownership of the key, and subsequent transfer operations will then apply a signature to the header. For the --push-appl form above, this will extend the manifest (version, permissions, ...) for the appl with the public part of the key, a signature of the header and a signature of the data block.

When running an appl, arcan-net mydir@ myappl will then verify that the signatures match the key, and refuse to run if they don't.

Add debugging controls for synchronous stepping local application execution with server side processing

After trying, and failing, to implement the beefy 'debug adapter protocol' spec (which we have a UI and client implementation for in Cat9), we decided to modify the monitor (src/arcan_monitor.c) interface to the main engine to implement a simpler protocol, along with an implementation of it in Cat9:

builtin dev
debug launch arcan someappl

(or attaching via an established socket, debug attach arcan /path/to/socket). The same interface was then added to the server-side controller (protocol-wise it's a datastream via the developer permission '.debug' resource along with some added VM / process control to communicate across the sandbox).

The protocol covers all the expected 'stepnext', 'stepinstruction', 'stepcall', 'stepend', 'locals', 'breakpoint', 'eval', 'dumpkeys', 'backtrace', 'source' and so on (src/a12/net/dir_lua_support).

The arcan-net tool can then be invoked as arcan-net --debug-appl mydir@ myappl, and stdio is rerouted across this channel.

Server side application support for launching dynamic sources

Both the server config script and the appl controller scripts got a launch_target call for running a database defined target:

arcan_db add_target mybin BIN /usr/bin/Xarcan -redirect -exec chromium

Something like launch_target("somename", "mybin") would generate ephemeral keys and mark them as temporarily trusted for acting as a data source, then launch the binary over arcan-net as a loopback connection.

Then, depending on whether "somename" is a user presentable name or a reference to an existing connection, it will either register a publicly available source, or one scoped to be visible and accessible only by a specific user. In the latter case the client will also be notified that there is a dynamic source available for immediate sinking.
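
A brief sketch of the two cases from a controller script, with illustrative names:

-- a user presentable name registers a publicly available source
launch_target("shared-browser", "mybin")

-- passing a reference to an existing client connection instead scopes the
-- source so that only that user is notified and can sink it
-- launch_target(client_ref, "mybin")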

We also added a server config option to 'host' Arcan appls themselves. This requires the server to have the arcan_lwa executable (a simplified form of the engine that can't control displays). If the server grants the permission, a client can:

arcan-net dd@ "|myappl"

Instead of downloading and running myappl locally, the server will spin up an instance of arcan_lwa running myappl, with access to the user's private state store. This connects as a new restricted source directed toward the connection that made the request, and arcan-net will source it. This lets the simplified 'smash' viewer stream any arcan appl without having the rest of the stack available.

Server side support for triaging and collecting client side crash dumps and snapshots

When a client running an appl runs into a failed exit (script crash), arcan-net collects information, packages it and sends it to a pre-reserved server-side private store slot.

This has been combined with a flush_report function call available to the admin script, as well as a '.report' file available to a developer or controller script. The report is generated dynamically by combining all user-submitted reports together with log reports from the server VM.
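
Combined with the admin_command entrypoint from the first milestone, draining pending reports could look like this sketch, where the argument-free form of flush_report is an assumption:

function admin_command(client, command)
    if command == "flush" then
        flush_report()
        client:write("reports flushed\n")
    end
end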

Milestone 3

Status: Nearing completion

Support source-sink crash/disconnect resumption

This took a substantial refactor of how arcan-net hosts sources, e.g.

arcan-net -l 6680 -- /some/arcan/shmif/client

It now splits into a separate arcan-net-session binary. This tracks the connection status for a hosted source, and if the source is alive when a connection is terminated, it is kept alive in a dormant state but paired to the authentication key used by the sink.

When a new connection arrives, the authentication key is checked against the set of pending sources, and if there is a match, the source is told to reset to a 'wm tracking lost' state (renegotiate colours, subwindows and so on).

Allowing multiple sources to access a single sink (broadcast)

This extends on the 'crash/disconnect resumption' feature by adding a --cast argument:

arcan-net --cast -l 6680 -- /some/arcan/shmif/client

The first client that connects gets the /some/arcan/shmif/client source to sink and 'drive' the connection. Internally this spins up a framecache that tracks video buffer encoding state. When new clients connect, they are routed through this framecache (which also instructs the primary connection to try and quickly get to new keyframes to reduce initial delay).

API for server-side application key/value store access

This has been implemented for the config script scope and for the controller script scope. The latter was more complicated as all calls have to go across the sandbox barrier, since the controller doesn't have file-system access.

The functions themselves look and behave like the local Arcan appl match_keys, store_key and get_key. The big change on the controller script side is that the lookups are asynchronous. This is necessary due to the sandbox, and because the keys themselves may be distributed across a network of linked directories.
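
A sketch of the asynchronous controller-side form, where delivering the result through a callback is an assumption about the exact signature:

store_key("highscore", "1000")

-- the value arrives later through the callback rather than as a return value
get_key("highscore",
    function(value)
        if value then
            print("highscore:", value)
        end
    end
)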

API for application driven resource indexing

Normally the open_nonblock(ref, ".index") call routes through a glob as per the first milestone. This request can now be intercepted by the controller script and transparently remapped to a server-defined name. The plan is to combine this with pluggable services for routing/caching through other means, e.g. IPFS, torrent or regular https.

If the controller script implements the _index hook:

function myappl_index(client, nbio)
end

The actual stream returned is now entirely controlled by nbio:write calls. Other get and put requests are handled similarly, so the scripts can run them through higher level description generation, such as an LLM creating a textual representation of an image or OCR retrieval of text.
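
A minimal sketch of such a hook, assuming one listing entry per write and glossing over the actual on-wire index format:

function myappl_index(client, nbio)
    -- hand back a synthetic listing instead of the default glob result
    nbio:write("generated-report\n")
    nbio:write("cached-thumbnail\n")
    nbio:close()
end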

Search and retrieve resources based on description, hash and signature

(Ongoing, prototyping)

API for publishing / unpublishing / mirroring a resource across the directory network

(Ongoing, prototyping)