fileloft

Resumable uploads for Rust. Compose it into your server, or run it standalone.

Rust · tus 1.0.0 · Axum · Actix · Rocket · MIT

Why fileloft

fileloft is a Rust implementation of the tus resumable upload protocol. It is designed as a small set of composable crates that you can drop into an existing Rust HTTP server, or run as a standalone binary in Docker or any other container runtime.

Framework agnostic

A protocol core with no transport assumptions, plus thin adapters for Axum, Actix Web, and Rocket. Bring your own router.

Pluggable storage

A DataStore trait with first-party filesystem, S3, GCS, and Azure Blob Storage backends. Each backend ships as its own crate and Docker image variant.

Standalone or embedded

Use it as a library inside a custom Rust server, or run the prebuilt binary in Docker when you just need a tus endpoint.

Safe by default

#![forbid(unsafe_code)] across the workspace. Conservative defaults for limits, locking, and checksums.

Use as a library

fileloft is published as a workspace of small crates. The protocol logic lives in fileloft-core; framework adapters and storage backends are separate crates so you only pull in what you use.

Add the dependencies

[dependencies]
fileloft-core         = "0.1"
fileloft-store-fs     = "0.1"
fileloft-axum         = "0.1"   # or fileloft-actix / fileloft-rocket
tokio                 = { version = "1", features = ["full"] }

Mount it on your router

The example below uses Axum, but the shape is the same for the other adapters — construct a TusHandler from a store, optional locker, and Config, then mount the adapter’s router under whatever path you like.

use std::sync::Arc;
use fileloft_core::{Config, TusHandler};
use fileloft_store_fs::{FileLocker, FileStore};
use fileloft_axum::tus_router;

#[tokio::main]
async fn main() {
    // Store upload data and lock files under a common root directory.
    let root = "/var/lib/fileloft";
    let store = FileStore::new(root);
    let locker = FileLocker::new(format!("{root}/locks"));
    let handler = Arc::new(TusHandler::new(store, Some(locker), Config::default()));

    // Mount the tus endpoints under /files on an ordinary Axum router.
    let app = axum::Router::new().nest("/files", tus_router(handler));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080")
        .await
        .expect("failed to bind 0.0.0.0:8080");
    axum::serve(listener, app).await.expect("server error");
}

What you get

  • Full tus 1.0.0 core protocol: creation, expiration, checksum, termination, and concatenation extensions are opt-in via Config.
  • A DataStore trait you can implement to back uploads with your own storage.
  • A HookSender for observing upload lifecycle events without coupling to a specific message bus.

See the crate docs on docs.rs for the full API.

Run as a standalone binary

If you do not need to embed fileloft in an existing Rust service, you can run the prebuilt binary as a self-contained tus server. A separate Docker image is published for each storage backend.

Image variants

Tag                          Backend                                   Base
latest, fs, X.Y.Z, X.Y.Z-fs  Local filesystem                          debian:trixie-slim
s3, X.Y.Z-s3                 Amazon S3 / S3-compatible (MinIO, R2, …)  debian:trixie-slim
gcs, X.Y.Z-gcs               Google Cloud Storage                      debian:trixie-slim
azure, X.Y.Z-azure           Azure Blob Storage                        debian:trixie-slim

All images are available from docker.io/soundsystems/fileloft.

Common configuration

Every variant reads these environment variables:

Variable            Default           Description
FILELOFT_BIND       0.0.0.0:8080      Address the HTTP server binds to.
FILELOFT_MAX_SIZE   unset (no limit)  Maximum allowed upload size, in bytes.
FILELOFT_BASE_PATH  /files/           URL path the tus endpoints are mounted under.
RUST_LOG            info              Tracing filter (e.g. debug, fileloft_server=trace).
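Because FILELOFT_MAX_SIZE takes a raw byte count, it is worth computing the limit explicitly rather than guessing at zeros. A small sketch for a 5 GiB cap (the 5 GiB figure is just an example):

```shell
# FILELOFT_MAX_SIZE expects a plain byte count; compute 5 GiB explicitly.
max_size=$((5 * 1024 * 1024 * 1024))
echo "FILELOFT_MAX_SIZE=$max_size"   # FILELOFT_MAX_SIZE=5368709120
```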

Production hardening

The standalone server does not include authentication or authorization. Put it behind an auth gateway, signed URL layer, or trusted private network boundary before exposing it to untrusted clients.

For production deployments:

  • Set FILELOFT_MAX_SIZE to the largest upload you intend to allow.
  • Set FILELOFT_CORS_ALLOW_ORIGIN to an explicit origin for browser clients.
  • Only enable FILELOFT_BEHIND_PROXY behind a trusted proxy that strips client-supplied forwarded headers; prefer FILELOFT_BASE_URL for public URLs.
  • Terminate TLS at fileloft or at a trusted reverse proxy.
  • Disable termination or downloads when clients do not need those capabilities.
  • Object storage (S3, GCS, Azure): the server only coordinates a given upload inside one process. If you run multiple replicas, use a shared per-upload lock (or similar), or sticky routing on the load balancer so all requests for the same upload go to the same instance. Otherwise, two nodes could write the same upload at once and corrupt the object.

Filesystem (default)

docker run --rm \
  -p 8080:8080 \
  -v fileloft-data:/var/lib/fileloft \
  docker.io/soundsystems/fileloft:latest

The server stores uploads under /var/lib/fileloft. Mount a volume or host path there to persist data across restarts.

Variable           Default            Description
FILELOFT_DATA_DIR  /var/lib/fileloft  Directory used by the filesystem store.

Amazon S3

docker run --rm \
  -p 8080:8080 \
  -e FILELOFT_S3_BUCKET=my-uploads \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_REGION=us-east-1 \
  docker.io/soundsystems/fileloft:s3

Authentication uses the standard AWS SDK credential chain: environment variables, ~/.aws/credentials, IMDS, web identity, etc.

Variable                      Default          Description
FILELOFT_S3_BUCKET            (required)       S3 bucket name.
FILELOFT_S3_PREFIX            empty            Object key prefix (e.g. uploads/).
FILELOFT_S3_ENDPOINT          unset            Custom endpoint for S3-compatible services (MinIO, R2).
FILELOFT_S3_REGION            from SDK config  Override the signing region.
FILELOFT_S3_FORCE_PATH_STYLE  false            Set to true for path-style addressing (often needed for MinIO).

Standard AWS SDK variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION, AWS_PROFILE, etc.) are also respected.
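For S3-compatible services, FILELOFT_S3_ENDPOINT and FILELOFT_S3_FORCE_PATH_STYLE are usually the two settings that matter. A hedged sketch against a local MinIO instance — the bucket name, credentials, and `minio:9000` address are placeholder values, not defaults:

```shell
# Point the s3 variant at a local MinIO instance instead of AWS.
# Bucket, credentials, and endpoint below are example values only.
docker run --rm \
  -p 8080:8080 \
  -e FILELOFT_S3_BUCKET=my-uploads \
  -e FILELOFT_S3_ENDPOINT=http://minio:9000 \
  -e FILELOFT_S3_FORCE_PATH_STYLE=true \
  -e AWS_ACCESS_KEY_ID=minioadmin \
  -e AWS_SECRET_ACCESS_KEY=minioadmin \
  -e AWS_REGION=us-east-1 \
  docker.io/soundsystems/fileloft:s3
```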


Google Cloud Storage

docker run --rm \
  -p 8080:8080 \
  -e FILELOFT_GCS_BUCKET=my-uploads \
  -v /path/to/keyfile.json:/credentials.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/credentials.json \
  docker.io/soundsystems/fileloft:gcs

Authentication uses Application Default Credentials. On GCE/GKE the attached service account is used automatically. Outside Google Cloud, mount a service account key file and set GOOGLE_APPLICATION_CREDENTIALS.

Variable             Default     Description
FILELOFT_GCS_BUCKET  (required)  GCS bucket name.
FILELOFT_GCS_PREFIX  empty       Object name prefix (e.g. uploads/).

Azure Blob Storage

docker run --rm \
  -p 8080:8080 \
  -e FILELOFT_AZURE_CONTAINER=my-uploads \
  -e AZURE_STORAGE_CONNECTION_STRING \
  docker.io/soundsystems/fileloft:azure

The Azure image supports two authentication modes:

  1. Connection string — set FILELOFT_AZURE_CONNECTION_STRING or AZURE_STORAGE_CONNECTION_STRING.
  2. Default credential — set FILELOFT_AZURE_ACCOUNT (or AZURE_STORAGE_ACCOUNT) and let the Azure Identity SDK resolve credentials (managed identity, Azure CLI, environment variables).

Variable                          Default     Description
FILELOFT_AZURE_CONTAINER          (required)  Blob container name.
FILELOFT_AZURE_PREFIX             empty       Blob name prefix (e.g. uploads/).
FILELOFT_AZURE_CONNECTION_STRING  unset       Azure Storage connection string (takes priority).
FILELOFT_AZURE_ACCOUNT            unset       Storage account name (used with default credentials).

Building from source

The repository includes a multi-stage Dockerfile. Select a backend with the BACKEND build arg:

docker build --build-arg BACKEND=s3 -t fileloft:s3 .

Or use the Makefile targets:

make docker-build-fs       # builds :latest and :fs
make docker-build-s3       # builds :s3
make docker-build-gcs      # builds :gcs
make docker-build-azure    # builds :azure
make docker-build-all      # builds all variants

Verifying it works

Any tus 1.0.0 client will work. For a quick smoke test:

curl -i -X POST http://localhost:8080/files \
  -H "Tus-Resumable: 1.0.0" \
  -H "Upload-Length: 11" \
  -H "Upload-Metadata: filename aGVsbG8udHh0"

A 201 Created with a Location header means the server is healthy and ready to accept upload chunks.
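The headers in the smoke test can be derived rather than hand-written: Upload-Length is the body's size in bytes, and Upload-Metadata values are base64-encoded per the tus spec. A small sketch using the same example values (`hello world`, `hello.txt`):

```shell
body='hello world'
name='hello.txt'

# Upload-Length is the total upload size in bytes.
upload_length=$(printf '%s' "$body" | wc -c | tr -d '[:space:]')

# Upload-Metadata pairs are "key base64(value)"; keys stay plain ASCII.
encoded_name=$(printf '%s' "$name" | base64)

echo "Upload-Length: $upload_length"           # Upload-Length: 11
echo "Upload-Metadata: filename $encoded_name" # Upload-Metadata: filename aGVsbG8udHh0
```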