
Self-hosting S3

What this tutorial covers

This tutorial shows how to set up MinIO to work with Capgo.

This is not strictly required to run Capgo.

Setting up S3 allows bundles to be uploaded from the CLI.

Requirements

  1. Docker

Getting started

First, create a new directory.

Then create a folder named data inside it.

Then run the following command:

Terminal window
docker run \
  -p 9000:9000 \
  -p 9090:9090 \
  --user $(id -u):$(id -g) \
  --name minio1 \
  -v PATH_TO_DATA_FOLDER_CREATED_IN_PREVIOUS_STEP:/data \
  quay.io/minio/minio server /data --console-address ":9090"

If you ever close the console window running this container, you can start it again with:

Terminal window
docker start minio1

If you ever need to change the MinIO configuration, you can remove the container by running:

Terminal window
docker rm minio1

⚠️ This command does not remove MinIO's data.
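If you prefer Docker Compose, the `docker run` command above could be expressed roughly as follows. This is a sketch, not part of the Capgo docs; adjust the volume path and user ids for your setup.

```yaml
# Rough docker-compose equivalent of the `docker run` command above.
services:
  minio1:
    image: quay.io/minio/minio
    command: server /data --console-address ":9090"
    ports:
      - "9000:9000"   # S3 API
      - "9090:9090"   # web console
    volumes:
      - ./data:/data  # the data folder created earlier
    # user: "1000:1000"  # optional: set to the output of `id -u`:`id -g`
```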

Setting up edge functions

Now that we have an S3 server running, we need to configure the Capgo edge functions to use it.

To do that, create an env file named .env.local in capgo/supabase.

This file should look like this:

Terminal window
# Below is the actually important setup for S3
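As a sketch of what such a file might contain, the example below uses hypothetical variable names; check the Capgo edge-function source for the exact keys it reads. MinIO's default root credentials are minioadmin/minioadmin unless you override them.

```shell
# Hypothetical variable names for illustration only -- verify against
# the Capgo edge-function source before using.
S3_ENDPOINT=host.docker.internal:9000
S3_ACCESS_KEY_ID=minioadmin      # MinIO's default root user
S3_SECRET_ACCESS_KEY=minioadmin  # MinIO's default root password
S3_BUCKET=capgo
```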

The host host.docker.internal is a Docker-internal hostname that can only be reached from inside Docker; the code replaces it so that MinIO can also be reached from localhost.
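As an illustration of that substitution, here is a minimal sketch; the function name is hypothetical and not Capgo's actual code.

```typescript
// Hypothetical helper illustrating the substitution described above:
// inside Docker the S3 endpoint uses host.docker.internal; for access
// from the host machine it is swapped for localhost.
function externalizeEndpoint(endpoint: string): string {
  return endpoint.replace('host.docker.internal', 'localhost')
}

console.log(externalizeEndpoint('http://host.docker.internal:9000'))
// http://localhost:9000
```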

To run the edge functions with our new env file, use:

Terminal window
supabase functions serve --env-file ./supabase/.env.local

Setting up CLI to use S3

The CLI will not work with MinIO by default. The following change to capacitor.config.ts¹ is required.

import type { CapacitorConfig } from '@capacitor/cli';

const config: CapacitorConfig = {
  appId: '',
  appName: 'demoApp',
  webDir: 'dist',
  bundledWebRuntime: false,
  plugins: {
    CapacitorUpdater: {
      // Without localS3 the upload command will fail
      localS3: true,
    },
  },
};

export default config;


  1. The file is located in your app directory.