Brewblox docker install?

I feel like I’m missing something obvious, but I can’t seem to find anything in the documentation or using a forum search.

I currently have brewpi running on a Raspberry Pi (OSMC OS) using docker. I’d like to switch to brewblox in a reversible fashion (brewing tomorrow, so I want to be able to revert quickly if anything doesn’t work) by stopping the brewpi instance, then pulling/building/starting Brewblox.

However, while the advanced tutorial documentation page seems to suggest that everything is still docker-based, I’m not sure if I can just run the brewblox-ctl script, or if that only works on vanilla Raspbian Lite systems.

I’m guessing I can just extract the commands I need from the install script, but I wanted to check whether there is a recommended procedure for docker installs that aren’t on a fresh Raspbian Lite system.

Thanks,
Austin

Nevermind this question.

I realized that I a) don’t really use that OSMC box for anything other than brewpi anyway, and b) accidentally deleted the SSH private key for logging into it.

So… it’s going to be a clean install after all!

The process documented on brewblox.netlify.com is a docker-based install.

I’ll go ahead and answer the question anyway, for the benefit of other users with the same question.

You can have multiple installs on the Pi, but not on the Spark.

BrewBlox is almost fully containerized: the only exception is the brewblox-ctl management tool, which is installed using Pip, the Python package manager.

Configuration and database files are fully contained in the install directory.

From the perspective of the Pi, you can have Brewpi and BrewBlox simultaneously installed. They won’t conflict if they’re not running concurrently.
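For the reversible switch described above, stopping one compose project and starting the other is usually enough. A rough sketch, assuming brewpi lives in ~/brewpi-docker and BrewBlox is installed to ~/brewblox (both paths are just placeholder examples):

  # stop the running brewpi stack
  cd ~/brewpi-docker
  docker-compose down

  # start the BrewBlox stack
  cd ~/brewblox
  docker-compose up -d

  # to revert: "docker-compose down" in ~/brewblox,
  # then "docker-compose up -d" in ~/brewpi-docker again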

The Spark uses different firmware for Brewpi and BrewBlox, and can only have one installed at a time.

Edit: as long as your system is linux-based, and has Docker, docker-compose, and Python >= 3.5 installed, BrewBlox will probably work. It’s been successfully installed on a Synology NAS.
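A quick way to check whether a host meets those requirements (these are standard commands, nothing BrewBlox-specific):

  docker --version            # Docker runtime present?
  docker-compose --version    # docker-compose present?
  python3 --version           # should report 3.5 or newer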

I’m attempting to install BrewBlox on a QNAP NAS using Container Station. I created a container running Ubuntu Xenial with Docker. I got as far as the first-time setup of brewblox-ctl, but got this message after running brewblox-ctl setup:

ERROR: no matching manifest for linux/arm64/v8 in the manifest list entries.

The short answer is that we currently don’t support the arm64 architecture (we build amd64 and arm32 versions).

We can have a look at how much effort it would be to build arm64 versions, but this would likely require some manual edits on your part.

Well it was worth a try! Thanks.

Maybe I am not looking at the correct documents, but I can’t find any information on how to install the docker images without the script.

The reason this is relevant is that I would like to install Brewblox on an appliance (CPU-compatible with a Raspberry Pi) which gives me access to Portainer as control software, but not to a shell. So ideally, I would just need to know which image to pull from Docker Hub and which parameters to pass to it in Portainer.

Any hints on this?

The moving parts of a deployment are:

  • compose file for shared services
  • compose file for user services
  • .env file with deployment variables
  • volume containing SSL certificate for reverse proxy
  • volume for influx database
  • volume for couchdb database

If you don’t have access to docker-compose, you’ll have to manually create a docker network, and add your containers to it.
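A minimal sketch of that manual approach (the network name and the image placeholder here are just examples; the real image names are in the compose files):

  # create a shared network once
  docker network create brewblox

  # start each service on that network; use the service name
  # from the compose file as the container name, e.g. eventbus
  docker run -d --name eventbus --network brewblox <image-from-compose-file>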

If you’re familiar with python: here is where we create/copy all this in brewblox-ctl: https://github.com/BrewBlox/brewblox-ctl-lib/blob/develop/brewblox_ctl_lib/setup_command.py

If not, do let me know, and I’ll draw up a quick overview.

Thanks, I can use the Portainer interface to do most (if not all) things you would normally ask docker-compose to do on the command line, so this should not be an issue.

I will have a look and give it a go.

I have no problems regarding the volumes. I created the three volumes and could technically mount them at any given point in the file system of the container. Also, I can set environment variables without difficulty on the container setup page. But I have no clue how to work around the first two bullets. In fact, if I am not mistaken, I would need some kind of base image for the container (which would normally be pulled from Docker Hub). See the attached screenshot of the container creation screen (the advanced container settings are not all visible).


The brewblox-ctl-lib repository linked earlier contains the two config files. You can copy the container settings from there (image, command, ports, etc). Env values are also used there. You can resolve those yourself if you want, with the following defaults:

BREWBLOX_RELEASE=edge
BREWBLOX_PORT_MDNS=5000
BREWBLOX_PORT_HTTP=80
BREWBLOX_PORT_HTTPS=443

You will also need to set each container’s name to its service name (found in the compose files), and add all containers to a shared network. They need to be able to find each other by name (e.g. http://eventbus).

I think at that point the traefik container has enough information to correctly forward requests to containers. To confirm: https://host_address/history/api/doc should show the debug page for the history container endpoints.
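For example, something like this from another machine on the network should work (replace host_address with your host; -k skips certificate checks in case the proxy uses a self-signed certificate):

  curl -k https://host_address/history/api/doc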

Having no shell access is a pita.

Can you not enable SSH?

https://wiki.qnap.com/wiki/How_to_SSH_into_your_QNAP_device
Sorry, I confused your system with another poster’s.
Why can’t you have SSH on a Raspberry Pi-like system?

I can access the console itself via the Portainer pages. But in order for that to work, I first have to have a container up and running.

@Bob_Steers The ports you mention: are they TCP only, or also UDP?

I know, but that’s the console inside the container. We don’t really use that.
The application is split into multiple containers that are managed by a CLI tool on the host (brewblox-ctl) that starts, stops, and updates the containers, and runs commands inside them.

Doing all of that separately and manually for each container will be a pain to maintain, so being able to SSH into the host will make your life a lot easier.

The actual ports you have to publish are listed in the compose files.

The env values mentioning “port” that I listed are something you’ll have to manually replace in your config.

For example, the spark-one service in the docker-compose.yml file includes the command

--mdns-port=${BREWBLOX_PORT_MDNS}

This should be converted to

--mdns-port=5000

The traefik container is the only one with published ports: 80 and 443.
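Translated to plain docker run commands (which is roughly what Portainer’s container creation screen maps to), a sketch could look like this. The network name and the image placeholders are just examples; the real image names and full commands are in the compose files:

  # traefik: the only container with published ports
  docker run -d --name traefik --network brewblox \
    -p 80:80 -p 443:443 \
    <traefik-image-and-arguments-from-compose-file>

  # spark-one: no published ports, env values resolved to literals
  docker run -d --name spark-one --network brewblox \
    <spark-image-from-compose-file> \
    --mdns-port=5000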

Other than that, I do agree with @Elco: manually managing this is not something you do if you don’t have to. Why exactly can’t you access the command line on your managed device? Could you provide some more info on how it’s set up?

Thanks guys for the explanations. I think I get what would technically be necessary to make this work. But I agree that managing (and more importantly maintaining) this manually would not really be feasible. I might give it a try.

Generally speaking, I wonder why you still de facto insist on a Raspberry Pi for the BrewBlox software. It could run on just about any hardware, for example a NAS (which was already mentioned). Ideally, everything would just be in one image (and therefore one container) that could simply be pulled in toto from Docker Hub.

We mention (but don’t insist on) the Pi as primary deployment target to keep things simple.

We picked Docker because it enormously simplifies installing and managing an extensible system that by default uses 8 interdependent applications.
Pis are cheap, available pretty much anywhere, and have enough computing power to run multiple Docker containers.

There are currently two big restrictions on what platforms support BrewBlox:

  • We build for two processor architectures: amd64 (desktop linux), and arm32v7 (Pi 2 or later).
  • Docker on Mac/Windows is implemented inside a VM, and has trouble accessing USB devices.
    • WSL seems to change this, but is not yet widely available.

Do you have a linux-based machine with a processor architecture that’s either amd64 or arm32v7+? BrewBlox works out of the box.

If you answered the previous question with either “no”, or “I have a what now?”, then buying a Pi is probably the simplest and cheapest solution.

Running everything inside a single docker container solves some problems, but creates others. It adds overhead, does not remove the requirement of the host having a docker runtime, and needs additional configuration to access USB.

If you can’t access the terminal on your host, this may be a valid approach for you. Build a docker image with python / docker installed, give it access to the host docker socket, and enable SSHD.
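A rough sketch of that idea, assuming you build or pick an image that already has python3, docker, and an SSH server in it (the image name and the SSH port mapping are only examples):

  # mount the host's Docker socket so the management container
  # can control containers on the host, and expose SSH on port 2222
  docker run -d --name brewblox-mgmt \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 2222:22 \
    example/mgmt-image-with-sshd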