External Plugin basics

I have a BBQ controller with a terrible interface and an associated phone app. I have worked out how to get the data from it, and want to feed it into Brewblox purely for UI/graphing and metrics tracking.

Is https://github.com/BrewBlox/brewblox-boilerplate still the right place to start?

I will also draw a lot from https://github.com/j616/brewblox-tilt, which does much the same thing, as it seems to be well maintained.

Yes, brewblox-boilerplate is the right place to get started for backend services.

The Plaato plugin is another boilerplate-based service simple enough to be used as an example.

Gold, Plaato seems like it is pretty clean - great starting point.

I have my Repeater collecting and pushing metrics to influx, which was super easy to set up.

So I have:

await mqtt.publish(
    self.app,
    self.topic,
    {
        'key': f"ShareMyCook[{device_data.name}]",
        'data': device_data.serialize()
    }
)

Where:
key = ShareMyCook[UltraQ]
data is:

{
    'active': 1, 
    'fan_duty[%]': 100, 
    'pit_temp[c]': 17.0, 
    'pit_target[c]': 210.0, 
    'food1_temp[c]': 18.0, 
    'food1_target[c]': 65.0, 
    'food2_temp[c]': 20.0, 
    'food2_target[c]': 65.0
}

I would like to have it like how a spark is shown where the main container is ShareMyCook, then each of my devices has the keys that are in the data. This way I can detail multiple devices under the one data source.

If I understand you correctly, you want to publish key=ShareMyCook, and data={'UltraQ': {...}}.

https://brewblox.netlify.app/dev/reference/event_logging.html#history describes how nested data is flattened.

You can publish different contents of data, and they will be merged automatically.

Side note: our system automatically recognizes degC / degF in brackets.
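
For example, a minimal sketch of that nested layout, reusing the names from your snippet (everything beyond those names is illustrative):

# Sketch: one parent key, with each device nested under it.
# self.app / self.topic as in your existing broadcaster code.
await mqtt.publish(
    self.app,
    self.topic,
    {
        'key': 'ShareMyCook',
        'data': {
            device_data.name: device_data.serialize(),
        },
    }
)

The history service flattens nested keys with a / separator, so your fields should show up as UltraQ/pit_temp[c] and so on (or [degC], if you rename the units to take advantage of the side note above).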

Cheers for all of that, I have a functional docker image now.

What platform are you building on?

Some interesting things from building the Docker image:

  • For the life of me, I couldn’t get buildx to become available on Ubuntu 20 LTS; enabling experimental mode via the env variable or the Docker JSON config just wouldn’t work, so I ended up doing it on my Mac.

  • readline -f doesn’t work the same as GNU readline, so I ended up installing greadline via Homebrew, which is the same.

  • cp dist/ docker/ doesn’t seem to copy anything on a Mac, but cp dist docker/ does, which should work under most Linux distros too.

Not sure what you mean by platform. Our CI builds are on Azure, using x86.

What version of Docker are you using on Ubuntu?

There are two experimental flags in docker: one for the engine, one for the CLI. As far as I know, only the CLI flag is required, but you can try setting them both.
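
For reference, a sketch of setting both (paths assume a standard systemd-based install; merge rather than overwrite any existing daemon.json):

# CLI flag: an env var, or "experimental": "enabled" in ~/.docker/config.json
export DOCKER_CLI_EXPERIMENTAL=enabled

# Engine flag: "experimental": true in /etc/docker/daemon.json
echo '{"experimental": true}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker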

https://github.com/BrewBlox/brewblox-firmware/blob/develop/docker/enable-experimental.sh is the script in firmware we use to set the flags.

https://github.com/BrewBlox/brewblox-firmware/blob/develop/docker/prepare-buildx.sh sets up buildx. This is pretty much the same as the commands described in boilerplate.

Note that in Azure CI (and probably others), buildx setup is subtly different, because they use Docker EE.
The required config for that can be found in the boilerplate pipelines file.

I’ll change the dist vs. dist/ call. Where does the readline -f happen?

I’ll look into the firmware setup stuff and see.

Not sure what you mean by platform.

I meant development. I was following your guide, and lots of little things didn’t match up or work out of the box, so I was just wondering what your development environment looks like, as I assume you iterate with docker locally, prior to committing and running the full CI process.

Where does the readline -f happen?

Sorry, readlink -f

What version of Docker are you using on Ubuntu?

$ lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 20.04 LTS
Release:	20.04
Codename:	focal

$ DOCKER_CLI_EXPERIMENTAL=enabled docker version
Client:
 Version:           19.03.8
 API version:       1.40
 Go version:        go1.13.8
 Git commit:        afacb8b7f0
 Built:             Tue Jun 23 22:26:12 2020
 OS/Arch:           linux/amd64
 Experimental:      true

Server:
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.8
  Git commit:       afacb8b7f0
  Built:            Thu Jun 18 08:26:54 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.3.3-0ubuntu2
  GitCommit:
 runc:
  Version:          spec: 1.0.1-dev
  GitCommit:
 docker-init:
  Version:          0.18.0
  GitCommit:

$ DOCKER_CLI_EXPERIMENTAL=enabled docker buildx version
docker: 'buildx' is not a docker command.
See 'docker --help'

Elco and I both use Ubuntu 20.04, and we both have separate machines at home and at the office.

We use a few shortcuts to save time, and our CI pipeline is set to build feature branches, but yes, we do commonly start local image builds.

According to the docs, buildx is available in Docker 19.03, which you have. How did you install Docker? (Snap, install guide, get.docker script, etc.)

I suspect any omissions in the install guide are caused by the “it works on my machine” issue, where we forgot about one or more config changes, or happen to have a required package pre-installed for an unrelated purpose.
Pyenv / Poetry are typically the most fiddly to get set up. Does this match your experience?

So far, we’ve spent quite a lot of time improving and automating dependency/config management in dev, CI, and production - and it has been more than worth it.
I’ll make an issue to do a squeaky clean OS install on a spare disk to verify the instructions.
Feel free to mention any and all speed bumps you encounter.

I assume you mean in docker/before_build.sh? It looks to be overkill. That line can be replaced with:

pushd "$(dirname "$0")/.." > /dev/null

Just apt install docker. I spun up an EC2 instance with Ubuntu 20.04 just to test fresh, and it wanted to install it via snap, which I am still getting my head around, so I gave up when it acted the same.

Yeah, especially on my Mac with Homebrew. I use zsh too, so there were a couple of things that I needed to sort out. I definitely attribute it to my setup, though.

Poetry kept wanting to use my system interpreter, which by default on Mac is still Python 2.7.
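
For anyone else who hits this: pointing Poetry at a pyenv interpreter sorted it for me (a sketch, assuming pyenv is installed; use whatever Python version the project targets):

pyenv install 3.7.7
pyenv local 3.7.7
poetry env use $(pyenv which python)
poetry install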

Once I took out my setup-specific variance, only the two points I brought up seemed worth contributing back.

For development, I use IntelliJ IDEA with the Python plugin, so basically PyCharm. With the Poetry plugin it found the venv no problem, and I just ended up debugging with the entrypoint as __main__.py. Breakpoints and interactive debugging worked perfectly, so iterating quickly was no issue at all. I only went to Docker after I was code complete.

I just followed your documentation for setting up Azure pipelines. The only section that I needed to stray from was this:

DOCKER_PASSWORD is your Docker Hub password. Make the value secret by clicking the lock icon.

I have two-factor auth set up, so I needed to go into Account Settings -> Security, create an access token, and use that as the password.

Otherwise, very complete and helpful.

I can see myself building more of these things.

One thing that I would like to build is a notifications plugin, so that I can send myself Telegram messages when thresholds are exceeded, or have it join a channel as a bot so I can send it requests for up-to-date metrics. The BBQ, for example, I leave running overnight, so it is good to know if the fan kicks up, or the temp rises/drops too much.

Oh, in case you were wondering, here is my fork of boilerplate: https://github.com/clarkadamp/brewblox-sharemycook

Thanks, I’ll add a note about that.

Sounds very useful. If you wish to decouple the functionality and/or include data from other services: the brewblox_service.mqtt module also includes subscribe/listen functions.
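
A rough sketch of what that could look like (the topic and callback here are placeholders; verify the exact signatures against your brewblox-service version):

from brewblox_service import mqtt

async def on_history_message(topic: str, message: dict):
    # Placeholder: check thresholds here and fire off a notification
    print(topic, message)

async def setup_listeners(app):
    # Register the callback, then subscribe to the history topic
    await mqtt.listen(app, 'brewcast/history/#', on_history_message)
    await mqtt.subscribe(app, 'brewcast/history/#')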

That looks really nice. Do be careful if you run brewblox-ctl log: you’re currently logging credentials if you have a login error.

Edit: I ran a fresh Ubuntu install, and then followed the boilerplate instructions.
If you install docker with sudo apt install -y docker.io, buildx is not available.

To fix this, run:

sudo apt purge docker.io
curl -sL get.docker.com | sh

I’ll update the boilerplate instructions to include docker.

After getting my head around aresponses, I finally have unit testing complete, but I get this warning:

  /Volumes/dev/projects/brewblox-sharemycook/.venv/lib/python3.7/site-packages/brewblox_service/features.py:48: DeprecationWarning: Changing state of started or joined application is deprecated
    app[FEATURES_KEY] = dict()

But I can’t find what triggers the deprecation warning.

Is that something I need to worry about?

I have brewblox-service = "^0.30.1" in my toml file (IntelliJ prompted me to update); boilerplate had "^0.28.0".

Also, what do you use to document use cases and design decisions? I would like to start gathering some use cases for alerting and actions.

I had a quick peek, and the deprecation warning is caused by the setup order in your test fixtures.

An aiohttp app has two phases in its lifecycle: setup, and running.

During the synchronous setup phase, the app object is created and all setup() functions are called. REST endpoints and service features are registered here.
After that setup phase is done, the app is started and bound to an endpoint. At this point the asyncio loop.run_until_complete() function is called.
In a production environment, this happens in brewblox_service.service.run_app(app). During tests, this happens in the client fixture defined in conftest.py.

In your test code, your app fixture depends on the client fixture, meaning your service is started, and then registers more functionality.
A side effect of this is that the prepare()/run() functions of your broadcaster are not called automatically.
This can be useful for test code, so you may well choose to not call setup() in your tests, but explicitly create the Broadcaster class.

Typically, we have test functions depend on only app if they test setup-phase code, and on app and client both if they test runtime-phase code.
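
A rough sketch of that fixture layout (the create_app arguments are assumptions - mirror your own conftest.py):

import pytest
from brewblox_service import service

@pytest.fixture
def app():
    # Setup phase only: create the app, and call setup() functions here
    app = service.create_app(default_name='sharemycook')
    return app

@pytest.fixture
def client(app, aiohttp_client, loop):
    # Starting the app begins the runtime phase:
    # registered features now get their prepare()/run() calls
    return loop.run_until_complete(aiohttp_client(app))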

Writing this, I do realise that the full init/setup model for services could use some more documentation. It works quite well, but requires an understanding of how concepts from asyncio, aiohttp, and brewblox-service all interact.

Version 0.29 of brewblox-service introduced the option to not bind to any port (if you don’t have a REST API, but still want to use the other scaffolding), and version 0.30 added support for MQTT will messages. Boilerplate will get a dependency version bump the next time there’s a relevant change.

I’m not sure whether you’re referring to processes, tooling, urls, or something else entirely here.

We have no formal documentation for use cases in our design process. BrewPi has a dev team of two, and we’re only now slowing down our iterative approach to development. We may revisit the decision in the future, but so far formal up-front documentation for features turned out to be write-only documents that were outdated before the ink was dry.

Design decisions are explicitly treated as snapshots, and can be found here. They help us keep track of why we started/stopped doing things.

Ahh, awesome. I removed the client fixture entirely, as I wasn’t using it; all green, no warnings.

All good, just wondering if you had some kind of scratch pad, but as you said: team of two, so it doesn’t really make much sense. I might just start another community post and alter the main comment as things evolve.

Thanks for all your assistance.

If you prefer, I can also send you an invite to the Slack channel. It tends to see more upfront discussion, whereas the forum is generally used for solving problems.

Hi Adam, I am looking at doing something similar with an UltraQ controller. How did you gain access to its API? I have found info about the CyberQ, but not the UltraQ. Is there a web service I can hook into? Any info is much appreciated.

A quick Google suggests the CyberQ has a web API. Does the UltraQ respond when querying the same endpoints?

The old one had an HTTP server, and an XML page was available. The UltraQ doesn’t have that; I have port-scanned it and there is nothing, so I haven’t been able to work out how the phone app talks to the device - potentially it is over Bluetooth.

In the end, I used the sharemycook website, which the UltraQ sends its data to. So you can’t use it to have the Spark control the device; you can only use the Brewblox UI to monitor it.

Personally, I dislike all of the UI supplied by the manufacturers of the UltraQ. I’m somewhat hoping the new rev of the hardware will support PT100 sensors or Type K thermocouples, so I can actually stop using it and replace it with the Spark.

See how you go with this:

If the device didn’t require WiFi setup, this is most likely - but it would mean it stops publishing when your phone is out of range.

You can scan for nearby BT/BLE devices, but reverse engineering the data format could potentially be much harder.
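
For instance, a quick scan sketch using the bleak library (purely illustrative - not something Brewblox itself uses):

import asyncio
from bleak import BleakScanner

async def main():
    # Discover nearby BLE devices and print address + advertised name
    devices = await BleakScanner.discover(timeout=10)
    for device in devices:
        print(device.address, device.name)

asyncio.run(main())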

Yeah, you set up WiFi over Bluetooth.

What I think I meant was that it doesn’t run a “service” on a TCP port on the WiFi address that you can access, so it can’t be polled like the CyberQ could. Any available polling is most likely over Bluetooth.

It just publishes telemetry at a certain cadence to their service once it has an internet connection.

On my todo list is to stick a switch with port mirroring enabled between two access points, one with my phone and the other with the UltraQ on it, and see if any of that telemetry is being sent over WiFi.
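
If it is, something like the following on the mirror port should catch it (interface name and IP are placeholders):

# Capture all traffic to/from the UltraQ's IP on the mirrored interface
sudo tcpdump -i eth0 -w ultraq.pcap host 192.168.1.50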