BrewBlox service for GPIO on Raspberry Pi

I’d like to point out that we deliberately didn’t run our control algorithms on the pi. All critical control (PIDs, sensors, actuators) runs embedded on the BrewPi Spark.

It can run independently and does not rely on the pi or anything that runs on it. So the stability and the quality of your beer do not depend on WiFi, docker, the SD card, or any other things running on the pi.

So even if you create a service that toggles pi GPIO and can read OneWire sensors, this does not make them available as inputs and outputs for the PIDs that run on the Spark. That’s all local on the Spark.

Hello Elco, hi Bob,

the last statement brings me to a related question regarding the MQTT eventbus.

I’ve got an MQTT-capable device based on a Wemos D1 Mini that can drive an induction cooker by emulating its cable-bound control. The device subscribes to an MQTT topic, receives the power level from a PID (currently running on CraftbeerPi), and translates it to the special protocol used by the cooker.

It can be found here:

I understand that the PIDs will be running on the SPARK.

Is there a possibility to obtain PID control via an MQTT topic from the Brewblox message bus?

The plan is to completely replace CraftbeerPi with Brewblox. The SSRs and heating elements I obtained from you last year perform perfectly and could easily be driven by Brewblox; driving the induction cooker for mashing is currently the only open point left.

Perhaps Elco remembers our phone call last year, when I suggested using the heating element for mashing and you explained that it would burn the wort. So we concluded to use it only for boiling and to rely on the induction cooker for mashing. That works like a charm, but it relies on CraftbeerPi, which is no longer maintained.

It would be perfect if the PID blocks could drive targets via MQTT.

Any chance to get this up and running?

Many thanks in advance for any hint,
Steffen

In this scenario, is the temperature sensor connected to the Spark?
If it is, you can configure the Spark control chain to use a mocked / unused output pin, and listen in when the Spark is broadcasting block state.
Code-wise, you’d need to parse the JSON body of the MQTT message, and implement a timer that disables your cooker if it hasn’t received an update for X seconds.

The default poll/broadcast interval of a Spark is 5s. Is this sufficient to control your cooker?

  • The spec for broadcast state can be found here.
  • You’ll want to subscribe to the brewcast/state/# topic, and filter messages based on the key/type fields in the message.
    • key = service name (eg. spark-one)
    • type = Spark.blocks
  • The data field for Spark block events is a list of blocks.
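The filtering and watchdog logic described above can be sketched independently of the MQTT client. This is a minimal sketch, not the official API: the service name is the example from this thread, and the timeout value is an assumption (three missed 5s broadcasts):

```python
import json
import time

SERVICE_ID = 'spark-one'      # example service name from this thread
BLOCK_EVT = 'Spark.blocks'    # event type for Spark block state broadcasts
TIMEOUT_S = 15                # assumed watchdog: three missed 5s broadcasts

last_update = time.monotonic()  # refreshed whenever a valid state message arrives


def extract_blocks(payload: bytes):
    """Return the block list if this is a block-state broadcast from our Spark,
    or None if the message should be discarded."""
    state = json.loads(payload)
    if state.get('key') != SERVICE_ID or state.get('type') != BLOCK_EVT:
        return None
    return state['data']


def watchdog_expired(now: float) -> bool:
    """True if no state update arrived within TIMEOUT_S.
    The cooker should then be disabled as a failsafe."""
    return (now - last_update) > TIMEOUT_S
```

In the MQTT `on_message` callback you would call `extract_blocks()`, refresh `last_update` on success, and have a separate timer check `watchdog_expired()` periodically.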

My own guess is that 5s is manageable for controlling a power-level-based output (not a PWM-driven digital output), but it will likely need some tuning to counteract the slower response time.

If that’s not true, we can have a look at reading just the PWM more regularly and publishing that separately. This kind of integration is part of the reason why we’re moving to MQTT, so we may as well support it properly.

Hello Bob,

Yikes - that sounds perfect - awesome!

Indeed, the temp sensor will be connected to the Spark, which I expect to be shipped within the next few days - I ordered it spontaneously today, after playing with Brewblox for the last two nights.
It is simply AWESOME!

The power update interval of the original CraftBeerPi was 10 seconds if I recall correctly, so 5 seconds in Brewblox will be sufficient IMHO.

Now I need to figure out how to make the MQTT port accessible in my WLAN. I am not too familiar with Docker networking and port exposing yet.

The learning curve is steep but Brewblox is exactly what I’ve been looking for :slight_smile:

Great job and support!

Many thanks in advance - very likely I’ll have further questions as I make progress.

Best regards,
Steffen

For external clients, we use MQTT over websockets, and proxy it through :443/eventbus to avoid exposing more ports.

See https://brewblox.netlify.app/dev/tutorials/pubscript/#source-code for paho client settings and how to handle the self-signed cert.

It might even be worth investigating how complex it would be to let the Spark send a message to the wemos directly. If it is just a simple message over TCP, we could create a special actuator for that.


Hi Elco,

Sending via the Spark directly was my idea as well. Actually, the standard PID module of CraftbeerPi was slightly modified to simply send the power levels via MQTT.

The corresponding plugin is this one:

If we could make the Spark drive it directly, that would be fantastic :grinning:

Best regards,
Steffen

The immediate problem here is that the Spark is not guaranteed to have a WiFi connection. Anyone using USB without WiFi set up would either be unable to use this, or would have to use the spark service as a bridge.

I’m also sensing a minor miscommunication here, with @Elco referring to directly sending raw TCP messages from the Spark controller to the Wemos, and @Steffen to the Spark controller publishing MQTT events.

We’ve discussed converting the controller <-> service protocol from our homemade controlbox protocol to MQTT, but this would be a larger rework for something that isn’t currently broken.

I don’t think it is a problem to require the spark to have WiFi if it should drive wemos modules.
I think not routing the request through the server removes a possible point of failure. If the spark doesn’t have WiFi, it is likely the wemos doesn’t have WiFi either.

Can the wemos modules only be controlled with MQTT? Or is a simpler TCP message an option?

Hi Elco, hi Bob,

Only via MQTT. It connects directly to the default MQTT port 1883 of a mosquitto broker. The device itself is quite failsafe: you can configure what it should do in case MQTT messages cannot be processed, WiFi is lost, or the sensor breaks.

So after rethinking the setup, it would IMHO be the easiest approach to send and fetch the data via MQTT and the eventbus. I understand that port 1883 is not exposed by default. I am not sure whether the device can connect via the exposed websocket port 443.

The JSON payload that is required is actually quite straightforward and can be seen in the GitHub links for the device and the CraftbeerPi MQTT driver I’ve been referencing.

To my understanding, you are already evaluating mosquitto?

If so, exposing mosquitto’s port 1883 on the Raspberry would be the natural choice.
Any topic on the eventbus would then be accessible via mosquitto’s MQTT port.

The only thing left would be to properly put the required data into the JSON format expected by the device, or to modify the device’s JSON decoding to extract the currently available PID data from the eventbus.
One issue there is that when the PID block is disabled, the JSON payload must also include an ‘off’ command for the actor, and vice versa when it is enabled.

My idea for not breaking anything internal to the Brewblox eventbus would be some sort of MQTT tee option.
Meaning that any internal communication remains as it is.
Any service or device that wants to communicate via MQTT could subscribe to explicitly published topics, whose payloads could be freely configured from (theoretically) any information being processed on the eventbus, on a dedicated MQTT target.

What do you think? Would such an adaptable topic/JSON payload MQTT connector be feasible?

On the other hand, could the dockerized script approach be the solution for pushing the appropriate MQTT payload to a mosquitto broker to which the device could subscribe?

Best regards,
Steffen

Hi Elco, hi Bob,

https://brewblox.netlify.app/dev/tutorials/pubscript/#source-code

My guess is that this is actually exactly what I am looking for as an “adaptable” MQTT connector, right?

That would be relatively easy.

My head is spinning :slight_smile:

Best regards,
Steffen

I haven’t looked at your linked device code in depth, but I did notice you’re using the paho client for MQTT. Paho natively supports websocket transport.
If using websockets is not an option, exposing the 1883 port is trivial (we even made a command-line helper function).

For this application, our current broker (RabbitMQ) is equivalent to mosquitto.

If the PID is disabled, both the PID and the PWM are still present in the output, with `desiredSetting` being 0.

You’ll indeed need some adapter to convert block data to the payload accepted by your wemos.
That adapter can either be a separate service, or running on the wemos itself. That depends on whether you can and want to modify the source code for the wemos.

If you want to use a separate service, you can indeed listen to block state events, get the power level, and publish a new message to a topic where the wemos is listening.

The chain would be: spark controller > spark service > eventbus > adapter service > eventbus > wemos.

The pubscript tutorial is the basic version. We also have a boilerplate for brewblox services that has more safety built in. The general idea is the same: listen to topic A, publish to topic B.
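The adapter’s translation step could be sketched as follows. This is an illustration only: the payload fields (`state`, `power`) are hypothetical, since the actual JSON format is defined in the linked wemos repositories. It also handles the disabled-PID case by sending an explicit ‘off’:

```python
import json
from typing import List, Optional

PID_ID = 'PIDInduktion'  # example string ID of the PID block from this thread


def to_cooker_payload(blocks: List[dict]) -> Optional[str]:
    """Translate the PID block from a Spark state broadcast into a
    (hypothetical) cooker payload. Returns None if the PID is absent."""
    pid = next((b for b in blocks if b['id'] == PID_ID), None)
    if pid is None:
        return None
    data = pid['data']
    if not data.get('enabled', False):
        # Disabled PID -> send an explicit 'off' command
        return json.dumps({'state': 'off', 'power': 0})
    return json.dumps({'state': 'on', 'power': round(data['outputValue'])})
```

The adapter service would call this in its state-event callback and publish the returned string to whatever topic the wemos subscribes to.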


Hi Bob,

That sounds perfect. Thanks for guiding me towards the boilerplate. I’ll give it a try and will report back.

Many thanks for your great support :+1::+1::+1:

Best regards,
Steffen

Hi Bob,

Seems I am stuck with exposing the eventbus port.

I tried `brewblox-ctl service expose eventbus 1883:5672` - it did not work.

`docker ps` shows, however:

```
CONTAINER ID  IMAGE                                COMMAND                 CREATED        STATUS        PORTS                                                  NAMES
f6e311e1b237  influxdb:1.8                         "/entrypoint.sh infl…"  5 minutes ago  Up 5 minutes  8086/tcp                                               brewblox_influx_1
912efa7927db  brewblox/rabbitmq:edge               "docker-entrypoint.s…"  5 minutes ago  Up 5 minutes  4369/tcp, 5671/tcp, 25672/tcp, 0.0.0.0:1883->5672/tcp  brewblox_eventbus_1
ed83a9987f0e  traefik:v1.7                         "/traefik -c /dev/nu…"  5 minutes ago  Up 5 minutes  0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp               brewblox_traefik_1
0c3dd6a1f506  brewblox/brewblox-automation:edge    "node dist/main.js"     5 minutes ago  Up 5 minutes  5000/tcp                                               brewblox_automation_1
f72668fab770  brewblox/brewblox-mdns:edge          "python3 -m brewblox…"  5 minutes ago  Up 5 minutes                                                         brewblox_mdns_1
5ff7c9838df6  brewblox/brewblox-ui:edge            "/docker-entrypoint.…"  5 minutes ago  Up 5 minutes  80/tcp                                                 brewblox_ui_1
8a23b5efa963  brewblox/brewblox-history:edge       "python3 -m brewblox…"  5 minutes ago  Up 5 minutes  5000/tcp                                               brewblox_history_1
8f9bf5a6d8a7  brewblox/brewblox-devcon-spark:edge  "python3 -m brewblox…"  5 minutes ago  Up 5 minutes  5000/tcp                                               brewblox_spark-one_1
cea4131556bc  treehouses/couchdb:2.3.1             "tini -- /docker-ent…"  5 minutes ago  Up 5 minutes  4369/tcp, 5984/tcp, 9100/tcp                           brewblox_datastore_1
```

Restored the docker-compose file and tried `brewblox-ctl service expose eventbus 5672:5672`, as suggested in one of the release notes.

Same result.

Am I missing something obvious?

Again, many thanks for your support.

Best regards,
Steffen

This is a case of the old release notes being outdated: 5672 is the port for AMQP events.

In this case you want to forward the MQTT port without changes. The command for that is:

```
brewblox-ctl service expose eventbus 1883:1883
```

Tried that as well and was not able to connect with MQTT.fx.

Will try it again.

Thanks so much,
Steffen

The `docker ps` entry for eventbus should include:

```
0.0.0.0:1883->1883/tcp
```

If it doesn’t, you may need to run `docker-compose up -d` to apply the changes.
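For context, a sketch of what the expose command effectively adds to the `eventbus` service in docker-compose.yml (based on the `docker ps` output above, not a verbatim copy of the generated file):

```yaml
eventbus:
  ports:
    # host port 1883 -> MQTT port 1883 inside the container
    - "1883:1883"
```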


Hello Bob, hello all,

I managed to subscribe to the eventbus state messages as discussed earlier in this thread. However, I am lost on extracting the specific PID output setting for my induction cooker. I am still a Python noob.

Here’s what I’ve been trying so far:

```python
import paho.mqtt.client as mqtt
import json


# The callback for when the client receives a CONNACK response from the server.
def on_connect(client, userdata, flags, rc):
    print('Connected with result code ' + str(rc))

    # Subscribing in on_connect() means that if we lose the connection and
    # reconnect, then subscriptions will be renewed.
    client.subscribe('brewcast/state/#')


# The callback for when a PUBLISH message is received from the server.
# Pretty-prints the eventbus messages.
def on_message(client, userdata, msg):
    data = json.loads(msg.payload)
    print(json.dumps(data, indent=6, separators=('. ', ' = ')))


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message

client.connect('192.168.188.45', 1883, 60)

# Blocking call that processes network traffic, dispatches callbacks and
# handles reconnecting.
# Other loop*() functions are available that give a threaded interface and a
# manual interface.
client.loop_forever()
```

The output contains many nested blocks. The particular one I am looking for is nid 115:

```
{
      "nid" = 115.
      "groups" = [
            0
      ].
      "type" = "Pid".
      "data" = {
            "outputValue" = 100.0.
            "outputSetting" = 100.0.
            "enabled" = true.
            "active" = true.
            "p" = 118.2939453125.
            "i" = 0.19677734375.
            "d" = -0.1416015625.
            "derivativeFilter" = "FILT_45s".
            "integralReset" = 0.0.
            "boilMinOutput" = 0.0.
            "boilModeActive" = false.
            "derivative[delta_degC / minute]" = 0.005028252628567342.
            "drivenOutputId<ActuatorAnalogInterface,driven>" = "PWMMockInduktion".
            "inputSetting[degC]" = 28.0.
            "integral[delta_degC * hour]" = 0.0.
            "inputId" = "SetpointInduktion".
            "boilPointAdjust[delta_degC]" = 0.0.
            "td[second]" = 30.
            "inputValue[degC]" = 25.96142578125.
            "kp[1 / degC]" = 58.0.
            "outputId" = "PWMMockInduktion".
            "error[delta_degC]" = 2.03955078125.
            "ti[second]" = 600
      }.
      "id" = "PIDInduktion"
}.
```

So I’d like to obtain only the outputValue, store it in a variable, and push it to a separate MQTT topic for the MQTT device to subscribe to.

I am failing miserably at parsing the JSON for that particular value. Any hint is highly appreciated.

Many thanks and best regards,
Steffen

```python
# This goes inside your on_message(client, userdata, msg) callback:

SERVICE_ID = 'spark-one'     # replace with the name of your spark service
PID_ID = 'PIDInduktion'      # replace with string ID of desired PID
BLOCK_EVT = 'Spark.blocks'   # const value

if not msg.payload:
    print('Discarding empty message')
    return

state = json.loads(msg.payload)
state_key = state['key']
state_type = state['type']

if state_key != SERVICE_ID or state_type != BLOCK_EVT:
    print(f'discarding state message: key={state_key} type={state_type}')
    return

# Find first block in list with correct ID
# Returns default value arg (None) if not found
blocks = state['data']  # list
pid = next((block for block in blocks if block['id'] == PID_ID), None)

if pid is None:
    print('PID not found in blocks')
    return

outputValue = pid['data']['outputValue']
```

For reference:

`block['id']` is guaranteed to be unique. `id` and `nid` (numeric ID) both exist because the Spark has very little persistent memory. We use 16-bit numbers as IDs there, and the Spark service links those with user-defined string IDs.

Edit: forum posts allow code blocks by using triple backticks:

```
code goes here
```

This prevents ‘#’ comments from being rendered as headers.


Hey Bob,

That worked! :slight_smile:

Many, many thanks. I guess the rest is not difficult anymore.

I’ll report back soon.

Best regards,
Steffen
