Service Development

#1

I’m trying to put together a service to add Tilt integration to Brewblox.

  • Am I right in thinking the way to log the values is via publisher.publish()?
  • What would be an appropriate choice for the exchange?
  • Am I right in thinking the routing should just be an appropriate unique identifier (e.g. tilt.<colour>)?
  • Is there a way to use the temperature unit conversion used in the main Brewblox codebase?

I currently have something like this:

await self.publisher.publish(
    HISTORY_EXCHANGE,
    'tilt.{}'.format(decodedData['colour']),
    {
        'Temperature[degF]': decodedData['temp_f'],
        'Temperature[degC]': (decodedData['temp_f'] - 32) * 5 / 9,
        'Specific gravity': decodedData['sg'],
        'Signal strength[dBm]': rssi,
    })

I’m hoping this will log the temperature, SG, and signal strength in the history service so they can be included in Brewblox graphs.


#2

Ben built a service for the iSpindel:

I think the iSpindel can actively push data to the BrewBlox message bus endpoint, while the Tilt would be read over Bluetooth by a new microservice on the pi that will put a message on the exchange to be picked up by the data logging service.

I see Bob is also typing, so I’ll leave it to him to fill in the details :smiley:


#3

Happy to hear you’re playing with the Tilt. It certainly is something more people would like to use.

As to your questions:

  • Yes, if you want to log data into history, publisher.publish() is the way to go.
  • By default, the history service listens to the brewcast exchange. See https://brewblox.netlify.com/dev/reference/event_logging.html for more info on routing and data payloads.
  • You’re right in that the first part of the routing key should just be a unique identifier, to avoid mixing data.
  • We use Pint to do the actual temperature conversions. Most of our own code is for finding temperature values in arbitrary blobs of JSON, and converting them to user-preferred alternatives. If your incoming data model is small and static, Pint is all you need.
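To make the conversion in the answer above concrete, here is a minimal pure-Python sketch of the °F → °C formula the snippet in post #1 uses. The function name is just for illustration; in a real service, Pint's Quantity API would handle this (and any other unit) generically.

```python
def f_to_c(temp_f: float) -> float:
    """Convert a Fahrenheit reading to Celsius: (F - 32) * 5/9."""
    return (temp_f - 32) * 5 / 9

# Sanity checks at the freezing and boiling points of water:
print(f_to_c(32))   # 0.0
print(f_to_c(212))  # 100.0
```

With Pint, the equivalent would be roughly `ureg.Quantity(temp_f, ureg.degF).to('degC')`, which also guards against mixing up multiplicative and offset units.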

Your code snippet looks correct: the history service should pick it up and log it in Influx.

https://github.com/BrewBlox/brewblox-devcon-spark/blob/develop/brewblox_devcon_spark/broadcaster.py is where the Spark service does its publishing. It’s pretty straightforward.
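The broadcaster linked above boils down to a periodic read-and-publish loop. Here is a stand-alone sketch of that pattern, assuming nothing about the real Spark service: the publisher class, the sensor callback, and the `cycles` parameter are all placeholders for illustration.

```python
import asyncio

HISTORY_EXCHANGE = 'brewcast'  # default exchange the history service listens to

class StubPublisher:
    """Stand-in for the real event-bus publisher; records calls instead of sending them."""
    def __init__(self):
        self.published = []

    async def publish(self, exchange, routing, message):
        self.published.append((exchange, routing, message))

async def broadcast(publisher, read_sensor, interval=5, cycles=3):
    """Periodically read a sensor and publish its values to the history exchange."""
    for _ in range(cycles):
        data = read_sensor()
        await publisher.publish(
            HISTORY_EXCHANGE,
            'tilt.{}'.format(data['colour']),  # unique routing key per device
            {
                'Temperature[degF]': data['temp_f'],
                'Specific gravity': data['sg'],
            })
        await asyncio.sleep(interval)

publisher = StubPublisher()
asyncio.run(broadcast(
    publisher,
    lambda: {'colour': 'red', 'temp_f': 68.0, 'sg': 1.010},
    interval=0,
    cycles=2))
print(len(publisher.published))  # 2
```

The real service would swap the stub for the actual publisher and the lambda for a Bluetooth read, but the loop shape is the same.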


#4

Thanks. @Elco I managed to get some standalone asyncio code going a few days ago that has a bluepy scanner running in one executor and a busy loop in another, to prove it was working asynchronously. So that code works. I’ve been looking at the iSpindel code but wasn’t sure whether it was fully functional, and wanted to check I was on the right track before going down the rabbit hole :slight_smile:
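The pattern described above (a blocking scanner in an executor running alongside other async work) can be sketched with the standard library alone. Here `time.sleep` stands in for the blocking bluepy `Scanner.scan()` call; the function names are placeholders, not code from the actual service.

```python
import asyncio
import time

def blocking_scan():
    """Stand-in for a blocking bluepy Scanner.scan() call."""
    time.sleep(0.1)
    return ['device-1', 'device-2']

async def busy_loop(ticks):
    """Other async work that keeps running while the scan blocks a thread."""
    count = 0
    for _ in range(ticks):
        await asyncio.sleep(0.01)
        count += 1
    return count

async def main():
    loop = asyncio.get_running_loop()
    # Run the blocking scan in the default thread-pool executor,
    # concurrently with the async busy loop.
    devices, ticks = await asyncio.gather(
        loop.run_in_executor(None, blocking_scan),
        busy_loop(5),
    )
    return devices, ticks

devices, ticks = asyncio.run(main())
print(devices, ticks)  # ['device-1', 'device-2'] 5
```

Because `run_in_executor` hands the blocking call to a worker thread, the event loop stays free to drive the busy loop (or, in the real service, the publisher) at the same time.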

@Bob_Steers Thanks. That all helps a lot.

I notice I’ve got an invite to the Slack space so I’ll take the convo there :slight_smile:
