Accessing brewblox data

I thought I had read about this somewhere else in the forum but can’t find it now.

I would like to access some Brewblox data from a different system so I can display it on a small remote TFT screen, e.g. data from different temperature sensors, or which fermentation profile is running. Maybe I'd also use some of it in an IFTTT rule. Is there currently an easy way to access this data, perhaps by writing it to a local text file every x minutes? Or do I need to query a database to get what I need?

Thanks for any suggestions or indeed pointers if others are doing this currently.

To get the data, you're most probably talking to a REST API - either the database's own, or ours. For single metrics, you can also listen in on published MQTT events.

Implementation would depend on what is powering the TFT screen, and whether you want graphs, plain values, or even a command-line output.
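
For the MQTT route, here is a minimal command-line sketch using mosquitto_sub. It assumes the eventbus is reachable on the host at port 1883, and that history data is published under the brewcast/history/ topic tree; adjust host, port, and topic to your setup.

# Print topic + JSON payload for every published history event.
# Host, port, and topic are assumptions - check your docker-compose.yml.
mosquitto_sub -h localhost -p 1883 -t 'brewcast/history/#' -v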

I must note that we’re currently preparing a migration to a new history database. InfluxDB has decided to drop support for 32-bit machines (such as the Pi).

Thank you Bob. I will look into that.

Initially it's only to show some simple numerical data, e.g. from perhaps 3 temp sensors.

I also have a sensor in my chest freezer, which is full of food, so I would be able to raise an alert if the temperature goes above a certain level (or the sensor reports no data) in case of an unexpected power loss in the outbuilding. That has happened in the past and ruined all of the food.

My screen is in the house and runs from a shell script that displays the status of my various Linux machines and other useful information.

You can see the full API for our history service at http://HOST/history/api/doc.

Example call for metrics:

curl \
  -X POST \
  -H 'Content-Type: application/json' \
  http://localhost/history/history/last_values \
  -w "\n" \
  -d '{
        "fields": [
          "HERMS MT Sensor/value[degC]",
          "WiFiSettings/signal"
        ],
        "measurement": "sparkey",
        "epoch": "s"
      }'

Used arguments:

  • fields is a list of field names. Each name is a /-separated path, without the service name. In the UI, if you go to the Graph or Metrics widget settings and click a field in the selection tree, the dialog title will be the field name. You can also call http://HOST/history/history/fields to get a list.
  • measurement is the service name.
  • epoch is the timestamp precision. Valid values include ns, ms, and s.

Example output (prettified):

[
  {
    "field": "HERMS MT Sensor/value[degC]",
    "time": 1626629899,
    "value": 20
  },
  {
    "field": "WiFiSettings/signal",
    "time": 1626629899,
    "value": 2
  }
]

For inline formatting, I can recommend jq. For example:

curl \
  -X POST \
  -H 'Content-Type: application/json' \
  http://localhost/history/history/last_values \
  -w "\n" \
  -d '{
        "fields": [
          "HERMS MT Sensor/value[degC]",
          "WiFiSettings/signal"
        ],
        "measurement": "sparkey",
        "epoch": "s"
      }' \
  | jq -r '.[] | [(.time|strftime("%H:%M")),.field,.value] | @tsv'

Gives as output:

17:38 HERMS MT Sensor/value[degC]	20
17:38 WiFiSettings/signal	2

Edit: added time formatting for completeness' sake.
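
If you want the values in a local text file every x minutes (e.g. for your screen script), here is a rough sketch you could run from cron. The field names, measurement, and output path are placeholders - adjust them to your setup.

#!/usr/bin/env bash
# Fetch the latest values and overwrite a text file for the display script to read.
# Field names, measurement, and output path are placeholders.
curl -sS \
  -X POST \
  -H 'Content-Type: application/json' \
  http://localhost/history/history/last_values \
  -d '{
        "fields": [
          "HERMS MT Sensor/value[degC]",
          "WiFiSettings/signal"
        ],
        "measurement": "sparkey",
        "epoch": "s"
      }' \
  | jq -r '.[] | [(.time|strftime("%H:%M")),.field,.value] | @tsv' \
  > /tmp/brewblox-metrics.txt

A crontab entry like */5 * * * * /path/to/brewblox-metrics.sh then refreshes the file every 5 minutes.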

That’s great. Thanks for that Bob - I’m sure that will be fine for my needs. Just need to fix an installation issue before I can give it a try…

Has anything changed with this since the recent update? It was working OK but I’m now getting the error:

{"error": "HTTPNotFound(Not Found)"}

Also, when running examples from http://HOST/history/api/doc, it throws some errors.

Update:
I have installed Grafana and can produce a query and graph, e.g.

{name="spark-one/Brewery Temperature/value[degC]"}

Please could you advise how I can run something from the command line to obtain the latest value, e.g. like the curl command above?

The new database internally uses a very different data format, with all fields being independent. Due to the scope of the changes, we decided to put the new endpoints under /history/timeseries/.

such as?

curl \
  -X POST \
  -H 'Content-Type: application/json' \
  http://localhost/history/timeseries/metrics \
  -w "\n" \
  -d '{
        "fields": [
          "spark-one/Brewery Temperature/value[degC]"
        ]
      }'

Changes:

  • Endpoint: /history/history/last_values → /history/timeseries/metrics.
  • Service name is included in field name, and no longer a separate argument.
  • The epoch argument no longer exists.

Example output:

[
  {
    "metric": "spark-one/Brewery Temperature/value[degC]",
    "value": 21.56,
    "timestamp": 1628211015782
  }
]

Changes:

  • field → metric
  • time → timestamp
  • service name is included in field name
  • timestamps are in ms

With jq formatting:

curl \
  -X POST \
  -H 'Content-Type: application/json' \
  http://localhost/history/timeseries/metrics \
  -w "\n" \
  -d '{
        "fields": [
          "spark-one/Brewery Temperature/value[degC]"
        ]
      }' \
  | jq -r '.[] | [(.timestamp/1000|strftime("%H:%M")),.metric,.value] | @tsv'
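
For the freezer alert you mentioned earlier: a rough sketch that checks the latest value against a threshold, and also flags missing data. The field name, limit, and notification commands are placeholders.

#!/usr/bin/env bash
# Alert when the freezer sensor reads above a limit, or returns no data.
# Field name, limit, and the notification commands are placeholders.
FIELD='spark-one/Freezer Sensor/value[degC]'
LIMIT=-10

value=$(curl -sS \
  -X POST \
  -H 'Content-Type: application/json' \
  http://localhost/history/timeseries/metrics \
  -d "{\"fields\": [\"${FIELD}\"]}" \
  | jq -r '.[0].value')

if [ -z "${value}" ] || [ "${value}" = "null" ]; then
  echo "No data for ${FIELD}"            # replace with your notification of choice
elif [ "$(echo "${value} > ${LIMIT}" | bc -l)" -eq 1 ]; then
  echo "${FIELD} is at ${value} degC"    # replace with your notification of choice
fi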

Alternatively, if you want to use the Prometheus API, or have off-the-shelf tools that can, you can access those endpoints under the /victoria prefix (/api/v1/query becomes /victoria/api/v1/query).
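
For example, an instant query through that prefix, reusing the selector from your Grafana query. Whether that selector matches depends on how your fields are stored, so treat it as a sketch:

# Prometheus-style instant query via the /victoria prefix.
# The selector is copied from the Grafana query above - adjust to your own fields.
curl -sS --get \
  'http://localhost/victoria/api/v1/query' \
  --data-urlencode 'query={name="spark-one/Brewery Temperature/value[degC]"}' \
  | jq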