BrewBlox not working and can't backport to BrewPi

This should probably be 2 separate posts, and if needed, I’m happy to split them up.

First - I’m using the BrewPi Spark v2 hardware, with an Ubuntu 18.04 machine as my base rather than an rPi. I have an rPi 3b v2, but with the Ubuntu machine close by and so much faster, it was my preferred host for BrewPi, so I intended to stick with it for the new BrewBlox. On with my issues…

BrewBlox won’t connect:

I initially upgraded from BrewPi to the BrewBlox stack, and at first things were working well. I had gotten to the point where I could see my OneWire thermometers and was playing around, learning how to configure my fridge. I never quite got to setting up my heating/cooling SSRs, but I could see temperatures and was making progress. I left for a few days, but when I came back, I no longer saw any connected devices; the Spark was no longer connected. I played around on my own and have only made things worse.

Initially, on the Services/Spark Controller tab, Service Running had a green check, but Controller Connected and Service Ready were red. I’m not sure why the controller was no longer visible. I tried via USB, and the Spark has a WiFi address and slow blinks blue. The screen said to check logs, but I couldn’t tell where those logs were stored.

I was impatient and tried deleting everything and re-installing the whole BrewBlox stack from scratch, but that put me further behind. Now Service Running is also red and I can’t see anything wrong. In Portainer, I see 8 containers running with the brewblox prefix. All of them have a _1 suffix.

Can’t revert to BrewPi:

After fumbling around and trying to repeatedly restart, re-install, and try to figure out what was wrong, I decided perhaps it’s better to revert back to BrewPi…

After uninstalling BrewBlox and restarting the BrewPi container, I went into the BrewPi container console and tried to run python utils/updateFirmware.py. But I was not able to get the updater to connect using DFU; I would just get a message every second or so saying it couldn’t connect. Eventually I had to Ctrl-C and move on. I tried various combinations of the reset and setup buttons but wasn’t able to get it back to the older firmware.

Now I’m back to the BrewBlox stack. Spark connected to WiFi, and USB disconnected (but close by and easily reconnected). Not sure what my next step should be, forward or backward.

The link to the Log output is here (I wish I saw that on my first pass through!):
https://termbin.com/t307

(btw - if you put a single quote (i.e. can’t) in the initial prompt for creating the log dump, it fails with an unterminated quote error.)

I’d try to reflash your Spark back to BrewPi; there are some helpful instructions here: https://github.com/BrewPi/spark-firmware

I believe one of the Spark cores isn’t supported by BrewBlox. Not sure if this is the one that isn’t supported, but I still think your best bet is to go back to BrewPi.

you can always escape quotes in the shell if you need to with \

like this:

I\'m an escaped quote
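A minimal sketch of the common ways to get a literal single quote past the shell, assuming a POSIX shell such as bash:

```shell
# Three ways to print a literal single quote:
echo "can't"         # inside double quotes, a single quote needs no escaping
echo can\'t          # unquoted: backslash-escape it
echo 'can'\''t'      # inside single quotes: close, emit an escaped quote, reopen
```

All three print can't; the third form is the one to use when the rest of the string must stay single-quoted.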

Only the original spark core which was in our V1 is not supported, not the case here.

Running portainer on port 9000 might interfere with an internal docker proxy port and is not recommended. I would remove the old BrewPi containers and portainer.

I am willing to give some remote assistance to get you up and running and see if we made an error somewhere.

Thanks. I won’t be able to get back to it until Friday. I’ll try the portainer removal first. Will keep posted here on my progress.

There’s a lot of things going on at once here, so I’ll try to split it up:

  • I’m not sure why it suddenly stopped autodiscovering your Spark controller over WiFi. This is either a bug in our software, or some issue in your local network. I’ll try and reproduce the issue here.

  • For some reason, the BrewBlox service currently skips discovering USB devices. This is a bug that’s pretty high on the TODO list, and will be fixed shortly.

  • If you uninstall the system, don’t forget to stop the running containers first. You can inspect all containers on your system with the docker ps -a command. You can stop and remove all containers on your system by running

docker stop $(docker ps -aq) && docker rm $(docker ps -aq)
  • There are some weird issues with Portainer, where other containers crash immediately. For now we recommend not running Portainer.

  • I’ll start escaping quotes in the log strings.

  • We’re adding some more sanity checks to the install process soon (are ports already in use, are containers already running). This should solve the issue where trying to fix it only makes it worse.

  • To view logs for your spark service, you can either use the log command in brewblox-ctl, or run docker-compose logs --follow spark-one in your BrewBlox directory.


This is great. I’m arms deep in a broken washing machine. If I wrap this up before bedtime I’ll give a go at trying some of these suggestions. I can tell you which thing I’d rather be working on tonight! I appreciate the thorough response and apologize for munging multiple issues into the same thread. Great work and really looking forward to using this new brewblox stack.

The first issue was probably the USB bug you mentioned. Then trying to reconnect after that I think I probably triggered the rest of my issues.

Status update on items here:

I failed to reproduce the discovery issues for both WiFi and USB. We’ll keep an eye on it, but we’re no longer sure there’s a software bug.

Of note is that the USB device (controller) must be connected before the Spark service starts. This is a limitation in how Docker handles USB devices. You can restart the Spark service with docker-compose restart spark-one.
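As a sketch of that constraint, a small wrapper could refuse to restart the service until the device node actually exists. The device path /dev/ttyACM0 and the service name spark-one are assumptions here, not confirmed defaults:

```shell
# Only restart the Spark service once the USB serial device is present.
restart_when_present() {
    dev="$1"
    if [ -e "$dev" ]; then
        echo "device present, restarting"
        # docker-compose restart spark-one   # uncomment on a real install
    else
        echo "device missing: $dev"
    fi
}

restart_when_present /dev/ttyACM0
```

Running this before the service starts avoids the Docker limitation described above, since the container is only (re)created while the device node exists.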

brewblox-ctl setup now checks whether ports are already in use, and will try its best to stop leftover containers from previous installations.

The “reason” field in brewblox-ctl log is now made safe for quotes.

Just a thought: you may get network discovery issues if users have virtual network interfaces; VPNs and VMware (or similar) are typical culprits.

Here are the steps I’m taking as I go along:

  1. remove Portainer - using docker ps -a I saw it was still running, so I ran “docker stop portainer” and “docker rm portainer”.
  2. updated BrewBlox using “brewblox-ctl” and option 7 - Update.
  3. checked the web UI and reloaded - currently saying service not running and controller not connected. 3 red circles.

Next - that was via wifi. So let’s engage USB and restart docker:

  1. connected USB and then ran

docker-compose restart spark-one

I note that when I reconnect USB, initially USB is lit on the spark. Then USB and Wifi are lit for a few seconds. Then USB goes dark.

  2. my next move was to add the device-host line to docker-compose.yml and restart the service.

Still not connected and Service Not Started.

  3. Ran brewblox-ctl option 14, and let it upload my log to here:

https://termbin.com/2iul

And thanks @Bob_Steers for already fixing some of the things mentioned so quickly!

Let me know what other steps I could take to help diagnose what might be happening here.

No virtual networks, vmware or VPN that should get in the way. The Spark is on the same internal network as the BrewBlox server.

While I was here, and assuming my issues were due to trying to adapt the install procedures to my Ubuntu 18.04 system rather than a clean RPi3, I decided to do a clean install on the Raspberry Pi 3b, just to see.

Unfortunately, I’m finding myself in a similar place. I followed all of the steps from installing Raspbian Lite on a clean SD card onward. Setup went cleanly and without errors up to running brewblox-ctl up.

When navigating to the BrewBlox website (https://raspberrypi), the page comes up with the same issues. In this case it said Service Running was OK, but it still couldn’t connect.

Next step - I ran the Flash Firmware process.

No luck, same situation. I can see the BrewBlox UI, and Service Running is OK, but the Spark is not connected. USB is connected to power the Spark, but on the interface screen, the USB icon is greyed out and the WiFi icon is lit up with the properly assigned IP.

Log file for the clean rPi install is here:
https://termbin.com/vxh2

First of all: thanks @wactuary for the detailed information. It really helps a lot while tracking what went wrong.

As a side note: if you change the docker-compose.yml file, it needs a full restart of the system before it reads the file again (docker-compose down && docker-compose up -d). You can also do this with brewblox-ctl down up

Ubuntu

We do all our development on Ubuntu 18.04 machines, so it certainly is not an unusual configuration.

On your machine, it seems that device discovery is working correctly. The mdns service reports discovering your Spark at 192.168.1.163, but as it took nearly 50 seconds to find the device, you restarted it just as it made the connection.

I’m not sure what’s causing the delay. Discovery of already-started Spark controllers is generally instant.

Starting discovery:

spark-one_1  | 2019-02-16T00:23:02.952602305Z 2019/02/16 00:23:02 INFO     ..._devcon_spark.communication  Starting device discovery, id=None...

You refreshed the page in the UI while it was not yet connected:

spark-one_1  | 2019-02-16T00:23:13.124896559Z 2019/02/16 00:23:13 INFO     ...ox_devcon_spark.api.sse_api  Initial subscription push failed: NotConnected(<SparkConduit for None> not connected)
spark-one_1  | 2019-02-16T00:23:18.217072867Z 2019/02/16 00:23:18 INFO     ...ox_devcon_spark.api.sse_api  Initial subscription push failed: NotConnected(<SparkConduit for None> not connected)

Device found, connected (second green check in the UI). It will now update the controller time, and get settings from the datastore. Note how device discovery started at 00:23:02, and it connected at 00:23:51.

spark-one_1  | 2019-02-16T00:23:51.422602049Z 2019/02/16 00:23:51 INFO     ..._devcon_spark.communication  Connected <SparkConduit for 192.168.1.163:8332>
spark-one_1  | 2019-02-16T00:23:51.602553693Z 2019/02/16 00:23:51 INFO     ...blox_devcon_spark.commander  Spark event: "Connected to BrewBlox v0.1.0"
spark-one_1  | 2019-02-16T00:23:51.741202057Z 2019/02/16 00:23:51 INFO     ...devcon_spark.couchdb_client  <CouchDBClient for http://datastore:5984> Existing document found (430029001347343339383037-blocks-db)
spark-one_1  | 2019-02-16T00:23:51.741510418Z 2019/02/16 00:23:51 INFO     ...devcon_spark.couchdb_client  <CouchDBClient for http://datastore:5984> Existing document found (430029001347343339383037-config-db)
spark-one_1  | 2019-02-16T00:23:51.744188785Z 2019/02/16 00:23:51 INFO     ...devcon_spark.couchdb_client  <CouchDBClient for http://datastore:5984> Existing document found (430029001347343339383037-savepoints-db)
spark-one_1  | 2019-02-16T00:23:51.744602135Z 2019/02/16 00:23:51 INFO     ...blox_devcon_spark.datastore  <CouchDBConfig for spark-service/430029001347343339383037-config-db> Read 1 setting(s). Rev = 2-cca2f702b879bd0a61e6b592e2b365a4
spark-one_1  | 2019-02-16T00:23:51.746709160Z 2019/02/16 00:23:51 INFO     ...blox_devcon_spark.datastore  <CouchDBBlockStore for spark-service/430029001347343339383037-blocks-db> Read 19 blocks. Rev = 2-264d7eebef0f740c7de91d2872b2d82d
spark-one_1  | 2019-02-16T00:23:51.748289940Z 2019/02/16 00:23:51 INFO     ...blox_devcon_spark.datastore  <CouchDBConfig for spark-service/430029001347343339383037-savepoints-db> Read 0 setting(s). Rev = 1-bc82f7787f7b495c97a6fcf633688885

Shutting down (00:23:54).

spark-one_1  | 2019-02-16T00:23:54.648340644Z 2019/02/16 00:23:54 INFO     brewblox_service.events         Closing <EventListener for "eventbus">
spark-one_1  | 2019-02-16T00:23:54.648422328Z 2019/02/16 00:23:54 INFO     brewblox_service.events         Closing <EventPublisher for "eventbus">
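For reference, the delay discussed above can be read straight off the log timestamps. A quick check (timestamps copied from the excerpts; the log's own format string is an assumption):

```python
from datetime import datetime

# Gap between discovery start (00:23:02) and connection (00:23:51)
fmt = "%Y/%m/%d %H:%M:%S"
started = datetime.strptime("2019/02/16 00:23:02", fmt)
connected = datetime.strptime("2019/02/16 00:23:51", fmt)
print((connected - started).seconds)  # 49
```

So discovery took 49 seconds, and the service was shut down only 3 seconds after the connection was finally made.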

Raspberry

Here discovery seems not to work at all. Discovery started at 03:22:10, and by 03:24:50 it still hadn’t received a response. It is using the default compose file, so the connection may yet be fixed by taking the following steps:

  • stop the system (docker-compose down or brewblox-ctl down)
  • add --device-host=192.168.1.163 to the spark-one command in docker-compose.yml
  • start the system (docker-compose up -d or brewblox-ctl up)
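After step 2, the spark-one command section of docker-compose.yml would look roughly like this. The image tag and the mdns flag are assumptions based on a default install; only the --device-host line is the actual change:

```yaml
  spark-one:
    image: brewblox/brewblox-devcon-spark:rpi-${BREWBLOX_RELEASE:-stable}
    command: >
      --name=spark-one
      --device-host=192.168.1.163
      --mdns-port=${BREWBLOX_PORT_MDNS:-5000}
```

With --device-host set, the service connects directly to that IP instead of waiting on mDNS discovery.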

Conclusion

The whole thing seems to almost work, but still has a few kinks. If you want us to try and help out during your next attempt, I can invite you to the Slack chat.

Meanwhile, I’ll take a look at pushing more information to the UI when the Spark is busy discovering / connecting. That should offer some more insight in what it’s trying to do, without having to resort to reading Docker logs.

So I came over to my desktop this morning, and of the many tabs open in my browser, one was still pointing to the Ubuntu 18.04 machine, and BrewBlox had at some point connected! I guess that’s the connection you saw in the logs, which I hadn’t realized happened before shutting it down. Unfortunately, since I had shut down that Docker session and moved the Spark to the Pi, it was just a dead screen. But it was heartening to see it there!

I moved from Ubuntu to the Raspberry Pi just so I would have a clean system to start from, in case anything else on that machine was conflicting. I use that other box to serve my music, and it runs a few other servers, so it may have had packages or processes that conflicted. At least the Pi is clean and fresh, and I can wipe it and restart if needed. Eventually, I’d definitely prefer to get it running on the Ubuntu box because it’s so much faster and more responsive. At least with BrewPi, I found that if a brew had gone on for a long time, the rPi would bog down on the graphing data after a few weeks’ worth. Once I moved to the larger, faster box, those issues disappeared. I’m sure less frequent data polling would have helped, but where’s the fun in that! :slight_smile:

Anyway, this morning I added the --device-host line to the yml file and brought docker down and then up. So far no connection. Log file below and I’m happy to get on a slack discussion to track this down. Let me know how to connect, I have a slack account already.

https://termbin.com/zy4c

thanks!

By the way, it looks like this page:
https://brewblox.netlify.com/user/examples/single_spark.html#service-spark
might be out of date. It says to use the flag --device-url instead of --device-host. I gave it a shot to see what would happen, but I get Service Not Running, and an error flashing briefly at the bottom of the screen saying something about bad JSON at position 0.

Reverted back to --device-host.

Also this page:
https://brewblox.netlify.com/user/adding_services.html#step-4-connect-to-a-specific-spark
refers to rpi-stable, but I believe that’s currently rpi-edge. That could be confusing if not commented on, although I’m sure eventually -stable will be the right answer, so I’m not sure if it should be updated on this page.

Anyway, since my Spark is on WiFi but also connected via USB, I figured I’d switch the yml file to the serial number, figuring that would be less ambiguous and should work for either path.

I’m having the same issues shown here (discovery taking a long long time and not working).

Perhaps the difference is that I’m using the USB cable and do not plan to connect through WiFi (I didn’t run brewblox-ctl wifi).

I already tried adding the device-id SER number in the YAML file without any success.

Service is running but the controller won’t connect.

spark-one_1  | 2019/02/27 18:33:30 INFO     brewblox_service.events         Closing <EventListener for "eventbus">
spark-one_1  | 2019/02/27 18:33:30 INFO     ..._devcon_spark.communication  Starting device discovery, id=3a001e001147353236343033...
spark-one_1  | 2019/02/27 18:33:30 INFO     ...ox_devcon_spark.broadcaster  Starting Broadcaster
spark-one_1  | 2019/02/27 18:33:30 INFO     ...ox_devcon_spark.api.sse_api  Starting SSEPublisher

It’s stuck at this point after 10 minutes.

After trying to open the UI I got:

spark-one_1  | 2019/02/27 18:29:09 INFO     ...ox_devcon_spark.api.sse_api  Initial subscription push failed: NotConnected(<SparkConduit for None> not connected)

My docker-compose.yml:

  spark-one:
    image: brewblox/brewblox-devcon-spark:rpi-${BREWBLOX_RELEASE:-stable}
    privileged: true
    depends_on:
      - eventbus
      - datastore
    restart: unless-stopped
    labels:
      - "traefik.port=5000"
      - "traefik.frontend.rule=PathPrefix: /spark-one"
    command: >
      --name=spark-one
      --device-id=3a001e001147353236343033
      --mdns-port=${BREWBLOX_PORT_MDNS:-5000}

:rofl::rofl::rofl::rofl:

Tried to connect via Wifi and it worked (the discovery).

Docker containers can only access devices that are present when they boot. Don’t know if that could be the issue here too.

As a side note: device-id does not change where it looks during discovery, it’s only an additional check: discovered devices must have this ID.

If you want to enforce USB, you can use --device-serial (usually /dev/ttyACM0).
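Putting the two flags side by side, a command section that enforces USB might look roughly like this (/dev/ttyACM0 is the usual default on Linux, but verify with ls /dev/ttyACM*; the ID value is the one from the compose file above):

```yaml
    command: >
      --name=spark-one
      --device-serial=/dev/ttyACM0
      --device-id=3a001e001147353236343033
```

Here --device-serial picks the transport (USB), while --device-id only filters which discovered controller is accepted.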

I’ll update documentation to clarify the distinction between the args.

Hello team and all good people out there.
I hope that someone might have some spare time to help a fellow brewer.
I have been using BrewPi 0.5.10 with the Spark 3.
I’m a bit of a noob at all this stuff, but at least I managed to get the Spark going.
Now that I’m eager to test the new BrewBlox, I can’t get it running at all.
I have been following the guide step by step, all the way to step 3, where you are supposed to connect to the Spark 3 via SSH.
The Spark is connected to my WiFi (since BrewPi) and has an IP address showing on the screen.
When I try to connect via SSH with PuTTY, I keep getting an error message saying “Connection refused”. Where do I go from here?
I have no containers running in Docker, and I even tried to connect via PuTTY with Docker uninstalled, in case that helps.

Thank you in advance.

You should SSH into the raspberry pi you’re installing the software on, not the brewpi controller itself. Hope that helps!