BrewBlox and Spark 3 Wifi connectivity issue

I’m wondering if someone can point me in the right direction.

I’ve just set up BrewBlox from scratch on my Pi running Raspbian, and everything appears to be working correctly.
I can access the UI via the wireless interface, and I can access Portainer and manage the various containers.
During the install I had my Spark 3 plugged in via USB, and I flashed the Spark with the Wi-Fi firmware.
It rebooted, obtained an IP address, and is showing me temps on the screen.
I can access the Spark from BrewBlox as long as it is plugged in via USB, but as soon as I remove the USB cable, BrewBlox loses communication with the Spark.

I can ping both the Pi and the Spark, so both are successfully connected to my Wi-Fi.

I’ve done quite a bit of googling and searched through the topics here, but nothing covers how to troubleshoot connectivity issues between the Pi and the Spark.

What am I missing and what should I be looking at to give me more clues as to what is going wrong?

Thanks in advance

Matt

I have just run through “How to report a problem” and can confirm everything is up to date.
I have run brewblox-ctl logs, and they can be found here: https://termbin.com/h15n
I don’t have any Blox configured at this stage, as I haven’t made it that far :)

Connection settings are described in https://brewblox.netlify.com/user/connect_settings.html.

If your Spark has an IP address, then the likely problem is mDNS device discovery. To confirm, you can try to directly connect to your Spark by IP address.
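A quick way to test raw reachability from the Pi (beyond ping) is to try opening a TCP connection to the Spark. This is just a sketch: SPARK_IP is a placeholder for your Spark’s address, and 8332 is assumed to be the Spark’s TCP connection port (it is the port the Spark service reports connecting to in its logs).

```python
# Quick reachability check from the Pi to the Spark.
# SPARK_IP is a placeholder for your Spark's IP address;
# 8332 is assumed to be the Spark's TCP connection port.
import socket

def can_connect(host: str, port: int = 8332, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print(can_connect("SPARK_IP"))  # replace SPARK_IP with the actual address
```

If this prints False while ping works, something is blocking the port itself; if it prints True, the problem is more likely discovery, and directly connecting by IP address should help.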

To do so, you’ll have to edit your docker-compose.yml file. You can do so by running nano docker-compose.yml in your brewblox directory.

Scroll down until you find the spark-one service, and add --device-host=SPARK_IP to the command arguments (replace SPARK_IP with the Spark’s actual IP address). Use spaces, not tabs, for indentation.

Your Spark service should now look like:

  spark-one:
    image: brewblox/brewblox-devcon-spark:rpi-${BREWBLOX_RELEASE:-stable}
    privileged: true
    depends_on:
      - eventbus
      - datastore
    restart: unless-stopped
    labels:
      - "traefik.port=5000"
      - "traefik.frontend.rule=PathPrefix: /spark-one"
    command: >
      --name=spark-one
      --mdns-port=${BREWBLOX_PORT_MDNS:-5000}
      --device-host=SPARK_IP

Run the following commands to restart your spark service and apply the changes (docker-compose restart alone would not reload the edited compose file):

docker-compose stop spark-one
docker-compose up -d

Troubleshooting is often done by looking at the service logs. brewblox-ctl log dumps them all, but you can follow the spark service logs yourself by running docker-compose logs --follow spark-one.

Hi Bob,

Thanks, I really appreciate the response. I’ve done as you suggested, and the logs (see below) suggest that my UI is now able to see the Spark.

spark-one_1  | 2019/08/08 13:35:23 INFO     ..._devcon_spark.communication  Connected <SparkConduit for 172.16.108.15:8332>
spark-one_1  | 2019/08/08 13:35:23 INFO     ...blox_devcon_spark.commander  HandshakeMessage(name='BREWBLOX', firmware_version='6d9a4a3f', proto_version='7a2a6a9', firmware_date='2019-07-25', proto_date='2019-07-15', system_version='1.2.1-rc.2', platform='p1', reset_reason_hex='28', reset_data_hex='00', reset_reason='POWER_DOWN')

Success, but unfortunately I’ve just moved on to the next problem.
The next part of the instructions suggests there is an actions menu at the top right, but I don’t have one at all. I’ll see if I can hunt down more info.

Appreciate your assistance…

Thanks

The actions menu is on the Spark service page in the UI. Open the sidebar (hamburger menu), and click on Spark service 'spark-one' under Services.

I don’t have any. I’ve just re-installed and still don’t have any.
I’m obviously missing something fundamental.

The default ‘spark-one’ service is added by the installer. You’re likely suffering from a bug where the datastore occasionally doesn’t start. We’re currently trying to pin down why this happens.

  • Close the UI
  • Run docker-compose stop datastore; docker-compose up -d; docker-compose logs --follow datastore
  • Wait a bit for the message about CouchDB running in Admin Party mode to appear.
  • If it doesn’t, press Ctrl+C and run the commands again.

Not a lot of clues here.

datastore_1  | ****************************************************
datastore_1  | WARNING: CouchDB is running in Admin Party mode.
datastore_1  |          This will allow anyone with access to the
datastore_1  |          CouchDB port to access your database. In
datastore_1  |          Docker's default configuration, this is
datastore_1  |          effectively any other container on the same
datastore_1  |          system.
datastore_1  |          Use "-e COUCHDB_USER=admin -e COUCHDB_PASSWORD=password"
datastore_1  |          to set it in "docker run".
datastore_1  | ****************************************************
datastore_1  | [os_mon] memory supervisor port (memsup): Erlang has closed
datastore_1  | [os_mon] cpu supervisor port (cpu_sup): Erlang has closed
(the same Admin Party warning and os_mon lines repeat twice more)

Nothing really specific about being in admin mode.
Should I do a full re-install?

It does look like it’s now up. To confirm, add a dashboard in the UI, and refresh the page. If it’s still there, the datastore works.
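If you’d rather check from the command line, here’s a rough sketch: the datastore is CouchDB, which answers GET / with a JSON banner containing a "couchdb" key when it’s up. The URL you would use depends on how the datastore is exposed in your setup, so treat it as an assumption.

```python
# Probe a CouchDB-backed datastore: GET / returns a JSON banner
# with a "couchdb" key when the database is up.
# The URL is setup-dependent; point it at wherever your datastore is exposed.
import json
import urllib.request

def datastore_up(url: str, timeout: float = 3.0) -> bool:
    """Return True if url answers with CouchDB's JSON welcome banner."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return "couchdb" in json.load(resp)
    except (OSError, ValueError):
        return False
```

For example, datastore_up('http://localhost:5984/') checks CouchDB’s default port, if that port happens to be published on the host.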

When the datastore is acting up, it doesn’t log anything at all - not even that warning about CouchDB running in Admin Party mode.

If somehow it managed to lose the default Spark service, you can simply add it yourself: make a new one with ID spark-one.

Aaah. See what happens when you know what you’re doing.

Thanks for your assistance, Bob. Really appreciate it!