Probes not giving a reading

Hello, something weird is happening that I don’t have a way of solving myself. I’ve had this system up and running for months now, but recently had a problem with Docker containers taking up gigabytes of space. That is solved, but in the process of solving that issue I updated and upgraded the Raspberry Pi, then updated the Brewblox software, and all went well. Afterwards the dashboard comes up successfully, but none of the probes are giving me a reading. I have another Spark service which is correctly reporting temp values. The service and controller are both up and communicating correctly. The OneWire addresses are correct also. I’ve rebooted the controller several times and reseated the probes, but no joy.

(screenshots: sensors, sensors2)

Here is a recent log file.

Any help appreciated.

Log file: https://termbin.com/ovch

In your log, none of your services seem to be up and running. What happens if you run brewblox-ctl up, and then force refresh the page?

If I may ask, what was the issue with containers taking up too much space?

https://termbin.com/aixk

That should be better

The /var/lib/docker/containers directory was full at 250 GB, due to my own idiocy!
I have a Dockerised MQTT pub/sub Python script that captures Spark actuator events and then drives an MQTT switch, which in turn controls glycol solenoids.
I had left that running without any log cleanup, so all stderr/stdout was being written to the container’s log, which kept growing. I have now limited the max log size. Should be good now!
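
For anyone hitting the same thing, this is roughly what the limit looks like in the script’s service entry in docker-compose.yml (the service name below is just an example; the same options can also be set globally in /etc/docker/daemon.json):

  glycol-mqtt:                 # example name for the script’s service
    # image, environment, etc. unchanged
    logging:
      driver: json-file
      options:
        max-size: "10m"        # rotate the log file at 10 MB
        max-file: "3"          # keep at most 3 rotated files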

It seems the sensors on spark-fv-control are all down, while spark-one is fine.
Typically this is caused by a single malfunctioning 1-wire device (sensor, actuator or extension board). A drawback of 1-wire is that if a single device malfunctions, it takes the entire bus down with it.

You can try unplugging all 1-wire devices, and then plugging them back in one by one, until everything goes down again.

Both Sparks report having been restarted just before the log was taken. Is this expected?

Edit: traefik complains about being unable to read the cert. If your setup is older than a year, you may want to refresh the SSL certificate with brewblox-ctl makecert, and then restart traefik.
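
For reference, run from the brewblox install directory that would be roughly:

  brewblox-ctl makecert
  docker-compose restart traefik

(The traefik service name is an assumption based on a default install; check your docker-compose.yml.)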

Ok Bob, your suggestion makes sense.
So I took a spare probe and put it on spark-one, hit “Discover new 1-wire devices”, and it picked it up fine.
Deleted the block.
Removed all probes from spark-fv-control and inserted the spare probe (only one probe connected) onto spark-fv-control. Hit “Discover new 1-wire devices” and got “Discovered no new blocks”.

I am not sure what to do now, any ideas?

If you go to the Spark service, is the block present anyway? We automatically call discover in multiple places.

Otherwise:

  • is it recognized by any other physical onewire port?
  • if you open the casing, is there any physical or burn damage to the board?

No - no new blocks appeared on the spark-fv-control service.

And putting the spare probe into another port made no difference.

That suggests that the problem lies in the board’s hardware. Could you please open the casing, and take some close-up photos of the board and the onewire ports?



All looks pretty clean, but keen to hear your view.

There are indeed no obviously damaged components on the board itself. The corner 1-wire port next to the power supply is more suspect: I can’t tell whether it’s damaged or just a bit grimy.

Cleaning and inspecting the contact can’t hurt, and may fix the problem. For more in-depth expertise and next steps (repair, send in for repairs, return under warranty, etc) I’d need to consult Elco.
When did you buy the Spark?

March 2021.

Order #100007829-1

Thanks


Some of the ICs directly behind the green terminal blocks look like they could be burnt out, but the photo is not clear enough to tell.

Hello, I’ve cleaned the terminals and don’t observe any issues with the components.

Should we arrange for a replacement unit? Or wait for the Spark 4 perhaps?

Regards

Peter

If the onewire bus is defective, chances are that we can repair it by replacing the terminals.

Our shipping address is:

BrewPi BV
Torenallee 32-42
5617BD Eindhoven
The Netherlands

Just thinking out loud, but as I have two Sparks, can I easily migrate all blocks/services from the bad one to the good one?

Cheers

The Spark 3 has a practical limit of ~60 blocks before it runs out of memory.
If you’re below that limit, and would have enough IO ports, then yes, you can use a single Spark for everything.

It’s not a one-click migration (our normal import wipes all blocks before loading the imported ones), but not terribly complicated either.

Steps are:

  • plug in sensors to spark-one
  • discover and rename sensor blocks
  • get blocks from spark-fv-control
  • sort them to avoid blocks depending on blocks that aren’t created yet.
  • remove sensor blocks (already exist on spark-one)
  • remove nid fields
  • create blocks

You mentioned an actuator script earlier, so this may be something you’re comfortable doing yourself. If not, I can put together a quick script.
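
For reference, a rough sketch of what such a script could look like. The endpoint paths (/blocks/all/read, /blocks/create), the block type names, and the host address are assumptions based on a recent release, so check them against your own setup before running anything:

  import requests

  HOST = 'https://raspberrypi'   # placeholder: address of your brewblox host
  SOURCE = 'spark-fv-control'    # service to migrate blocks away from
  TARGET = 'spark-one'           # service that will run everything

  # Blocks the controller creates itself, plus sensors that were already
  # discovered and renamed on the target. Adjust to your own setup.
  SKIP_TYPES = {'SysInfo', 'OneWireBus', 'Ticks', 'WiFiSettings',
                'TouchSettings', 'DisplaySettings', 'TempSensorOneWire'}

  # Read all blocks from the source service.
  # A default install uses a self-signed cert, hence verify=False.
  resp = requests.post(f'{HOST}/{SOURCE}/blocks/all/read', json={}, verify=False)
  resp.raise_for_status()

  for block in resp.json():
      if block['type'] in SKIP_TYPES:
          continue
      block.pop('nid', None)        # controller-assigned IDs must not be copied
      block['serviceId'] = TARGET   # retarget the block to the other service
      # Blocks must be created in dependency order: creating a block that
      # refers to one that doesn't exist yet will fail. Reorder or rerun.
      requests.post(f'{HOST}/{TARGET}/blocks/create',
                    json=block, verify=False).raise_for_status()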

Ok, I have migrated across to the working Spark and will send you the faulty unit Monday 26th July.

Regards
Peter
