Hello, something weird is happening that I don't have a way of solving myself. I've had this system up and running for months, but recently had a problem with Docker containers taking up gigabytes of space. That is solved, but in the process of solving that issue I updated and upgraded the Raspberry Pi, then updated the Brewblox software; all went well. Afterwards the dashboard comes up successfully, but none of the probes are giving me a reading. I have another Spark service which is correctly reporting temp values. The service and controller are both up and communicating correctly. The onewire addresses are correct too. I've rebooted the controller several times and reseated the probes, but no joy.
The /var/lib/docker/containers directory had filled up to 250 GB. Due to my own idiocy!
I have a dockerized MQTT pub/sub Python script that captures Spark actuator events and then drives an MQTT switch, which in turn controls glycol solenoids.
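For context, the core of such a script is a small translation step from a Spark state event to a switch command. This is only a sketch: the block name, switch topic, and payload layout below are assumptions, not my actual setup; in the real script this function would sit inside an MQTT client's message callback (e.g. paho-mqtt), which then republishes the returned command.

```python
import json

# Assumed names for illustration only:
ACTUATOR_BLOCK = "glycol-actuator"   # hypothetical Spark digital actuator block
SWITCH_TOPIC = "glycol/switch/set"   # hypothetical MQTT switch command topic


def translate(payload: bytes):
    """Map a Spark state event payload to an MQTT switch command.

    Returns (topic, message) when the watched actuator is present,
    or None when the event is not relevant.
    """
    data = json.loads(payload)
    blocks = data.get("data", {}).get("blocks", [])
    for block in blocks:
        if block.get("id") == ACTUATOR_BLOCK:
            # Assumed encoding: state == 1 means the actuator is active
            active = block.get("data", {}).get("state") == 1
            return (SWITCH_TOPIC, "ON" if active else "OFF")
    return None
```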
I had left that running without any log cleanup, so all stderr/stdout was being appended to a container log file that kept growing. I've now limited the max log size. Should be good now!
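For anyone hitting the same thing, a per-service log limit in a docker-compose file looks roughly like this (the service name is illustrative, and the exact sizes are a matter of taste):

```yaml
services:
  glycol-script:          # illustrative service name
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate the log file after 10 MB
        max-file: "3"     # keep at most 3 rotated files
```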
It seems the sensors on spark-fv-control are all down, while spark-one is fine.
Typically this is caused by a single malfunctioning 1-wire device (sensor, actuator or extension board). A drawback of 1-wire is that if a single device malfunctions, it takes the entire bus down with it.
You can try unplugging all 1-wire devices, and then plugging them back in one by one, until everything goes down again.
Both Sparks report having been restarted just before the log was generated. Is this expected?
Edit: traefik complains about being unable to read the cert. If your setup is older than a year, you may want to refresh the SSL certificate with brewblox-ctl makecert, and then restart traefik.
Ok Bob, your suggestion makes sense.
So I took a spare probe, put it on spark-one, hit “Discover new 1-wire devices”, and it picked it up fine.
Deleted the block.
Removed all probes from spark-fv-control and inserted the spare probe into spark-fv-control (only one probe connected). Hit “Discover new 1-wire devices” and got “Discovered no new blocks”.
That suggests that the problem lies in the board’s hardware. Could you please open the casing, and take some close-up photos of the board and the onewire ports?
There indeed are no obviously damaged components on the board itself. The corner 1-wire port on the side of the power supply is more suspect. I can’t tell whether it’s damaged or just a bit grimy.
Cleaning and inspecting the contacts can’t hurt, and may fix the problem. For more in-depth expertise and next steps (repair, sending it in for repairs, return under warranty, etc.) I’d need to consult Elco.
When did you buy the Spark?
The Spark 3 has a practical limit of ~60 blocks before it runs out of memory.
If you’re below that limit, and would have enough IO ports, then yes, you can use a single Spark for everything.
It’s not a one-click migration (our normal import wipes all blocks before loading the imported ones), but not terribly complicated either.
Steps are:

- plug the sensors into spark-one
- discover and rename the sensor blocks
- get the blocks from spark-fv-control
- sort them so that no block depends on a block that isn’t created yet
- remove the sensor blocks (they already exist on spark-one)
- remove the nid fields
- create the blocks
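The middle steps (filter, strip nid, sort by dependency) can be sketched as a small helper. This assumes blocks are exported as a list of dicts with "id", "nid", "type", and "data" fields, and that links in the data are encoded as {"__bloxtype": "Link", "id": "<target>"}; check your own export before relying on that shape, and note that "inputId" below is just an illustrative field name.

```python
def find_links(value):
    """Recursively collect the block IDs that a block's data links to."""
    if isinstance(value, dict):
        if value.get("__bloxtype") == "Link" and value.get("id"):
            yield value["id"]
        for v in value.values():
            yield from find_links(v)
    elif isinstance(value, list):
        for v in value:
            yield from find_links(v)


def prepare(blocks, skip_types=("TempSensorOneWire",)):
    """Drop sensor blocks, strip nid fields, and sort the rest so that
    link targets come before the blocks that depend on them."""
    kept = [b for b in blocks if b.get("type") not in skip_types]
    for b in kept:
        b.pop("nid", None)
    ids = {b["id"] for b in kept}
    ordered, placed = [], set()
    while len(ordered) < len(kept):
        progress = False
        for b in kept:
            if b["id"] in placed:
                continue
            deps = set(find_links(b.get("data", {}))) & ids
            if deps <= placed:
                ordered.append(b)
                placed.add(b["id"])
                progress = True
        if not progress:  # circular links: append the remainder as-is
            ordered.extend(b for b in kept if b["id"] not in placed)
            break
    return ordered
```

The output list can then be fed to block creation one by one, in order.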
You mentioned an actuator script earlier, so this may be something you’re comfortable doing yourself. If not, I can put together a quick script.