Brewblox install on a Synology NAS official guide

I’ve taken some time this morning and worked on migrating over to the new Brewblox, and I’ve been able to successfully flash and update the Spark, but now the controller cannot see the Spark over WiFi. It works just fine when connected over USB. I got a successful WiFi connection during setup, and the Spark has an IP address. It’s only the controller that doesn’t see it.

Any ideas?

Do you see the wifi address appear when running brewblox-ctl discover-spark?

I get a No buffer space available error.

How large is your SD card? It may be you’re out of space.

It’s on a Synology NAS that has several free TB. That shouldn’t be an issue.

I got it working and here are the results.

Does your service now discover the Spark? If not, could you please run brewblox-ctl log?

I receive this error message.

You may need to install that separately. In the Synology packages, can you find “net-tools” or “netcat”?

I’ll keep researching but I’m not seeing any netcat or net-tools as a Synology package or even a community package.

Does the Synology box need to have WiFi, or can the Spark use the WiFi signal from the access points I have around the house?

They just need to be on the same network. @Elco, do you know which package provides nc on Synology?

Edit: in the meantime, you can also use FileZilla to fetch the brewblox.log file from your brewblox dir and upload it here.

Even though I received an error message for the log it looks like it created one.

brewblox.log (105.4 KB)

Yes, nc is used to upload the log after it was created.

I’m seeing a bunch of errors, some of which are memory-related (I assume the same issue as the buffer error).
You may want to run brewblox-ctl disable-ipv6 to fix some issues with ipv6 + docker networking.
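For context, disable-ipv6 applies a system-wide kernel setting. A sketch of the kind of change involved, assuming the usual sysctl-based approach (check brewblox-ctl's own output for the exact keys it sets):

```
# Hypothetical entries in /etc/sysctl.conf that disable IPv6 system-wide;
# brewblox-ctl disable-ipv6 applies an equivalent change.
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```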

It’s also unable to show its logs. Again, that’s something @Elco will have to help with: what is the preferred log driver on Synology?

It’s a database-based log inside the Docker package.
You can access the log through the Docker app in the web interface.

Maybe it is the same issue I had with DNS discovery. In that case, adding --device-host 192.168.1.100 to the spark service in docker-compose.yml should help. This skips DNS discovery and uses a fixed IP instead. Replace it with the actual IP of the Spark, of course.
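A sketch of what that override might look like in docker-compose.yml. The service name and image tag here are placeholders modeled on a typical brewblox-ctl-generated file, and 192.168.1.100 stands in for the Spark's real IP:

```yaml
# Hypothetical spark service entry; the --device-host flag is the point
# here, the rest mirrors what brewblox-ctl typically generates.
  spark-one:
    image: brewblox/brewblox-devcon-spark:edge
    restart: unless-stopped
    command: --name=spark-one --device-host=192.168.1.100
```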

Thanks. For the IP address, is that the IP of the Spark or of the Synology server?

Ok, looks like adding the Spark IP address as the device host worked. Good thing it’s working because I have a brew day scheduled for early tomorrow morning.

Thank you @Bob_Steers and @Elco for the help.

The Spark, because you are telling the server where to find the Spark. But you figured that out already.
You’ll probably want to give the Spark a static IP in your router.

I’ll try to figure out why mDNS has issues on the NAS.

Sorry to drag up this old chain. I’m trying to get this set up again on my Synology after a long break. Is it possible to just create a stack in Docker to start all of the containers, or do I HAVE to use brewblox-ctl?

brewblox-ctl does nothing you can’t do yourself, but it does do quite a lot. Is there any specific reason you’d like to avoid it?

Basically I was trying to get around needing to run brewblox-ctl and just use Portainer and a stack to run it.
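For anyone attempting this, a heavily simplified sketch of a compose stack that could be pasted into Portainer. The service names, images, and flags are assumptions based on what brewblox-ctl typically generates, and a real install also needs the config files, datastore volumes, and certificates that brewblox-ctl normally creates for you:

```yaml
# Hypothetical minimal Brewblox stack; NOT a complete install.
version: "3.7"
services:
  eventbus:
    image: brewblox/mosquitto:edge
    restart: unless-stopped
  history:
    image: brewblox/brewblox-history:edge
    restart: unless-stopped
  spark-one:
    image: brewblox/brewblox-devcon-spark:edge
    restart: unless-stopped
    command: --name=spark-one --device-host=192.168.1.100
```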