First of all: thanks @wactuary for the detailed information. It helps a lot when tracking down what went wrong.
As a side note: if you change the docker-compose.yml file, the system needs a full restart before it reads the file again (`docker-compose down && docker-compose up -d`). You can also do this with `brewblox-ctl down` followed by `brewblox-ctl up`.
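Spelled out, the restart sequence looks like this (run from the directory that contains your docker-compose.yml):

```shell
# Stop and remove all BrewBlox containers.
# Your settings and history data live in named volumes and the datastore,
# so they survive this step.
docker-compose down

# Recreate and start the containers in the background.
# docker-compose.yml is read again at this point, so any edits take effect.
docker-compose up -d
```

`brewblox-ctl down` and `brewblox-ctl up` are the brewblox-ctl equivalents of these two commands.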
Ubuntu
We do all our development on Ubuntu 18.04 machines, so it certainly is not an unusual configuration.
On your machine, device discovery seems to be working correctly. The mdns service reports discovering your Spark at 192.168.1.163, but because it took nearly 50 seconds to find the device, you restarted the service just as it made the connection.
I’m not sure what’s causing the delay. Discovery of already-started Spark controllers is generally instant.
Starting discovery:
spark-one_1 | 2019-02-16T00:23:02.952602305Z 2019/02/16 00:23:02 INFO ..._devcon_spark.communication Starting device discovery, id=None...
You refreshed the page in the UI while the service was not yet connected:
spark-one_1 | 2019-02-16T00:23:13.124896559Z 2019/02/16 00:23:13 INFO ...ox_devcon_spark.api.sse_api Initial subscription push failed: NotConnected(<SparkConduit for None> not connected)
spark-one_1 | 2019-02-16T00:23:18.217072867Z 2019/02/16 00:23:18 INFO ...ox_devcon_spark.api.sse_api Initial subscription push failed: NotConnected(<SparkConduit for None> not connected)
Device found, and connected (the second green check in the UI). It now updates the controller time, and gets its settings from the datastore. Note that device discovery started at 00:23:02, and the connection was made at 00:23:51.
spark-one_1 | 2019-02-16T00:23:51.422602049Z 2019/02/16 00:23:51 INFO ..._devcon_spark.communication Connected <SparkConduit for 192.168.1.163:8332>
spark-one_1 | 2019-02-16T00:23:51.602553693Z 2019/02/16 00:23:51 INFO ...blox_devcon_spark.commander Spark event: "Connected to BrewBlox v0.1.0"
spark-one_1 | 2019-02-16T00:23:51.741202057Z 2019/02/16 00:23:51 INFO ...devcon_spark.couchdb_client <CouchDBClient for http://datastore:5984> Existing document found (430029001347343339383037-blocks-db)
spark-one_1 | 2019-02-16T00:23:51.741510418Z 2019/02/16 00:23:51 INFO ...devcon_spark.couchdb_client <CouchDBClient for http://datastore:5984> Existing document found (430029001347343339383037-config-db)
spark-one_1 | 2019-02-16T00:23:51.744188785Z 2019/02/16 00:23:51 INFO ...devcon_spark.couchdb_client <CouchDBClient for http://datastore:5984> Existing document found (430029001347343339383037-savepoints-db)
spark-one_1 | 2019-02-16T00:23:51.744602135Z 2019/02/16 00:23:51 INFO ...blox_devcon_spark.datastore <CouchDBConfig for spark-service/430029001347343339383037-config-db> Read 1 setting(s). Rev = 2-cca2f702b879bd0a61e6b592e2b365a4
spark-one_1 | 2019-02-16T00:23:51.746709160Z 2019/02/16 00:23:51 INFO ...blox_devcon_spark.datastore <CouchDBBlockStore for spark-service/430029001347343339383037-blocks-db> Read 19 blocks. Rev = 2-264d7eebef0f740c7de91d2872b2d82d
spark-one_1 | 2019-02-16T00:23:51.748289940Z 2019/02/16 00:23:51 INFO ...blox_devcon_spark.datastore <CouchDBConfig for spark-service/430029001347343339383037-savepoints-db> Read 0 setting(s). Rev = 1-bc82f7787f7b495c97a6fcf633688885
Shutting down (00:23:54).
spark-one_1 | 2019-02-16T00:23:54.648340644Z 2019/02/16 00:23:54 INFO brewblox_service.events Closing <EventListener for "eventbus">
spark-one_1 | 2019-02-16T00:23:54.648422328Z 2019/02/16 00:23:54 INFO brewblox_service.events Closing <EventPublisher for "eventbus">
Raspberry
Here, discovery does not seem to work at all. It started at 03:22:10, and by 03:24:50 it still had not received a response. You're using the default compose file, so the connection issue may be solved by skipping discovery and setting the device address explicitly:
- stop the system (`docker-compose down` or `brewblox-ctl down`)
- add `--device-host=192.168.1.163` to the spark-one command in docker-compose.yml
- start the system (`docker-compose up -d` or `brewblox-ctl up`)
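For reference, the relevant part of docker-compose.yml would then look roughly like this. This is a sketch: keep whatever image, ports, and other settings your file already has for the spark-one service, and only add the `--device-host` argument to its command.

```yaml
  spark-one:
    # ...existing image, ports, and other settings stay unchanged...
    command: >
      --name=spark-one
      --device-host=192.168.1.163
```

With `--device-host` set, the service connects directly to that IP address instead of waiting for mDNS discovery to find the controller.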
Conclusion
The whole thing seems to almost work, but there are still a few kinks. If you'd like us to help out during your next attempt, I can invite you to the Slack chat.
Meanwhile, I’ll take a look at pushing more information to the UI while the Spark is busy discovering / connecting. That should offer more insight into what it’s trying to do, without having to resort to reading Docker logs.