Spark-Core Development Flash Management and Wifi

I’ve been keen to see if I could get the BrewPi spark-core to talk over wifi.

After the first round of digging it became clear that the Spark Cloud had been disabled to free up flash space. (The loader gives you a nice error when you overflow, e.g. when trying to make the main build without the right command-line options.)

But I was curious how much of that flash was taken up by cloud-specific code, and whether you could squeeze the WiFi and a TCPServer into the firmware.

As it happens you can fire up the WiFi without any problems, but trying to get the TCPServer going kicks the flash over the memory limit.
[Albeit without telling you, like it did with the cloud enabled. I spent about 4 hours working out why my code wasn't updating: the loader doesn't complain, the web upload of the firmware works fine, the device reboots, but the version doesn't bump! Eventually I switched to DFU for upload, and that does seem to reliably complain about the last bytes not being writeable.]

Main 2.10.0 = 104k
Main 2.10.0 + WiFi = 107k
Main 2.10.0 + WiFi + TCP > 108k
fixed-point (9/6/15) = ~96k
fixed-point (9/6/15) + blake temp space saver = 88k

Which brings me to the question - is there a better way to manage flash on the core to free up enough space to get this going?

I'm guessing the answer might be 'not easily on the Core; wait for the Photon, it has 1 MB of flash'.

On the Photon, it is definitely easier, because of the extra space and the work Spark/Particle has put into this.

Just 4 lines of code will publish a variable to the Particle Cloud.


SYSTEM_MODE(AUTOMATIC); // enable wifi + cloud

// global
double fridgeTemp = 0;

// in setup()
Spark.variable("fridge", &fridgeTemp, DOUBLE); // publish variable

// in loop()
fridgeTemp = double(tempControl.fridgeSensor->readFastFiltered())/512 + 48.0; // set variable and remove internal offset

And boom, variable accessible anywhere in the world.
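Pulled together into a single stand-alone sketch (readFridgeTemp() here is just a hypothetical stand-in for the tempControl call above, so the example compiles on its own), it looks like this:

#include "application.h"   // Spark/Particle firmware header (implicit in the Web IDE)

SYSTEM_MODE(AUTOMATIC);    // enable wifi + cloud

double fridgeTemp = 0;     // global, so the cloud can read it at any time

double readFridgeTemp() {  // hypothetical stand-in for tempControl.fridgeSensor
    return 20.0;
}

void setup() {
    Spark.variable("fridge", &fridgeTemp, DOUBLE); // publish the variable to the cloud
}

void loop() {
    fridgeTemp = readFridgeTemp(); // keep the published value up to date
}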

On the Spark Core, it should be just as easy, except that we are running out of space. I hope my current rewrite of the control algorithm will free up some space.

So let’s discuss here how to do the script to controller link over WiFi: who is the server and who is the client?

Does it make a difference if long term we plan to have the option to have multiple instances of the Spark hardware and interface with a single Raspberry Pi? Would that mean the Spark has to be the client?

Here’s the user story for WiFi - https://github.com/BrewPi/firmware/issues/12

Having multiple instances doesn't really have a bearing on whether the BrewPi controller should be a client or a server; the brewpi script will just keep an open socket to each of them.

On startup and at regular intervals, we'll have the BrewPi controller send a UDP multicast message as a presence announcement. The script can listen for these messages and then connect to the controller.
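On the controller side, the announcement could look something like this, using the stock UDP class; the multicast group, port, interval and payload here are just placeholders, and the script would learn the controller's IP from the packet's source address:

#include "application.h"

UDP announce;
IPAddress announceGroup(239, 62, 63, 64);        // placeholder multicast group
const uint16_t announcePort = 7332;              // placeholder port
const unsigned long announceIntervalMs = 10000;  // announce every 10 seconds
unsigned long lastAnnounce = 0;

void setup() {
    announce.begin(announcePort);                // open a local UDP socket
}

void loop() {
    if (millis() - lastAnnounce >= announceIntervalMs) {
        lastAnnounce = millis();
        const char msg[] = "brewpi-controller";  // simple presence payload
        announce.beginPacket(announceGroup, announcePort);
        announce.write((const uint8_t *)msg, sizeof(msg) - 1);
        announce.endPacket();
    }
}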

To my mind, the decision about the controller being a client or server hinges more on the reliability of each. With the latest patches for the CC3000, TCPServer is more reliable, but not with the stock version installed on the Core, so deciding to make the controller a server would require everyone to install the CC3000 patch (which is probably a good idea anyhow).

I agree; I don't think running multiple Spark instances forces the client/server model either way.

I think there are two user classes for using the wifi:

  1. “The Healthy Option” BrewPi with less Pi. Local storage of log data and an occasional client that logs in and grabs the data. Client could be a Webserver/Pi or another option (app?)

  2. “The Serious Brewery” One Pi to many Sparks running multiple chambers etc.

The first case tends to lend itself to the Spark being the "Server" model, and this most resembles the way serial works today. Even if there were no UDP broadcast, it is pretty simple for the webserver to be configured to look for specific Spark IPs, and the port could be fixed.

This just leaves the wifi creds to be set on the Spark and you’re ready to start talking.
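To make the "Spark as server" idea concrete, the firmware side would be roughly the sketch below; the port number is a placeholder and the echo loop is just a stand-in for the real BrewPi message handling:

#include "application.h"

TCPServer server(6666);   // placeholder fixed port the webserver/script connects to
TCPClient client;

void setup() {
    server.begin();       // start listening once the WiFi is up
}

void loop() {
    if (!client.connected()) {
        client = server.available();   // accept a new connection from the script
    }
    while (client.connected() && client.available()) {
        int c = client.read();
        if (c >= 0) {
            client.write((uint8_t)c);  // stand-in: echo back instead of real command handling
        }
    }
}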

Spark Cloud Integration:
I'm not sure this is desirable. Sure, it makes config simple on the Photon, but it basically means you need an Internet connection for it to work. It might be OK for setup, but I'm dubious about using Spark variable pushes as the interface to the webserver.

Reliability:
I agree this is a key driver. When you say "TCPServer is more reliable", do you mean more reliable than before, or more reliable than TCPClient? There don't seem to be waves of people on the Particle forums raving about how much better it is (but there are plenty of old, quiet threads about how bad it was).
Every alleyway I run down with this project seems to end with "mdma is going to fix that" :wink:

I'm still keen to see if I can get something going on the Core. Memory management remains an issue, but I did "find" 8k, which might help me keep tinkering; I'm going to look into getting the CC3000 updated first, though.


Agreed with all above. :+1:

That's because there aren't many people using the latest CC3000 service pack. It's not something we can fix; the problems originate in the CC3000 :shit: firmware. Only I and a handful of Elites and one or two customers have tried it, with mainly positive results. There were some reports of intermittent resets, but it wasn't conclusive that these were due to the service pack upgrade.

Regarding flash memory and program size: are you building against the latest code, which is in the develop branch? That should give vastly smaller binaries, not least because of link-time optimization and optional sprintf() floating-point support. Try it and let me know how it goes.

If I had to make a call now, I’d say make the Controller a server. It feels correct, and fits with our current models of interaction, as you mentioned.

Let me know if there’s anything I can do to support your efforts here. :bow:

Ah, no, I haven't been. I had issues compiling it earlier, but Elco suggested the other day that this might be because my spark-firmware is still synced to brewpi/spark-firmware/feature/hal, which is behind the origin spark-firmware/develop branch, which is now the base for the latest brewpi-firmware/develop branch.

I've been going round in circles with the CC3000 update today. I tried to follow your instructions in the readme.md, which suggested getting spark/particle-cli going as the easier way. But the npm installer doesn't seem to be doing much (my internet has been reduced to a tiny pipe for the rest of the month, so maybe that's it), and the Windows tutorial suggests I need Visual Studio Express, and at 10 GB that's not happening in a hurry.
All of which has led me back to thinking that maybe just DFU-ing it is going to be worlds easier.

The only thing I don't know is whether my Core had been "deep-updated". I assume, given the timing, that it has, but I was contemplating doing it anyway just to be sure. The whole CLI has put me off, though; I might consider doing it via manual DFU too, if I can find that reference again.

I'll try to get myself up and running on develop this arvo and get back into it.

Playing around with the Core, UDP broadcast and TCPServer caused me headaches, because it suddenly stops working; even Serial does not receive anything any more.

@mdma: When you say the CC3000 service pack makes things better, do you mean the v1.14 version? As I understood it, that requires both the current host driver and the CC3000 firmware. I looked at the current spark/firmware develop branch, but the host driver is still the old one. Would I see reliability improvements by just updating to the v1.14 firmware, or do I need both?

From my playing around with the Core and WiFi, I can't see that it will run reliably for the length of a fermentation. I hope that you will get it reliable, because the Pi-plus-serial connection to BrewPi is something I find too complicated. If you'd like more than one BrewPi running (fermenting and brewing), this setup is a bit awkward, as they are not always right next to each other.

Despite what TI have said in the release notes, several people are successfully using the 1.14 update without updating the driver code. (I did port the driver changes to a branch; the diffs were minimal.)

If reliability is the utmost concern, I recommend switching to the Photon. It's already more reliable than the Core, plus we have much more of the stack as source code, in particular the TCP stack, so we can fix issues as they arise.

If you use

particle flash --usb tinker
particle subscribe mine

then start your Core, and you'll see a cc3000 event when it connects to the cloud. The 1.14 release corresponds to version 1.32 of the service pack (other versions).

{"name":"spark/cc3000-patch-version","data":"1.29","ttl":"60","published_at":"2015-06-17T12:57:39.859Z"}

Er… 1.29 doesn't seem to correspond to anything… is that a newer version again, or did I break something?

IIRC 1.29 is 1.13.1, which is curiously not listed in TI's table. It corresponds to the "deep update" that the online IDE/particle-cli performs.

Alright! It works.

  • Not sure now whether I just forgot to try the CLI upgrade of the CC3000 only yesterday, or whether it defaults to the older version as well.
  • Couldn't get the .bin version packaged with the GitHub repo to work, but I did a make all and make program-dfu and it came good.

Sorry to revive an old thread, but has anyone managed to get WiFi working on the current implementation of BrewPi? I'd rather not wait for the next major release, which doesn't appear to be happening any time in the near future.

I was hoping to connect my BrewPi Spark, which I am going to use for mashing, to a Raspberry Pi already connected to another BrewPi (Arduino) in another room (being used for fermentation).

This setup would also require running 2 instances of the script and web interface (legacy and current) on the same Raspberry Pi. I assume this is possible.