Spark drops the outputs

@Bob_Steers
Hi,
Periodically, Spark drops the outputs.

Can you figure out what’s going on?

See attached log: https://termbin.com/iy2q

Arnt

Generally speaking, your logs look pretty normal, and I don’t see any major disconnect events.

As to the graph: I have no idea what I’m looking at, as the legend was not included.

Ok, how can I extract the data you need?

What was your cool PID doing at this time?

Could you please make a new graph of SystemInfo / Uptime during this period? The drops in output may be caused by the controller losing power or rebooting.

I am not using the cool PID for this.

Is this the one you asked for?

And the same is happening to this controller too:

It looks like the controller did not reboot at least. If you only show “updates per second”, do you see any big dips there? Right now it looks flat because its value is so much lower than the uptime.

Could you please show the full PID graph (three dots in PID widget → Graph)? You only need to go back as far as the dip at 6:00 on the 9th.

Is this normal?

Full PID graph.

And what does this mean?

The numbers on your Spark display are current and peak RAM use.

I suspect that the sensors are dropping out, leading to a Spark reset. We’ve had some related issues that were caused by corrupted leftover data on the controller.

To reset the blocks, export and then re-import them using the Spark actions. It looks like you’re not yet on the latest release, so this action is called import/export blocks.

This does carry its own risk of something going wrong, and of the Spark not having any blocks until that is fixed. If your active fermentation is almost done, I would suggest waiting for that. Otherwise it’s up to you.

Thanks for the clarification about the numbers on the screen.

I have noticed that the Spark has sort of recovered, without it being reflected in the uptime. I can also mention that I had some trouble updating to the latest version: I had to delete some orphaned folders/files before the update went through, so maybe this messed up the system.

I have now carried out the export/import of the blocks, so now it’s just a matter of waiting and seeing if that helped.

Thanks for the help so far and have a great weekend.
Arnt

Your peak memory use is very close to the limit where we start seeing crashes. If you have any unused blocks, I recommend deleting them to save memory.

Your input temperature suddenly drops at those moments. Did the cooler turn on at that time?
I would increase the mutex lock time on both the heater and the cooler to something like 2 hours, so they cannot alternate.
With a lock time of 2 hours, the cooler can only activate when the heater was not active for the past 2 hours.
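
To illustrate the idea, here is a toy Python sketch of a mutex with extra lock time (illustrative only; the actual Spark firmware is written in C++). The mutex remembers when it was last released, and a *different* actuator can only acquire it after the lock time has passed:

```python
import time

class Mutex:
    """Toy model of a mutex with extra lock time.

    After one actuator releases the mutex, a *different* actuator must
    wait `lock_time` seconds before it can acquire it. Not the Spark
    firmware implementation, just a sketch of the behavior.
    """

    def __init__(self, lock_time: float):
        self.lock_time = lock_time
        self.holder = None            # actuator currently holding the mutex
        self.last_holder = None       # actuator that held it previously
        self.released_at = float('-inf')

    def try_acquire(self, actuator: str) -> bool:
        now = time.monotonic()
        if self.holder == actuator:
            return True               # already held by this actuator
        if self.holder is not None:
            return False              # held by the other actuator
        if self.last_holder in (None, actuator):
            self.holder = actuator    # no conflicting previous holder
            return True
        if now - self.released_at >= self.lock_time:
            self.holder = actuator    # lock time has expired
            return True
        return False                  # still inside the lock window

    def release(self, actuator: str) -> None:
        if self.holder == actuator:
            self.last_holder = actuator
            self.released_at = time.monotonic()
            self.holder = None


# With a 2-hour lock time, the cooler is blocked for 2 hours after
# the heater releases the mutex:
mutex = Mutex(lock_time=2 * 3600)
assert mutex.try_acquire('heater')
mutex.release('heater')
assert not mutex.try_acquire('cooler')  # blocked for the next 2 hours
```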

On Friday I ran a Spark update, and some unused blocks were removed.
I have now deleted several blocks that are no longer in use, but I can’t see that it made any particular difference.

Is it possible to increase the memory in Spark?

There is no fermentation going on now.
I currently use Spark for frost protection in 2 cabinets, so no cooler is used.
The reason the temperature drops so quickly is that the temperature outside the cabinets is between -5 and -10 degrees Celsius.

Otherwise, the drops have become a little less frequent than on Friday; now there are about 24 hours between them.

Adding memory is not possible. You didn’t get restarts, so it doesn’t seem to be a problem at the moment.

So the sudden drop is just because the door was opened?

The heat PWM value is the sum of P, I and D. If the heater goes to 100%, the I part is decreased, because the heater is already at the maximum and increasing the integrator is of no use and could cause overshoot later.
The sudden drop seems to trigger this integrator anti-windup.
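
For reference, a rough Python sketch of this kind of integrator anti-windup (illustrative only, not the actual Spark PID code): when the sum of P, I, and D exceeds what the actuator can deliver, the excess is subtracted back out of the integrator.

```python
def pid_update(error, derivative, integral, dt,
               kp=10.0, ti=7200.0, td=0.0,
               out_min=0.0, out_max=100.0):
    """One PID step with simple integrator anti-windup.

    Illustrative sketch, not the Spark firmware. The output is the
    sum of the P, I, and D parts. If that sum exceeds the actuator
    range (0-100% for a PWM heater), the excess is subtracted back
    out of the integrator, so I stops growing while the heater is
    pinned at 100%.
    """
    integral += error * dt

    p = kp * error
    i = (kp / ti) * integral if ti else 0.0
    d = kp * td * derivative

    output = p + i + d
    clamped = min(max(output, out_min), out_max)

    if ti and kp and output != clamped:
        # Anti-windup: shrink the integrator by exactly the amount
        # that the output was clamped.
        integral -= (output - clamped) * ti / kp

    return clamped, integral
```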

To reduce the sudden drop in the measurement, you can go to each setpoint and increase the filter time.
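
As a rough illustration of what a longer filter time does (a simple first-order low-pass here; the Spark’s actual filter implementation may differ), a sudden dip in the raw sensor value reaches the PID much more gradually:

```python
def lowpass(samples, dt, filter_time):
    """First-order low-pass filter: a larger filter_time gives a
    smoother output. Illustrative only, not the Spark's real filter."""
    alpha = dt / (filter_time + dt)
    value = samples[0]
    filtered = []
    for s in samples:
        value += alpha * (s - value)
        filtered.append(value)
    return filtered

# A 2-degree dip (e.g. a door opened) in a steady 20 °C signal,
# sampled every 10 seconds:
raw = [20.0] * 10 + [18.0] * 6 + [20.0] * 10
fast = lowpass(raw, dt=10, filter_time=30)   # short filter: dip passes through
slow = lowpass(raw, dt=10, filter_time=300)  # long filter: dip barely visible
print(min(fast), min(slow))  # the slow filter stays much closer to 20.0
```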

No, it is a result of an output drop from the PID.

This is the PID setup:

On the side of that panel, you can slide out the graph. Can you adjust the time span of that graph to include one of these drops and post a screenshot?


I am puzzled. It might be triggered because it is running near 100%, but I would need to add some tests.

The actuator running at 100%, but reaching setpoint because it all just works out, is a rare case with an underpowered heater.

The Spark hasn’t gotten any better.
I can’t start any fermentation when it behaves like this.
Still no idea what’s going on?

If no fermentation is active, I recommend exporting, removing, and then re-importing your blocks. This will clear any corrupted data leftovers from the controller memory.

I performed that action last Friday, without it making any difference other than 1% on the memory peak.
I also used the remove-unused-blocks command and manually removed blocks I don’t need.