Raspberry Pi 3 losing network connectivity

My RPi3 is connected via Ethernet cable; wifi is not configured. After some number of hours I am unable to ssh to it or ping it. After a power cycle I can ssh in and check the logs in /var/log/messages (see bottom of post). These messages have been appearing roughly every 8 minutes the entire time since startup, and they continued after network connectivity was lost. (I had a live ssh session tailing the logs, and the network was lost at 2am. I power cycled at 9am, and the logs below show the messages still continuing at 4am.)

Additionally, during the day, the brewblox web UI reports lost connectivity to the Spark, then connectivity comes back after a minute or two. This happens somewhat regularly, though less frequently than every 8 minutes (this behavior doesn’t appear to line up with the logs below).

(EDIT: IPv6 is disabled. This behavior happened before I installed the 2020/05/04 release, so when I saw the disable-IPv6 option after the update I tried it, but unfortunately it didn't seem to have any effect.)

Has anyone else seen this behavior or similar logs to these?

More importantly, anyone know what these logs mean?

May 11 04:09:41 fermenator kernel: [50012.090496] br-7cd619fb585b: port 1(veth0de4cca) entered disabled state
May 11 04:09:41 fermenator kernel: [50012.090738] vethee7c288: renamed from eth0
May 11 04:09:41 fermenator kernel: [50012.223511] br-7cd619fb585b: port 1(veth0de4cca) entered disabled state
May 11 04:09:41 fermenator kernel: [50012.233006] device veth0de4cca left promiscuous mode
May 11 04:09:41 fermenator kernel: [50012.233019] br-7cd619fb585b: port 1(veth0de4cca) entered disabled state
May 11 04:09:41 fermenator kernel: [50012.585182] br-7cd619fb585b: port 1(veth2e16985) entered blocking state
May 11 04:09:41 fermenator kernel: [50012.585190] br-7cd619fb585b: port 1(veth2e16985) entered disabled state
May 11 04:09:41 fermenator kernel: [50012.585420] device veth2e16985 entered promiscuous mode
May 11 04:09:41 fermenator kernel: [50012.585676] br-7cd619fb585b: port 1(veth2e16985) entered blocking state
May 11 04:09:41 fermenator kernel: [50012.585683] br-7cd619fb585b: port 1(veth2e16985) entered forwarding state
May 11 04:09:42 fermenator kernel: [50013.866526] br-7cd619fb585b: port 1(veth2e16985) entered disabled state
May 11 04:09:42 fermenator kernel: [50013.867411] eth0: renamed from vethbb70f06
May 11 04:09:43 fermenator kernel: [50013.926965] br-7cd619fb585b: port 1(veth2e16985) entered blocking state
May 11 04:09:43 fermenator kernel: [50013.926977] br-7cd619fb585b: port 1(veth2e16985) entered forwarding state
May 11 04:17:30 fermenator kernel: [50480.916456] br-7cd619fb585b: port 1(veth2e16985) entered disabled state
May 11 04:17:30 fermenator kernel: [50480.917799] vethbb70f06: renamed from eth0
May 11 04:17:30 fermenator kernel: [50481.062730] br-7cd619fb585b: port 1(veth2e16985) entered disabled state
May 11 04:17:30 fermenator kernel: [50481.071771] device veth2e16985 left promiscuous mode
May 11 04:17:30 fermenator kernel: [50481.071783] br-7cd619fb585b: port 1(veth2e16985) entered disabled state
May 11 04:17:30 fermenator kernel: [50481.397773] br-7cd619fb585b: port 1(vetha1a5103) entered blocking state
May 11 04:17:30 fermenator kernel: [50481.397783] br-7cd619fb585b: port 1(vetha1a5103) entered disabled state
May 11 04:17:30 fermenator kernel: [50481.398078] device vetha1a5103 entered promiscuous mode
May 11 04:17:30 fermenator kernel: [50481.398414] br-7cd619fb585b: port 1(vetha1a5103) entered blocking state
May 11 04:17:30 fermenator kernel: [50481.398422] br-7cd619fb585b: port 1(vetha1a5103) entered forwarding state
May 11 04:17:31 fermenator kernel: [50482.667774] br-7cd619fb585b: port 1(vetha1a5103) entered disabled state
May 11 04:17:31 fermenator kernel: [50482.668275] eth0: renamed from veth62910e6
May 11 04:17:31 fermenator kernel: [50482.708391] br-7cd619fb585b: port 1(vetha1a5103) entered blocking state
May 11 04:17:31 fermenator kernel: [50482.708406] br-7cd619fb585b: port 1(vetha1a5103) entered forwarding state
May 11 04:25:19 fermenator kernel: [50949.917279] br-7cd619fb585b: port 1(vetha1a5103) entered disabled state
May 11 04:25:19 fermenator kernel: [50949.920529] veth62910e6: renamed from eth0
May 11 04:25:19 fermenator kernel: [50950.091937] br-7cd619fb585b: port 1(vetha1a5103) entered disabled state
May 11 04:25:19 fermenator kernel: [50950.100453] device vetha1a5103 left promiscuous mode
May 11 04:25:19 fermenator kernel: [50950.100467] br-7cd619fb585b: port 1(vetha1a5103) entered disabled state
May 11 04:25:19 fermenator kernel: [50950.395599] br-7cd619fb585b: port 1(vethbe5aa99) entered blocking state
May 11 04:25:19 fermenator kernel: [50950.395609] br-7cd619fb585b: port 1(vethbe5aa99) entered disabled state
May 11 04:25:19 fermenator kernel: [50950.396182] device vethbe5aa99 entered promiscuous mode
May 11 04:25:19 fermenator kernel: [50950.396818] br-7cd619fb585b: port 1(vethbe5aa99) entered blocking state
May 11 04:25:19 fermenator kernel: [50950.396827] br-7cd619fb585b: port 1(vethbe5aa99) entered forwarding state
May 11 04:25:21 fermenator kernel: [50951.928874] br-7cd619fb585b: port 1(vethbe5aa99) entered disabled state
May 11 04:25:21 fermenator kernel: [50951.929715] eth0: renamed from veth1438fd7
May 11 04:25:21 fermenator kernel: [50951.979381] br-7cd619fb585b: port 1(vethbe5aa99) entered blocking state
May 11 04:25:21 fermenator kernel: [50951.979393] br-7cd619fb585b: port 1(vethbe5aa99) entered forwarding state

These logs appear when a Docker container restarts, which causes new virtual network interfaces (the veth pairs in the log) to be created and registered on the bridge. There is an open issue with Docker where this triggers a network reset on the host when IPv6 is enabled. Disabling IPv6 should prevent this.
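If you want to see which container is causing it, one option (standard Docker CLI, assuming the daemon on the Pi) is to watch the container event stream and compare its timestamps with the kernel log:

# Stream container start/die events; a container that keeps restarting
# shows up as a repeating die/start pair, roughly matching the bridge
# messages in /var/log/messages.
docker events \
  --filter 'type=container' \
  --filter 'event=die' \
  --filter 'event=start' \
  --format '{{.Time}} {{.Actor.Attributes.name}} {{.Action}}'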

If you run docker ps, what is the uptime of the containers?
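Something like this should list it (standard docker ps format fields; --all also includes containers that are currently stopped or mid-restart):

# List all containers with name, creation time, uptime, and status;
# a crash-looping container shows a much shorter "Up ..." time than the rest.
docker ps --all --format 'table {{.Names}}\t{{.CreatedAt}}\t{{.RunningFor}}\t{{.Status}}'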

NAMES                  CREATED AT                      CREATED             STATUS
brewblox_tilt_1        2020-05-11 10:56:43 -0500 CDT   8 minutes ago       Up 8 minutes
brewblox_eventbus_1    2020-05-11 10:56:34 -0500 CDT   8 minutes ago       Up 8 minutes
brewblox_influx_1      2020-05-11 10:56:34 -0500 CDT   8 minutes ago       Up 8 minutes
brewblox_traefik_1     2020-05-11 10:56:34 -0500 CDT   8 minutes ago       Up 8 minutes
brewblox_history_1     2020-05-11 10:56:34 -0500 CDT   8 minutes ago       Up 8 minutes
brewblox_mdns_1        2020-05-11 10:56:34 -0500 CDT   8 minutes ago       Up 8 minutes
brewblox_spark2_1      2020-05-11 10:56:34 -0500 CDT   8 minutes ago       Up 8 minutes
brewblox_emitter_1     2020-05-11 10:56:34 -0500 CDT   8 minutes ago       Up 8 minutes
brewblox_datastore_1   2020-05-11 10:56:34 -0500 CDT   8 minutes ago       Up 8 minutes
brewblox_spark3_1      2020-05-11 10:56:34 -0500 CDT   8 minutes ago       Up 27 seconds
brewblox_ui_1          2020-05-11 10:56:34 -0500 CDT   8 minutes ago       Up 8 minutes

Is spark3 offline?

The container will keep restarting, because the compose file has restart: unless-stopped.
Running docker-compose stop spark3 will stop that.
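If you want to double-check which restart policy Docker actually applied, docker inspect can print it (standard inspect template field; the container name is taken from your docker ps output above):

# Print the restart policy of the spark3 container (expected: unless-stopped).
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' brewblox_spark3_1

# Stop only the spark3 service so Docker no longer tries to restart it.
docker-compose stop spark3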

The other containers have been up for 8 minutes, so I assume that is when you brought the system up manually?

Yes, spark3 is offline (spark3 is the brewstand, spark2 is the ferment fridge).

I stopped the spark3 docker service, and there have been no new log messages in the last 18 minutes! Yay!

Looks good for now. I'll keep monitoring over the next day to see whether the network connection stays stable long-term.

RPi3 has maintained network connectivity for 24+ hours now!
Thanks for the help getting this worked out, @Elco !!


Thanks for the update. We have made plans to add a mechanism to pause a service from the UI and to prevent frequent restarts after a few retries.