If your Redis initial replication fails with an error like
"5101:M 20 Feb 18:14:29.130 # Client id=4500196 addr=71.459.815.760:43872 fd=533 name= age=127 idle=127 flags=S db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=13997 oll=1227 omem=192281275 events=rw cmd=psync scheduled to be closed ASAP for overcoming of output buffer limits."
it means the slave output buffer is not large enough, and you should increase it (on the master!) with a command like
redis-cli config set client-output-buffer-limit "slave 836870912 836870912 0"
more info: https://redis.io/topics/clients
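To tell which limit a disconnected client was hitting, the key=value fields in that log line can be parsed mechanically: flags=S marks a replica connection, cmd shows what it was doing, and omem is the output buffer size that tripped the limit. A minimal sketch (not from the original post), using the sample line above:

```python
# Minimal sketch: pull the key=value fields out of a Redis
# "scheduled to be closed" log line and report the buffer size.

def parse_client_fields(log_line: str) -> dict:
    """Extract key=value pairs (id, flags, omem, cmd, ...) from the line."""
    fields = {}
    for token in log_line.split():
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

line = ("5101:M 20 Feb 18:14:29.130 # Client id=4500196 addr=71.459.815.760:43872 "
        "fd=533 name= age=127 idle=127 flags=S db=0 sub=0 psub=0 multi=-1 qbuf=0 "
        "qbuf-free=0 obl=13997 oll=1227 omem=192281275 events=rw cmd=psync "
        "scheduled to be closed ASAP for overcoming of output buffer limits.")

fields = parse_client_fields(line)
omem_mb = int(fields["omem"]) / (1024 * 1024)
print(fields["cmd"], fields["flags"], f"{omem_mb:.0f} MB")  # psync S 183 MB
```

Here omem is about 183 MB, well past the default 64 MB soft / 256 MB hard slave limits once the soft timer is counted, which is why the master dropped the replica mid-sync.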
You're my new hero. I was wondering what the heck was happening.
@amgorb, I am not getting the slave buffer error in the Redis logs, but I do see the same error (redis.exceptions.ConnectionError: Error 104 while writing to socket. Connection reset by peer) in the application logs.
Is increasing the slave buffer going to help?
Yes, a buffer issue might be the root cause:
Right now I'm getting "Error 104 while writing to socket. Connection reset by peer." in the app logs, and at the same timestamps the Redis logs show the "to be closed ASAP for overcoming of output buffer limits." error.
Just ran the trick above and am keeping an eye on the logs.
These errors can also happen for pubsub buffer overruns, not only the slave buffers.
The log errors related to pubsub buffer overruns look something like:
<timestamp> # Client id=494028 addr=127.0.0.1:43534 fd=36 name= age=75 idle=0 flags=N db=0 sub=148630 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=742 omem=12154393 events=rw cmd=subscribe scheduled to be closed ASAP for overcoming of output buffer limits.
Increasing client-output-buffer-limit pubsub <hard> <soft> <seconds> may resolve this as well.
You can set this on a running production server with
redis-cli config set client-output-buffer-limit "pubsub <hard> <soft> <seconds>"
# e.g. to set it to 64 MB hard, 32 MB soft for 120s:
redis-cli config set client-output-buffer-limit "pubsub 64mb 32mb 120"
or put it in redis.conf: find and replace the client-output-buffer-limit pubsub line (redis.conf is only read at startup).
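For the redis.conf route, the line looks like this; the 64mb/32mb/120 values mirror the runtime command above, and the commented line shows the shipped default for comparison:

```
# redis.conf
client-output-buffer-limit pubsub 64mb 32mb 120
# shipped default, for comparison:
# client-output-buffer-limit pubsub 32mb 8mb 60
```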
Thanks for this post for guiding our thoughts and eventually saving our lives as well.
also my life, thank you!
Thanks for the help!
These changes require the AUTH password if one is set; note that -a is a redis-cli option and goes before the subcommand:
redis-cli -a "$REDIS_PASSWORD" config set client-output-buffer-limit "pubsub 64mb 32mb 120"
It must return OK if the request is successful.
I can't use "mb" in the redis-cli command for server version 4.0.9; I have to specify plain byte counts instead:
redis-cli config set client-output-buffer-limit "pubsub 268435456 67108864 120"
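Those plain numbers are just the byte equivalents of the size suffixes: Redis treats 1mb as 1024*1024 bytes (and 1m as 1000*1000), so 268435456 is 256mb and 67108864 is 64mb. A quick conversion sketch, assuming the 1024-based "mb" form:

```python
# Convert Redis-style size suffixes to plain byte counts, for older
# redis-cli/server versions that reject "mb" in CONFIG SET arguments.
UNITS = {"kb": 1024, "mb": 1024 ** 2, "gb": 1024 ** 3}

def to_bytes(size: str) -> int:
    size = size.lower().strip()
    for suffix, factor in UNITS.items():
        if size.endswith(suffix):
            return int(size[:-len(suffix)]) * factor
    return int(size)  # already a plain byte count

print(to_bytes("256mb"))  # 268435456
print(to_bytes("64mb"))   # 67108864
```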
TO THE MOON
Thank you. Helpful hint.
I knew there is some buffer but official documentation was missing that info.
Thanks this is helpful!
Me too :) Thanks a lot
too bad I got here after the crash loop. Still, very helpful, thanks!
you saved my life as well 🙏
In my case, I found that I needed to set repl-backlog-size 512mb as well (the RDB file is around 65 GB), in addition to setting client-output-buffer-limit "slave 836870912 836870912 0".
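For reference, the two settings from that comment as redis.conf lines; the sizes fit this commenter's ~65 GB RDB and are not general recommendations (836870912 bytes is roughly 800 MB):

```
# redis.conf – values sized for a ~65 GB RDB, not general defaults
client-output-buffer-limit slave 836870912 836870912 0
repl-backlog-size 512mb
```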
and in 2024, still more lives saved 👍😉
Thank you!!