Error during incremental sync; the sync had been running for a week #958

Open
lslt2025 opened this issue May 20, 2025 · 4 comments
Labels
type: question Further information is requested

Comments

@lslt2025

Issue Description

An error occurs during incremental sync; the sync had been running for a week.

Please provide a brief description of the issue you encountered.

Environment

  • RedisShake version: 4.1.0
  • Source Redis version: 5.0.10
  • Target Redis version: 6.0.10
  • Redis deployment: master-replica
  • Deployed on a cloud provider instance: no

Logs

2025-05-20 08:33:00 INF read_count=[102758485], read_ops=[934.05], write_count=[102758485], write_ops=[934.05], syncing aof, diff=[323586]
2025-05-20 08:33:02 ERR write tcp 172.16.56.218:50056->172.16.56.218:6379: write: connection reset by peer
RedisShake/internal/client/redis.go:191 -> (*Redis).flush()
RedisShake/internal/client/redis.go:177 -> (*Redis).Send()
RedisShake/internal/reader/sync_standalone_reader.go:480 -> (*syncStandaloneReader).sendReplconfAck()
runtime/asm_amd64.s:1650 -> goexit()

If there are any error logs or other relevant logs, please provide them here.

Additional Information

Please provide any additional information, such as configuration files, error messages, or screenshots.

@lslt2025 lslt2025 added the type: question Further information is requested label May 20, 2025
@suxb201
Member

suxb201 commented May 20, 2025

ERR write tcp 172.16.56.218:50056->172.16.56.218:6379: write: connection reset by peer

This means the TCP connection from RedisShake to 172.16.56.218:6379 was closed by the peer. Suggestion: check the running logs of 172.16.56.218:6379.
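
For later readers, a minimal way to check the source side is to watch its log and inspect the replication client's output buffer (omem); the log path below is an assumption, and the host is simply the one from this thread.

  # Follow the source Redis log for buffer-limit / disconnect messages (path is an assumption)
  tail -f /var/log/redis/redis-server.log

  # List connected clients on the source; RedisShake shows up as a replica-flagged client (flags=S)
  redis-cli -h 172.16.56.218 -p 6379 CLIENT LIST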

@lslt2025
Author

Hi, thanks. It turned out the output buffer limit was exceeded.

Server log:
735:S 20 May 2025 10:20:48.011 # Client id=4455162 addr=172.16.56.218:48954 fd=18 name= age=108 idle=1 flags=S db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=0 oll=1595 omem=152073608 events=rw cmd=replconf scheduled to be closed ASAP for overcoming of output buffer limits.

Server config:
127.0.0.1:6379> config get client-output-buffer-limit
1) "client-output-buffer-limit"
2) "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60"
127.0.0.1:6379> config get client-query-buffer-limit
1) "client-query-buffer-limit"
2) "1073741824"

Without changing the server-side configuration, could this be controlled by lowering shake's target_redis_client_max_querybuf_len parameter?

Relevant shake settings (a sketch of where they sit in the config follows this list):

pipeline_count_limit (default 1024):
Controls how many unanswered requests the client may accumulate at once. Lowering it reduces parallelism and therefore the instantaneous network throughput.

target_redis_client_max_querybuf_len (default 1 GB):
Corresponds to Redis's client-query-buffer-limit and caps the query buffer the server allocates per client.

target_redis_proto_max_bulk_len (default 512 MB):
Defines the maximum size of a single string element in the protocol, which also bounds how large values are split and transferred.
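
For orientation, a minimal sketch of how these three options might appear in shake.toml, assuming they sit under the [advanced] section as in RedisShake 4.x; the values are illustrative, not recommendations from this thread.

  [advanced]
  # Maximum number of unanswered commands kept in flight per connection (illustrative)
  pipeline_count_limit = 1024
  # Mirrors the target's client-query-buffer-limit, in bytes (1 GiB here)
  target_redis_client_max_querybuf_len = 1073741824
  # Maximum size of a single bulk string in the protocol, in bytes (512 MiB here)
  target_redis_proto_max_bulk_len = 536870912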

@suxb201
Member

suxb201 commented May 20, 2025

No, that won't help; you must change the source Redis's client-output-buffer-limit.
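
For reference, a minimal sketch of raising the slave-class output buffer limit on the source Redis at runtime; the numbers (1 GiB hard / 512 MiB soft / 120 s) are illustrative assumptions, not values suggested in this thread.

  # Raise the limit for replica (slave) clients without a restart
  127.0.0.1:6379> config set client-output-buffer-limit "slave 1073741824 536870912 120"
  # To persist across restarts, mirror it in redis.conf:
  #   client-output-buffer-limit slave 1gb 512mb 120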

@lslt2025
Author

OK, thanks.
