Support grpc-retry-pushback-ms #664
@@ -240,7 +240,6 @@ Here is a sample Go code to illustrate:
 if err != nil {
 	log.Fatal(err)
 }
-
 return st.Err()
 ```
@@ -314,8 +313,14 @@ error with code [Unavailable](https://godoc.org/google.golang.org/grpc/codes)
 and MAY supply additional
 [details via status](https://godoc.org/google.golang.org/grpc/status#Status.WithDetails)
 using
-[RetryInfo](https://github.com/googleapis/googleapis/blob/6a8c7914d1b79bd832b5157a09a9332e8cbd16d4/google/rpc/error_details.proto#L40).
-Here is a snippet of sample Go code to illustrate:
+[RetryInfo](https://github.com/googleapis/googleapis/blob/6a8c7914d1b79bd832b5157a09a9332e8cbd16d4/google/rpc/error_details.proto#L40)
+or via Trailer [with the gRPC metadata key `grpc-retry-pushback-ms`](https://github.com/grpc/proposal/blob/master/A6-client-retries.md#pushback).
+
+The client SHOULD honor at least one of these backpressure mechanisms.
**Review comment:** What happens with clients that honor the new mechanism but talk to a server that honors only the old mechanism (existing implementations)? They would be incompatible, right?

**Reply:** Below it is recommended that the server implement both mechanisms, so the client is then guaranteed to have its backpressure respected. Right now clients are split between the two mechanisms; that is the current state, and the server needs to do both if it wants a guarantee. Alternatively, we could just recommend the retry-config mechanism in all cases for everyone? Since the clients are already split, it is likely that servers' timeouts are not being respected in all cases. If a retry timeout is not respected, it is not that bad anyway, as all the clients already have their own backoff timeout logic.
+
+In order to have the backpressure respected, the server SHOULD implement both mechanisms.
**Review comment:** Does this mean the server should respond with both mechanisms at the same time? The example below only shows one mechanism being used at a time. This seems odd to me: it is usually the client that is recommended to handle multiple versions of a response, while the server is expected to support at least one. Is there a preference for which mechanism the server should support?

**Reply:** I don't think there is a preference. About half of the languages use the gRPC retry config, which supports only the `grpc-retry-pushback-ms` trailer. If a client uses the gRPC retry config, it offloads all the retry logic to the internal gRPC library, so it can't also support `RetryInfo`.
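To make "implement both mechanisms" concrete, here is a minimal sketch of a server-side helper that emits both backpressure signals on a throttled response. It is not taken from the PR: it assumes the standard `google.golang.org/grpc`, `google.golang.org/genproto/googleapis/rpc/errdetails`, and `google.golang.org/protobuf/types/known/durationpb` packages, and the helper name `throttledError` and the messages are illustrative.

```go
// Sketch: a server helper that emits both backpressure mechanisms at once,
// a RetryInfo status detail and a grpc-retry-pushback-ms trailer.
package throttling

import (
	"context"
	"strconv"
	"time"

	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
	"google.golang.org/protobuf/types/known/durationpb"
)

// throttledError is meant to be called from inside a unary handler when the
// server is overloaded; the handler returns the resulting error to the client.
func throttledError(ctx context.Context, delay time.Duration) error {
	// Mechanism 1: RetryInfo status detail, for clients that implement their
	// own retry loop and inspect status details.
	st := status.New(codes.Unavailable, "server is throttling requests")
	st, err := st.WithDetails(&errdetails.RetryInfo{
		RetryDelay: durationpb.New(delay),
	})
	if err != nil {
		// Fall back to a plain Unavailable error if the detail cannot be attached.
		return status.Error(codes.Unavailable, "server is throttling requests")
	}

	// Mechanism 2: grpc-retry-pushback-ms trailer (milliseconds as a string),
	// for clients that delegate retries to gRPC retry config.
	trailer := metadata.Pairs("grpc-retry-pushback-ms",
		strconv.FormatInt(delay.Milliseconds(), 10))
	_ = grpc.SetTrailer(ctx, trailer)

	return st.Err()
}
```

A handler would then do something like `return nil, throttledError(ctx, 5*time.Second)` when it needs to push back.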
+
+Here is a snippet of sample Go code to illustrate using `RetryInfo`:
 
 ```go
 // Do this on the server side.
@@ -341,28 +346,23 @@ Here is a snippet of sample Go code to illustrate:
 }
 ```
 
-When the client receives this signal, it SHOULD follow the recommendations
-outlined in documentation for
-[RetryInfo](https://github.com/googleapis/googleapis/blob/6a8c7914d1b79bd832b5157a09a9332e8cbd16d4/google/rpc/error_details.proto#L40):
+Here is a snippet of sample Go code to illustrate using `grpc-retry-pushback-ms`:
+
+```go
+// Do this on the server side.
+trailer := metadata.Pairs("grpc-retry-pushback-ms", "5000")
**Review comment:** Is this expected to be done only when the server wants to push back? It is not clear from the sample whether this is some sort of permanent option that the server must always set, or something it sets only when it is overloaded and needs to push back on the clients.
+grpc.SetTrailer(ctx, trailer)
+```
-
-```
-// Describes when the clients can retry a failed request. Clients could ignore
-// the recommendation here or retry when this information is missing from the error
-// responses.
-//
-// It's always recommended that clients should use exponential backoff when
-// retrying.
-//
-// Clients should wait until `retry_delay` amount of time has passed since
-// receiving the error response before retrying. If retrying requests also
-// fail, clients should use an exponential backoff scheme to increase gradually
-// the delay between retries based on `retry_delay` until either a maximum
-// number of retries has been reached, or a maximum retry delay cap has been
-// reached.
-```
-
-The value of `retry_delay` is determined by the server and is implementation
-dependant. The server SHOULD choose a `retry_delay` value that is big enough to
+
+On the client side, using [gRPC retry config](https://grpc.io/docs/guides/retry)
+causes the gRPC client to automatically parse the `grpc-retry-pushback-ms`
+trailer metadata and handle all backoff and retry logic.
+
+The description of how the client should respect `RetryInfo` is provided
+in its [documentation](https://github.com/googleapis/googleapis/blob/6a8c7914d1b79bd832b5157a09a9332e8cbd16d4/google/rpc/error_details.proto#L34-L39).
+
+The value of `retry_delay`/`grpc-retry-pushback-ms` is determined by the server and is implementation
+dependent. The server SHOULD choose a `retry_delay`/`grpc-retry-pushback-ms` value that is big enough to
 give the server time to recover yet is not too big to cause the client to drop
 data while being throttled.
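Relating to the gRPC retry config lines added above, here is a minimal client-side sketch, not taken from the PR, of how a Go client could opt into gRPC retry config so that the library itself retries `UNAVAILABLE` and honors `grpc-retry-pushback-ms`. The OTLP trace service name and the policy values below are illustrative assumptions.

```go
// Sketch: enabling gRPC retry config on a Go client. With a retryPolicy
// installed, grpc-go performs the retries itself and applies any
// grpc-retry-pushback-ms value the server sends in the trailer.
package retryconfig

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// Illustrative policy: retry UNAVAILABLE up to 5 attempts with exponential
// backoff between 0.1s and 5s.
const retryServiceConfig = `{
  "methodConfig": [{
    "name": [{"service": "opentelemetry.proto.collector.trace.v1.TraceService"}],
    "retryPolicy": {
      "maxAttempts": 5,
      "initialBackoff": "0.1s",
      "maxBackoff": "5s",
      "backoffMultiplier": 2.0,
      "retryableStatusCodes": ["UNAVAILABLE"]
    }
  }]
}`

func dial(target string) (*grpc.ClientConn, error) {
	// WithDefaultServiceConfig applies the policy unless the name resolver
	// supplies its own service config.
	return grpc.Dial(target,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultServiceConfig(retryServiceConfig),
	)
}
```

Per gRFC A6, when the server sends a pushback the next retry attempt is delayed by the pushback value instead of the locally computed exponential backoff.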
**Review comment:** Please use a permanent link instead of one that can break.
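For clients that do not rely on gRPC retry config and instead inspect `RetryInfo` themselves, the doc comment removed in the diff above spells out the expected behavior: wait `retry_delay`, then back off exponentially up to a cap. Below is a rough Go sketch of one such loop, not taken from the PR; the attempt limit, backoff constants, helper names, and the choice to take the larger of the server delay and the local backoff are illustrative assumptions.

```go
// Sketch: a client retry loop that honors a server-provided RetryInfo delay
// and otherwise falls back to capped exponential backoff.
package retryinfo

import (
	"context"
	"time"

	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// retryDelay extracts the server-suggested delay from a RetryInfo status
// detail; ok is false when the error carries no such detail.
func retryDelay(err error) (delay time.Duration, ok bool) {
	st, isStatus := status.FromError(err)
	if !isStatus || st.Code() != codes.Unavailable {
		return 0, false
	}
	for _, d := range st.Details() {
		if ri, isRetryInfo := d.(*errdetails.RetryInfo); isRetryInfo {
			return ri.GetRetryDelay().AsDuration(), true
		}
	}
	return 0, false
}

// exportWithRetry calls export until it succeeds or the attempt limit is hit.
// Each wait is at least the locally computed exponential backoff, and at least
// the server-provided retry_delay when one is present.
func exportWithRetry(ctx context.Context, export func(context.Context) error) error {
	const (
		maxAttempts = 5
		baseBackoff = 1 * time.Second
		maxBackoff  = 30 * time.Second
	)
	backoff := baseBackoff
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = export(ctx); err == nil {
			return nil
		}
		wait := backoff
		if delay, ok := retryDelay(err); ok && delay > wait {
			wait = delay
		}
		select {
		case <-time.After(wait):
		case <-ctx.Done():
			return ctx.Err()
		}
		if backoff *= 2; backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
	return err
}
```

The key point is that the server-provided delay takes precedence over the locally computed backoff whenever the server asks for a longer pause.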