graph/db: unwrap dns addresses from opaque ones during migration #10162
Conversation
Summary of Changes
Hello @ellemouton, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a critical aspect of the database migration by implementing the functionality to unwrap DNS addresses from opaque address types. Previously, certain DNS addresses might have been stored as generic opaque data, hindering proper identification and use. This change ensures that during the migration, these embedded DNS addresses are correctly parsed and stored as their specific type, improving data integrity and usability. It also includes refactoring of the underlying address reading mechanisms and comprehensive test coverage to validate the new behavior.
Highlights
- DNS Address Unwrapping During Migration: The primary change involves enhancing the database migration process to correctly identify and extract DNS addresses that were previously stored within generic 'opaque' address types. This ensures that network addresses are properly categorized and accessible.
- Refactored Address Reading Logic: The core logic for reading network addresses from byte streams has been refactored. A new ReadAddress function was introduced to parse individual addresses, and the overall address reading mechanism was updated to leverage it, allowing for more granular control and the ability to handle partial opaque payloads.
- Comprehensive Test Coverage for Address Unwrapping: Extensive new test cases have been added to validate the unwrapping process. These tests cover various scenarios, including opaque addresses containing single, multiple, or invalid DNS entries, ensuring the migration handles diverse data states correctly.
Code Review
This pull request introduces logic to unwrap DNS addresses from opaque address types during database migration. The changes involve refactoring address deserialization in lnwire and applying this new logic in the graphdb migration process. The overall approach is sound, but I've identified two key issues that need to be addressed.
First, the address deserialization loop in lnwire/lnwire.go incorrectly passes the total buffer length instead of the remaining length to ReadAddress, which could lead to parsing errors. This is a critical issue that affects all address deserialization.
Second, the migration logic in graph/db/sql_migration.go only unwraps the first address from an opaque payload. It should be improved to loop and extract all parsable addresses from the payload.
Addressing these points will make the migration logic more robust and correct. The accompanying test cases are well-written and cover several important scenarios.
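To make the review's two findings concrete, here is a minimal sketch of the read loop under stated assumptions: it is not lnd's actual code, and addrReader merely stands in for a per-address parser such as lnwire.ReadAddress, whose real signature may differ. The point is that each iteration must be told how many bytes remain, not the total payload length, and the loop runs until the payload is exhausted so every parsable address is extracted.

```go
package sketch

import (
	"bytes"
	"errors"
	"net"
)

// addrReader is a hypothetical stand-in for a per-address parser like
// lnwire.ReadAddress. It returns the number of bytes it consumed.
type addrReader func(r *bytes.Reader, remaining uint16) (uint16, net.Addr, error)

// readAllAddresses reads addresses until the payload is exhausted. Each call
// to the parser is given the bytes *remaining*, not the total length, which
// addresses the first issue; looping until done addresses the second.
func readAllAddresses(r *bytes.Reader, total uint16,
	read addrReader) ([]net.Addr, error) {

	var (
		addrs []net.Addr
		done  uint16
	)
	for done < total {
		n, addr, err := read(r, total-done)
		if err != nil {
			return nil, err
		}
		if n == 0 {
			return nil, errors.New("address parser made no progress")
		}

		done += n
		addrs = append(addrs, addr)
	}

	return addrs, nil
}
```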
Force-pushed 3b32894 to a69762f
In this commit, the logic in ReadElements that is used to read a single address descriptor from an io.Reader is separated out into a new function so that the logic can be re-used elsewhere.
In this commit, we take advantage of the graph SQL migration and use it to also extract DNS addresses from the opaque address type. We use opaque addresses to store addresses that we don't yet understand. We recently added logic for DNS addresses, so we may have persisted node announcements containing DNS addresses that are currently stored under the opaque address type. We therefore use this migration to see if we can extract such addresses. A few decisions were made here: 1) If multiple DNS addresses are extracted, this is OK and we continue to migrate the node, even though this is actually invalid at the protocol level; we currently check (at a higher level) that a node announcement only has one DNS address in it before we broadcast it. 2) If an invalid DNS address is encountered (we hit the DNS type descriptor but the rest of the DNS address payload cannot be parsed into the expected hostname:port), then we skip migrating the node completely.
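A rough, hedged illustration of those two decisions (not lnd's implementation): the dnsAddr type and the assumed DNS descriptor layout of a type byte, a one-byte hostname length, the hostname, and a two-byte big-endian port are simplified stand-ins. A parse error is what lets the caller skip the node entirely, while multiple successfully parsed DNS addresses are simply returned.

```go
package sketch

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"net"
)

// dnsAddr is a simplified stand-in for lnd's DNS address type.
type dnsAddr struct {
	Hostname string
	Port     uint16
}

func (d dnsAddr) Network() string { return "dns" }
func (d dnsAddr) String() string  { return fmt.Sprintf("%s:%d", d.Hostname, d.Port) }

// dnsAddrType is the assumed address descriptor byte for DNS hostnames.
const dnsAddrType = 0x05

// unwrapDNSAddrs walks an opaque payload and extracts every DNS address it
// can parse. Returning an error on a malformed DNS entry is what allows the
// caller to skip migrating the node; returning multiple addresses is fine.
func unwrapDNSAddrs(payload []byte) ([]net.Addr, error) {
	var (
		addrs []net.Addr
		r     = bytes.NewReader(payload)
	)
	for r.Len() > 0 {
		typ, _ := r.ReadByte()
		if typ != dnsAddrType {
			// An address type we still don't understand: we
			// cannot know its length, so stop here and leave the
			// rest of the payload opaque.
			return addrs, nil
		}

		hostLen, err := r.ReadByte()
		if err != nil || int(hostLen) > r.Len() {
			return nil, fmt.Errorf("invalid DNS hostname length")
		}

		host := make([]byte, hostLen)
		if hostLen > 0 {
			if _, err := r.Read(host); err != nil {
				return nil, err
			}
		}

		var port uint16
		if err := binary.Read(r, binary.BigEndian, &port); err != nil {
			return nil, fmt.Errorf("invalid DNS port: %w", err)
		}

		addrs = append(addrs, dnsAddr{
			Hostname: string(host),
			Port:     port,
		})
	}

	return addrs, nil
}
```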
Force-pushed a69762f to 3ff17d4
/gemini review
Code Review
This pull request refactors address parsing logic and uses it to unwrap DNS addresses from opaque addresses during database migration. The refactoring in lnwire/lnwire.go is well done. However, there's a subtle bug in the new maybeOverrideNodeAddresses function in graph/db/sql_migration.go where a node's address list might not be updated correctly if an opaque address parses to an empty list. I've provided a suggestion to fix this.
cc @mohamedawnallah for review 🙏
Force-pushed 3ff17d4 to a74f0b1
Solid and thoughtful work @ellemouton, as usual! It is looking pretty good to me. I was just curious whether we should skip the entire node migration because of an invalid DNS address; other than that, LGTM!
if err = maybeOverrideNodeAddresses(node); err != nil {
	skipped++
	log.Warnf("Skipping migration of node %x with invalid "+
		"address (%v): %v", pub, node.Addresses, err)
Q: Why are we skipping node migration when an invalid DNS address is encountered? Can't we just skip the invalid network address? In the worst-case scenario, the remaining node information is still valid and should be eligible for migration.
As I read more of the code: propagating an invalid DNS address would violate the specification, so it's probably worthwhile to skip the entire node migration because of it.
Yeah, exactly :) I originally had it that way but then decided to just skip the nodes so that we are not propagating invalid nodes.
Also, this is a far edge case: in all the tests I've run, I have not come across any nodes with invalid DNS addrs. We could really go either way and it won't matter much, I think.
	r            = bytes.NewReader(payload)
	numAddrBytes = uint16(len(payload))
)
byteRead, readAddr, err := lnwire.ReadAddress(
nit: addrRead instead of readAddr, similar to byteRead.
LGTM 📷
This addresses the final TODO in #9795.
With the inclusion of #9455, we now have the ability to read DNS addresses off the wire. However, if we had previously received node announcements with such addresses, we would have persisted them as Opaque addresses.
In this PR we use the new graph SQL migration as an opportunity to extract DNS addresses from opaque addresses.
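For a sense of how the migration-time override could look, here is a hypothetical sketch. opaqueAddr and overrideOpaqueAddrs are illustrative stand-ins rather than lnd's actual lnwire.OpaqueAddrs or maybeOverrideNodeAddresses, and the real code's handling of partially parsed payloads differs; keeping the opaque address when nothing is extracted is an assumption here, and that empty-list case is exactly the subtlety flagged in the second review comment.

```go
package sketch

import "net"

// opaqueAddr is a simplified stand-in for lnwire.OpaqueAddrs: an address
// kept verbatim when the on-the-wire descriptor is not understood.
type opaqueAddr struct {
	Payload []byte
}

func (o *opaqueAddr) Network() string { return "opaque" }
func (o *opaqueAddr) String() string  { return "opaque" }

// overrideOpaqueAddrs sketches the migration-time override step. unwrap is
// any parser that pulls DNS addresses out of an opaque payload, such as the
// unwrapDNSAddrs sketch above. An error from unwrap signals a malformed DNS
// entry, and the caller then skips migrating the node entirely.
func overrideOpaqueAddrs(addrs []net.Addr,
	unwrap func([]byte) ([]net.Addr, error)) ([]net.Addr, error) {

	out := make([]net.Addr, 0, len(addrs))
	for _, a := range addrs {
		op, ok := a.(*opaqueAddr)
		if !ok {
			// Already a concrete address type; keep it as is.
			out = append(out, a)
			continue
		}

		dns, err := unwrap(op.Payload)
		if err != nil {
			return nil, err
		}
		if len(dns) == 0 {
			// Nothing recognisable was extracted; keep the
			// opaque address unchanged (an assumption in this
			// sketch, not necessarily lnd's behaviour).
			out = append(out, a)
			continue
		}

		out = append(out, dns...)
	}

	return out, nil
}
```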