
Conversation


@nowakjd nowakjd commented Sep 4, 2025

Implementation of MyMap interface


@mateacademy-ai-mentor mateacademy-ai-mentor left a comment


Review Summary

I'm requesting changes due to two critical blockers that break core functionality and violate explicit requirements.

  • Incorrect bucket indexing (critical):

    • Location: src/main/java/core/basesyntax/MyHashMap.java, method getIndex (the line where the modulus is hardcoded to 16).
    • Why it's wrong: Using a hardcoded 16 ignores the current capacity and breaks rehashing after resize, violating the requirement to handle hashing and bucket indexing correctly and checklist item #2 (avoid magic numbers).
    • How to fix: Replace the hardcoded modulus with the current capacity, e.g. use table.length or a capacity field: int index = (hash & 0x7fffffff) % table.length;. Ensure every call made during resizing also uses the updated capacity.
  • Incorrect size management (critical):

    • Location: insertNode in MyHashMap.java (lines where size++ occurs both on initial insert and after traversing/updating nodes).
    • Why it's wrong: size should reflect the number of distinct keys. Incrementing size when updating an existing key inflates the count and violates the requirement “Update size when a new key is inserted.”
    • How to fix: Increment size only when a new node is actually added to a bucket. If a matching key is found and its value is updated, do not change size. One approach: track whether you inserted a new node (e.g., a boolean flag) and increment only in that branch.

What you did well:

  • Clear naming, correct generics usage, chaining for collisions, constants for initial capacity and load factor, and overall structure.

Minor suggestions (optional):

  • After fixing getIndex, re-run tests that cover resize and retrieval post-resize to validate correct redistribution.

Please address the two issues above. Once fixed, your implementation should meet the requirements. Great work overall—these are quick changes that will solidify correctness.


✨ Please rate this review in the Platform's notification center1. I will be grateful for your feedback! ✨

Important

Fix the issues, commit the changes, and then re-request my review using the Re-request review button2 in the Reviewers tab.

Tip

If you have any comments about this review, please leave a message in the Add a comment section below and mention me with the @mateacademy-ai-mentor tag3.
I will read your message during my next review of your pull request. Please do not reply directly to my comments, as that feature is not supported yet.

Caution

Do not close the pull request until the review is complete. Closing it early can lead to unexpected behavior and disrupt the review process.

Footnotes

  1. Rate AI review example

  2. Re-request AI review example

  3. Add AI review comments example

}

private int getIndex(K key) {
    return (key == null) ? 0 : (key.hashCode() & Integer.MAX_VALUE) % 16;
}


The getIndex method uses a hardcoded modulus of 16 instead of the current capacity. This violates the requirement for dynamic resizing and correct bucket indexing. It should use 'capacity' instead of '16'.

This violates checklist item #2: 'Avoid magic numbers; use named constants for capacities, load factor, and related threshold calculations.'

It also violates the requirement: 'The map must correctly handle hashing and bucket indexing to store and retrieve key-value pairs.'
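
The suggested fix can be sketched as follows. The class name, the Object[] table field, and the use of main as a quick check are illustrative assumptions, not the author's actual code; only the indexing expression mirrors the review's recommendation:

```java
// Sketch of the suggested fix: derive the bucket index from the current
// table length rather than a hardcoded 16. Class and field names are
// illustrative assumptions, not the author's actual code.
public class IndexSketch {
    private Object[] table = new Object[16]; // capacity may grow on resize

    int getIndex(Object key) {
        // Mask the sign bit so the result is non-negative, then reduce
        // by the live capacity so indexing stays valid after a resize.
        return (key == null) ? 0 : (key.hashCode() & Integer.MAX_VALUE) % table.length;
    }

    public static void main(String[] args) {
        IndexSketch m = new IndexSketch();
        int before = m.getIndex("alpha");
        m.table = new Object[32]; // simulate a resize
        int after = m.getIndex("alpha");
        System.out.println(before < 16 && after < 32); // prints "true"
    }
}
```

Because the modulus is always the live table.length, every index the method returns is in bounds for whatever capacity the map currently has, which is exactly the property the hardcoded 16 breaks after the first resize.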

Comment on lines 74 to 89
            size++;
            return;
        } else if (Objects.equals(bucket.key, newNode.key)) {
            bucket.value = newNode.value;
            return;
        } else {
            while (bucket.next != null) {
                if (Objects.equals(bucket.key, newNode.key)) {
                    bucket.value = newNode.value;
                    return;
                }
                bucket = bucket.next;
            }
            bucket.next = newNode;
        }
        size++;


The insertNode method increments size both when a new node is inserted at an empty bucket and again after the while loop, which may result in size being incremented even when updating an existing key. The size should only increase when a new key is inserted, not when updating an existing key.

This violates the requirement: 'Update size when a new key is inserted.'
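
One way to satisfy that requirement is to increment size only in the branches that actually create a node, as in this sketch. The class, Node layout, and method names here are illustrative assumptions modelled on the review's description, not the author's actual code:

```java
import java.util.Objects;

// Sketch: bump size only when a genuinely new key lands in the map.
// Names mirror the review's description but are illustrative assumptions.
public class SizeSketch<K, V> {
    private static class Node<K, V> {
        final K key;
        V value;
        Node<K, V> next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    @SuppressWarnings("unchecked")
    private Node<K, V>[] table = (Node<K, V>[]) new Node[16];
    private int size;

    void put(K key, V value) {
        int index = (key == null) ? 0 : (key.hashCode() & Integer.MAX_VALUE) % table.length;
        Node<K, V> bucket = table[index];
        if (bucket == null) {
            table[index] = new Node<>(key, value);
            size++;                   // new key in an empty bucket
            return;
        }
        while (true) {
            if (Objects.equals(bucket.key, key)) {
                bucket.value = value; // existing key: update only, size unchanged
                return;
            }
            if (bucket.next == null) {
                bucket.next = new Node<>(key, value);
                size++;               // new key appended to the chain
                return;
            }
            bucket = bucket.next;
        }
    }

    int getSize() {
        return size;
    }

    public static void main(String[] args) {
        SizeSketch<String, Integer> map = new SizeSketch<>();
        map.put("a", 1);
        map.put("a", 2); // update of an existing key, not an insert
        map.put("b", 3);
        System.out.println(map.getSize()); // prints 2
    }
}
```

Checking the key before deciding whether to append also closes the related gap in the quoted snippet, where the last node of a chain was never compared against the new key.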


@mateacademy-ai-mentor mateacademy-ai-mentor left a comment


Review Summary

Decision: APPROVE

Your MyHashMap implementation meets all functional and stylistic requirements. Bucket indexing correctly uses the current capacity, resizing and rehashing work as expected, and size is updated only on new insertions. Public methods (put, getValue, getSize) behave correctly, collisions are handled via chaining, and constants/generics/access modifiers are used appropriately. No critical blockers or checklist violations were found.

Minor optional improvement ideas:

  • Consider adding brief Javadoc to put and getValue to clarify behavior for null keys/values and update vs. insert semantics.

Great work—clean, readable, and robust implementation!


✨ Please rate this review in the Platform's notification center1. I will be grateful for your feedback! ✨

Footnotes

  1. Rate AI review example
