Done Rick and Morty task #266
base: main
Conversation
Review Summary
Decision rationale
- The implementation misses core requirements that are critical blockers: fetching all data from the public API once at startup is not implemented, due to missing pagination and incorrect DTO mapping for `info`. This prevents completing the task as specified.
Critical issues to fix (blockers)
- Pagination and DTO mapping (RickAndMortyClient.java, RickAndMortyResponseDto.java, InfoDto.java):
  - Currently only the first page is fetched; implement a loop following `info.next` until it is null and accumulate all results.
  - Replace `io.swagger.v3.oas.models.info.Info` with your `InfoDto` in `RickAndMortyResponseDto` and ensure `InfoDto` maps the API keys correctly (`next`, `prev`). If you keep the current field name `previous`, annotate it with `@JsonProperty("prev")` or rename it to `prev`.
- Idempotent initial load and uniqueness (CharacterServiceImpl.java, Character.java):
  - Add a DB uniqueness constraint on `external_id` (e.g., `@Column(unique = true)`), and make `loadInitialData()` skip already existing records (e.g., by checking existing `externalId`s or by skipping when the data is already loaded). Add `findByExternalId(String)` to the repository to support this.
- Startup trigger safety (DataLoader.java):
  - Avoid `@PostConstruct` for external calls. Use `ApplicationRunner`/`ApplicationReadyEvent` and guard it with a profile or property so tests (H2) don't hit the public API. A sketch of this appears right after this list.
- Robustness and logging (RickAndMortyClient.java, CharacterServiceImpl.java, DataLoader.java):
  - Remove `System.out.println` of large payloads; use SLF4J logging. Add basic retry/backoff in the client for transient failures and fail fast on an incomplete fetch to avoid partial DB writes.
- Efficient random selection and empty handling (CharacterServiceImpl.java, CharacterController.java):
  - Replace `findAll()` + in-memory random with `count()` + a single-row `PageRequest` by offset, and handle an empty DB explicitly (e.g., return Optional and map it to 404/204). Also make the search param optional in the controller (`@RequestParam(required = false) String name`) to avoid 400 on missing input.
Minor improvements (non-blockers)
- Update Swagger docs to mark the search parameter optional and add examples.
- Consider brief JavaDoc on service methods describing expected behaviors.
You’ve set up a clear architecture and the DTO/entity mapping is on the right track. Addressing the above items will align the app with the checklist and make it robust for startup and testing. Keep going—you’re close.
✨ Please rate this review in the Platform's notification center. I will be grateful for your feedback! ✨
Important
Fix the issues, commit the changes, and then re-request my review using the Re-request review button in the Reviewers tab.
Tip
If you have any comments about this review, please leave a message in the Add a comment section below and mention me with the @mateacademy-ai-mentor tag.
I will read your message during my next review of your pull request. Please do not reply directly to my comments, as that feature is not supported yet.
Caution
Do not close the pull request until the review is complete. Closing it early can lead to unexpected behavior and disrupt the review process.
HttpRequest httpRequest = HttpRequest.newBuilder()
        .GET()
        .uri(URI.create(BASE_URL))
        .build();
- Issue: Only a single request to the base URL is built here, which fetches only the first page of results.
- Why: The task requires fetching "all data from the public API" once during startup. You must follow pagination (`info.next`) and loop until no next page remains.
- Suggestion: Implement a loop that follows the `next` URL from the response info and accumulates results from every page; return the full list only after all pages are successfully fetched. See the project description requirement about fetching all data once.
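A hedged sketch of that paging loop, assuming the project's `RickAndMortyResponseDto`/`InfoDto` records expose `results()` and `info().next()`, and that `objectMapper` and a final `INITIAL_URL` constant are fields of the client (names are assumptions):

```java
// Sketch: follow info.next until it is null, accumulating every page.
public List<ExternalCharacterDto> fetchAllCharacters() {
    HttpClient httpClient = HttpClient.newHttpClient();
    List<ExternalCharacterDto> allCharacters = new ArrayList<>();
    String pageUrl = INITIAL_URL; // e.g. "https://rickandmortyapi.com/api/character"
    try {
        while (pageUrl != null) {
            HttpRequest request = HttpRequest.newBuilder()
                    .GET()
                    .uri(URI.create(pageUrl))
                    .build();
            HttpResponse<String> response = httpClient
                    .send(request, HttpResponse.BodyHandlers.ofString());
            RickAndMortyResponseDto page = objectMapper
                    .readValue(response.body(), RickAndMortyResponseDto.class);
            allCharacters.addAll(page.results());
            pageUrl = page.info().next(); // null on the last page ends the loop
        }
    } catch (IOException e) {
        throw new RuntimeException("Failed to fetch characters from " + pageUrl, e);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
        throw new RuntimeException("Interrupted while fetching characters", e);
    }
    return allCharacters;
}
```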
HttpResponse<String> response = httpClient
        .send(httpRequest, HttpResponse.BodyHandlers.ofString());

System.out.println(response.body());
- Issue: Debug printing of the entire response body.
- Why: `System.out.println(response.body())` can leak large payloads and is not appropriate for production/CI; tests and logs get noisy.
- Suggestion: Remove the println and use a logger at DEBUG level to log minimal info (status, page number) if needed.
RickAndMortyResponseDto apiResponse = objectMapper
        .readValue(response.body(), RickAndMortyResponseDto.class);
- Issue: Deserialization target mismatch risk.
- Why: The code deserializes into `RickAndMortyResponseDto`. Ensure that `RickAndMortyResponseDto` uses the correct `InfoDto` type that matches the external API (fields: count, pages, next, prev). Currently the project's wrapper imports a wrong Info type, which will break Jackson mapping. Fix the wrapper DTO import/type so `.info().next()` (or equivalent) is available for pagination.
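For reference, an `InfoDto` shaped like the public API's `info` object might look as follows; the record components are an assumption based on the documented API fields.

```java
import com.fasterxml.jackson.annotation.JsonProperty;

// Sketch: pagination metadata matching the public API's "info" object.
// Keeping the component named "previous" works as long as it is bound to "prev".
public record InfoDto(
        int count,
        int pages,
        String next,
        @JsonProperty("prev") String previous
) {
}
```

The wrapper `RickAndMortyResponseDto` would then declare `InfoDto info` and `List<ExternalCharacterDto> results` instead of the Swagger `Info` type.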
RickAndMortyResponseDto apiResponse = objectMapper
        .readValue(response.body(), RickAndMortyResponseDto.class);

allCharacters.addAll(apiResponse.results());
- Issue: Only first-page results are added to allCharacters.
- Why: `allCharacters.addAll(apiResponse.results())` is executed once. Without following `apiResponse.info().next` you will not gather subsequent pages and will save an incomplete dataset to the DB.
- Suggestion: Change the logic to add results for each page inside a loop that follows `info.next` until it is null. Accumulate results and return the full set. Also consider how to avoid partial DB writes on failure (persist only after a full successful fetch, or use a transactional/atomic approach).
} catch (IOException | InterruptedException e) {
    throw new RuntimeException("Failed to fetch information from Rick and Morty API ", e);
- Issue: Exception handling lacks retries and may cause partial or unexpected failures.
- Why: A single IO/InterruptedException is wrapped into RuntimeException without retry; transient network errors may be recoverable. The startup loader requirement expects robust handling (retries or clear failure) to avoid inconsistent DB state.
- Suggestion: Add a small retry mechanism (e.g., 2–3 attempts with backoff) for transient failures. If full fetch cannot be completed, propagate a clear error to prevent persisting partial data.
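A possible shape for that retry, sketched with illustrative attempt counts and backoff; the `logger` field and message text are assumptions, not project code.

```java
// Sketch: retry a single page fetch a few times with linear backoff,
// restoring the interrupt flag if the thread is interrupted.
private String fetchWithRetry(HttpClient client, HttpRequest request) {
    final int maxRetries = 3; // illustrative
    for (int attempt = 1; attempt <= maxRetries; attempt++) {
        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 200) {
                return response.body();
            }
            logger.warn("Attempt {}/{} returned status {}",
                    attempt, maxRetries, response.statusCode());
        } catch (IOException e) {
            logger.warn("Attempt {}/{} failed", attempt, maxRetries, e);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException("Interrupted while calling the Rick and Morty API", e);
        }
        if (attempt < maxRetries) {
            try {
                Thread.sleep(1000L * attempt); // simple linear backoff
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RuntimeException("Interrupted during retry backoff", e);
            }
        }
    }
    throw new RuntimeException("Failed to fetch data after " + maxRetries + " attempts");
}
```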
        })
        .toList();

characterRepository.saveAll(toSaveDataEntity);
Calling saveAll right after mapping will persist all fetched items unconditionally. To avoid partial or duplicate data, first complete the full fetch (all pages) and then persist in a single transactional operation, filtering out existing items. Also consider batch size/flush behavior if dataset is large. Currently saveAll can create duplicates on restart.
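One way to make the save step idempotent, sketched under the assumption of a `findAllExternalIds()` projection and a `CharacterMapper.toEntity` helper (both names are suggestions made elsewhere in this review, not existing code):

```java
// Sketch: fetch everything first, then persist only the records that are new.
@Transactional
public void loadInitialData() {
    List<ExternalCharacterDto> fetched = rickAndMortyClient.fetchAllCharacters(); // all pages
    Set<String> existingIds = new HashSet<>(characterRepository.findAllExternalIds()); // assumed projection
    List<Character> newCharacters = fetched.stream()
            .filter(dto -> !existingIds.contains(String.valueOf(dto.id())))
            .map(CharacterMapper::toEntity) // assumed mapper
            .toList();
    characterRepository.saveAll(newCharacters);
}
```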
@Override
public void loadInitialData() {
    System.out.println("loadInitialData() виконується...");
Avoid System.out.println for production-ready apps; use an SLF4J Logger. Replace the prints at startup, after fetch, and after DB save with logger.debug/info/error as appropriate to provide structured and configurable logs.
public CharacterResponseDto getRandomCharacter() {
    List<Character> characters = characterRepository.findAll();
    if (characters.isEmpty()) {
        return null;
    }
    Character characterEntity = characters.get(random.nextInt(characters.size()));
The current random selection implementation loads all characters into memory (findAll) and then selects one. For large datasets this is inefficient. Use repository.count() to get total and then fetch a single row using pagination or a native query with LIMIT 1 OFFSET. Also handle the empty-database case explicitly (return Optional or throw a custom exception mapped to 404) rather than returning null. This addresses checklist item #3.15 and avoids loading entire table into memory.
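A possible shape for that lookup, assuming a Spring Data repository with `findAll(Pageable)` and a `CharacterMapper.toDto` helper (the mapper name is an assumption):

```java
// Sketch: count rows, pick a random offset, fetch a one-row page, and
// handle an empty or unexpectedly short result instead of returning null.
public Optional<CharacterResponseDto> getRandomCharacter() {
    long count = characterRepository.count();
    if (count == 0) {
        return Optional.empty();
    }
    int randomIndex = random.nextInt((int) count);
    // With a page size of 1 the page index doubles as the row offset.
    Page<Character> page = characterRepository.findAll(PageRequest.of(randomIndex, 1));
    if (!page.hasContent()) {
        return Optional.empty();
    }
    return Optional.of(CharacterMapper.toDto(page.getContent().get(0)));
}
```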
public List<CharacterResponseDto> searchByName(String name) {
    return (name == null || name.isBlank())
            ? List.of() :
            characterRepository.findByNameContainingIgnoreCase(name).stream()
            .map(this::toDto)
            .toList();
searchByName returns an empty list when name is null or blank — this is acceptable per requirements. Ensure the controller parameter is optional (@RequestParam(required = false)) so requests without the parameter don't produce 400. The controller currently requires the parameter by default; consider aligning controller and service behavior.
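In the controller this could look roughly like the following; the endpoint path is an assumption.

```java
// Sketch: the parameter is optional, so a request without ?name returns an
// empty list (matching the service behavior) rather than a 400 response.
@GetMapping("/characters/search")
public List<CharacterResponseDto> searchByName(@RequestParam(required = false) String name) {
    return characterService.searchByName(name);
}
```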
@Service
@Transactional
Transactionality: class-level `@Transactional` is good, but to avoid partial DB state you must: 1) fetch all external pages first, 2) then begin a transaction and persist the results (or upsert), and 3) fail the startup if the fetch is incomplete (fail-fast). This makes the initial load atomic and consistent.
Review Summary
I’m requesting changes because there are a few critical blockers that can break functionality and test stability.
Critical issues to fix:
- RickAndMortyClient.java: the mutable static `BASE_URL` is reassigned inside the paging loop (e.g., lines ~23/38/58). After a full run it becomes null, so subsequent calls won't fetch anything. Replace it with a final constant (e.g., `INITIAL_URL`) and use a local `pageUrl` variable updated each iteration.
- DataLoader.java: the initial load runs unconditionally on every ApplicationContext creation, which will trigger real network calls during tests. Gate it via a profile or a conditional property (e.g., `@ConditionalOnProperty(name = "app.initial-load.enabled", havingValue = "true", matchIfMissing = true)`) and set that property to false in the test properties. Ensure the same property keys exist in both the main and test property files.
- CharacterController.java: the random endpoint returns `Optional<CharacterResponseDto>` directly. Map the `Optional` to a `ResponseEntity` and return 200 with a body when present, otherwise 404/204 (see the sketch after this list). Remove `@ResponseStatus` on that method when returning `ResponseEntity`.
- InfoDto.java: the external API uses `prev` but the record component is named `previous`. Rename it to `prev` or annotate it with `@JsonProperty("prev")` so Jackson binds correctly.
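For the controller point above, mapping the Optional could be sketched like this; the path and the choice of 404 over 204 are assumptions.

```java
// Sketch: unwrap the Optional into an HTTP status instead of returning it directly.
@GetMapping("/characters/random")
public ResponseEntity<CharacterResponseDto> getRandomCharacter() {
    return characterService.getRandomCharacter()
            .map(ResponseEntity::ok)
            .orElseGet(() -> ResponseEntity.notFound().build());
}
```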
Important robustness fixes (please address alongside the above):
- RickAndMortyClient.java: use `LoggerFactory.getLogger(RickAndMortyClient.class)`; check `response.statusCode()` before parsing JSON; include the caught exception in logs (e.g., `logger.warn("...", e)`); preserve the thread interrupt and avoid throwing a generic RuntimeException that hides the original context.
- CharacterServiceImpl.java: use `LoggerFactory.getLogger(CharacterServiceImpl.class)`; avoid `findAll()` to build the existing ID set (use a projection or `existsByExternalId` per DTO); in random selection, check `page.hasContent()` before accessing the first element.
You’ve got the main structure right (endpoints, DTOs, pagination idea, entity constraints). With the above fixes, the app will be stable, idempotent, and test-safe. Keep going — you’re close!
✨ Please rate this review in the Platform's notification center. I will be grateful for your feedback! ✨
Important
Fix the issues, commit the changes, and then re-request my review using the Re-request review button in the Reviewers tab.
Tip
If you have any comments about this review, please leave a message in the Add a comment section below and mention me with the @mateacademy-ai-mentor tag.
I will read your message during my next review of your pull request. Please do not reply directly to my comments, as that feature is not supported yet.
Caution
Do not close the pull request until the review is complete. Closing it early can lead to unexpected behavior and disrupt the review process.
@Component
public class RickAndMortyClient {
    private static final Logger logger = LoggerFactory.getLogger(DataLoader.class);
Logger initialization uses DataLoader.class which is confusing and should use this client class. Use LoggerFactory.getLogger(RickAndMortyClient.class) and remove the DataLoader import if it becomes unused.
public class RickAndMortyClient {
    private static final Logger logger = LoggerFactory.getLogger(DataLoader.class);
    private static final int MAX_RETRIES = 3;
    private static String BASE_URL = "https://rickandmortyapi.com/api/character";
BASE_URL is declared static and mutated inside fetchAllCharacters. After a full fetch this will end up null and further calls to fetchAllCharacters will do nothing. This makes the method stateful across invocations and is a source of bugs in tests and multiple application contexts. Replace with a final constant for the initial URL (e.g. INITIAL_URL) and use a local variable inside the method (e.g. String pageUrl = INITIAL_URL) that you update each iteration.
HttpClient httpClient = HttpClient.newHttpClient();

while (BASE_URL != null) {
The fetch loop uses the shared BASE_URL variable (while (BASE_URL != null)) and then sets BASE_URL = apiResponse.info().next(). Mutating shared static state during the paging loop is unsafe and breaks idempotence. Use a local pageUrl variable instead and keep any constants final.
HttpResponse<String> response = httpClient
        .send(httpRequest, HttpResponse.BodyHandlers.ofString());

logger.debug("Fetched URL: {}, Status code: {}",
        BASE_URL, response.statusCode());

RickAndMortyResponseDto apiResponse = objectMapper
        .readValue(response.body(), RickAndMortyResponseDto.class);
After receiving the response you parse the body immediately. You should check response.statusCode() and handle non-200 responses before attempting to parse JSON (e.g., log and retry or throw a descriptive exception). Currently a 4xx/5xx will still attempt to parse and likely fail with a cryptic error.
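A defensive check before parsing might look like this; whether to retry or throw on a non-200 is left open, and the message text is illustrative.

```java
// Sketch: validate the HTTP status before handing the body to Jackson.
if (response.statusCode() != 200) {
    throw new RuntimeException("Unexpected status " + response.statusCode()
            + " while fetching " + pageUrl);
}
RickAndMortyResponseDto apiResponse = objectMapper
        .readValue(response.body(), RickAndMortyResponseDto.class);
```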
} catch (IOException | InterruptedException e) {
    logger.warn("Attempt {}/{} failed for URL: {}",
            attempt, MAX_RETRIES, BASE_URL);
When catching IOException|InterruptedException you log a warning but do not include the exception object. Include the exception in the log (e.g., logger.warn("Attempt {}/{} failed for URL: {}", attempt, MAX_RETRIES, pageUrl, e)) to preserve stacktrace for debugging.
import java.util.stream.Collectors;
import lombok.RequiredArgsConstructor;
import mate.academy.rickandmorty.client.RickAndMortyClient;
import mate.academy.rickandmorty.config.DataLoader;
Unnecessary import: DataLoader is only imported to obtain a logger in this class. Replace the logger target and remove this import. Use LoggerFactory.getLogger(CharacterServiceImpl.class) so logs are attributed to this class and the DataLoader import can be removed.
@RequiredArgsConstructor
public class CharacterServiceImpl implements CharacterService {

    private static final Logger logger = LoggerFactory.getLogger(DataLoader.class);
Logger is created with DataLoader.class which misattributes logs to the wrong class. Change to LoggerFactory.getLogger(CharacterServiceImpl.class).
Set<String> existingExternalIds = characterRepository.findAll().stream()
        .map(Character::getExternalId)
        .collect(Collectors.toSet());
This call uses `characterRepository.findAll()` and then maps to external IDs. Loading all entity instances into memory can be expensive for large datasets. Consider a projection repository method that returns only the external IDs (e.g., `List<String> findAllExternalIds()`), or check existence per DTO with `characterRepository.existsByExternalId(dtoId)` when processing each DTO, to avoid high memory usage.
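A projection along these lines might work; the JPQL assumes the entity is named `Character` with an `externalId` field, and the `Long` ID type is an assumption.

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;

public interface CharacterRepository extends JpaRepository<Character, Long> {

    // Projection of just the external IDs, avoiding full entity hydration.
    @Query("SELECT c.externalId FROM Character c")
    List<String> findAllExternalIds();

    // Per-DTO existence check as an alternative to loading the whole ID set.
    boolean existsByExternalId(String externalId);
}
```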
List<Character> toSaveInitialData = externalCharacterDto.stream()
        .filter(dto -> !existingExternalIds.contains(dto.id().toString()))
        .map(dto -> {
            Character character = new Character();
            character.setExternalId(dto.id().toString());
            character.setName(dto.name());
            character.setStatus(dto.status());
            character.setGender(dto.gender());
            return character;
        })
        .toList();
When mapping ExternalCharacterDto to Character you convert dto.id() to string which is fine. If you switch to checking existsByExternalId(dto.id().toString()) per DTO to avoid loading all entities, you can insert only new ones. Also consider wrapping mapping logic into a mapper method to keep this method focused on flow rather than object construction.
long count = characterRepository.count();
if (count == 0) {
    return Optional.empty();
}

int randomIndex = random.nextInt((int) count);
Page<Character> page = characterRepository.findAll(PageRequest.of(randomIndex, 1));

Character character = page.getContent().get(0);
return Optional.of(CharacterMapper.toDto(character));
Random selection: using count() and PageRequest.of(randomIndex, 1) works when page size is 1, but be defensive: check that the returned Page has content before calling get(0) to avoid IndexOutOfBoundsException in edge cases (concurrent deletes or unexpected empty page). Also consider DB-level randomness alternatives for performance/accuracy if needed.
Review Summary
Educational approval to help you progress while learning.
Overall, solid structure: clear DTOs, mapper, repository search, and Swagger annotations are in place. Critical fixes to address next:
- In `RickAndMortyClient` (around the request creation), you use `URI.create(BASE_URL)` each loop iteration; this prevents pagination from advancing. Use `pageUrl` for the request URI, logs, and error messages so each page is fetched correctly [CHECKLIST ITEM #2]. Also, handle `InterruptedException` separately and call `Thread.currentThread().interrupt()` to preserve the interrupt flag.
- In `InfoDto`, the field should be `prev` (or annotated with `@JsonProperty("prev")`) to match the external API; otherwise pagination mapping breaks [CHECKLIST ITEM #2].
- In `DataLoader`, gate execution with `@ConditionalOnProperty(name = "app.initial-load.enabled", havingValue = "true", matchIfMissing = true)` and set it to false in the test properties to avoid network calls during tests; log exceptions with `logger.error("...", e)` [CHECKLIST ITEM #2, #7].
- In `CharacterController.getRandomCharacter()`, return `ResponseEntity<CharacterResponseDto>` and map the Optional to 200 or 204/404 instead of returning `Optional` directly.
Minor improvements: consider renaming `findByExternalId()` (the projection of all IDs) to something like `findAllExternalIds()`, and make the random selection logic robust if the page size changes. Great job on the entity design (separate `externalId`) and search functionality—keep it up!
✨ Please rate this review in the Platform's notification center. I will be grateful for your feedback! ✨
HttpRequest httpRequest = HttpRequest.newBuilder()
        .GET()
        .uri(URI.create(BASE_URL))
Bug: The HttpRequest is created with `URI.create(BASE_URL)`, so every iteration requests the same initial URL instead of the page-specific URL. Replace the constant with the local `pageUrl` variable here so the loop can advance through pages and import all characters (this is required by the pagination requirement, checklist item #2).
logger.debug("Fetched URL: {}, Status code: {}",
        BASE_URL, response.statusCode());
Logging / diagnostic: You log the fetched URL using `BASE_URL`. Use `pageUrl` instead so log lines reflect the actual URL being requested (this helps when debugging pagination and network issues).
if (response.statusCode() != 200) {
    throw new RuntimeException("Failed to fetch characters. URL: "
            + " Status code: " + response.statusCode()
            + " Body: " + response.body());
Error messages: When constructing the RuntimeException on a non-200 response, and when throwing after max retries, you use `BASE_URL` in the message. Use the actual `pageUrl` so the error text points to the correct failing URL (important for diagnosing which page failed).
} catch (IOException | InterruptedException e) {
    logger.warn("Attempt {}/{} failed for URL: {}",
            attempt, MAX_RETRIES, pageUrl, e);

    if (attempt == MAX_RETRIES) {
        throw new RuntimeException("Failed after " + MAX_RETRIES
                + " attempts for URL: " + BASE_URL, e);
InterruptedException handling: the `catch (IOException | InterruptedException e)` block treats InterruptedException the same as IOException and does not restore the interrupted flag. Separate the catches (or inspect the instance type) so that when an `InterruptedException` is caught you call `Thread.currentThread().interrupt()` before propagating or returning. Otherwise the thread's interrupt status is lost, which can hide shutdown signals and cause test flakiness.
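Separating the catch blocks could look roughly like this; the surrounding method structure and variable names are assumptions.

```java
// Sketch: keep I/O retries and interruption handling separate; restoring the
// interrupt flag lets callers and the JVM see the shutdown signal.
try {
    HttpResponse<String> response = httpClient
            .send(httpRequest, HttpResponse.BodyHandlers.ofString());
    // ... status check, parsing, pagination ...
} catch (IOException e) {
    logger.warn("Attempt {}/{} failed for URL: {}", attempt, MAX_RETRIES, pageUrl, e);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
    throw new RuntimeException("Interrupted while fetching " + pageUrl, e);
}
```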
logger.warn("Attempt {}/{} failed for URL: {}",
        attempt, MAX_RETRIES, pageUrl, e);
Retry message: The logger.warn call already passes the exception and uses `pageUrl`, which is good, and the message already includes the attempt number and max retries. No change is required here other than ensuring the URL variable stays correct.
logger.info("Starting initial data load from external API...");
List<ExternalCharacterDto> externalCharacterDto = rickAndMortyClient.fetchAllCharacters();
logger.info("Received {} characters", externalCharacterDto.size());
This call invokes the external client which performs network requests. If DataLoader or other startup logic calls this unconditionally during tests, tests will make real network calls and become flaky/slow. Ensure the startup runner is gated (for example with @ConditionalOnProperty or a profile) so tests disable initial import. This enforces the requirement that the import runs once at application startup in production but not during test runs (checklist item #2).
Set<String> existingExternalIds = new HashSet<>(characterRepository.findByExternalId());

List<Character> toSaveInitialData = externalCharacterDto.stream()
        .filter(dto -> !existingExternalIds.contains(dto.id().toString()))
        .map(CharacterMapper::toEntity)
        .toList();
You build a Set of existing external IDs by calling `characterRepository.findByExternalId()` and wrapping it in a HashSet. This is functional and avoids duplicate inserts. Two notes:
- The repository method name `findByExternalId()` is misleading because it returns all externalId strings; consider renaming it to `findAllExternalIds()` or similar for clarity.
- For very large datasets, loading all IDs into memory may be heavy. If that becomes a problem later, consider batching or a DB-side merge/upsert approach. Currently this satisfies the idempotency requirement (no duplicate imports).
int randomIndex = random.nextInt((int) count);
Page<Character> page = characterRepository.findAll(PageRequest.of(randomIndex, 1));
Random selection: you compute `randomIndex = random.nextInt((int) count)` and then use `PageRequest.of(randomIndex, 1)`. Because the page size is 1, this effectively selects the element at offset randomIndex and is correct. However, this ties correctness to the page size being 1 — if the page size changes, the result will be incorrect. Consider documenting this assumption or computing the page index as `pageIndex = randomIndex / pageSize` to be explicit.
List<Character> existsContent = page.getContent();
if (existsContent.isEmpty()) {
    return Optional.empty();
}

Character character = page.getContent().get(0);
Minor: you call `page.getContent()` twice (lines 78 and 83). After checking `existsContent`, reuse `existsContent.get(0)` instead of calling `page.getContent().get(0)` again. This is a small cleanup to avoid duplicate calls and improve readability.
public List<CharacterResponseDto> searchByName(String name) {
    return (name == null || name.isBlank())
            ? List.of() :
            characterRepository.findByNameContainingIgnoreCase(name).stream()
            .map(CharacterMapper::toDto)
            .toList();
Good: `searchByName` returns an empty list when `name` is null or blank and otherwise uses `findByNameContainingIgnoreCase`, which implements the required substring, case-insensitive search on the name field (meets checklist item #1 for the search endpoint). Keep this behavior and its tests.