fix: real-time team member sync without page reload #362
compiler041 wants to merge 1 commit into AOSSIE-Org:main from
Conversation
📝 Walkthrough

This pull request refactors WebSocket handler signatures across multiple backend modules by removing unused connection parameters, simplifies error handling patterns, consolidates variable allocations, and adds periodic polling to the team builder frontend.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 7
🧹 Nitpick comments (6)
frontend/src/Pages/TeamBuilder.tsx (3)
225-232: Consider extracting ID normalization to a helper function.

The inline double-cast logic for handling both string IDs and the MongoDB `{ $oid: "..." }` format is repeated for both `captainId` and `user.id`. Extracting this to a utility would improve readability and reduce duplication.

♻️ Suggested helper

```ts
const normalizeId = (id: unknown): string => {
  if (typeof id === "string") return id;
  if (id && typeof id === "object" && "$oid" in id) {
    return (id as { $oid: string }).$oid;
  }
  return String(id);
};
```

Then in `isCaptain`:

```diff
-      const captainIdStr =
-        typeof team.captainId === "string"
-          ? team.captainId
-          : ((team.captainId as unknown) as Record<string, string>)?.$oid || String(team.captainId);
-      const userIdStr =
-        typeof user.id === "string"
-          ? user.id
-          : ((user.id as unknown) as Record<string, string>)?.$oid || String(user.id);
+      const captainIdStr = normalizeId(team.captainId);
+      const userIdStr = normalizeId(user.id);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/Pages/TeamBuilder.tsx` around lines 225 - 232, The duplicated inline ID normalization for captainId and user.id should be extracted into a utility like normalizeId(id: unknown): string; implement normalizeId to return the string directly if id is a string, return (id as { $oid: string }).$oid when an object contains $oid, and fallback to String(id) otherwise, then replace the current expressions that compute captainIdStr and userIdStr with calls to normalizeId(team.captainId) and normalizeId(user.id) (used in the isCaptain/check logic) to remove duplication and improve readability.
726-726: Add a newline at end of file.

The file is missing a trailing newline, which violates POSIX standards and can cause minor issues with some diff tools and version control systems.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/Pages/TeamBuilder.tsx` at line 726, The file ends with "export default TeamBuilder;" but lacks a trailing newline; add a single newline character after the final line (after the export default TeamBuilder statement) so the file ends with a newline to satisfy POSIX/VC expectations.
348-348: `onKeyPress` is deprecated; prefer `onKeyDown`.

The `onKeyPress` event is deprecated in React. Consider using `onKeyDown` for better compatibility.

♻️ Proposed fix

```diff
-            onKeyPress={(e) => e.key === "Enter" && handleJoinByCode()}
+            onKeyDown={(e) => e.key === "Enter" && handleJoinByCode()}
```

This applies to other `onKeyPress` usages in this file as well (lines 385, 447).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/Pages/TeamBuilder.tsx` at line 348, Replace deprecated onKeyPress handlers with onKeyDown and keep the same Enter check; specifically update the JSX that currently uses onKeyPress={(e) => e.key === "Enter" && handleJoinByCode()} to use onKeyDown and the same predicate, and apply the same change for other onKeyPress usages in this component (e.g., the handlers associated with handleJoinByCode and similar input/button handlers) so keyboard Enter behaves correctly across the TeamBuilder component.

backend/websocket/debate_spectator.go (1)
147-151: Silent error swallowing in broadcast loops.

Errors from `WriteJSON` are checked but not logged or handled. This makes debugging connection issues difficult. Consider logging at debug/warn level.

Suggested fix

```diff
 for _, client := range clients {
 	if err := client.WriteJSON(eventData); err != nil {
+		log.Printf("BroadcastToDebate WriteJSON failed: %v", err)
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/websocket/debate_spectator.go` around lines 147 - 151, The broadcast loop currently swallows errors from client.WriteJSON(eventData); update it to handle and log failures instead of ignoring them: inside the loop where client.WriteJSON(eventData) is called (the for _, client := range clients block), detect non-nil err and call the project's logger at debug/warn level with context (which client/connection and the event name or eventData) and optionally close/remove the faulty client from clients to avoid repeated failures; reference the WriteJSON call, the clients slice/iteration and eventData to locate the change.

backend/test_server.go (1)
31-40: Empty `for range` loops serve no purpose.

These loops iterate over the pool but discard all values and perform no operations. Either log/print the pool contents for debugging or remove these loops entirely.
Suggested fix

```diff
 // Check pool — should have 2 users now
-for range ms.GetPool() {
+pool := ms.GetPool()
+log.Printf("Pool after starting matchmaking: %d users", len(pool))
+for id, user := range pool {
+	log.Printf("  %s: %v", id, user)
 }

 // Wait a bit for matching
 time.Sleep(1 * time.Second)

 // Check pool after matching
-for range ms.GetPool() {
-}
+log.Printf("Pool after matching: %d users", len(ms.GetPool()))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/test_server.go` around lines 31 - 40, The two empty for range loops over ms.GetPool() in backend/test_server.go are no-ops and should be removed or replaced with an explicit inspection/logging step; either delete both empty loops, or iterate the slice returned by ms.GetPool() and log its contents (e.g., using t.Logf or fmt.Printf) to assert expected pool size/state before and after matching — locate the no-op loops that call ms.GetPool() and change them to either direct assertions on len(ms.GetPool()) or to a loop that records/logs each entry for debugging.

backend/websocket/team_websocket.go (1)
697-849: Very long critical section in `handleTeamReadyStatus`.

The mutex acquired at line 699 is held until line 849, spanning ~150 lines including JSON marshaling (`json.Marshal`), multiple `SafeWriteJSON` calls to clients, and spawning a goroutine. This increases contention and latency for other operations on this room.

Consider restructuring to release the mutex earlier, snapshot the necessary state, then perform broadcasts outside the critical section.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/websocket/team_websocket.go` around lines 697 - 849, handleTeamReadyStatus holds room.Mutex across heavy work (json.Marshal, SafeWriteJSON loops, goroutine spawn), causing contention; refactor to keep the mutex only for quick state updates and snapshots: inside handleTeamReadyStatus use room.Mutex to validate client, update Team1Ready/Team2Ready, recompute counts, set client.LastActivity and capture minimal required state (assignedToTeam, current counts, room.CurrentPhase, and a copied slice/map of clients or their SafeWriteJSON wrappers), then release room.Mutex before marshaling readyMessage and doing all SafeWriteJSON calls; for the countdown path, set room.CurrentPhase = "countdown" while holding the mutex (or atomically) before releasing and then spawn the goroutine that reacquires the room (as the existing goroutine does) so no long-running I/O occurs while room.Mutex is locked; use symbols: handleTeamReadyStatus, room.Mutex, TeamRoom, TeamMessage, SafeWriteJSON, teamRooms, roomKey to locate and change the code.
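For reference, the snapshot-then-broadcast restructuring described above can be sketched as follows. This is a minimal illustration only: the `TeamRoom` and `Client` types here are simplified stand-ins, not the real definitions in team_websocket.go.

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

// Client is a simplified stand-in for the real websocket client type.
type Client struct{ ID string }

// SafeWriteJSON simulates a (potentially slow) network write.
func (c *Client) SafeWriteJSON(msg []byte) error {
	fmt.Printf("send to %s: %s\n", c.ID, msg)
	return nil
}

// TeamRoom is a simplified stand-in for the real room type.
type TeamRoom struct {
	Mutex      sync.Mutex
	Clients    map[string]*Client
	Team1Ready bool
	Team2Ready bool
}

// handleReady mutates room state and snapshots the client list under the
// mutex, then marshals and broadcasts outside the critical section so no
// I/O happens while the lock is held.
func handleReady(room *TeamRoom, team int) error {
	room.Mutex.Lock()
	if team == 1 {
		room.Team1Ready = true
	} else {
		room.Team2Ready = true
	}
	bothReady := room.Team1Ready && room.Team2Ready
	snapshot := make([]*Client, 0, len(room.Clients))
	for _, c := range room.Clients {
		snapshot = append(snapshot, c)
	}
	room.Mutex.Unlock() // released before any marshaling or writes

	msg, err := json.Marshal(map[string]any{"type": "ready", "bothReady": bothReady})
	if err != nil {
		return err
	}
	for _, c := range snapshot {
		_ = c.SafeWriteJSON(msg)
	}
	return nil
}

func main() {
	room := &TeamRoom{Clients: map[string]*Client{"a": {ID: "a"}}}
	_ = handleReady(room, 1)
	_ = handleReady(room, 2)
}
```

The same shape applies to the countdown path: flip `room.CurrentPhase` while the lock is held, unlock, and only then spawn the goroutine that re-acquires the room.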
🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Inline comments:

In `@backend/test_server.go`:
- Around line 12-29: The current test_server.go silently discards all errors (e.g., from ms.AddToPool, ms.GetPool, ms.StartMatchmaking) by using `if err := ...; err != nil { _ = err }`; update each error branch to handle failures properly: either log the error and fail the test/main (e.g., using log.Printf + os.Exit or t.Fatalf in tests), return the error to the caller, or panic, so failures are visible; alternatively, if this file is obsolete, remove it. Ensure you modify the error handling in the blocks that call ms.AddToPool, ms.GetPool, and ms.StartMatchmaking.
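The error-handling pattern this prompt asks for can be sketched as below. Note that `MatchmakingService`, `NewMatchmakingService`, and `AddToPool` here are hypothetical stand-ins for illustration, not the real signatures in test_server.go.

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// MatchmakingService is a hypothetical stand-in for the real service.
type MatchmakingService struct {
	pool map[string]string
}

func NewMatchmakingService() *MatchmakingService {
	return &MatchmakingService{pool: map[string]string{}}
}

// AddToPool returns an error instead of silently failing.
func (ms *MatchmakingService) AddToPool(userID string) error {
	if userID == "" {
		return errors.New("empty user ID")
	}
	ms.pool[userID] = userID
	return nil
}

func main() {
	ms := NewMatchmakingService()
	// Instead of discarding the error with `_ = err`, surface it so a
	// failure stops the run visibly.
	if err := ms.AddToPool("user-1"); err != nil {
		log.Fatalf("AddToPool failed: %v", err)
	}
	fmt.Println("pool size:", len(ms.pool))
}
```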
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5f4a875b-2c4b-4de1-99f2-0e9a7280c9c3
📒 Files selected for processing (5)
backend/test_server.go
backend/websocket/debate_spectator.go
backend/websocket/team_websocket.go
backend/websocket/websocket.go
frontend/src/Pages/TeamBuilder.tsx
```go
go readPump(client)

// Keep connection alive
select {}
```
Unbounded select {} blocks the goroutine forever.
The main handler goroutine blocks on select {} while readPump runs in a separate goroutine. When readPump exits (on connection close), this goroutine remains blocked indefinitely, leaking resources.
Consider having readPump signal completion so the handler can exit cleanly.
Suggested fix using a done channel

```diff
-	// ✅ FIX: removed unused `hub` param — readPump only needs client
-	go readPump(client)
-
-	select {}
+	done := make(chan struct{})
+	go func() {
+		readPump(client)
+		close(done)
+	}()
+
+	<-done
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/websocket/debate_spectator.go` at line 240, The handler goroutine
currently blocks forever on an unbounded select {} while readPump runs
separately; change it so readPump signals completion (e.g., via a done channel
or context) and the handler waits on that signal instead of a naked select.
Specifically, add a done chan struct{} (or use an existing ctx) in the scope
where readPump and the handler are started, have readPump close(done) or send on
done when it exits, and replace select {} with a select that returns when <-done
(and also handles any other necessary cases like write errors or cleanup) so the
handler goroutine can exit and release resources.
```go
if err != nil {
	// Even if snapshot fails, continue with created event
}
snapshot, _ := loadPollSnapshot(client.debateID)
```
Silently discarding loadPollSnapshot error may leave clients with incomplete state.
Per the relevant code snippet from poll_store.go, GetPollState can fail with "Redis client not available" or Redis command errors. Discarding this error means clients may not receive the full poll_snapshot after poll creation—only the bare poll_created event.
Consider logging the error for observability, even if the fallback behavior is acceptable.
Suggested fix

```diff
-	snapshot, _ := loadPollSnapshot(client.debateID)
+	snapshot, err := loadPollSnapshot(client.debateID)
+	if err != nil {
+		log.Printf("loadPollSnapshot failed for debate %s: %v", client.debateID, err)
+	}
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
snapshot, err := loadPollSnapshot(client.debateID)
if err != nil {
	log.Printf("loadPollSnapshot failed for debate %s: %v", client.debateID, err)
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/websocket/debate_spectator.go` at line 433, The call to
loadPollSnapshot(client.debateID) currently ignores its error, which can hide
Redis failures and leave clients with only the poll_created event; update the
code around loadPollSnapshot to capture the returned error, log it (including
the debate ID and error text) using the existing logger in this file, and then
proceed with the existing fallback behavior so clients still get served even if
snapshot loading failed. Ensure you reference loadPollSnapshot and
client.debateID when adding the log message and do not change the current event
emission behavior.
```go
for connRef, existing := range room.Clients {
	detailMessage := map[string]interface{}{
		"type": "userDetails",
		"userDetails": map[string]interface{}{
			"id":          existing.UserID,
			"username":    existing.Username,
			"displayName": existing.Username,
			"email":       existing.Email,
			"avatarUrl":   existing.AvatarURL,
			"elo":         existing.Elo,
		},
	}

	if connRef == conn {
		// Already sent this client's participant data; ensure they have their own detail payload too
		client.SafeWriteJSON(detailMessage)
	} else {
		// Send existing participant info to the new client
		client.SafeWriteJSON(detailMessage)
	}
}

// Prepare detailed payload for the new client to broadcast to others
userDetailsPayload := map[string]interface{}{
	"id":          client.UserID,
	"username":    client.Username,
	"displayName": client.Username,
	"email":       client.Email,
	"avatarUrl":   client.AvatarURL,
	"elo":         client.Elo,
}
```
Data race: iterating room.Clients without mutex.
The mutex is released at line 306, but lines 311-324 iterate over room.Clients without re-acquiring it. Another goroutine could modify room.Clients concurrently, causing a data race.
Suggested fix: take a snapshot under the lock

```diff
 room.Mutex.Lock()
 room.Clients[conn] = client
+existingClients := make([]*Client, 0, len(room.Clients))
+for _, existing := range room.Clients {
+	existingClients = append(existingClients, existing)
+}
 room.Mutex.Unlock()
 participantsMsg := buildParticipantsMessage(room)
 client.SafeWriteJSON(participantsMsg)
-for _, existing := range room.Clients {
+for _, existing := range existingClients {
 	detailMessage := map[string]interface{}{
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/websocket/websocket.go` around lines 311 - 324, You are iterating
over room.Clients after the mutex was released which can cause a data race; take
a snapshot of the slice/map while holding the room mutex (or re-acquire it) and
then iterate over that snapshot when calling client.SafeWriteJSON so concurrent
modifications to room.Clients cannot occur during the loop; update the code
around the existing room.Clients access (and the mutex used when releasing at
line ~306) to copy the clients into a local slice under the lock, release the
lock, and then call client.SafeWriteJSON for each entry in the snapshot.
```go
userObjID, _ := primitive.ObjectIDFromHex(client.UserID)
opponentObjID, _ := primitive.ObjectIDFromHex(opponent.UserID)
if _, _, err := services.UpdateRatings(userObjID, opponentObjID, 0.0, time.Now()); err != nil {
	log.Printf("Error updating ratings after concede: %v", err)
}
```
Ignoring ObjectIDFromHex errors may pass invalid IDs to UpdateRatings.
If either client.UserID or opponent.UserID contains an invalid hex string, the resulting zero-value ObjectID will be passed to UpdateRatings, potentially corrupting rating data or causing downstream errors.
Suggested fix

```diff
-	userObjID, _ := primitive.ObjectIDFromHex(client.UserID)
-	opponentObjID, _ := primitive.ObjectIDFromHex(opponent.UserID)
-	if _, _, err := services.UpdateRatings(userObjID, opponentObjID, 0.0, time.Now()); err != nil {
+	userObjID, err := primitive.ObjectIDFromHex(client.UserID)
+	if err != nil {
+		log.Printf("Invalid user ObjectID %s: %v", client.UserID, err)
+		return
+	}
+	opponentObjID, err := primitive.ObjectIDFromHex(opponent.UserID)
+	if err != nil {
+		log.Printf("Invalid opponent ObjectID %s: %v", opponent.UserID, err)
+		return
+	}
+	if _, _, err := services.UpdateRatings(userObjID, opponentObjID, 0.0, time.Now()); err != nil {
 		log.Printf("Error updating ratings after concede: %v", err)
 	}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
userObjID, err := primitive.ObjectIDFromHex(client.UserID)
if err != nil {
	log.Printf("Invalid user ObjectID %s: %v", client.UserID, err)
	return
}
opponentObjID, err := primitive.ObjectIDFromHex(opponent.UserID)
if err != nil {
	log.Printf("Invalid opponent ObjectID %s: %v", opponent.UserID, err)
	return
}
if _, _, err := services.UpdateRatings(userObjID, opponentObjID, 0.0, time.Now()); err != nil {
	log.Printf("Error updating ratings after concede: %v", err)
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/websocket/websocket.go` around lines 705 - 710, The code calls
primitive.ObjectIDFromHex on client.UserID and opponent.UserID without checking
errors before passing userObjID and opponentObjID into services.UpdateRatings,
risking invalid zero IDs; update the block to capture and check the returned
errors from primitive.ObjectIDFromHex for both client.UserID and
opponent.UserID, log a clear error (including the invalid ID and parsing error)
and skip or abort the UpdateRatings call when either parse fails to avoid
passing zero-value ObjectIDs to services.UpdateRatings; ensure you reference the
variables userObjID and opponentObjID and the function services.UpdateRatings
when implementing the guards.
```tsx
// ✅ FIX: Poll every 3s so member count stays live for all team members.
// Previously fetched once on mount — Member 1's UI went stale when Member 2
// joined via HTTP and had to reload to see the updated count.
React.useEffect(() => {
  fetchAvailableTeams();
  fetchUserTeams();

  const interval = setInterval(() => {
    fetchUserTeams();
    fetchAvailableTeams();
  }, 3000);

  return () => clearInterval(interval);
}, [fetchAvailableTeams, fetchUserTeams]);
```
3-second polling is aggressive and may strain the backend.
This implementation fires 2 API requests every 3 seconds per active user. Given the service functions have no caching or deduplication (per teamService.ts), this can overwhelm the server at scale. Additional concerns:
- No visibility check — polling continues when the tab is hidden.
- No back-off on errors — if the server is unavailable, requests keep firing.
- Potential race conditions — overlapping requests if latency exceeds 3 seconds.
Consider using WebSockets for true real-time sync (aligns with PR title), or at minimum add document.visibilityState checks and exponential back-off on failures. A longer interval (e.g., 10–15 seconds) would also reduce load significantly.
🛠️ Suggested improvement with visibility check and back-off

A fixed `setInterval` delay cannot grow at runtime, so the sketch re-arms a `setTimeout` after each poll so that the back-off actually takes effect:

```tsx
React.useEffect(() => {
  fetchAvailableTeams();
  fetchUserTeams();

  let backoffMs = 3000;
  const maxBackoff = 30000;
  let timer: ReturnType<typeof setTimeout>;

  const poll = async () => {
    if (document.visibilityState !== "hidden") {
      try {
        await Promise.all([fetchUserTeams(), fetchAvailableTeams()]);
        backoffMs = 3000; // reset on success
      } catch {
        backoffMs = Math.min(backoffMs * 2, maxBackoff); // back off on failure
      }
    }
    timer = setTimeout(poll, backoffMs); // re-arm with the current back-off
  };

  timer = setTimeout(poll, backoffMs);
  return () => clearTimeout(timer);
}, [fetchAvailableTeams, fetchUserTeams]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/Pages/TeamBuilder.tsx` around lines 118 - 131, The current
React.useEffect performs aggressive polling (calling fetchAvailableTeams and
fetchUserTeams every 3s) which can overload the backend and cause overlapping
requests; modify the effect to (1) skip polling when the tab is hidden by
checking document.visibilityState (or listen to visibilitychange) so polling
pauses while not visible, (2) implement exponential back-off on repeated errors
from fetchAvailableTeams/fetchUserTeams (increase interval on failure and reset
on success) to avoid tight retry loops, and (3) guard against overlapping calls
by cancelling or awaiting previous fetches before starting the next interval
(e.g., track an in-flight flag or use AbortController) while retaining
clearInterval in the cleanup.
```tsx
} catch {
  // profile fetch failed silently
}
```
Silent failure provides no feedback to the user.
When getTeamMemberProfile fails, the user receives no indication that their click did anything. The dialog doesn't open and no error is shown. Consider showing a toast or opening the dialog with an error state.
🔧 Proposed fix

```diff
-      } catch {
-        // profile fetch failed silently
+      } catch (err) {
+        setError((err as Error).message || "Failed to load member profile");
+        setTimeout(() => setError(""), 5000);
       }
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```tsx
} catch (err) {
  setError((err as Error).message || "Failed to load member profile");
  setTimeout(() => setError(""), 5000);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/Pages/TeamBuilder.tsx` around lines 190 - 192, The catch block
after calling getTeamMemberProfile swallows errors and leaves the UI
unresponsive; update the catch to surface feedback by either invoking the dialog
open flow with an error state (e.g., setTeamMemberDialog({ open: true, error:
true, memberId })) or trigger a user-facing toast/notification (use existing
toast helper or create one) and ensure any loading flag is cleared so the UI is
interactive; locate the try/catch around getTeamMemberProfile in TeamBuilder
(e.g., where getTeamMemberProfile is awaited and the dialog state is set) and
replace the empty catch with code that sets an error state for the dialog or
calls showToast with the error message.
Addressed Issues:
Fixes #361
Screenshots/Recordings:
Before:
Video.Project.2.mp4
After:
Video.Project.41.mp4
Additional Notes:
Implemented real-time synchronization for team member updates so that
additions and removals are instantly reflected in the UI without
requiring a manual page reload, improving the overall collaborative
experience on the platform.
AI Usage Disclosure:
This PR complies with the project's AI usage policy. I have tested the code locally and I am responsible for it.
I have used the following AI models and tools: Claude
Checklist