
Refactor: src/screens/UserPortal/Posts from Jest to Vitest #2683

Closed


Niyatijain-9

@Niyatijain-9 Niyatijain-9 commented Dec 19, 2024

What kind of change does this PR introduce?
This PR refactors the test suite for src/screens/UserPortal/Posts, migrating the test cases from Jest to Vitest.

Issue Number:

Fixes #2578

Did you add tests for your changes?
Yes

Snapshots/Videos:
Screenshot (4)
Screenshot (5)

If relevant, did you update the documentation?

Summary
All tests were run with Vitest and pass.

Does this PR introduce a breaking change?
No, it does not affect other code or workflows.

Other information

Have you read the contributing guide?
Yes

Summary by CodeRabbit

  • Tests
    • Enhanced test coverage for the UserPortal's Posts component with new scenarios.
    • Verified functionality for pinned posts, liking posts, displaying comments, and deleting posts.
    • Added tests for creating new posts and displaying advertisements.
    • Updated pagination tests to ensure correct page navigation.

Contributor

coderabbitai bot commented Dec 19, 2024

Warning

Rate limit exceeded

@Niyatijain-9 has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 9 minutes and 14 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 5a78857 and 435861e.

📒 Files selected for processing (1)
  • src/screens/UserPortal/Posts/Posts.test.tsx (3 hunks)

Walkthrough

The pull request focuses on refactoring the Posts.test.tsx file in the UserPortal component from Jest to Vitest. The changes involve updating the testing library, replacing Jest-specific mocking functions with Vitest equivalents, and enhancing test coverage. The modifications include adding new test cases for various scenarios such as pinned posts, post liking, commenting, deletion, advertisement display, and pagination functionality.

Changes

File: src/screens/UserPortal/Posts/Posts.test.tsx
  • Replaced jest.mock with vi.mock for mocking libraries
  • Updated window.matchMedia mock implementation
  • Added/modified test cases for pinned posts, liking, commenting, deletion, and pagination
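The jest.mock-to-vi.mock migration described above typically looks like the following. This is a sketch based on the summary; the exact mocked modules and factory contents in the PR may differ:

```diff
-jest.mock('react-toastify', () => ({
+import { vi } from 'vitest';
+
+vi.mock('react-toastify', () => ({
   toast: {
-    error: jest.fn(),
-    success: jest.fn(),
+    error: vi.fn(),
+    success: vi.fn(),
   },
 }));
```

Note that, unlike Jest, Vitest does not inject globals by default, so vi (and often describe/test/expect) must be imported from 'vitest' unless globals: true is set in the Vitest config.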

Assessment against linked issues

Objectives from #2578:
  • Replace Jest-specific functions with Vitest equivalents
  • Ensure tests pass using npm run test:vitest — requires actual test run verification
  • Maintain 100% test coverage — requires coverage report verification

Possibly related issues

Suggested labels

refactor

Suggested reviewers

  • pranshugupta54
  • varshith257

Poem

🐰 In the realm of tests, a rabbit hops with glee,
From Jest to Vitest, a migration spree!
Posts dancing, mocks singing, coverage bright,
Code quality leaping to a new height! 🚀
Refactoring magic, with whiskers so keen! 🧙‍♂️


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?


Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


Our Pull Request Approval Process

Thanks for contributing!

Testing Your Code

Remember, your PRs won't be reviewed until these criteria are met:

  1. We don't merge PRs with poor code quality.
    1. Follow coding best practices such that CodeRabbit.ai approves your PR.
  2. We don't merge PRs with failed tests.
    1. When tests fail, click on the Details link to learn more.
    2. Write sufficient tests for your changes (CodeCov Patch Test). Your testing level must be better than the target threshold of the repository
    3. Tests may fail if you edit sensitive files. Ask to add the ignore-sensitive-files-pr label if the edits are necessary.
  3. We cannot merge PRs with conflicting files. These must be fixed.

Our policies make our code better.

Reviewers

Do not assign reviewers. Our Queue Monitors will review your PR and assign them.
When your PR has been assigned reviewers, contact them to get your code reviewed and approved via:

  1. comments in this PR or
  2. our slack channel

Reviewing Your Code

Your reviewer(s) will have the following roles:

  1. arbitrator of future discussions with other contributors about the validity of your changes
  2. point of contact for evaluating the validity of your work
  3. the person who verifies whether matching issues opened by others should be closed
  4. the person who gives general guidance on fixing your tests

CONTRIBUTING.md

Read our CONTRIBUTING.md file. Most importantly:

  1. PRs with issues not assigned to you will be closed by the reviewer
  2. Fix the first comment in the PR so that each issue listed automatically closes

Other

  1. 🎯 Please be considerate of our volunteers' time. Do not contact the person who assigned the reviewers unless they ask for your input, and do not @-mention them otherwise.
  2. Read the CONTRIBUTING.md file.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (3)
src/screens/UserPortal/Posts/Posts.test.tsx (3)

19-26: Clean up empty lines in imports section

The migration to Vitest looks good, but there's an unnecessary empty line between imports and the mock setup.

import { DELETE_POST_MUTATION } from 'GraphQl/Mutations/mutations';
import { vi } from 'vitest';
-

vi.mock('react-toastify', () => ({
🧰 Tools
🪛 eslint

[error] 21-22: Delete

(prettier/prettier)


237-246: Consider using a more concise matchMedia mock implementation

While the current implementation works, it could be simplified by using object method shorthand notation.

  value: vi.fn().mockImplementation((query) => ({
    matches: false,
    media: query,
    onchange: null,
-    addListener: vi.fn(),
-    removeListener: vi.fn(),
-    addEventListener: vi.fn(),
-    removeEventListener: vi.fn(),
-    dispatchEvent: vi.fn(),
+    addListener: () => {},
+    removeListener: () => {},
+    addEventListener: () => {},
+    removeEventListener: () => {},
+    dispatchEvent: () => {},
  })),

257-266: Enhance pinned post test robustness

The current test only verifies the presence of a pinned post. Consider adding assertions for:

  1. The pinned status indicator
  2. The correct ordering when multiple posts exist
  test('Check if pinned post is displayed at the top', async () => {
    renderHomeScreen();
    await wait();
    const pinnedPost = await screen.findByText('post one');
    const postsContainer = screen.getByTestId('postsContainer');
+   const pinnedIndicator = within(pinnedPost).getByTestId('pinned-indicator');
    expect(
      within(postsContainer).getAllByText('post one')[0],
    ).toBeInTheDocument();
+   expect(pinnedIndicator).toBeInTheDocument();
    expect(postsContainer.firstChild).toContainElement(pinnedPost);
+   const allPosts = within(postsContainer).getAllByTestId(/^post-/);
+   expect(allPosts[0]).toBe(pinnedPost);
  });
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9723078 and 1c33e70.

📒 Files selected for processing (1)
  • src/screens/UserPortal/Posts/Posts.test.tsx (3 hunks)
🧰 Additional context used
🪛 eslint
src/screens/UserPortal/Posts/Posts.test.tsx

[error] 21-22: Delete

(prettier/prettier)

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🔭 Outside diff range comments (1)
src/screens/UserPortal/Posts/Posts.test.tsx (1)

Line range hint 29-186: Add error and loading state mocks

The current mocks only cover successful scenarios. Consider adding mocks for:

  1. API errors
  2. Loading states
  3. Empty data states

Add these mock configurations:

const ERROR_MOCKS = [
  {
    request: {
      query: ORGANIZATION_POST_LIST,
      variables: { id: 'orgId', first: 10 },
    },
    error: new Error('Failed to fetch posts'),
  },
];

const EMPTY_MOCKS = [
  {
    request: {
      query: ORGANIZATION_POST_LIST,
      variables: { id: 'orgId', first: 10 },
    },
    result: {
      data: {
        organizations: [{
          posts: {
            edges: [],
            pageInfo: {
              hasNextPage: false,
              hasPreviousPage: false,
            },
            totalCount: 0,
          },
        }],
      },
    },
  },
];
♻️ Duplicate comments (3)
src/screens/UserPortal/Posts/Posts.test.tsx (3)

267-274: ⚠️ Potential issue

Add proper state update verification for like functionality

The current test doesn't properly wait for state updates after clicking the like button.

 test('Check if a post can be liked', async () => {
   renderHomeScreen();
   await wait();
   const likeBtn = await screen.findByTestId(
     'likeBtn-6411e54835d7ba2344a78e29',
   );
+  expect(likeBtn).toHaveTextContent('2');
   userEvent.click(likeBtn);
-  expect(likeBtn).toHaveTextContent('3');
+  await waitFor(() => {
+    expect(likeBtn).toHaveTextContent('3');
+    expect(likeBtn).not.toBeDisabled();
+  });
 });

333-341: 🛠️ Refactor suggestion

Enhance pagination test coverage

The current pagination test is too basic. It should verify the content and state of both pages.

 test('Check if pagination works', async () => {
   renderHomeScreen();
   await wait();
+  // Verify initial state
+  const firstPagePosts = screen.getAllByTestId(/^post-/);
+  expect(firstPagePosts).toHaveLength(2);
+  expect(screen.getByText('post one')).toBeInTheDocument();
+
   const nextPageBtn = await screen.findByTestId('nextPageBtn');
   expect(nextPageBtn).toBeInTheDocument();
+
   userEvent.click(nextPageBtn);
+
   await waitFor(() => {
     expect(screen.queryByText('post one')).not.toBeInTheDocument();
+    const secondPagePosts = screen.getAllByTestId(/^post-/);
+    expect(secondPagePosts).not.toEqual(firstPagePosts);
+    expect(nextPageBtn).toBeDisabled();
   });
+
+  // Test previous page navigation
+  const prevPageBtn = screen.getByTestId('prevPageBtn');
+  userEvent.click(prevPageBtn);
+
+  await waitFor(() => {
+    expect(screen.getByText('post one')).toBeInTheDocument();
+    expect(prevPageBtn).toBeDisabled();
+  });
 });

248-254: ⚠️ Potential issue

Add missing test coverage for loading and error states

The test suite still lacks coverage for important edge cases that were highlighted in previous reviews.

Add these essential test cases:

  1. Loading states while data is being fetched
  2. Error handling when API calls fail
  3. Empty states when no posts are available

Example implementation:

test('Should show loading state', async () => {
  renderHomeScreen();
  expect(screen.getByTestId('loading-spinner')).toBeInTheDocument();
});

test('Should handle API errors gracefully', async () => {
  const errorLink = new StaticMockLink(ERROR_MOCKS, true);
  render(
    <MockedProvider link={errorLink}>
      <Home />
    </MockedProvider>
  );
  
  await waitFor(() => {
    expect(screen.getByText('Error loading posts')).toBeInTheDocument();
  });
});

test('Should display empty state message', async () => {
  const emptyLink = new StaticMockLink(EMPTY_MOCKS, true);
  render(
    <MockedProvider link={emptyLink}>
      <Home />
    </MockedProvider>
  );
  
  await waitFor(() => {
    expect(screen.getByText('No posts available')).toBeInTheDocument();
  });
});
🧹 Nitpick comments (2)
src/screens/UserPortal/Posts/Posts.test.tsx (2)

21-27: Enhance mock implementation with error scenarios

The current mock implementation for react-toastify is basic. Consider enhancing it to verify toast calls and their arguments.

 vi.mock('react-toastify', () => ({
   toast: {
-    error: vi.fn(),
-    info: vi.fn(),
-    success: vi.fn(),
+    error: vi.fn().mockImplementation((msg) => {
+      console.log('Toast error:', msg);
+      return msg;
+    }),
+    info: vi.fn().mockImplementation((msg) => {
+      console.log('Toast info:', msg);
+      return msg;
+    }),
+    success: vi.fn().mockImplementation((msg) => {
+      console.log('Toast success:', msg);
+      return msg;
+    }),
   },
 }));

236-244: Consider moving window.matchMedia mock to test utils

The window.matchMedia mock could be reused across different test files.

Consider moving this to a shared test utils file, e.g., src/utils/testUtils.ts:

// src/utils/testUtils.ts
export const setupMatchMediaMock = (): void => {
  Object.defineProperty(window, 'matchMedia', {
    writable: true,
    value: vi.fn().mockImplementation((query) => ({
      matches: false,
      media: query,
      onchange: null,
      addListener: vi.fn(),
      removeListener: vi.fn(),
      addEventListener: vi.fn(),
      removeEventListener: vi.fn(),
      dispatchEvent: vi.fn(),
    })),
  });
};
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1c33e70 and 5a78857.

📒 Files selected for processing (1)
  • src/screens/UserPortal/Posts/Posts.test.tsx (3 hunks)

@palisadoes
Contributor

We have a policy of unassigning contributors who close PRs without getting validation from our reviewer team. This is because:

  1. We start looking for people to review PRs when you submit them.
  2. We often contact them and link to the PR. If the PR is closed the whole effort is wasted.
  3. The historical thread of reviewer comments is broken when the work is spread across multiple PRs. The quality of our code is affected negatively.

Please be considerate of our volunteers' limited time and our desire to improve our code base.

This policy is stated as a pinned post in all our Talawa repositories. Our YouTube videos explain why this practice is not acceptable to our Community.

In most cases you don’t have to close the PR to trigger the GitHub workflow to run again. Making a new commit and pushing it to your GitHub account will normally be sufficient.

Unfortunately, if this continues we will have to close the offending PR and unassign you from the issue.

@Niyatijain-9
Author

Hello, I am extremely sorry for this. I had raised PR #2681 earlier, but as I mentioned in it, I had only made changes in one file, yet the changed-files list unexpectedly showed 49 files changed. I tried deleting the extra files, but that didn't help, so I had to make another PR, #2683. I am extremely sorry for the inconvenience caused and will take care of it in the future. Kindly assign me, as I have solved the issue to some extent and would really like to work on it.

@palisadoes
Contributor

  1. You need to create a local branch based on the latest upstream PalisadoesFoundation:develop-postgres branch
  2. Make sure the local branch name is not named develop-postgres
  3. Make your changes and open a PR

That should solve the issue with the excessive number of files
