
migration from 8.11.4 to 10.1.3 memory shortage on github action #907

Open
erwanriou opened this issue Jan 10, 2025 · 6 comments

Comments

@erwanriou

erwanriou commented Jan 10, 2025

Versions

  • NodeJS: 22
  • mongodb-memory-server-*: 10.1.3
  • mongodb (the binary version): 6.0.5
  • mongoose: 8.8.3
  • system: GitHub Actions (I believe it is Ubuntu 24 now)

package: mongodb-memory-server

What is the Problem?

I updated mongodb-memory-server to the latest version in order to pass a test that uses a feature added in MongoDB 5.2, and the test passed locally (though much, much slower). Since deploying it on CI, however, the GitHub Action times out due to memory, even though I don't have that many tests.

Code Example

I have a total of 78 test suites (in some other backends I have more than 200 without any issues).
I believe a screenshot is better than more explanation:
[screenshot attached]

So I believe there might be some sort of memory leak underneath that generates the issue. In the meantime I will do:

```js
await MongoMemoryServer.create({ binary: { version: "6.0.5" } })
```
@erwanriou erwanriou added the bug label Jan 10, 2025
@hasezoey
Member

> In the meantime i will do

Alternatively, you could also try mongodb-memory-server version 9.x; see Default MongoDB Versions for MMS.


I don't know of anything in MMS that could leak that much, and I am also not aware of any change in mongodb that would lead to a massive increase. Do you clean your database between test-suites?
An alternative could be to use storageEngine: 'wiredTiger' with a different location than the default /tmp.

Is the log / repository public? If so, it would be great to investigate there.
Regardless of whether it is public or not, could you add a command in your test environment that runs du -xh /tmp and also ps -aux --sort=pmem once the ENOSPC error happens and post the result? (It needs to run inside jest so it can capture the state when the error happens, not after the processes get killed.)

@erwanriou
Author

@hasezoey I could indeed try version 9 too (it might have a higher default binary, that's true).

Yes, I do have hooks between tests to clean and clear:

```js
// DROP TEST DATABASE
beforeEach(async () => await dbhandler.clearDatabase())
```

The create and clear helpers are quite standard:

```js
const mongoose = require("mongoose")
const { MongoMemoryServer } = require("mongodb-memory-server")

let mongo

module.exports.connect = async () => {
  const systemBinary = process.env.MONGOMS_SYSTEM_BINARY || undefined
  const version = process.env.MONGOMS_VERSION || "6.0.19"

  mongo = await MongoMemoryServer.create({ binary: { version, systemBinary } })
  const mongoUri = mongo.getUri() // getUri() is synchronous in recent MMS versions
  mongoose.set("strictQuery", true) // mongoose.set() is synchronous
  await mongoose.connect(mongoUri)
}

module.exports.clearDatabase = async () => {
  const collections = await mongoose.connection.db.collections()
  for (const collection of collections) {
    await collection.deleteMany({})
  }
}
```
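One thing the helper module above does not show is a teardown step; without one, each mongod keeps running (and its dbPath under /tmp keeps growing) for the life of the worker. A hypothetical counterpart, written as a plain function over injected handles so the shutdown order is explicit (the name `closeDatabase` and the wiring are assumptions, not from this thread):

```javascript
// Hypothetical teardown helper, mirroring the connect/clearDatabase
// helpers above. Shutdown order matters: drop the data, close the
// driver connection, then stop the in-memory server.
async function closeDatabase(connection, server) {
  await connection.dropDatabase() // discard all test data
  await connection.close()        // release driver sockets
  await server.stop()             // MMS cleans up its dbPath on stop() by default
}

module.exports = { closeDatabase }
```

In a Jest suite this would typically be called from an `afterAll` hook, passing `mongoose.connection` and the `mongo` instance held by the module.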

I don't make use of views in this backend, so it's a simple one, but I do have a few aggregates, lookups and such. Sadly it's not a public repository :(
I will do what you requested on the CI/CD and send the logs here. (The tests work locally, they are just very slow to finish on version 10.)

@hasezoey
Member

> just are very slow to finish on the version 10

To confirm, are you using jest with global-setup? Or do you start a mongodb-memory-server instance for each test-suite?

@erwanriou
Author

erwanriou commented Jan 11, 2025

Yes, I do have a global setup for jest that is used by all the tests afterwards.
The config is simple:

"jest": {
  "testEnvironment": "node",
  "setupFilesAfterEnv": [
    "./src/test/setup_test.js"
  ]
}
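Note that `setupFilesAfterEnv` runs once per test file, so if the connect helper is called there, every one of the 78 suites starts its own MongoMemoryServer; with `--maxWorkers=10`, up to ten mongod processes can be alive at once. The global-setup pattern hasezoey asked about starts a single instance once per Jest run instead, roughly (file paths here are hypothetical):

```json
"jest": {
  "testEnvironment": "node",
  "globalSetup": "./src/test/global_setup.js",
  "globalTeardown": "./src/test/global_teardown.js",
  "setupFilesAfterEnv": [
    "./src/test/setup_test.js"
  ]
}
```

where `global_setup.js` creates the server and exports its URI (e.g. via an environment variable), and the per-file setup only connects mongoose to it.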

The main differences between local and CI/CD are the commands I launch, which are respectively:

```json
{
  "test": "jest --watchAll --runInBand --verbose",
  "test:ci": "jest --forceExit --detectOpenHandles --maxWorkers=10"
}
```

To give you an idea of what a test would look like:

const request = require("supertest")
const app = require("../../../../app")

const eventBuilder = async () => {
  const listener = new PreprofileCreatedList(NatsWrapper.client())
  const data = global.preprofileGenerator({})
  const msg = { ack: jest.fn() }
  return { listener, data, msg }
}

it("returns a 200 when fetching list of preprofiles as admin", async () => {
  // GENERATE AND APPROVE 3 ARCHITECT PREPROFILES
  for (let i = 0; i < 3; i++) {
    const { listener, data, msg } = await eventBuilder()
    await listener.onMessage(data, msg)
    const preprofile = await PreProfile.findById(data._id)
    expect(preprofile._id.toString()).toEqual(data._id)
  }

  // FETCH ARCHITECT WITH ADMIN API
  const adminCookie = await global.adminRegister()
  const page = 0
  const limit = 100

  const { body } = await request(app)
    .get(`/api/architect/admin/example-endpoint/list?page=${page}&limit=${limit}`)
    .set({ Host: "www.example.com" })
    .set("Cookie", adminCookie)
    .expect(200)

  expect(body[0].total).toEqual([{ count: 3 }])
  expect(body[0].preprofiles).toHaveLength(3)
})

@Tim0theus

Tim0theus commented Jan 12, 2025

EDIT: It seems that Jest increased heap usage with each test, which led to this ENOMEM. So my problem below is not directly with MMS, but with Jest instead.

Hey, I'm not sure if this is related, but since updating Node in my Dockerfile from 20 to 22.13 and then also updating mongodb-memory-server to 10.1.3, I see a new ENOMEM error when testing, which didn't happen before.
I already tried increasing the memory, but that didn't help so far. As the heap is not fully using the available memory in my case, it most likely doesn't have to do with not enough memory anyway.
I also tried WiredTiger instead, and it didn't help.

I read that it might have to do with virtual memory and that increasing vm.max_map_count might help. However, since I'm using an AWS build environment and Fargate, I think that's not possible there. (Still researching.)

However, I'll try changing the binary and see if that helps; I didn't try that so far.

Here the error:

```
Starting the MongoMemoryServer Instance failed, enable debug log for more information. Error:
Error: spawn ENOMEM
    at ChildProcess.spawn (node:internal/child_process:420:11)
    at spawn (node:child_process:753:9)
    at MongoInstance._launchMongod (/backend/node_modules/mongodb-memory-server-core/src/util/MongoInstance.ts:506:31)
    at MongoInstance.start (/backend/node_modules/mongodb-memory-server-core/src/util/MongoInstance.ts:394:31)
    at async Function.create (/backend/node_modules/mongodb-memory-server-core/src/util/MongoInstance.ts:294:5)
    at async MongoMemoryServer._startUpInstance (/backend/node_modules/mongodb-memory-server-core/src/MongoMemoryServer.ts:530:22)
    at async MongoMemoryServer.start (/backend/node_modules/mongodb-memory-server-core/src/MongoMemoryServer.ts:350:5)
    at async Function.create (/backend/node_modules/mongodb-memory-server-core/src/MongoMemoryServer.ts:317:5) {
  errno: -12,
  code: 'ENOMEM',
  syscall: 'spawn'
}

    at node_modules/mongodb-memory-server-core/src/MongoMemoryServer.ts:359:17
    at async MongoMemoryServer.start (node_modules/mongodb-memory-server-core/src/MongoMemoryServer.ts:350:5)
    at async Function.create (node_modules/mongodb-memory-server-core/src/MongoMemoryServer.ts:317:5)
```

@hasezoey
Member

hasezoey commented Jan 12, 2025

> Starting the MongoMemoryServer Instance failed, enable debug log for more information. Error:
> Error: spawn ENOMEM

Well, this is weird: it means the mongodb binary couldn't even start because there was no memory. To my knowledge this issue (#907) is about running out of memory while a binary is already running (though I don't know yet whether the issue is mongodb, MMS, or something else here).

I don't think I can help in your case; you will need to look into why it is already out-of-memory at that point. (If it is actually MMS, please open a new issue.)
