Migration output should provide progress to completion #160
Comments
I'm a bit confused by your logs. If you see libp2p errors (a known, unrelated issue by the way: ipfs/kubo#9432), that means Kubo is running; and if Kubo is running, the migration has finished, or at least isn't running. We don't support running the migration concurrently with the node.
I see 8 migration processes running:
The fact that I see modifications to files shows it was started. It was initiated by ipfs-update; the ipfs repo stat command was issued after the ipfs-update. I also saw an ipfs daemon process and thought it was part of the migration. I decided to take the risk and kill it, because the systemd service said the last start had failed and it wasn't running. systemctl stop ipfs had no effect on the messages, so I stopped the process with kill and it exited cleanly. After restarting with systemctl, no errors were reported and there were no messages in the journalctl logs. All appears to be good now, and ipfs repo stat is fine. The key to resolving the messages was the change to the Swarm section suggested in another issue:
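The actual config change isn't quoted in this thread. As a hedged sketch only: the workaround commonly cited for these rcmgr log messages in that era of Kubo was to disable the libp2p resource manager under the Swarm section (the `Swarm.ResourceMgr.Enabled` key comes from the Kubo config docs; verify against your version):

```sh
# Hypothetical sketch (the exact change from the other issue isn't quoted
# here): disable the libp2p resource manager in the Swarm config section,
# then restart the daemon so the change takes effect.
ipfs config --json Swarm.ResourceMgr.Enabled false
sudo systemctl restart ipfs
```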
Problem solved as described.
I'm in the process of migrating a large ipfs repo (not sure of the old version of this repo; I believe it was v11, and the history is now gone since the upgrade was started) to the latest Kubo, fs-repo v12. There is a file in the repo root named 11-to-12-cids.txt which is 497 KB long.
Here are the stats currently reported:
```
~/.ipfs$ ipfs repo stat
NumObjects: 1500619
RepoSize:   391498113680
StorageMax: 6600000000000
RepoPath:   /home/ipfs/.ipfs
Version:    fs-repo@12
```
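(For scale: that RepoSize works out to roughly 391 GB of the 6.6 TB StorageMax, across about 1.5 million objects.)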
I started this upgrade early Monday and it's late Wednesday now.
I see many resource errors, and the count starts over (308 in the sample below; 509 is the largest I've seen). I presume the reset is due to a step in the migration sequence completing, but I'm not sure:
```
2022-11-30T18:34:40.016-0600 ERROR resourcemanager libp2p/rcmgr_logging.go:53 Resource limits were exceeded 308 times with error "system: cannot reserve inbound connection: resource limit exceeded".
```
I resolved these errors on another repo, after its migration completed, by changing the Swarm section of the config and restarting the daemon.
I am reluctant to stop the migration on this repo before it completes, but I would like to stop it and then resume it, unless doing so will lengthen the time to finish.
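As an aside on gauging progress (a hedged sketch only: it assumes the migration binary is named fs-repo-11-to-12 and that it appends to the 11-to-12-cids.txt file as it works, neither of which is confirmed in this thread), one rough check is whether the migration process is still alive and its CID log is still growing:

```sh
# Rough progress check, assuming (not confirmed here) that the migration
# appends CIDs to 11-to-12-cids.txt as it converts them.
pgrep -af fs-repo-11-to-12                    # is the migration still running?
watch -n 60 wc -c ~/.ipfs/11-to-12-cids.txt   # is the CID log still growing?
```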