
Kubo IPNS-over-Pubsub is ignoring user supplied TTL right after starting the daemon #10657

Open
3 tasks done
Rinse12 opened this issue Jan 9, 2025 · 4 comments
Assignees
Labels
kind/bug A bug in existing code (including security flaws) need/analysis Needs further analysis before proceeding need/author-input Needs input from the original author

Comments


Rinse12 commented Jan 9, 2025

Checklist

Installation method

dist.ipfs.tech or ipfs-update

Version

Kubo version: 0.32.1
Repo version: 16
System version: amd64/linux
Golang version: go1.23.3

Config

{
  "API": {
    "HTTPHeaders": {}
  },
  "Addresses": {
    "API": "/ip4/127.0.0.1/tcp/5001",
    "Announce": [],
    "AppendAnnounce": [],
    "Gateway": "/ip4/127.0.0.1/tcp/8080",
    "NoAnnounce": [],
    "Swarm": [
      "/ip4/0.0.0.0/tcp/4001",
      "/ip6/::/tcp/4001",
      "/ip4/0.0.0.0/udp/4001/webrtc-direct",
      "/ip4/0.0.0.0/udp/4001/quic-v1",
      "/ip4/0.0.0.0/udp/4001/quic-v1/webtransport",
      "/ip6/::/udp/4001/webrtc-direct",
      "/ip6/::/udp/4001/quic-v1",
      "/ip6/::/udp/4001/quic-v1/webtransport"
    ]
  },
  "AutoNAT": {},
  "AutoTLS": {},
  "Bootstrap": [
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/ip4/104.131.131.82/udp/4001/quic-v1/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt"
  ],
  "DNS": {
    "Resolvers": {}
  },
  "Datastore": {
    "BloomFilterSize": 0,
    "GCPeriod": "1h",
    "HashOnRead": false,
    "Spec": {
      "mounts": [
        {
          "child": {
            "path": "blocks",
            "shardFunc": "/repo/flatfs/shard/v1/next-to-last/2",
            "sync": true,
            "type": "flatfs"
          },
          "mountpoint": "/blocks",
          "prefix": "flatfs.datastore",
          "type": "measure"
        },
        {
          "child": {
            "compression": "none",
            "path": "datastore",
            "type": "levelds"
          },
          "mountpoint": "/",
          "prefix": "leveldb.datastore",
          "type": "measure"
        }
      ],
      "type": "mount"
    },
    "StorageGCWatermark": 90,
    "StorageMax": "10GB"
  },
  "Discovery": {
    "MDNS": {
      "Enabled": true
    }
  },
  "Experimental": {
    "FilestoreEnabled": false,
    "Libp2pStreamMounting": false,
    "OptimisticProvide": false,
    "OptimisticProvideJobsPoolSize": 0,
    "P2pHttpProxy": false,
    "StrategicProviding": false,
    "UrlstoreEnabled": false
  },
  "Gateway": {
    "DeserializedResponses": null,
    "DisableHTMLErrors": null,
    "ExposeRoutingAPI": null,
    "HTTPHeaders": {},
    "NoDNSLink": false,
    "NoFetch": false,
    "PublicGateways": null,
    "RootRedirect": ""
  },
  "Identity": {
    "PeerID": "12D3KooWBJNtmfaDN1G1enLgEuWVXAHGR98fjYDH2d3wNsKqWr4s"
  },
  "Import": {
    "CidVersion": null,
    "HashFunction": null,
    "UnixFSChunker": null,
    "UnixFSRawLeaves": null
  },
  "Internal": {},
  "Ipns": {
    "RecordLifetime": "",
    "RepublishPeriod": "",
    "ResolveCacheSize": 128
  },
  "Migration": {
    "DownloadSources": [],
    "Keep": ""
  },
  "Mounts": {
    "FuseAllowOther": false,
    "IPFS": "/ipfs",
    "IPNS": "/ipns"
  },
  "Peering": {
    "Peers": null
  },
  "Pinning": {
    "RemoteServices": {}
  },
  "Plugins": {
    "Plugins": null
  },
  "Provider": {
    "Strategy": ""
  },
  "Pubsub": {
    "DisableSigning": false,
    "Router": ""
  },
  "Reprovider": {},
  "Routing": {
    "Methods": null,
    "Routers": null
  },
  "Swarm": {
    "AddrFilters": null,
    "ConnMgr": {},
    "DisableBandwidthMetrics": false,
    "DisableNatPortMap": false,
    "RelayClient": {},
    "RelayService": {},
    "ResourceMgr": {},
    "Transports": {
      "Multiplexers": {},
      "Network": {},
      "Security": {}
    }
  },
  "Version": {}
}

Description

The first IPNS record to be created after starting Kubo is published multiple times on the pubsub topic with different TTLs, but still maintaining the same sequence number.

To reproduce:

  • Start a new kubo daemon with ipfs daemon --enable-namesys-pubsub
    • Has to be freshly started, don't use an already running daemon
  • Use the code below to create a new IPNS name and subscribe to its pubsub topic updates. The subscriber will eventually receive an IPNS record with TTL=1h, even though we supplied TTL=60s:
import { toString as uint8ArrayToString } from "uint8arrays/to-string";
import { fromString as uint8ArrayFromString } from "uint8arrays/from-string";
import PeerId from "peer-id";

import { create } from "kubo-rpc-client";

const ipnsNameToIpnsOverPubsubTopic = (ipnsName) => {
    // for ipns over pubsub, the topic is '/record/' + Base64Url(Uint8Array('/ipns/') + Uint8Array('12D...'))
    // https://github.com/ipfs/helia/blob/1561e4a106074b94e421a77b0b8776b065e48bc5/packages/ipns/src/routing/pubsub.ts#L169
    const ipnsNamespaceBytes = new TextEncoder().encode("/ipns/");
    const ipnsNameBytes = PeerId.parse(ipnsName).toBytes(); // accepts base58 (12D...) and base36 (k51...)
    const ipnsNameBytesWithNamespace = new Uint8Array(ipnsNamespaceBytes.length + ipnsNameBytes.length);
    ipnsNameBytesWithNamespace.set(ipnsNamespaceBytes, 0);
    ipnsNameBytesWithNamespace.set(ipnsNameBytes, ipnsNamespaceBytes.length);
    const pubsubTopic = "/record/" + uint8ArrayToString(ipnsNameBytesWithNamespace, "base64url");
    return pubsubTopic;
};

const ipfsApiUrl = "http://localhost:5001/api/v0";

(async () => {
    const client = create({ url: ipfsApiUrl });

    const ipnsKey = await client.key.gen("Random Name" + Math.random());

    const ttl = "1m0s";

    const ipfsFile = await client.add(JSON.stringify(Math.random()));

    const newIpns = await client.name.publish(ipfsFile.cid, { key: ipnsKey.id, ttl });

    console.log("Published the first IPNS");

    const ipnsOverPubsubTopic = ipnsNameToIpnsOverPubsubTopic(newIpns.name);
    const promise = new Promise(() => {}); // never resolves; keeps the process alive
    let lastIpnsEntry, lastPubsubMessage;
    client.pubsub.subscribe(ipnsOverPubsubTopic, async (message) => {
        // Create FormData
        const formData = new FormData();
        formData.append("file", new Blob([message.data], { type: "application/octet-stream" }), "myfile");

        // Make request
        const inspectName = await fetch(ipfsApiUrl + "/name/inspect?dump=true", {
            method: "POST",
            body: formData
        }).then((res) => res.json());

        const ttlAsSeconds = Math.round(inspectName["Entry"]["TTL"] * 1e-9); // TTL is reported in nanoseconds
        if (ttlAsSeconds !== 60) {
            console.log(`Received an IPNS pubsub message with TTL=${ttlAsSeconds}s. This should not happen`);
            debugger;
        } else {
            console.log("Received another IPNS pubsub message with TTL=60s. This is correct");
        }
        lastIpnsEntry = inspectName;
        lastPubsubMessage = message;
    });

    setInterval(async () => {
        const ipfsFile = await client.add(JSON.stringify(Math.random()));

        await client.name.publish(ipfsFile.cid, { key: ipnsKey.id, ttl });
        console.log("Published a new IPNS");
    }, 100000);

    await promise;
})();
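As a side note, the topic derivation in the script leans on the deprecated peer-id package. A minimal dependency-free sketch of the same derivation is below, assuming a base58btc (12D...) name and Node's built-in Buffer; base36 (k51...) names would need a different decoder, and the peer ID used here is just the one from the config above:

```javascript
// Sketch: derive the IPNS-over-pubsub topic without the deprecated
// peer-id package. The topic is '/record/' + base64url('/ipns/' + <peer ID bytes>).
const BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

function base58Decode(str) {
    // Big-integer base conversion, then restore leading zero bytes
    // (each leading '1' in base58btc encodes a 0x00 byte).
    let num = 0n;
    for (const ch of str) {
        const idx = BASE58_ALPHABET.indexOf(ch);
        if (idx < 0) throw new Error("invalid base58 character: " + ch);
        num = num * 58n + BigInt(idx);
    }
    const bytes = [];
    while (num > 0n) {
        bytes.unshift(Number(num & 0xffn));
        num >>= 8n;
    }
    for (const ch of str) {
        if (ch === "1") bytes.unshift(0);
        else break;
    }
    return Buffer.from(bytes);
}

function ipnsNameToPubsubTopic(ipnsName) {
    const nameBytes = base58Decode(ipnsName); // identity multihash of the public key
    const routingKey = Buffer.concat([Buffer.from("/ipns/"), nameBytes]);
    return "/record/" + routingKey.toString("base64url");
}

const topic = ipnsNameToPubsubTopic("12D3KooWBJNtmfaDN1G1enLgEuWVXAHGR98fjYDH2d3wNsKqWr4s");
console.log(topic);
```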

Theory:
Kubo's IPNS republisher is ignoring the user-supplied TTL and falling back to the 1h default. If so, can republishing of IPNS records be disabled within Kubo? Apparently Ipns.RepublishPeriod cannot be set to 0.
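One way to probe the republisher theory without disabling it (a hedged sketch, not a confirmed workaround: whether Kubo accepts a long period like this without clamping it is an assumption) is to stretch Ipns.RepublishPeriod and watch whether the bad-TTL records stop appearing:

```shell
# Sketch: stretch the IPNS republish interval so the built-in republisher
# stays quiet during a test run. Whether Kubo accepts "24h" without
# clamping or rejecting it is an assumption; check the daemon output
# after restarting.
ipfs config Ipns.RepublishPeriod "24h"

# Restart with pubsub-based IPNS, then re-run the repro script.
ipfs daemon --enable-namesys-pubsub
```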

@Rinse12 Rinse12 added kind/bug A bug in existing code (including security flaws) need/triage Needs initial labeling and prioritization labels Jan 9, 2025
@lidel lidel self-assigned this Jan 14, 2025
@lidel lidel added the need/analysis Needs further analysis before proceeding label Jan 14, 2025
Member

lidel commented Jan 15, 2025

@Rinse12 thank you for reporting this.

How long does it take on average for the issue to surface?
I have not been able to reproduce it so far.

Shot in the dark: is it possible you are publishing to the same IPNS Name from two places?
If you don't specify a custom key, ipfs.name.publish uses the same key as your PeerID – is there a chance you have the same PeerID on two Kubo boxes?

@lidel lidel added the need/author-input Needs input from the original author label Jan 15, 2025
Author

Rinse12 commented Jan 15, 2025

How long does it take on average for the issue to surface?

Usually a few minutes

is it possible you are publishing to the same IPNS Name from two places?

I don't believe that's the case; the IPNS keypair is generated within the code and I don't have access to it anywhere else.

Author

Rinse12 commented Jan 17, 2025

Can we disable the auto-republisher somewhere in Kubo's config? If I can do that, we can tell for sure whether it's the republisher or something else.

Member

lidel commented Jan 21, 2025

@Rinse12 Were you able to reproduce it on a fresh node with a clean repo?
I ran it for ~30m and the TTL was not broken in that timeframe.

To fix the bug, we need a working repro to confirm it is real.
Any idea how to trigger it?

@lidel lidel added need/author-input Needs input from the original author and removed need/author-input Needs input from the original author need/triage Needs initial labeling and prioritization labels Jan 21, 2025