Handle multiple copies of Type due to bundling #136

Open · wants to merge 6 commits into master · Changes from 3 commits
2 changes: 1 addition & 1 deletion README.md
@@ -409,7 +409,7 @@ import { decode, Tokenizer, Type } from 'cborg'
class CustomTokeniser extends Tokenizer {
next () {
const nextToken = super.next()
if (nextToken.type === Type.bytes) {
if (nextToken.type.equals(Type.bytes)) {
throw new Error('Unsupported type: bytes')
}
return nextToken
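The point of this change is to stop relying on object identity for Type checks. The Type.equals implementation itself is not visible in this view, so here is a minimal sketch of the kind of value-based comparison the new call assumes; the real change may differ:

// lib/token.js (sketch of an equals method on the existing Type class)
export class Type {
  /**
   * @param {number} major
   * @param {string} name
   * @param {boolean} terminal
   */
  constructor (major, name, terminal) {
    this.major = major
    this.name = name
    this.terminal = terminal
  }

  /**
   * Compare by value rather than identity, so a Type instance created by one
   * bundled copy of this module still matches the singletons of another copy.
   * Deliberately avoids instanceof, which would fail across module copies.
   * @param {Type} typ
   * @returns {boolean}
   */
  equals (typ) {
    return typ != null && this.major === typ.major && this.name === typ.name
  }
}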
11 changes: 11 additions & 0 deletions cborg.js
@@ -2,6 +2,13 @@ import { encode } from './lib/encode.js'
import { decode, decodeFirst, Tokeniser, tokensToObject } from './lib/decode.js'
import { Token, Type } from './lib/token.js'

// is this needed for the json module and other independent encoders?
export { encodeCustom } from './lib/encode.js'
export { encodeErrPrefix, decodeErrPrefix } from './lib/common.js'
export { asU8A, fromString, decodeCodePointsArray } from './lib/byte-utils.js'
export { quickEncodeToken } from './lib/jump.js'
export { makeCborEncoders, objectToTokens } from './lib/encode.js'
Owner:

this can't happen here, for two reasons:

  1. Users who don't need JSON support (a large portion of them) shouldn't be forced to pull these in; they should be entirely absent from the dependency tree and not add weight to a bundle if you don't use it. I know this seems to be something you're trying to solve for because you're using JSON, but this library is mainly about CBOR. So we need to find an alternative to doing this. If it means re-exporting a top-level export just for JSON that includes everything needed, then that's fine.
  2. Anything exported at the top level becomes part of the API contract, and there are far too many things here that I don't want to be on the hook for supporting. Take asU8A: this is not something that I want to expose for general use; any time I want to make an internal change that involves these functions I now need to be thinking about whether it's a breaking change or not. Internal stuff must remain internal, and these all look internal to me.
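For illustration only, one hypothetical reading of that suggestion (the file name, export path, and exact contents are assumptions, not something this PR does): a dedicated entry point that re-exports what the JSON codec needs while leaving the main cborg API untouched.

// lib/json-internals.js (hypothetical) -- grouped re-exports for the JSON codec,
// exposed as its own entry point in the package.json "exports" map so none of
// these names land on the public top-level cborg API
export { encodeCustom, makeCborEncoders, objectToTokens } from './encode.js'
export { encodeErrPrefix, decodeErrPrefix } from './common.js'
export { asU8A, fromString, decodeCodePointsArray } from './byte-utils.js'
export { quickEncodeToken } from './jump.js'
export { Token, Type } from './token.js'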

Author:

Thanks,

I will run through it and try to move things around.
I would argue that if these are already used by the top-level implementation, re-exporting them does not affect the bundle size.

Author:

Hi,

I did the reshuffle, ending up renaming lib -> cborg (which is more of a hygiene thing).
I tried hard to move

  • encodeCustom
    but since that is the real entry point for a configured encoder, there is no simple way to
    avoid exporting it from the main package.
    This change is a bit bigger than expected because of encodeErrPrefix/decodeErrPrefix, which are now configurable.

If you are happy with that, I will remove the comments that I left in for now.

Please review and give some feedback. @rvagg

Because of the directory rename, we should not leave this PR lying around for too long.


/**
* Export the types that were present in the original manual cborg.d.ts
* @typedef {import('./interface').TagDecoder} TagDecoder
@@ -12,6 +19,10 @@ import { Token, Type } from './lib/token.js'
*/

export {
// this is needed to prevent the bundling trouble which happens
// because token.js is used by lib/json and therefore by
// cborg/json, so a bundle can end up with two copies of token.js,
// which breaks statements like token.type === Type.array
decode,
decodeFirst,
Tokeniser as Tokenizer,
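To make the comment above concrete, here is the failure mode it describes, sketched with made-up bundle paths; nothing is assumed beyond what the comment states.

// after bundling, 'cborg' and 'cborg/json' can each carry their own copy of
// token.js, so the Type singletons are duplicated as distinct objects
import { Type as TypeA } from './chunk-a/token.js' // hypothetical bundle path
import { Type as TypeB } from './chunk-b/token.js' // hypothetical bundle path

console.log(TypeA.array === TypeB.array)            // false: different objects
console.log(TypeA.array.name === TypeB.array.name)  // true: same value
// which is why this PR replaces identity checks like token.type === Type.array
// with the value-based token.type.equals(Type.array)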
2 changes: 1 addition & 1 deletion example-bytestrings.js
@@ -92,7 +92,7 @@ tags[tagUint64Array] = uint64ArrayDecoder
class ArrayBufferTransformingTokeniser extends Tokenizer {
next () {
const nextToken = super.next()
if (nextToken.type === Type.bytes) {
if (nextToken.type.equals(Type.bytes)) {
// Transform the (assumed) Uint8Array value to an ArrayBuffer of the same bytes, note though
// that all tags we care about are going to be <tag><bytes>, so we're also transforming those
// into ArrayBuffers, so our tag decoders need to also assume they are getting ArrayBuffers
2 changes: 1 addition & 1 deletion lib/2bytes.js
@@ -89,7 +89,7 @@ export function decodeBytes64 (data, pos, _minor, options) {
*/
function tokenBytes (token) {
if (token.encodedBytes === undefined) {
token.encodedBytes = token.type === Type.string ? fromString(token.value) : token.value
token.encodedBytes = token.type.equals(Type.string) ? fromString(token.value) : token.value
}
// @ts-ignore c'mon
return token.encodedBytes
8 changes: 4 additions & 4 deletions lib/decode.js
@@ -145,23 +145,23 @@ function tokensToObject (tokeniser, options) {

const token = tokeniser.next()

if (token.type === Type.break) {
if (token.type.equals(Type.break)) {
return BREAK
}

if (token.type.terminal) {
return token.value
}

if (token.type === Type.array) {
if (token.type.equals(Type.array)) {
return tokenToArray(token, tokeniser, options)
}

if (token.type === Type.map) {
if (token.type.equals(Type.map)) {
return tokenToMap(token, tokeniser, options)
}

if (token.type === Type.tag) {
if (token.type.equals(Type.tag)) {
if (options.tags && typeof options.tags[token.value] === 'function') {
const tagged = tokensToObject(tokeniser, options)
return options.tags[token.value](tagged)
26 changes: 14 additions & 12 deletions lib/diagnostic.js
@@ -1,4 +1,5 @@
import { Tokeniser } from './decode.js'
import { Type } from './token.js'
import { toHex, fromHex } from './byte-utils.js'
import { uintBoundaries } from './0uint.js'

@@ -31,25 +32,25 @@ function * tokensToDiagnostic (inp, width = 100) {
/** @type {string|number} */
let v = String(token.value)
let outp = `${margin}${slc(0, 1)}`
const str = token.type.name === 'bytes' || token.type.name === 'string'
if (token.type.name === 'string') {
const str = token.type.equals(Type.bytes) || token.type.equals(Type.string)
if (token.type.equals(Type.string)) {
v = v.length
vLength -= v
} else if (token.type.name === 'bytes') {
} else if (token.type.equals(Type.bytes)) {
v = token.value.length
// @ts-ignore
vLength -= v
}

let multilen
switch (token.type.name) {
case 'string':
case 'bytes':
case 'map':
case 'array':
case Type.string.name:
case Type.bytes.name:
case Type.map.name:
case Type.array.name:
// for bytes and string, we want to print out the length part of the value prefix if it
// exists - it doesn't exist for short lengths (<24) but does for longer lengths
multilen = token.type.name === 'string' ? utf8Encoder.encode(token.value).length : token.value.length
multilen = token.type.equals(Type.string) ? utf8Encoder.encode(token.value).length : token.value.length
if (multilen >= uintBoundaries[0]) {
if (multilen < uintBoundaries[1]) {
outp += ` ${slc(1, 1)}`
@@ -71,13 +72,14 @@ function * tokensToDiagnostic (inp, width = 100) {

outp = outp.padEnd(width / 2, ' ')
outp += `# ${margin}${token.type.name}`
// there should be a method to get a Type from a String
if (token.type.name !== v) {
outp += `(${v})`
}
yield outp

if (str) {
let asString = token.type.name === 'string'
let asString = token.type.equals(Type.string)
margin += ' '
let repr = asString ? utf8Encoder.encode(token.value) : token.value
if (asString && token.byteValue !== undefined) {
@@ -110,15 +112,15 @@ function * tokensToDiagnostic (inp, width = 100) {
}
if (!token.type.terminal) {
switch (token.type.name) {
case 'map':
case Type.map.name:
indent.push(token.value * 2)
break
case 'array':
case Type.array.name:
indent.push(token.value)
break
// TODO: test tags .. somehow
/* c8 ignore next 5 */
case 'tag':
case Type.tag.name:
indent.push(1)
break
default:
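On the "get a Type from a String" remark in the hunk above, a hypothetical helper (not part of this PR) could build such a lookup, assuming the well-known types stay attached to the Type class as static properties the way lib/token.js currently defines them:

// hypothetical addition to lib/token.js: resolve a Type singleton by its name
const typesByName = new Map(
  Object.values(Type).filter((t) => t instanceof Type).map((t) => [t.name, t])
)

/**
 * @param {string} name
 * @returns {Type|undefined}
 */
export function typeFromName (name) {
  return typesByName.get(name)
}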
10 changes: 6 additions & 4 deletions lib/json/decode.js
@@ -1,7 +1,9 @@
import { decode as _decode, decodeFirst as _decodeFirst } from '../decode.js'
import { Token, Type } from '../token.js'
import { decodeCodePointsArray } from '../byte-utils.js'
import { decodeErrPrefix } from '../common.js'
// never reference the files directly, to ensure the
// independence of the json module
import { decode as _decode, decodeFirst as _decodeFirst } from 'cborg'
import { Token, Type } from 'cborg'
import { decodeCodePointsArray } from 'cborg'
import { decodeErrPrefix } from 'cborg'

/**
* @typedef {import('../../interface').DecodeOptions} DecodeOptions
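A note on the self-referencing imports in this hunk: resolving the package by its own name from inside the package only works when package.json defines an "exports" map (cborg already ships one, since it exposes subpaths like cborg/json), and it gives a bundler one canonical specifier to deduplicate, which is the intent here. An illustrative sketch:

// illustrative only: both the main entry point and (after this PR) lib/json
// resolve Type through the bare 'cborg' specifier, so a bundler keeps a single
// token.js instead of one copy per entry point
import { Type } from 'cborg'
import { decode as decodeJson } from 'cborg/json'

console.log(decodeJson(new TextEncoder().encode('[1,2,3]'))) // [ 1, 2, 3 ]
console.log(Type.array.name)                                 // 'array'
// any Token the JSON tokeniser produced above carried Type instances from that
// same single copy, which is what makes equals() (and even ===) checks reliable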