diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 7bda171bfad..82f1f2f45d6 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -97,7 +97,7 @@ These are the commit types we make use of: Below are some conventions that aren't enforced by any of our tooling but we nonetheless do our best to adhere to: - **Disallow `export default` syntax** - - For our use case it is best if all imports / exports remain named. + - For our use case, it is best if all imports / exports remain named. - **As of 4.0 all code in src is in Typescript** - Typescript provides a nice developer experience. As a product of using TS, we should be using ES6 syntax features whenever possible. diff --git a/README.md b/README.md index d58a969ed3c..1a62b08d998 100644 --- a/README.md +++ b/README.md @@ -64,8 +64,8 @@ The following table describes add-on component version compatibility for the Nod We recommend using the latest version of typescript, however we currently ensure the driver's public types compile against `typescript@4.1.6`. This is the lowest typescript version guaranteed to work with our driver: older versions may or may not work - use at your own risk. -Since typescript [does not restrict breaking changes to major versions](https://github.com/Microsoft/TypeScript/wiki/Breaking-Changes) we consider this support best effort. -If you run into any unexpected compiler failures against our supported TypeScript versions please let us know by filing an issue on our [JIRA](https://jira.mongodb.org/browse/NODE). +Since typescript [does not restrict breaking changes to major versions](https://github.com/Microsoft/TypeScript/wiki/Breaking-Changes), we consider this support best effort. +If you run into any unexpected compiler failures against our supported TypeScript versions, please let us know by filing an issue on our [JIRA](https://jira.mongodb.org/browse/NODE). ## Installation @@ -153,13 +153,13 @@ Add code to connect to the server and the database **myProject**: > **NOTE:** Resolving DNS Connection issues > -> Node.js 18 changed the default DNS resolution ordering from always prioritizing ipv4 to the ordering +> Node.js 18 changed the default DNS resolution ordering from always prioritizing IPv4 to the ordering > returned by the DNS provider. In some environments, this can result in `localhost` resolving to -> an ipv6 address instead of ipv4 and a consequent failure to connect to the server. +> an IPv6 address instead of IPv4 and a consequent failure to connect to the server. > > This can be resolved by: > -> - specifying the ip address family using the MongoClient `family` option (`MongoClient(, { family: 4 } )`) +> - specifying the IP address family using the MongoClient `family` option (`MongoClient(, { family: 4 } )`) > - launching mongod or mongos with the ipv6 flag enabled ([--ipv6 mongod option documentation](https://www.mongodb.com/docs/manual/reference/program/mongod/#std-option-mongod.--ipv6)) > - using a host of `127.0.0.1` in place of localhost > - specifying the DNS resolution ordering with the `--dns-resolution-order` Node.js command line argument (e.g. `node --dns-resolution-order=ipv4first`) @@ -224,7 +224,7 @@ console.log('Found documents =>', findResult); ``` This query returns all the documents in the **documents** collection. -If you add this below the insertMany example you'll see the document's you've inserted. +If you add this below the insertMany example, you'll see the documents you've inserted. 
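If the result set is large, the same cursor can also be consumed incrementally instead of buffered with `toArray`. A minimal sketch, reusing the `collection` handle from the quick start snippets above (async iteration of a find cursor is supported on modern driver versions):

```js
// Stream matching documents one at a time to keep memory usage flat.
// `collection` is assumed to be the handle created in the quick start above.
const cursor = collection.find({});
for await (const doc of cursor) {
  console.log('Found document =>', doc);
}
```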
### Find Documents with a Query Filter @@ -272,7 +272,7 @@ For more detailed information, see the [indexing strategies page](https://www.mo ## Error Handling -If you need to filter certain errors from our driver we have a helpful tree of errors described in [etc/notes/errors.md](https://github.com/mongodb/node-mongodb-native/blob/HEAD/etc/notes/errors.md). +If you need to filter certain errors from our driver, we have a helpful tree of errors described in [etc/notes/errors.md](https://github.com/mongodb/node-mongodb-native/blob/HEAD/etc/notes/errors.md). It is our recommendation to use `instanceof` checks on errors and to avoid relying on parsing `error.message` and `error.name` strings in your code. We guarantee `instanceof` checks will pass according to semver guidelines, but errors may be sub-classed or their messages may change at any time, even patch releases, as we see fit to increase the helpfulness of the errors. @@ -298,14 +298,14 @@ try { ## Nightly releases -If you need to test with a change from the latest `main` branch our `mongodb` npm package has nightly versions released under the `nightly` tag. +If you need to test with a change from the latest `main` branch, our `mongodb` npm package has nightly versions released under the `nightly` tag. ```sh npm install mongodb@nightly ``` Nightly versions are published regardless of testing outcome. -This means there could be sematic breakages or partially implemented features. +This means there could be semantic breakages or partially implemented features. The nightly build is not suitable for production use. ## Next Steps diff --git a/docs/_sources/api-articles/nodekoarticle1.txt b/docs/_sources/api-articles/nodekoarticle1.txt index a9481b392a3..8054009951f 100644 --- a/docs/_sources/api-articles/nodekoarticle1.txt +++ b/docs/_sources/api-articles/nodekoarticle1.txt @@ -1,7 +1,7 @@ -A Basic introduction to Mongo DB +A Basic introduction to MongoDB ================================ -Mongo DB has rapidly grown to become a popular database for web +MongoDB has rapidly grown to become a popular database for web applications and is a perfect fit for Node.JS applications, letting you write Javascript for the client, backend and database layer. Its schemaless nature is a better match to our constantly evolving data @@ -20,14 +20,14 @@ or go fetch it from github at `https://github.com/mongodb/node-mongodb-native `_ Once this business is taken care of, let's move through the types -available for the driver and then how to connect to your Mongo DB +available for the driver and then how to connect to your MongoDB instance before facing the usage of some CRUD operations. -Mongo DB data types +MongoDB data types ------------------- -So there is an important thing to keep in mind when working with Mongo -DB, and that is the slight mapping difference between types Mongo DB +So there is an important thing to keep in mind when working with MongoDB, +and that is the slight mapping difference between types MongoDB supports and native Javascript data types. Let's have a look at the types supported out of the box and then how types are promoted by the driver to fit as close to native Javascript types as possible. @@ -42,16 +42,16 @@ driver to fit as close to native Javascript types as possible. integer value is at a 53 bit. Mongo has two types for integers, a 32 bit and a 64 bit. The driver will try to fit the value into 32 bits if it can and promote it to 64 bits if it has to. Similarly it will - deserialize attempting to fit it into 53 bits if it can. 
If it cannot + deserialize attempting to fit it into 53 bits if it can. If it cannot, it will return an instance of **Long** to avoid losing precision. - **Long class** a special class that lets you store 64 bit integers and also lets you operate on the 64 bit integers. - **Date** maps directly to a Javascript Date - **RegExp** maps directly to a Javascript RegExp - **String** maps directly to a Javascript String (encoded in utf8) -- **Binary class** a special class that lets you store data in Mongo DB -- **Code class** a special class that lets you store javascript - functions in Mongo DB, can also provide a scope to run the method in +- **Binary class** a special class that lets you store data in MongoDB +- **Code class** a special class that lets you store Javascript + functions in MongoDB, can also provide a scope to run the method in - **ObjectID class** a special class that holds a MongoDB document identifier (the equivalent to a Primary key) - **DbRef class** a special class that lets you include a reference in @@ -63,13 +63,13 @@ driver to fit as close to native Javascript types as possible. As we see the number type can be a little tricky due to the way integers are implemented in Javascript. The latest driver will do correct conversion up to 53 bits of complexity. If you need to handle big -integers the recommendation is to use the Long class to operate on the +integers, the recommendation is to use the Long class to operate on the numbers. Getting that connection to the database --------------------------------------- -Let's get around to setting up a connection with the Mongo DB database. +Let's get around to setting up a connection with the MongoDB database. Jumping straight into the code let's do direct connection and then look at the code. @@ -86,9 +86,9 @@ at the code. }); Let's have a quick look at how the connection code works. The -**Db.connect** method let's use use a uri to connect to the Mongo +**Db.connect** method lets us use a uri to connect to the Mongo database, where **localhost:27017** is the server host and port and -**exampleDb** the db we wish to connect to. After the url notice the +**exampleDb** the db we wish to connect to. After the url, notice the hash containing the **auto\_reconnect** key. Auto reconnect tells the driver to retry sending a command to the server if there is a failure during its execution. @@ -103,12 +103,12 @@ dispatch and read from the tcp connection. We are up and running with a connection to the database. Let's move on and look at what collections are and how they work. -Mongo DB and Collections +MongoDB and Collections ------------------------ Collections are the equivalent of tables in traditional databases and contain all your documents. A database can have many collections. So how -do we go about defining and using collections. Well there are a couple +do we go about defining and using collections. Well, there are a couple of methods that we can use. Let's jump straight into code and then look at the code. @@ -154,7 +154,7 @@ check if the collection exists and issue an error if it does not. db.createCollection('test', function(err, collection) {}); -This command will create the collection on the Mongo DB database before +This command will create the collection on the MongoDB database before returning the collection object. If the collection already exists it will ignore the creation of the collection. @@ -165,13 +165,13 @@ will ignore the creation of the collection. 
The **{strict:true}** option will make the method return an error if the collection already exists. -With an open db connection and a collection defined we are ready to do +With an open db connection and a collection defined, we are ready to do some CRUD operation on the data. And then there was CRUD ----------------------- -So let's get dirty with the basic operations for Mongo DB. The Mongo DB +So let's get dirty with the basic operations for MongoDB. The MongoDB wire protocol is built around 4 main operations **insert/update/remove/query**. Most operations on the database are actually queries with special json objects defining the operation on the @@ -203,15 +203,15 @@ insert first and do it with some code. }); A couple of variations on the theme of inserting a document as we can -see. To understand why it's important to understand how Mongo DB works +see. To understand why, it's important to understand how MongoDB works during inserts of documents. -Mongo DB has asynchronous **insert/update/remove** operations. This -means that when you issue an **insert** operation its a fire and forget +MongoDB has asynchronous **insert/update/remove** operations. This +means that when you issue an **insert** operation, it's a fire-and-forget operation where the database does not reply with the status of the -insert operation. To retrieve the status of the operation you have to +insert operation. To retrieve the status of the operation, you have to issue a query to retrieve the last error status of the connection. To -make it simpler to the developer the driver implements the **{w:1}** +make it simpler to the developer, the driver implements the **{w:1}** options so that this is done automatically when inserting the document. **{w:1}** becomes especially important when you do **update** or **remove** as otherwise it's not possible to determine the amount of @@ -225,7 +225,7 @@ above. collection.insert(doc1); Taking advantage of the async behavior and not needing confirmation -about the persisting of the data to Mongo DB we just fire off the insert +about the persisting of the data to MongoDB, we just fire off the insert (we are doing live analytics, loosing a couple of records does not matter). @@ -244,10 +244,10 @@ A batch insert of document with any errors being reported. This is much more efficient if you need to insert large batches of documents as you incur a lot less overhead. -Right that's the basics of insert's ironed out. We got some documents in +Right, that's the basics of inserts ironed out. We got some documents in there but want to update them as we need to change the content of a field. Let's have a look at a simple example and then we will dive into -how Mongo DB updates work and how to do them efficiently. +how MongoDB updates work and how to do them efficiently. **the requires and and other initializing stuff omitted for brevity** :: @@ -274,11 +274,11 @@ how MongoDB updates work and how to do them efficiently. }); }); -Alright before we look at the code we want to understand how document +Alright, before we look at the code, we want to understand how document updates work and how to do the efficiently. The most basic and less efficient way is to replace the whole document, this is not really the -way to go if you want to change just a field in your document.
Luckily, +MongoDB provides a whole set of operations that let you modify just pieces of the document `Atomic operations documentation `_. Basically outlined below. @@ -288,7 +288,7 @@ Basically outlined below. - $unset - delete a particular field (v1.3+) - $push - append a value to an array - $pushAll - append several values to an array -- $addToSet - adds value to the array only if its not in the array +- $addToSet - adds value to the array only if it's not in the array already - $pop - removes the last element in an array - $pull - remove a value(s) from an existing array @@ -296,17 +296,17 @@ Basically outlined below. - $rename - renames the field - $bit - bitwise operations -Now that the operations are outline let's dig into the specific cases +Now that the operations are outlined, let's dig into the specific cases show in the code example. :: collection.update({mykey:1}, {$set:{fieldtoupdate:2}}, {w:1}, function(err, result) {}); -Right so this update will look for the document that has a field +Right, so this update will look for the document that has a field **mykey** equal to **1** and apply an update to the field **fieldtoupdate** setting the value to **2**. Since we are using the -**{w:1}** option the result parameter in the callback will return the +**{w:1}** option, the result parameter in the callback will return the value **1** indicating that 1 document was modified by the update statement. @@ -316,9 +316,9 @@ statement. This updates adds another document to the field **docs** in the document identified by **{mykey:2}** using the atomic operation **$push**. This -allows you to modify keep such structures as queues in Mongo DB. +allows you to maintain such structures as queues in MongoDB. -Let's have a look at the remove operation for the driver. As before +Let's have a look at the remove operation for the driver. As before, let's start with a piece of code. **the requires and and other initializing stuff omitted for brevity** :: @@ -351,7 +351,7 @@ Let's examine the 3 remove variants and what they do. collection.remove({mykey:1}); -This leverages the fact that Mongo DB is asynchronous and that it does +This leverages the fact that MongoDB is asynchronous and that it does not return a result for **insert/update/remove** to allow for **synchronous** style execution. This particular remove query will remove the document where **mykey** equals **1**. @@ -361,7 +361,7 @@ remove the document where **mykey** equals **1**. collection.remove({mykey:2}, {w:1}, function(err, result) {}); This remove statement removes the document where **mykey** equals **2** -but since we are using **{w:1}** it will back to Mongo DB to get the +but since we are using **{w:1}**, it will go back to MongoDB to get the status of the remove operation and return the number of documents removed in the result variable. @@ -375,15 +375,15 @@ Time to Query ------------- Queries is of course a fundamental part of interacting with a database -and Mongo DB is no exception. Fortunately for us it has a rich query +and MongoDB is no exception. Fortunately for us, it has a rich query interface with cursors and close to SQL concepts for slicing and dicing -your datasets. To build queries we have lots of operators to choose from -`Mongo DB advanced +your datasets. To build queries, we have lots of operators to choose from +`MongoDB advanced queries `_. There are literarily tons of ways to search and ways to limit the query. Let's look at some simple code for dealing with queries in different ways.
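For a small taste of those operators before the fuller example below, here is a sketch reusing the **collection** variable from the earlier examples (**$gt**, **$lt** and **$in** are standard MongoDB query operators):

::

    // Find documents whose mykey lies between 10 and 100 (exclusive)
    collection.find({mykey:{$gt:10, $lt:100}}).toArray(function(err, items) {});

    // Find documents whose mykey matches any value in the supplied list
    collection.find({mykey:{$in:[1, 2, 3]}}).toArray(function(err, items) {});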
-**the requires and and other initializing stuff omitted for brevity** +**the requires and other initializing stuff omitted for brevity** :: @@ -410,11 +410,11 @@ look at some simple code for dealing with queries in different ways. }); }); -Before we start picking apart the code there is one thing that needs to +Before we start picking apart the code, there is one thing that needs to be understood, the **find** method does not execute the actual query. It builds an instance of **Cursor** that you then use to retrieve the data. -This lets you manage how you retrieve the data from Mongo DB and keeps -state about your current Cursor state on Mongo DB. Now let's pick apart +This lets you manage how you retrieve the data from MongoDB and keeps +track of your current Cursor state on MongoDB. Now let's pick apart the queries we have here and look at what they do. :: @@ -425,7 +425,7 @@ This query will fetch all the document in the collection and return them as an array of items. Be careful with the function **toArray** as it might cause a lot of memory usage as it will instantiate all the document into memory before returning the final array of items. If you -have a big resultset you could run into memory issues. +have a big resultset, you could run into memory issues. :: @@ -448,10 +448,10 @@ done. This is special supported function to retrieve just one specific document bypassing the need for a cursor object. That's pretty much it for the quick intro on how to use the database. I have also included a list of links to where to go to find more -information and also a sample crude location application I wrote using -express JS and mongo DB. +information and also a sample CRUD location application I wrote using +express JS and MongoDB. Links and stuff --------------- @@ -460,7 +460,7 @@ Links and stuff usage `_ - `All the integration tests, they have tons of different usage cases `_ -- `The Mongo DB wiki pages such as the advanced query +- `The MongoDB wiki pages such as the advanced query link `_ - `A silly simple location based application using Express JS and Mongo DB `_ diff --git a/docs/_sources/api-articles/nodekoarticle2.txt b/docs/_sources/api-articles/nodekoarticle2.txt index 15bdf6dfc31..9202eec2f9f 100644 --- a/docs/_sources/api-articles/nodekoarticle2.txt +++ b/docs/_sources/api-articles/nodekoarticle2.txt @@ -1,12 +1,12 @@ -A primer for GridFS using the Mongo DB driver +A primer for GridFS using the MongoDB driver ============================================= -In the first tutorial we targeted general usage of the database. But -Mongo DB is much more than this. One of the additional very useful -features is to act as a file storage system. This is accomplish in Mongo +In the first tutorial, we targeted general usage of the database. But +MongoDB is much more than this. One of the additional very useful +features is to act as a file storage system. This is accomplished in Mongo by having a file collection and a chunks collection where each document in the chunks collection makes up a **Block** of the file. In this -tutorial we will look at how to use the GridFS functionality and what +tutorial, we will look at how to use the GridFS functionality and what functions are available. A simple example ---------------- @@ -33,21 +33,21 @@ grid using the simplified Grid class. }); }); -All right let's dissect the example. +Alright, let's dissect the example.
The first thing you'll notice is the statement :: var grid = new Grid(db, 'fs'); -Since GridFS is actually a special structure stored as collections +Since GridFS is actually a special structure stored as collections, you'll notice that we are using the db connection that we used in the previous tutorial to operate on collections and documents. The second parameter **'fs'** allows you to change the collections you want to -store the data in. In this example the collections would be +store the data in. In this example, the collections would be **fs\_files** and **fs\_chunks**. -Having a live grid instance we now go ahead and create some test data +Having a live grid instance, we now go ahead and create some test data stored in a Buffer instance, although you can pass in a string instead. We then write our data to disk. @@ -62,20 +62,20 @@ We then write our data to disk. Let's deconstruct the call we just made. The **put** call will write the data you passed in as one or more chunks. The second parameter is a hash -of options for the Grid class. In this case we wish to annotate the file -we are writing to Mongo DB with some metadata and also specify a content +of options for the Grid class. In this case, we wish to annotate the file +we are writing to MongoDB with some metadata and also specify a content type. Each file entry in GridFS has support for metadata documents which -might be very useful if you are for example storing images in you Mongo -DB and need to store all the data associated with the image. +might be very useful if you are, for example, storing images in your MongoDB +and need to store all the data associated with the image. One important thing is to take not that the put method return a document containing a **\_id**, this is an **ObjectID** identifier that you'll need to use if you wish to retrieve the file contents later. -Right so we have written out first file, let's look at the other two +Right, so we have written our first file, let's look at the other two simple functions supported by the Grid class. -**the requires and and other initializing stuff omitted for brevity** +**the requires and other initializing stuff omitted for brevity** :: @@ -104,7 +104,7 @@ Let's have a look at the two operations **get** and **delete** grid.get(fileInfo._id, function(err, data) {}); The **get** method takes an ObjectID as the first argument and as we can -se in the code we are using the one provided in **fileInfo.\_id**. This +see in the code, we are using the one provided in **fileInfo.\_id**. This will read all the chunks for the file and return it as a Buffer object. The **delete** method also takes an ObjectID as the first argument but @@ -112,27 +112,27 @@ will delete the file entry and the chunks associated with the file in Mongo. This **api** is the simplest one you can use to interact with GridFS but -it's not suitable for all kinds of files. One of it's main drawbacks is +it's not suitable for all kinds of files. One of its main drawbacks is when you are trying to write large files to Mongo. This api will require you to read the entire file into memory when writing and reading from Mongo which most likely is not feasible if you have to store large files like -Video or RAW Pictures. Luckily this is not the only way to work with +Video or RAW Pictures. Luckily, this is not the only way to work with GridFS. That's not to say this api is not useful.
If you are storing -tons of small files the memory usage vs the simplicity might be a +tons of small files, the memory usage vs the simplicity might be a worthwhile tradeoff. Let's dive into some of the more advanced ways of using GridFS. Advanced GridFS or how not to run out of memory ----------------------------------------------- -As we just said controlling memory consumption for you file writing and +As we just said, controlling memory consumption for your file writing and reading is key if you want to scale up the application. That means not -reading in entire files before either writing or reading from Mongo DB. +reading in entire files before either writing or reading from MongoDB. The good news is, it's supported. Let's throw some code out there straight away and look at how to do chunk sized streaming writes and reads. -**the requires and and other initializing stuff omitted for brevity** +**the requires and other initializing stuff omitted for brevity** :: @@ -158,13 +158,13 @@ reads. ) }); -Before we jump into picking apart the code let's look at +Before we jump into picking apart the code, let's look at :: var gridStore = new GridStore(db, fileId, "w", {root:'fs'}); -Notice the parameter **"w"** this is important. It tells the driver that +Notice the parameter **"w"** is important. It tells the driver that you are planning to write a new file. The parameters you can use here are. @@ -172,27 +172,27 @@ are. - **"w"** - write in truncate mode. Existing data will be overwritten - **"w+"** - write in edit mode -Right so there is a fair bit to digest here. We are simulating writing a -file that's about 1MB big to Mongo DB using GridFS. To do this we are -writing it in chunks of 5000 bytes. So to not live with a difficult -callback setup we are using the Step library with its' group +Right, so there is a fair bit to digest here. We are simulating writing a +file that's about 1MB big to MongoDB using GridFS. To do this, we are +writing it in chunks of 5000 bytes. So, to not live with a difficult +callback setup, we are using the Step library with its group functionality to ensure that we are notified when all of the writes are -done. After all the writes are done Step will invoke the next function +done. After all the writes are done, Step will invoke the next function (or step) called **doneWithWrite** where we finish up by closing the -file that flushes out any remaining data to Mongo DB and updates the +file that flushes out any remaining data to MongoDB and updates the file document. -As we are doing it in chunks of 5000 bytes we will notice that memory -consumption is low. This is the trick to write large files to GridFS. In +As we are doing it in chunks of 5000 bytes, we will notice that memory +consumption is low. This is the trick to write large files to GridFS, in pieces. Also notice this line. :: gridStore.chunkSize = 1024 * 256; -This allows you to adjust how big the chunks are in bytes that Mongo DB +This allows you to adjust the size in bytes of the chunks that MongoDB will write. You can tune the Chunk Size to your needs. If you need to -write large files to GridFS it might be worthwhile to trade of memory +write large files to GridFS, it might be worthwhile to trade off memory for CPU by setting a larger Chunk Size. Now let's see how the actual streaming read works. @@ -215,7 +215,7 @@
}); }); -Right let's have a quick lock at the streaming functionality supplied +Right, let's have a quick look at the streaming functionality supplied with the driver **(make sure you are using 0.9.6-12 or higher as there is a bug fix for custom chunksizes that you need)** @@ -225,7 +225,7 @@ is a bug fix for custom chunksizes that you need)** This opens a stream to our file, you can pass in a boolean parameter to tell the driver to close the file automatically when it reaches the end. -This will fire the **close** event automatically. Otherwise you'll have +This will fire the **close** event automatically. Otherwise, you'll have to handle cleanup when you receive the **end** event. Let's have a look at the events supported. @@ -236,8 +236,8 @@ at the events supported. }); The **data** event is called for each chunk read. This means that it's -by the chunk size of the written file. So if you file is 1MB big and the -file has chunkSize 256K then you'll get 4 calls to the event handler for +by the chunk size of the written file. So, if your file is 1MB big and the +file has chunkSize 256K, then you'll get 4 calls to the event handler for **data**. The chunk returned is a **Buffer** object. :: @@ -255,11 +255,11 @@ the file. console.log("Finished reading the file"); }); -The **close** event is only called if you the **autoclose** parameter on -the **gridStore.stream** method as shown above. If it's false or not set +The **close** event is only called if you use the **autoclose** parameter on +the **gridStore.stream** method as shown above. If it's false or not set, handle cleanup of the streaming in the **end** event handler. -Right that's it for writing to GridFS in an efficient Manner. I'll +Right, that's it for writing to GridFS in an efficient manner. I'll outline some other useful function on the Gridstore object. Other useful methods on the Gridstore object @@ -300,7 +300,7 @@ It can be one of three values. files) {}) **list** lists all the files in the collection in GridFS. If you have a -lot of files the current version will not work very well as it's getting +lot of files, the current version will not work very well as it's getting all files into memory first. You can have it return either the filenames or the ids for the files using option. @@ -308,7 +308,7 @@ or the ids for the files using option. gridStore.unlink(function(err, result) {}); -**unlink** deletes the file from Mongo DB, that's to say all the file +**unlink** deletes the file from MongoDB, that's to say all the file info and all the chunks. This should be plenty to get you on your way building your first GridFS diff --git a/etc/docs/README.md b/etc/docs/README.md index 537a95e7240..37e0a574bda 100644 --- a/etc/docs/README.md +++ b/etc/docs/README.md @@ -1,11 +1,11 @@ # docs_utils -This directory contains scripts to generate api docs as well our the Hugo site template used for the MongoDB node driver documentation. +This directory contains scripts to generate API docs as well as the Hugo site template used for the MongoDB node driver documentation. There are two scripts contained in this folder. - `legacy-generate.sh` was used to generate API documentation before the driver's docs -were moved into the main repository. This script has the ability to generate api docs for older versions of the driver (in case it becomes +were moved into the main repository. This script has the ability to generate API docs for older versions of the driver (in case it becomes necessary to backport a feature).
- `build.ts` is used to generate API docs for a major or minor release. diff --git a/etc/docs/template/layouts/partials/quickStart.html b/etc/docs/template/layouts/partials/quickStart.html index 379ba6ebc19..8bb47973c96 100644 --- a/etc/docs/template/layouts/partials/quickStart.html +++ b/etc/docs/template/layouts/partials/quickStart.html @@ -1,6 +1,6 @@

Quick Start

-

Given that you have created your own project using `npm init` we install the mongodb driver and it's dependencies by executing the following `NPM` command.

+

Given that you have created your own project using `npm init`, we install the mongodb driver and its dependencies by executing the following `npm` command.

npm install mongodb --save diff --git a/etc/notes/CHANGES_3.0.0.md b/etc/notes/CHANGES_3.0.0.md index 66cd5d75e5b..b0f5292a11f 100644 --- a/etc/notes/CHANGES_3.0.0.md +++ b/etc/notes/CHANGES_3.0.0.md @@ -8,7 +8,7 @@ Support has been added for retryable writes through the connection string. Mongo will utilize server sessions to allow some write commands to specify a transaction ID to enforce at-most-once semantics for the write operation(s) and allow for retrying the operation if the driver fails to obtain a write result (e.g. network error or "not master" error after a replica set -failover)Full details can be found in the [Retryable Writes Specification](https://github.com/mongodb/specifications/blob/master/source/retryable-writes/retryable-writes.rst). +failover). Full details can be found in the [Retryable Writes Specification](https://github.com/mongodb/specifications/blob/master/source/retryable-writes/retryable-writes.rst). ### DNS Seedlist Support @@ -54,9 +54,9 @@ We've added the following API methods. - `Db.prototype.setProfilingLevel` - `Db.prototype.profilingInfo` -In core we have removed the possibility of authenticating multiple credentials against the same +In core, we have removed the possibility of authenticating multiple credentials against the same connection pool. This is to avoid problems with MongoDB 3.6 or higher where all users will reside in -the admin database and thus database level authentication is no longer supported. +the admin database and thus database-level authentication is no longer supported. The legacy construct @@ -135,7 +135,7 @@ For more information about connection strings, read the [connection string speci ### `BulkWriteResult` & `BulkWriteError` When errors occured with bulk write operations in the past, the driver would callback or reject with -the first write error, as well as passing the resulting `BulkWriteResult`. For example: +the first write error, as well as passing the resulting `BulkWriteResult`. For example: ```js MongoClient.connect('mongodb://localhost', function(err, client) { @@ -164,14 +164,14 @@ MongoClient.connect('mongodb://localhost', function(err, client) { ``` Where the result of the failed operation is a `BulkWriteError` which has a child value `result` -which is the original `BulkWriteResult`. Similarly, the callback form no longer calls back with an +which is the original `BulkWriteResult`. Similarly, the callback form no longer calls back with an `(Error, BulkWriteResult)`, but instead just a `(BulkWriteError)`. ### `mapReduce` inlined results When `Collection.prototype.mapReduce` is invoked with a callback that includes `out: 'inline'`, it would diverge from the `Promise`-based variant by returning additional data as positional -arguments to the callback (`(err, result, stats, ...)`). This is no longer the case, both variants +arguments to the callback (`(err, result, stats, ...)`). This is no longer the case; both variants of the method will now return a single object for all results - a single value for the default case, and an object similar to the existing `Promise` form for cases where there is more data to pass to the user. @@ -180,7 +180,7 @@ the user. `find` and `findOne` no longer support the `fields` parameter. You can achieve the same results as the `fields` parameter by using `Cursor.prototype.project` or by passing the `projection` property -in on the options object . Additionally, `find` does not support individual options like `skip` and +in on the options object.
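For example, both of the following request only the `name` field. This is a sketch assuming an existing `collection` handle; `projection` in the options object and `Cursor.prototype.project` are the two replacements described above:

```js
// Option 1: pass `projection` on the options object
collection.find({}, { projection: { name: 1 } }).toArray(function (err, docs) {});

// Option 2: use the Cursor.prototype.project helper
collection.find({}).project({ name: 1 }).toArray(function (err, docs) {});
```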
Additionally, `find` does not support individual options like `skip` and `limit` as positional parameters. You must either pass in these parameters in the `options` object, or add them via `Cursor` methods like `Cursor.prototype.skip`. @@ -289,7 +289,7 @@ testCollection.updateOne({_id: 'test'}, {}); ### `keepAlive` -Wherever it occurs, the option `keepAlive` has been changed. `keepAlive` is now a boolean that enables/disables `keepAlive`, while `keepAliveInitialDelay` specifies how long to wait before initiating keepAlive. This brings the API in line with [NodeJS's socket api](https://nodejs.org/dist/latest-v9.x/docs/api/all.html#net_socket_setkeepalive_enable_initialdelay) +Wherever it occurs, the option `keepAlive` has been changed. `keepAlive` is now a boolean that enables/disables `keepAlive`, while `keepAliveInitialDelay` specifies how long to wait before initiating keepAlive. This brings the API in line with [NodeJS's socket API](https://nodejs.org/dist/latest-v9.x/docs/api/all.html#net_socket_setkeepalive_enable_initialdelay) ### `insertMany` diff --git a/etc/notes/CHANGES_4.0.0.md b/etc/notes/CHANGES_4.0.0.md index 42c1476835d..7bd77348228 100644 --- a/etc/notes/CHANGES_4.0.0.md +++ b/etc/notes/CHANGES_4.0.0.md @@ -2,7 +2,7 @@ _Hello dear reader, **thank you** for adopting version 4.x of the MongoDB Node.js driver, from the bottom of our developer hearts we thank you so much for taking the time to upgrade to our latest and greatest offering of a stunning database experience. We hope you enjoy your upgrade experience and this guide gives you all the answers you are searching for. -If anything, and we mean anything, hinders your upgrade experience please let us know via [JIRA](https://jira.mongodb.org/browse/NODE). +If anything, and we mean anything, hinders your upgrade experience, please let us know via [JIRA](https://jira.mongodb.org/browse/NODE). We know breaking changes are hard but they are sometimes for the best. Anyway, enjoy the guide, see you at the end!_ @@ -18,8 +18,8 @@ Recently we migrated our BSON library to TypeScript as well, this version of the #### Community Types users (@types/mongodb) -If you are a user of the community types (@types/mongodb) there will likely be compilation errors while adopting the types from our codebase. -Unfortunately we could not achieve a one to one match in types due to the details of writing the codebase in Typescript vs definitions for the user layer API along with the breaking changes of this major version. Please let us know if there's anything that is a blocker to upgrading [on JIRA](https://jira.mongodb.org/browse/NODE). +If you are a user of the community types (@types/mongodb), there will likely be compilation errors while adopting the types from our codebase. +Unfortunately, we could not achieve a one-to-one match in types due to the details of writing the codebase in Typescript vs definitions for the user layer API along with the breaking changes of this major version. Please let us know if there's anything that is a blocker to upgrading [on JIRA](https://jira.mongodb.org/browse/NODE). ### Node.js Version @@ -93,7 +93,7 @@ for await (const doc of cursor) { } ``` -Prior to the this release there was inconsistency surrounding how the cursor would error if a setting like limit was applied after cursor execution had begun. +Prior to this release, there was an inconsistency surrounding how the cursor would error if a setting like limit was applied after cursor execution had begun. 
Now, an error along the lines of: `Cursor is already initialized` is thrown. ##### Cursor.count always respects skip and limit @@ -106,7 +106,7 @@ It is recommended that users utilize the `collection.countDocuments` or `collect #### ChangeStream must be used as an iterator or an event emitter You cannot use ChangeStream as an iterator after using as an EventEmitter nor visa versa. -Previously the driver would permit this kind of usage but it could lead to unpredictable behavior and obscure errors. +Previously, the driver would permit this kind of usage but it could lead to unpredictable behavior and obscure errors. It's unlikely this kind of usage was useful but to be sure we now prevent it by throwing a clear error. ```javascript @@ -171,7 +171,7 @@ Specifying `checkServerIdentity === false` (along with enabling tls) is differen The 3.x version intercepted `checkServerIdentity: false` and turned it into a no-op function which is the required way to skip checking the server identity by nodejs. Setting this option to `false` is only for testing anyway as it disables essential verification to TLS. So it made sense for our library to directly expose the option validation from Node.js. -If you need to test TLS connections without verifying server identity pass in `{ checkServerIdentity: () => {} }`. +If you need to test TLS connections without verifying server identity, pass in `{ checkServerIdentity: () => {} }`. #### Kerberos / GSSAPI @@ -218,7 +218,7 @@ The same functionality can be achieved using the aggregation pipeline's `$group` ### GridStore removed The deprecated GridStore API has been removed from the driver. -For more information on GridFS [see the mongodb manual](https://www.mongodb.com/docs/manual/core/gridfs/). +For more information on GridFS, [see the mongodb manual](https://www.mongodb.com/docs/manual/core/gridfs/). Below are some snippets that represent equivalent operations: @@ -293,7 +293,7 @@ const fileMetaDataList: GridFSFile[] = bucket.find({}).toArray(); #### Hashing an upload The automatic MD5 hashing has been removed from the upload family of functions. -This makes the default Grid FS behavior compliant with systems that do not permit usage of MD5 hashing. +This makes the default GridFS behavior compliant with systems that do not permit usage of MD5 hashing. The `disableMD5` option is no longer used and has no effect. If you still want to add an MD5 hash to your file upload, here's a simple example that can be used with [any hashing algorithm](https://nodejs.org/dist/latest-v14.x/docs/api/crypto.html#crypto_crypto_createhash_algorithm_options) provided by Node.js: @@ -333,9 +333,9 @@ This version includes an upgrade from js-bson 1.x to js-bson 4.x. #### Timestamps math operations return Javascript `Long`s In versions prior to 4.x of the BSON library, Timestamps were represented with a custom class. In version 4.x of the BSON library, the Timestamp class was refactored to -be a subclass of the Javascript Long class. As a result of this refactor, math operations on Timestamp objects now return Long objects instead of Timestamp objects. +be a subclass of the Javascript Long class. As a result of this refactor, math operations on Timestamp objects now return Long objects instead of Timestamp objects. -Math operations with Timestamps is not recommended. However, if Timestamp math must be used, the old behavior can be replicated by using the Timestamp +Math operations with Timestamps are not recommended.
However, if Timestamp math must be used, the old behavior can be replicated by using the Timestamp constructor, which takes a Long as an argument. ```typescript diff --git a/etc/notes/CHANGES_5.0.0.md b/etc/notes/CHANGES_5.0.0.md index 13947bc0489..a662eb15774 100644 --- a/etc/notes/CHANGES_5.0.0.md +++ b/etc/notes/CHANGES_5.0.0.md @@ -128,7 +128,7 @@ driver v4 and v5. However, new features will **only** support a Promise-based AP ##### Example usage of equivalent callback and Promise usage -After installing the package and modifying imports the following example demonstrates equivalent usages of either `async`/`await` syntax, `.then`/`.catch` chaining, or callbacks: +After installing the package and modifying imports, the following example demonstrates equivalent usages of either `async`/`await` syntax, `.then`/`.catch` chaining, or callbacks: ```typescript // Just add '-legacy' to my mongodb import @@ -260,7 +260,7 @@ Three legacy operation helpers on the collection class have been removed: | `update(filter)` | `updateMany(filter)` | | `remove(filter)` | `deleteMany(filter)` | -The `insert` method accepted an array of documents for multi-document inserts and a single document for single document inserts. `insertOne` should now be used for single-document inserts and `insertMany` should be used for multi-document inserts. +The `insert` method accepted an array of documents for multi-document inserts and a single document for single-document inserts. `insertOne` should now be used for single-document inserts and `insertMany` should be used for multi-document inserts. ```typescript // Single document insert: @@ -355,7 +355,7 @@ The `digestPassword` option has been removed from the add user helper. ### `ObjectID` type removed in favor of `ObjectId` -For clarity the deprecated and duplicate export `ObjectID` has been removed. `ObjectId` matches the class name and is equal in every way to the capital "D" export. +For clarity, the deprecated and duplicate export `ObjectID` has been removed. `ObjectId` matches the class name and is equal in every way to the capital "D" export. ### `slaveOk` options removed diff --git a/etc/notes/CHANGES_6.0.0.md b/etc/notes/CHANGES_6.0.0.md index 41e2eeadfd7..75862f16996 100644 --- a/etc/notes/CHANGES_6.0.0.md +++ b/etc/notes/CHANGES_6.0.0.md @@ -110,7 +110,7 @@ The `await client.withSession(async session => {})` now returns the value that t The `await session.withTransaction(async () => {})` method now returns the value that the provided function returns. Previously, this function returned the server command response which is subject to change depending on the server version or type the driver is connected to. The return value got in the way of writing robust, reliable, consistent code no matter the backing database supporting the application. > [!WARNING] -> When upgrading to this version of the driver, be sure to audit any usages of `withTransaction` for `if` statements or other conditional checks on the return value of `withTransaction`. Previously, the return value was the command response if the transaction was committed and `undefined` if it had been manually aborted. It would only throw if an operation or the author of the function threw an error. Since prior to this release it was not possible to get the result of the function passed to `withTransaction` we suspect most existing functions passed to this method return `void`, making `withTransaction` a `void` returning function in this major release. 
Take care to ensure that the return values of your function match the expectation of the code that follows the completion of `withTransaction`. +> When upgrading to this version of the driver, be sure to audit any usages of `withTransaction` for `if` statements or other conditional checks on the return value of `withTransaction`. Previously, the return value was the command response if the transaction was committed and `undefined` if it had been manually aborted. It would only throw if an operation or the author of the function threw an error. Since prior to this release, it was not possible to get the result of the function passed to `withTransaction`, we suspect most existing functions passed to this method return `void`, making `withTransaction` a `void` returning function in this major release. Take care to ensure that the return values of your function match the expectation of the code that follows the completion of `withTransaction`. ### Driver methods throw if a session is provided from a different `MongoClient` @@ -155,7 +155,7 @@ const client = new MongoClient('mongodb://localhost:27017?tls=true'); ### Repeated options are no longer allowed in connection strings -In order to avoid accidental misconfiguration the driver will no longer prioritize the first instance of an option provided on the URI. Instead repeated options that are not permitted to be repeated will throw an error. +In order to avoid accidental misconfiguration, the driver will no longer prioritize the first instance of an option provided on the URI. Instead, repeated options that are not permitted to be repeated will throw an error. This change will ensure that connection strings that contain options like `tls=true&tls=false` are no longer ambiguous. diff --git a/etc/notes/editor.md b/etc/notes/editor.md index 3560a589f65..3ba866f3754 100644 --- a/etc/notes/editor.md +++ b/etc/notes/editor.md @@ -12,7 +12,7 @@ Here's a quick description of each: - `dbaeumer.vscode-eslint` - Runs ESLint automatically after file save, saves you the need to run the linter manually most of the time. - `hbenl.vscode-test-explorer` - Lets you navigate our tests and run them through button presses. - `hbenl.vscode-mocha-test-adapter` - The mocha specific module to the common extension mentioned above. -- `github.vscode-pull-request-github` - With this you can manage and make pull requests right from VSCode, even reviews can be done via the editor. +- `github.vscode-pull-request-github` - With this, you can manage and make pull requests right from VSCode, even reviews can be done via the editor. - `eamodio.gitlens` - Gives spectacular insight into git history, has many helpful git navigation UI features. - `mongodb.mongodb-vscode` - Our VScode extension can be connected to your locally running MongoDB instance (to help debug tests, etc.) - `rbuckton.deoptexplorer-vscode` - De-opt explorer can visualize the results of running Node.js profiling results on top of your source code (Run things with `dexnode` for easy use!) @@ -87,7 +87,7 @@ Here's a quick description of each: The two non-default things in the configuration are marked with `CHANGE THIS` comments. - You need to add `node_modules/node-addon-api` to the include path to find `napi.h`.
- - For libmongocrypt the path might be: `bindings/node/node_modules/node-addon-api` depending on your workspace root + - For libmongocrypt, the path might be: `bindings/node/node_modules/node-addon-api` depending on your workspace root - Bump up the cpp standard to whatever the relevant standard is In VSCode install `ms-vscode.cpptools` and in a `.vscode/c_cpp_properties.json` file add: diff --git a/etc/notes/native-extensions.md b/etc/notes/native-extensions.md index e3875e8da81..e3298946d0c 100644 --- a/etc/notes/native-extensions.md +++ b/etc/notes/native-extensions.md @@ -20,14 +20,14 @@ If all the steps complete, you have the right toolchain installed. If you get th npm install -g node-gyp ``` -If it correctly compiles and runs the tests you are golden. We can now try to install the `mongod` driver by performing the following command. +If it correctly compiles and runs the tests, you are golden. We can now try to install the `mongod` driver by performing the following command. ```bash cd yourproject npm install mongodb --save ``` -If it still fails the next step is to examine the npm log. Rerun the command but in this case in verbose mode. +If it still fails, the next step is to examine the npm log. Rerun the command but in this case in verbose mode. ```bash npm --loglevel verbose install mongodb