This commit is contained in:
PLBXNebulia-Formation 2025-11-21 09:23:11 +01:00
commit d1c8cae2c1
1417 changed files with 326736 additions and 0 deletions

201
node_modules/mongodb/LICENSE.md generated vendored Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright {yyyy} {name of copyright owner}
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

355
node_modules/mongodb/README.md generated vendored Normal file

@@ -0,0 +1,355 @@
# MongoDB Node.js Driver
The official [MongoDB](https://www.mongodb.com/) driver for Node.js.
**Upgrading to version 7? Take a look at our [upgrade guide here](https://github.com/mongodb/node-mongodb-native/blob/HEAD/etc/notes/CHANGES_7.0.0.md)!**
## Quick Links
| Site | Link |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------- |
| Documentation | [www.mongodb.com/docs/drivers/node](https://www.mongodb.com/docs/drivers/node) |
| API Docs | [mongodb.github.io/node-mongodb-native](https://mongodb.github.io/node-mongodb-native) |
| `npm` package | [www.npmjs.com/package/mongodb](https://www.npmjs.com/package/mongodb) |
| MongoDB | [www.mongodb.com](https://www.mongodb.com) |
| MongoDB University | [learn.mongodb.com](https://learn.mongodb.com/catalog?labels=%5B%22Language%22%5D&values=%5B%22Node.js%22%5D) |
| MongoDB Developer Center | [www.mongodb.com/developer](https://www.mongodb.com/developer/languages/javascript/) |
| Stack Overflow | [stackoverflow.com](https://stackoverflow.com/search?q=%28%5Btypescript%5D+or+%5Bjavascript%5D+or+%5Bnode.js%5D%29+and+%5Bmongodb%5D) |
| Source Code | [github.com/mongodb/node-mongodb-native](https://github.com/mongodb/node-mongodb-native) |
| Upgrade to v7 | [etc/notes/CHANGES_7.0.0.md](https://github.com/mongodb/node-mongodb-native/blob/HEAD/etc/notes/CHANGES_7.0.0.md) |
| Contributing | [CONTRIBUTING.md](https://github.com/mongodb/node-mongodb-native/blob/HEAD/CONTRIBUTING.md) |
| Changelog | [HISTORY.md](https://github.com/mongodb/node-mongodb-native/blob/HEAD/HISTORY.md) |
### Release Integrity
Releases are created automatically and signed using the [Node team's GPG key](https://pgp.mongodb.com/node-driver.asc). This applies to the git tag as well as all release packages provided as part of a GitHub release. To verify the provided packages, download the key and import it using gpg:
```shell
gpg --import node-driver.asc
```
The GitHub release contains a detached signature file for the NPM package (named
`mongodb-X.Y.Z.tgz.sig`).
The following command returns the download link for the npm package tarball:
```shell
npm view mongodb@vX.Y.Z dist.tarball
```
Using the result of the above command, a `curl` command can download the official npm package for the release.
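For example, assuming a POSIX shell and substituting the actual release version for `X.Y.Z`:
```shell
curl -O "$(npm view mongodb@vX.Y.Z dist.tarball)"
```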
To verify the integrity of the downloaded package, run the following command:
```shell
gpg --verify mongodb-X.Y.Z.tgz.sig mongodb-X.Y.Z.tgz
```
> [!NOTE]
> No verification is done when using npm to install the package. The contents of the GitHub tarball and npm's tarball are identical.
The MongoDB Node.js driver follows [semantic versioning](https://semver.org/) for its releases.
### Bugs / Feature Requests
Think you've found a bug? Want to see a new feature in `node-mongodb-native`? Please open a
case in our issue management tool, JIRA:
- Create an account and log in at [jira.mongodb.org](https://jira.mongodb.org).
- Navigate to the NODE project [jira.mongodb.org/browse/NODE](https://jira.mongodb.org/browse/NODE).
- Click **Create Issue** - Please provide as much information as possible about the issue type and how to reproduce it.
Bug reports in JIRA for all driver projects (i.e. NODE, PYTHON, CSHARP, JAVA) and the
Core Server (i.e. SERVER) project are **public**.
### Support / Feedback
For issues with, questions about, or feedback for the Node.js driver, please look into our [support channels](https://www.mongodb.com/docs/manual/support). Please do not email any of the driver developers directly with issues or questions - you're more likely to get an answer on the [MongoDB Community Forums](https://community.mongodb.com/tags/c/drivers-odms-connectors/7/node-js-driver).
### Change Log
Change history can be found in [`HISTORY.md`](https://github.com/mongodb/node-mongodb-native/blob/HEAD/HISTORY.md).
### Compatibility
The driver currently supports MongoDB 4.2+ servers.
For exhaustive server and runtime version compatibility matrices, please refer to the following links:
- [MongoDB](https://www.mongodb.com/docs/drivers/node/current/compatibility/#mongodb-compatibility)
- [Node.js](https://www.mongodb.com/docs/drivers/node/current/compatibility/#language-compatibility)
#### Component Support Matrix
The following table describes add-on component version compatibility for the Node.js driver. Only packages with versions in these supported ranges are stable when used in combination.
| Component | `mongodb@3.x` | `mongodb@4.x` | `mongodb@5.x` | `mongodb@<6.12` | `mongodb@>=6.12` | `mongodb@7.x` |
| ------------------------------------------------------------------------------------ | ------------------ | ------------------ | ------------------ | --------------- | ------------------ | ------------- |
| [bson](https://www.npmjs.com/package/bson) | ^1.0.0 | ^4.0.0 | ^5.0.0 | ^6.0.0 | ^6.0.0 | ^7.0.0 |
| [bson-ext](https://www.npmjs.com/package/bson-ext) | ^1.0.0 \|\| ^2.0.0 | ^4.0.0 | N/A | N/A | N/A | N/A |
| [kerberos](https://www.npmjs.com/package/kerberos) | ^1.0.0 | ^1.0.0 \|\| ^2.0.0 | ^1.0.0 \|\| ^2.0.0 | ^2.0.1 | ^2.0.1 | ^7.0.0 |
| [mongodb-client-encryption](https://www.npmjs.com/package/mongodb-client-encryption) | ^1.0.0 | ^1.0.0 \|\| ^2.0.0 | ^2.3.0 | ^6.0.0 | ^6.0.0 | ^7.0.0 |
| [mongodb-legacy](https://www.npmjs.com/package/mongodb-legacy) | N/A | ^4.0.0 | ^5.0.0 | ^6.0.0 | ^6.0.0 | N/A |
| [@mongodb-js/zstd](https://www.npmjs.com/package/@mongodb-js/zstd) | N/A | ^1.0.0 | ^1.0.0 | ^1.1.0 | ^1.1.0 \|\| ^2.0.0 | ^7.0.0 |
#### TypeScript Version
We recommend using the latest version of TypeScript; however, we currently ensure the driver's public types compile against `typescript@5.6.0`.
This is the lowest TypeScript version guaranteed to work with our driver: older versions may or may not work, so use them at your own risk.
Since TypeScript [does not restrict breaking changes to major versions](https://github.com/Microsoft/TypeScript/wiki/Breaking-Changes), we consider this support best effort.
If you run into any unexpected compiler failures against our supported TypeScript versions, please let us know by filing an issue on our [JIRA](https://jira.mongodb.org/browse/NODE).
Additionally, our TypeScript types are compatible with the ECMAScript standard for our minimum supported Node.js version. Currently, our TypeScript target is es2023.
## Installation
The recommended way to get started using the Node.js driver is by using `npm` (the Node Package Manager) to install the dependency in your project.
After you've created your own project using `npm init`, you can run:
```bash
npm install mongodb
```
This will download the MongoDB driver and add a dependency entry in your `package.json` file.
If you are a TypeScript user, you will need the Node.js type definitions to use the driver's definitions:
```sh
npm install -D @types/node
```
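Once installed, the driver's TypeScript definitions let you type a collection's documents. A minimal sketch (the `User` interface and collection name here are illustrative, not part of the driver):
```typescript
import { MongoClient } from 'mongodb';

interface User {
  name: string;
  age: number;
}

const client = new MongoClient('mongodb://localhost:27017');
// Operations on this collection are now checked against the User shape
const users = client.db('myProject').collection<User>('users');
await users.insertOne({ name: 'Ada', age: 36 });
await client.close();
```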
## Driver Extensions
The MongoDB driver can optionally be enhanced by the following feature packages:
Maintained by MongoDB:
- Zstd network compression - [@mongodb-js/zstd](https://github.com/mongodb-js/zstd)
- MongoDB field level and queryable encryption - [mongodb-client-encryption](https://github.com/mongodb/libmongocrypt#readme)
- GSSAPI / SSPI / Kerberos authentication - [kerberos](https://github.com/mongodb-js/kerberos)
Some of these packages include native C++ extensions.
Consult the [troubleshooting guide here](https://github.com/mongodb/node-mongodb-native/blob/HEAD/etc/notes/native-extensions.md) if you run into compilation issues.
Third party:
- Snappy network compression - [snappy](https://github.com/Brooooooklyn/snappy)
- AWS authentication - [@aws-sdk/credential-providers](https://github.com/aws/aws-sdk-js-v3/tree/main/packages/credential-providers)
## Quick Start
This guide will show you how to set up a simple application using Node.js and MongoDB. Its scope is limited to setting up the driver and performing simple CRUD operations. For more in-depth coverage, see the [official documentation](https://www.mongodb.com/docs/drivers/node/).
### Create the `package.json` file
First, create a directory where your application will live.
```bash
mkdir myProject
cd myProject
```
Enter the following command to create the initial structure for your new project (the `-y` flag accepts the default answers):
```bash
npm init -y
```
Next, install the driver as a dependency.
```bash
npm install mongodb
```
### Start a MongoDB Server
For complete MongoDB installation instructions, see [the manual](https://www.mongodb.com/docs/manual/installation/).
1. Download the appropriate MongoDB version from [MongoDB](https://www.mongodb.org/downloads)
2. Create a database directory (in this case under **/data**).
3. Install and start a `mongod` process.
```bash
mongod --dbpath=/data
```
You should see the **mongod** process start up and print some status information.
### Connect to MongoDB
Create a new **app.js** file and add the following code to try out some basic CRUD
operations using the MongoDB driver.
Add code to connect to the server and the database **myProject**:
> **NOTE:** Resolving DNS Connection issues
>
> Node.js 18 changed the default DNS resolution ordering from always prioritizing IPv4 to the ordering
> returned by the DNS provider. In some environments, this can result in `localhost` resolving to
> an IPv6 address instead of IPv4 and a consequent failure to connect to the server.
>
> This can be resolved by:
>
> - specifying the IP address family using the MongoClient `family` option (`new MongoClient(<uri>, { family: 4 })`)
> - launching mongod or mongos with the ipv6 flag enabled ([--ipv6 mongod option documentation](https://www.mongodb.com/docs/manual/reference/program/mongod/#std-option-mongod.--ipv6))
> - using a host of `127.0.0.1` in place of localhost
> - specifying the DNS resolution ordering with the `--dns-resolution-order` Node.js command line argument (e.g. `node --dns-resolution-order=ipv4first`)
```js
const { MongoClient } = require('mongodb');
// or as an es module:
// import { MongoClient } from 'mongodb'
// Connection URL
const url = 'mongodb://localhost:27017';
const client = new MongoClient(url);
// Database Name
const dbName = 'myProject';
async function main() {
  // Use connect method to connect to the server
  await client.connect();
  console.log('Connected successfully to server');
  const db = client.db(dbName);
  const collection = db.collection('documents');

  // the following code examples can be pasted here...

  return 'done.';
}

main()
  .then(console.log)
  .catch(console.error)
  .finally(() => client.close());
```
Run your app from the command line with:
```bash
node app.js
```
The application should print **Connected successfully to server** to the console.
### Insert a Document
Add the following snippet to **app.js**; it uses the **insertMany**
method to add three documents to the **documents** collection.
```js
const insertResult = await collection.insertMany([{ a: 1 }, { a: 2 }, { a: 3 }]);
console.log('Inserted documents =>', insertResult);
```
The **insertMany** command returns an object with information about the insert operations.
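For example, the result reports the number of inserted documents and maps each input index to its generated `_id` (output shown is illustrative):
```js
console.log('Inserted count =>', insertResult.insertedCount); // 3
console.log('Inserted ids =>', insertResult.insertedIds); // { '0': ObjectId(...), '1': ObjectId(...), '2': ObjectId(...) }
```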
### Find All Documents
Add a query that returns all the documents.
```js
const findResult = await collection.find({}).toArray();
console.log('Found documents =>', findResult);
```
This query returns all the documents in the **documents** collection.
If you add this below the insertMany example, you'll see the documents you've inserted.
### Find Documents with a Query Filter
Add a query filter to find only documents which meet the query criteria.
```js
const filteredDocs = await collection.find({ a: 3 }).toArray();
console.log('Found documents filtered by { a: 3 } =>', filteredDocs);
```
Only the documents that match `{ a: 3 }` should be returned.
### Update a Document
The following operation updates a document in the **documents** collection.
```js
const updateResult = await collection.updateOne({ a: 3 }, { $set: { b: 1 } });
console.log('Updated documents =>', updateResult);
```
The method updates the first document where the field **a** is equal to **3** by adding a new field **b**, set to **1**, to the document. `updateResult` contains information about whether a matching document was found and updated.
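For example, `matchedCount` and `modifiedCount` report what the operation did (values shown are illustrative):
```js
console.log('Matched =>', updateResult.matchedCount); // 1
console.log('Modified =>', updateResult.modifiedCount); // 1
```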
### Remove a Document
Remove the document where the field **a** is equal to **3**.
```js
const deleteResult = await collection.deleteMany({ a: 3 });
console.log('Deleted documents =>', deleteResult);
```
### Index a Collection
[Indexes](https://www.mongodb.com/docs/manual/indexes/) can improve your application's
performance. The following function creates an index on the **a** field in the
**documents** collection.
```js
const indexName = await collection.createIndex({ a: 1 });
console.log('index name =', indexName);
```
For more detailed information, see the [indexing strategies page](https://www.mongodb.com/docs/manual/applications/indexes/).
## Error Handling
If you need to filter certain errors from our driver, we have a helpful tree of errors described in [etc/notes/errors.md](https://github.com/mongodb/node-mongodb-native/blob/HEAD/etc/notes/errors.md).
It is our recommendation to use `instanceof` checks on errors and to avoid relying on parsing `error.message` and `error.name` strings in your code.
We guarantee `instanceof` checks will pass according to semver guidelines, but errors may be sub-classed or their messages may change at any time, even in patch releases, as we see fit to increase the helpfulness of the errors.
Any new errors we add to the driver will directly extend an existing error class and no existing error will be moved to a different parent class outside of a major release.
This means `instanceof` will always be able to accurately capture the errors that our driver throws.
```typescript
import { MongoClient, MongoServerError } from 'mongodb';

const client = new MongoClient(url);
await client.connect();
const collection = client.db().collection('collection');

try {
  await collection.insertOne({ _id: 1 });
  await collection.insertOne({ _id: 1 }); // duplicate key error
} catch (error) {
  if (error instanceof MongoServerError) {
    console.log(`Error worth logging: ${error}`); // special case for some reason
  }
  throw error; // still want to crash
}
```
## Nightly releases
If you need to test with a change from the latest `main` branch, our `mongodb` npm package has nightly versions released under the `nightly` tag.
```sh
npm install mongodb@nightly
```
Nightly versions are published regardless of testing outcome.
This means there could be semantic breakages or partially implemented features.
The nightly build is not suitable for production use.
## Next Steps
- [MongoDB Documentation](https://www.mongodb.com/docs/manual/)
- [MongoDB Node Driver Documentation](https://www.mongodb.com/docs/drivers/node/)
- [Read about Schemas](https://www.mongodb.com/docs/manual/core/data-modeling-introduction/)
- [Star us on GitHub](https://github.com/mongodb/node-mongodb-native)
## License
[Apache 2.0](LICENSE.md)
© 2012-present MongoDB [Contributors](https://github.com/mongodb/node-mongodb-native/blob/HEAD/CONTRIBUTORS.md) \
© 2009-2012 Christian Amor Kvalheim

12
node_modules/mongodb/etc/prepare.js generated vendored Executable file

@@ -0,0 +1,12 @@
#! /usr/bin/env node
var cp = require('child_process');
var fs = require('fs');
var os = require('os');
// When installed from a source checkout (a `src` directory is present),
// build the TypeScript declaration files; `shell` is needed for npm on Windows.
if (fs.existsSync('src')) {
  cp.spawn('npm', ['run', 'build:dts'], { stdio: 'inherit', shell: os.platform() === 'win32' });
} else {
  // Installed from the registry: the compiled `lib` directory must be present.
  if (!fs.existsSync('lib')) {
    console.warn('MongoDB: No compiled javascript present, the driver is not installed correctly.');
  }
}

136
node_modules/mongodb/lib/admin.js generated vendored Normal file

@@ -0,0 +1,136 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Admin = void 0;
const bson_1 = require("./bson");
const execute_operation_1 = require("./operations/execute_operation");
const list_databases_1 = require("./operations/list_databases");
const remove_user_1 = require("./operations/remove_user");
const run_command_1 = require("./operations/run_command");
const validate_collection_1 = require("./operations/validate_collection");
const utils_1 = require("./utils");
/**
* The **Admin** class is an internal class that allows convenient access to
* the admin functionality and commands for MongoDB.
*
* **Admin cannot be instantiated directly.**
* @public
*
* @example
* ```ts
* import { MongoClient } from 'mongodb';
*
* const client = new MongoClient('mongodb://localhost:27017');
* const admin = client.db().admin();
* const dbInfo = await admin.listDatabases();
* for (const db of dbInfo.databases) {
* console.log(db.name);
* }
* ```
*/
class Admin {
/**
* Create a new Admin instance
* @internal
*/
constructor(db) {
this.s = { db };
}
/**
* Execute a command
*
* The driver will ensure the following fields are attached to the command sent to the server:
* - `lsid` - sourced from an implicit session or options.session
* - `$readPreference` - defaults to primary or can be configured by options.readPreference
* - `$db` - sourced from the name of this database
*
* If the client has a serverApi setting:
* - `apiVersion`
* - `apiStrict`
* - `apiDeprecationErrors`
*
* When in a transaction:
* - `readConcern` - sourced from readConcern set on the TransactionOptions
* - `writeConcern` - sourced from writeConcern set on the TransactionOptions
*
* Attaching any of the above fields to the command will have no effect as the driver will overwrite the value.
*
* @param command - The command to execute
* @param options - Optional settings for the command
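*
* @example
* ```ts
* // A sketch (assumes a connected MongoClient): run the admin `hello` command
* const result = await client.db().admin().command({ hello: 1 });
* ```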
*/
async command(command, options) {
return await (0, execute_operation_1.executeOperation)(this.s.db.client, new run_command_1.RunCommandOperation(new utils_1.MongoDBNamespace('admin'), command, {
...(0, bson_1.resolveBSONOptions)(options),
session: options?.session,
readPreference: options?.readPreference,
timeoutMS: options?.timeoutMS ?? this.s.db.timeoutMS
}));
}
/**
* Retrieve the server build information
*
* @param options - Optional settings for the command
*/
async buildInfo(options) {
return await this.command({ buildinfo: 1 }, options);
}
/**
* Retrieve the server build information
*
* @param options - Optional settings for the command
*/
async serverInfo(options) {
return await this.command({ buildinfo: 1 }, options);
}
/**
* Retrieve this db's server status.
*
* @param options - Optional settings for the command
*/
async serverStatus(options) {
return await this.command({ serverStatus: 1 }, options);
}
/**
* Ping the MongoDB server and retrieve results
*
* @param options - Optional settings for the command
*/
async ping(options) {
return await this.command({ ping: 1 }, options);
}
/**
* Remove a user from a database
*
* @param username - The username to remove
* @param options - Optional settings for the command
*/
async removeUser(username, options) {
return await (0, execute_operation_1.executeOperation)(this.s.db.client, new remove_user_1.RemoveUserOperation(this.s.db, username, { dbName: 'admin', ...options }));
}
/**
* Validate an existing collection
*
* @param collectionName - The name of the collection to validate.
* @param options - Optional settings for the command
*/
async validateCollection(collectionName, options = {}) {
return await (0, execute_operation_1.executeOperation)(this.s.db.client, new validate_collection_1.ValidateCollectionOperation(this, collectionName, options));
}
/**
* List the available databases
*
* @param options - Optional settings for the command
*/
async listDatabases(options) {
return await (0, execute_operation_1.executeOperation)(this.s.db.client, new list_databases_1.ListDatabasesOperation(this.s.db, { timeoutMS: this.s.db.timeoutMS, ...options }));
}
/**
* Get ReplicaSet status
*
* @param options - Optional settings for the command
*/
async replSetGetStatus(options) {
return await this.command({ replSetGetStatus: 1 }, options);
}
}
exports.Admin = Admin;
//# sourceMappingURL=admin.js.map

1
node_modules/mongodb/lib/admin.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"admin.js","sourceRoot":"","sources":["../src/admin.ts"],"names":[],"mappings":";;;AAAA,iCAA2D;AAG3D,sEAAkE;AAClE,gEAIqC;AACrC,0DAAuF;AACvF,0DAAuF;AACvF,0EAG0C;AAC1C,mCAA2C;AAO3C;;;;;;;;;;;;;;;;;;GAkBG;AACH,MAAa,KAAK;IAIhB;;;OAGG;IACH,YAAY,EAAM;QAChB,IAAI,CAAC,CAAC,GAAG,EAAE,EAAE,EAAE,CAAC;IAClB,CAAC;IAED;;;;;;;;;;;;;;;;;;;;;OAqBG;IACH,KAAK,CAAC,OAAO,CAAC,OAAiB,EAAE,OAA2B;QAC1D,OAAO,MAAM,IAAA,oCAAgB,EAC3B,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,MAAM,EAChB,IAAI,iCAAmB,CAAC,IAAI,wBAAgB,CAAC,OAAO,CAAC,EAAE,OAAO,EAAE;YAC9D,GAAG,IAAA,yBAAkB,EAAC,OAAO,CAAC;YAC9B,OAAO,EAAE,OAAO,EAAE,OAAO;YACzB,cAAc,EAAE,OAAO,EAAE,cAAc;YACvC,SAAS,EAAE,OAAO,EAAE,SAAS,IAAI,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,SAAS;SACrD,CAAC,CACH,CAAC;IACJ,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,SAAS,CAAC,OAAiC;QAC/C,OAAO,MAAM,IAAI,CAAC,OAAO,CAAC,EAAE,SAAS,EAAE,CAAC,EAAE,EAAE,OAAO,CAAC,CAAC;IACvD,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,UAAU,CAAC,OAAiC;QAChD,OAAO,MAAM,IAAI,CAAC,OAAO,CAAC,EAAE,SAAS,EAAE,CAAC,EAAE,EAAE,OAAO,CAAC,CAAC;IACvD,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,YAAY,CAAC,OAAiC;QAClD,OAAO,MAAM,IAAI,CAAC,OAAO,CAAC,EAAE,YAAY,EAAE,CAAC,EAAE,EAAE,OAAO,CAAC,CAAC;IAC1D,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,IAAI,CAAC,OAAiC;QAC1C,OAAO,MAAM,IAAI,CAAC,OAAO,CAAC,EAAE,IAAI,EAAE,CAAC,EAAE,EAAE,OAAO,CAAC,CAAC;IAClD,CAAC;IAED;;;;;OAKG;IACH,KAAK,CAAC,UAAU,CAAC,QAAgB,EAAE,OAA2B;QAC5D,OAAO,MAAM,IAAA,oCAAgB,EAC3B,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,MAAM,EAChB,IAAI,iCAAmB,CAAC,IAAI,CAAC,CAAC,CAAC,EAAE,EAAE,QAAQ,EAAE,EAAE,MAAM,EAAE,OAAO,EAAE,GAAG,OAAO,EAAE,CAAC,CAC9E,CAAC;IACJ,CAAC;IAED;;;;;OAKG;IACH,KAAK,CAAC,kBAAkB,CACtB,cAAsB,EACtB,UAAqC,EAAE;QAEvC,OAAO,MAAM,IAAA,oCAAgB,EAC3B,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,MAAM,EAChB,IAAI,iDAA2B,CAAC,IAAI,EAAE,cAAc,EAAE,OAAO,CAAC,CAC/D,CAAC;IACJ,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,aAAa,CAAC,OAA8B;QAChD,OAAO,MAAM,IAAA,oCAAgB,EAC3B,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,MAAM,EAChB,IAAI,uCAAsB,CAAC,IAAI,CAAC,CAAC,CAAC,EAAE,EAAE,EAAE,SAAS,EAAE,IAAI,CAAC,CAAC,CAAC,EAAE,CAAC,SAAS,EAAE,GAAG,OAAO,EAAE,CAAC,CACtF,CAAC;IACJ,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,gBAAgB,CAAC,OAAiC;QACtD,OAAO,MAAM,IAAI,CAAC,OAAO,CAAC,EAAE,gBAAgB,EAAE,CAAC,EAAE,EAAE,OAAO,CAAC,CAAC;IAC9D,CAAC;CACF;AAnID,sBAmIC"}

84
node_modules/mongodb/lib/bson.js generated vendored Normal file

@@ -0,0 +1,84 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.toUTF8 = exports.getBigInt64LE = exports.getFloat64LE = exports.getInt32LE = exports.UUID = exports.Timestamp = exports.serialize = exports.ObjectId = exports.MinKey = exports.MaxKey = exports.Long = exports.Int32 = exports.EJSON = exports.Double = exports.deserialize = exports.Decimal128 = exports.DBRef = exports.Code = exports.calculateObjectSize = exports.BSONType = exports.BSONSymbol = exports.BSONRegExp = exports.BSONError = exports.BSON = exports.Binary = void 0;
exports.parseToElementsToArray = parseToElementsToArray;
exports.pluckBSONSerializeOptions = pluckBSONSerializeOptions;
exports.resolveBSONOptions = resolveBSONOptions;
exports.parseUtf8ValidationOption = parseUtf8ValidationOption;
/* eslint-disable no-restricted-imports */
const bson_1 = require("bson");
var bson_2 = require("bson");
Object.defineProperty(exports, "Binary", { enumerable: true, get: function () { return bson_2.Binary; } });
Object.defineProperty(exports, "BSON", { enumerable: true, get: function () { return bson_2.BSON; } });
Object.defineProperty(exports, "BSONError", { enumerable: true, get: function () { return bson_2.BSONError; } });
Object.defineProperty(exports, "BSONRegExp", { enumerable: true, get: function () { return bson_2.BSONRegExp; } });
Object.defineProperty(exports, "BSONSymbol", { enumerable: true, get: function () { return bson_2.BSONSymbol; } });
Object.defineProperty(exports, "BSONType", { enumerable: true, get: function () { return bson_2.BSONType; } });
Object.defineProperty(exports, "calculateObjectSize", { enumerable: true, get: function () { return bson_2.calculateObjectSize; } });
Object.defineProperty(exports, "Code", { enumerable: true, get: function () { return bson_2.Code; } });
Object.defineProperty(exports, "DBRef", { enumerable: true, get: function () { return bson_2.DBRef; } });
Object.defineProperty(exports, "Decimal128", { enumerable: true, get: function () { return bson_2.Decimal128; } });
Object.defineProperty(exports, "deserialize", { enumerable: true, get: function () { return bson_2.deserialize; } });
Object.defineProperty(exports, "Double", { enumerable: true, get: function () { return bson_2.Double; } });
Object.defineProperty(exports, "EJSON", { enumerable: true, get: function () { return bson_2.EJSON; } });
Object.defineProperty(exports, "Int32", { enumerable: true, get: function () { return bson_2.Int32; } });
Object.defineProperty(exports, "Long", { enumerable: true, get: function () { return bson_2.Long; } });
Object.defineProperty(exports, "MaxKey", { enumerable: true, get: function () { return bson_2.MaxKey; } });
Object.defineProperty(exports, "MinKey", { enumerable: true, get: function () { return bson_2.MinKey; } });
Object.defineProperty(exports, "ObjectId", { enumerable: true, get: function () { return bson_2.ObjectId; } });
Object.defineProperty(exports, "serialize", { enumerable: true, get: function () { return bson_2.serialize; } });
Object.defineProperty(exports, "Timestamp", { enumerable: true, get: function () { return bson_2.Timestamp; } });
Object.defineProperty(exports, "UUID", { enumerable: true, get: function () { return bson_2.UUID; } });
function parseToElementsToArray(bytes, offset) {
const res = bson_1.BSON.onDemand.parseToElements(bytes, offset);
return Array.isArray(res) ? res : [...res];
}
exports.getInt32LE = bson_1.BSON.onDemand.NumberUtils.getInt32LE;
exports.getFloat64LE = bson_1.BSON.onDemand.NumberUtils.getFloat64LE;
exports.getBigInt64LE = bson_1.BSON.onDemand.NumberUtils.getBigInt64LE;
exports.toUTF8 = bson_1.BSON.onDemand.ByteUtils.toUTF8;
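// Copies only the BSON serialization-related options off of a broader options object.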
function pluckBSONSerializeOptions(options) {
const { fieldsAsRaw, useBigInt64, promoteValues, promoteBuffers, promoteLongs, serializeFunctions, ignoreUndefined, bsonRegExp, raw, enableUtf8Validation } = options;
return {
fieldsAsRaw,
useBigInt64,
promoteValues,
promoteBuffers,
promoteLongs,
serializeFunctions,
ignoreUndefined,
bsonRegExp,
raw,
enableUtf8Validation
};
}
/**
* Merge the given BSONSerializeOptions, preferring options over the parent's options, and
* substituting defaults for values not set.
*
* @internal
*/
function resolveBSONOptions(options, parent) {
const parentOptions = parent?.bsonOptions;
return {
raw: options?.raw ?? parentOptions?.raw ?? false,
useBigInt64: options?.useBigInt64 ?? parentOptions?.useBigInt64 ?? false,
promoteLongs: options?.promoteLongs ?? parentOptions?.promoteLongs ?? true,
promoteValues: options?.promoteValues ?? parentOptions?.promoteValues ?? true,
promoteBuffers: options?.promoteBuffers ?? parentOptions?.promoteBuffers ?? false,
ignoreUndefined: options?.ignoreUndefined ?? parentOptions?.ignoreUndefined ?? false,
bsonRegExp: options?.bsonRegExp ?? parentOptions?.bsonRegExp ?? false,
serializeFunctions: options?.serializeFunctions ?? parentOptions?.serializeFunctions ?? false,
fieldsAsRaw: options?.fieldsAsRaw ?? parentOptions?.fieldsAsRaw ?? {},
enableUtf8Validation: options?.enableUtf8Validation ?? parentOptions?.enableUtf8Validation ?? true
};
}
/** @internal */
function parseUtf8ValidationOption(options) {
const enableUtf8Validation = options?.enableUtf8Validation;
if (enableUtf8Validation === false) {
return { utf8: false };
}
return { utf8: { writeErrors: false } };
}
//# sourceMappingURL=bson.js.map

1
node_modules/mongodb/lib/bson.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"bson.js","sourceRoot":"","sources":["../src/bson.ts"],"names":[],"mappings":";;;AAkCA,wDAGC;AAgDD,8DAyBC;AAQD,gDAkBC;AAGD,8DAQC;AAnJD,0CAA0C;AAC1C,+BAA4E;AAE5E,6BA0Bc;AAzBZ,8FAAA,MAAM,OAAA;AACN,4FAAA,IAAI,OAAA;AACJ,iGAAA,SAAS,OAAA;AACT,kGAAA,UAAU,OAAA;AACV,kGAAA,UAAU,OAAA;AACV,gGAAA,QAAQ,OAAA;AACR,2GAAA,mBAAmB,OAAA;AACnB,4FAAA,IAAI,OAAA;AACJ,6FAAA,KAAK,OAAA;AACL,kGAAA,UAAU,OAAA;AACV,mGAAA,WAAW,OAAA;AAGX,8FAAA,MAAM,OAAA;AACN,6FAAA,KAAK,OAAA;AAEL,6FAAA,KAAK,OAAA;AACL,4FAAA,IAAI,OAAA;AACJ,8FAAA,MAAM,OAAA;AACN,8FAAA,MAAM,OAAA;AACN,gGAAA,QAAQ,OAAA;AAER,iGAAA,SAAS,OAAA;AACT,iGAAA,SAAS,OAAA;AACT,4FAAA,IAAI,OAAA;AAMN,SAAgB,sBAAsB,CAAC,KAAiB,EAAE,MAAe;IACvE,MAAM,GAAG,GAAG,WAAI,CAAC,QAAQ,CAAC,eAAe,CAAC,KAAK,EAAE,MAAM,CAAC,CAAC;IACzD,OAAO,KAAK,CAAC,OAAO,CAAC,GAAG,CAAC,CAAC,CAAC,CAAC,GAAG,CAAC,CAAC,CAAC,CAAC,GAAG,GAAG,CAAC,CAAC;AAC7C,CAAC;AAEY,QAAA,UAAU,GAAG,WAAI,CAAC,QAAQ,CAAC,WAAW,CAAC,UAAU,CAAC;AAClD,QAAA,YAAY,GAAG,WAAI,CAAC,QAAQ,CAAC,WAAW,CAAC,YAAY,CAAC;AACtD,QAAA,aAAa,GAAG,WAAI,CAAC,QAAQ,CAAC,WAAW,CAAC,aAAa,CAAC;AACxD,QAAA,MAAM,GAAG,WAAI,CAAC,QAAQ,CAAC,SAAS,CAAC,MAAM,CAAC;AA2CrD,SAAgB,yBAAyB,CAAC,OAA6B;IACrE,MAAM,EACJ,WAAW,EACX,WAAW,EACX,aAAa,EACb,cAAc,EACd,YAAY,EACZ,kBAAkB,EAClB,eAAe,EACf,UAAU,EACV,GAAG,EACH,oBAAoB,EACrB,GAAG,OAAO,CAAC;IACZ,OAAO;QACL,WAAW;QACX,WAAW;QACX,aAAa;QACb,cAAc;QACd,YAAY;QACZ,kBAAkB;QAClB,eAAe;QACf,UAAU;QACV,GAAG;QACH,oBAAoB;KACrB,CAAC;AACJ,CAAC;AAED;;;;;GAKG;AACH,SAAgB,kBAAkB,CAChC,OAA8B,EAC9B,MAA+C;IAE/C,MAAM,aAAa,GAAG,MAAM,EAAE,WAAW,CAAC;IAC1C,OAAO;QACL,GAAG,EAAE,OAAO,EAAE,GAAG,IAAI,aAAa,EAAE,GAAG,IAAI,KAAK;QAChD,WAAW,EAAE,OAAO,EAAE,WAAW,IAAI,aAAa,EAAE,WAAW,IAAI,KAAK;QACxE,YAAY,EAAE,OAAO,EAAE,YAAY,IAAI,aAAa,EAAE,YAAY,IAAI,IAAI;QAC1E,aAAa,EAAE,OAAO,EAAE,aAAa,IAAI,aAAa,EAAE,aAAa,IAAI,IAAI;QAC7E,cAAc,EAAE,OAAO,EAAE,cAAc,IAAI,aAAa,EAAE,cAAc,IAAI,KAAK;QACjF,eAAe,EAAE,OAAO,EAAE,eAAe,IAAI,aAAa,EAAE,eAAe,IAAI,KAAK;QACpF,UAAU,EAAE,OAAO,EAAE,UAAU,IAAI,aAAa,EAAE,UAAU,IAAI,KAAK;QACrE,kBAAkB,EAAE,OAAO,EAAE,kBAAkB,IAAI,aAAa,EAAE,kBAAkB,IAAI,KAAK;QAC7F,WAAW,EAAE,OAAO,EAAE,WAAW,IAAI,aAAa,EAAE,WAAW,IAAI,EAAE;QACrE,oBAAoB,EAClB,OAAO,EAAE,oBAAoB,IAAI,aAAa,EAAE,oBAAoB,IAAI,IAAI;KAC/E,CAAC;AACJ,CAAC;AAED,gBAAgB;AAChB,SAAgB,yBAAyB,CAAC,OAA4C;IAGpF,MAAM,oBAAoB,GAAG,OAAO,EAAE,oBAAoB,CAAC;IAC3D,IAAI,oBAAoB,KAAK,KAAK,EAAE,CAAC;QACnC,OAAO,EAAE,IAAI,EAAE,KAAK,EAAE,CAAC;IACzB,CAAC;IACD,OAAO,EAAE,IAAI,EAAE,EAAE,WAAW,EAAE,KAAK,EAAE,EAAE,CAAC;AAC1C,CAAC"}

835
node_modules/mongodb/lib/bulk/common.js generated vendored Normal file

@ -0,0 +1,835 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.BulkOperationBase = exports.FindOperators = exports.MongoBulkWriteError = exports.WriteError = exports.WriteConcernError = exports.BulkWriteResult = exports.Batch = exports.BatchType = void 0;
exports.mergeBatchResults = mergeBatchResults;
const bson_1 = require("../bson");
const error_1 = require("../error");
const delete_1 = require("../operations/delete");
const execute_operation_1 = require("../operations/execute_operation");
const insert_1 = require("../operations/insert");
const update_1 = require("../operations/update");
const timeout_1 = require("../timeout");
const utils_1 = require("../utils");
const write_concern_1 = require("../write_concern");
/** @public */
exports.BatchType = Object.freeze({
INSERT: 1,
UPDATE: 2,
DELETE: 3
});
/**
* Keeps the state of an unordered batch so we can rewrite the results
* correctly after command execution
*
* @public
*/
class Batch {
constructor(batchType, originalZeroIndex) {
this.originalZeroIndex = originalZeroIndex;
this.currentIndex = 0;
this.originalIndexes = [];
this.batchType = batchType;
this.operations = [];
this.size = 0;
this.sizeBytes = 0;
}
}
exports.Batch = Batch;
/**
* @public
* The result of a bulk write.
*/
class BulkWriteResult {
static generateIdMap(ids) {
const idMap = {};
for (const doc of ids) {
idMap[doc.index] = doc._id;
}
return idMap;
}
/**
* Create a new BulkWriteResult instance
* @internal
*/
constructor(bulkResult, isOrdered) {
this.result = bulkResult;
this.insertedCount = this.result.nInserted ?? 0;
this.matchedCount = this.result.nMatched ?? 0;
this.modifiedCount = this.result.nModified ?? 0;
this.deletedCount = this.result.nRemoved ?? 0;
this.upsertedCount = this.result.upserted.length ?? 0;
this.upsertedIds = BulkWriteResult.generateIdMap(this.result.upserted);
this.insertedIds = BulkWriteResult.generateIdMap(this.getSuccessfullyInsertedIds(bulkResult, isOrdered));
Object.defineProperty(this, 'result', { value: this.result, enumerable: false });
}
/** Evaluates to true if the bulk operation correctly executes */
get ok() {
return this.result.ok;
}
/**
* Returns document_ids that were actually inserted
* @internal
*/
getSuccessfullyInsertedIds(bulkResult, isOrdered) {
if (bulkResult.writeErrors.length === 0)
return bulkResult.insertedIds;
if (isOrdered) {
return bulkResult.insertedIds.slice(0, bulkResult.writeErrors[0].index);
}
return bulkResult.insertedIds.filter(({ index }) => !bulkResult.writeErrors.some(writeError => index === writeError.index));
}
/** Returns the upserted id at the given index */
getUpsertedIdAt(index) {
return this.result.upserted[index];
}
/** Returns raw internal result */
getRawResponse() {
return this.result;
}
/** Returns true if the bulk operation contains a write error */
hasWriteErrors() {
return this.result.writeErrors.length > 0;
}
/** Returns the number of write errors from the bulk operation */
getWriteErrorCount() {
return this.result.writeErrors.length;
}
/** Returns a specific write error object */
getWriteErrorAt(index) {
return index < this.result.writeErrors.length ? this.result.writeErrors[index] : undefined;
}
/** Retrieve all write errors */
getWriteErrors() {
return this.result.writeErrors;
}
/** Retrieve the write concern error if one exists */
getWriteConcernError() {
if (this.result.writeConcernErrors.length === 0) {
return;
}
else if (this.result.writeConcernErrors.length === 1) {
// Return the error
return this.result.writeConcernErrors[0];
}
else {
// Combine the errors
let errmsg = '';
for (let i = 0; i < this.result.writeConcernErrors.length; i++) {
const err = this.result.writeConcernErrors[i];
errmsg = errmsg + err.errmsg;
// TODO: Something better
if (i === 0)
errmsg = errmsg + ' and ';
}
return new WriteConcernError({ errmsg, code: error_1.MONGODB_ERROR_CODES.WriteConcernTimeout });
}
}
toString() {
return `BulkWriteResult(${bson_1.EJSON.stringify(this.result)})`;
}
isOk() {
return this.result.ok === 1;
}
}
exports.BulkWriteResult = BulkWriteResult;
/**
* An error representing a failure by the server to apply the requested write concern to the bulk operation.
* @public
* @category Error
*/
class WriteConcernError {
constructor(error) {
this.serverError = error;
}
/** Write concern error code. */
get code() {
return this.serverError.code;
}
/** Write concern error message. */
get errmsg() {
return this.serverError.errmsg;
}
/** Write concern error info. */
get errInfo() {
return this.serverError.errInfo;
}
toJSON() {
return this.serverError;
}
toString() {
return `WriteConcernError(${this.errmsg})`;
}
}
exports.WriteConcernError = WriteConcernError;
/**
* An error that occurred during a BulkWrite on the server.
* @public
* @category Error
*/
class WriteError {
constructor(err) {
this.err = err;
}
/** WriteError code. */
get code() {
return this.err.code;
}
/** WriteError original bulk operation index. */
get index() {
return this.err.index;
}
/** WriteError message. */
get errmsg() {
return this.err.errmsg;
}
/** WriteError details. */
get errInfo() {
return this.err.errInfo;
}
/** Returns the underlying operation that caused the error */
getOperation() {
return this.err.op;
}
toJSON() {
return { code: this.err.code, index: this.err.index, errmsg: this.err.errmsg, op: this.err.op };
}
toString() {
return `WriteError(${JSON.stringify(this.toJSON())})`;
}
}
exports.WriteError = WriteError;
/** Merges results into shared data structure */
function mergeBatchResults(batch, bulkResult, err, result) {
// If we have an error set the result to be the err object
if (err) {
result = err;
}
else if (result && result.result) {
result = result.result;
}
if (result == null) {
return;
}
// If we have a top-level error, stop processing and return
if (result.ok === 0 && bulkResult.ok === 1) {
bulkResult.ok = 0;
const writeError = {
index: 0,
code: result.code || 0,
errmsg: result.message,
errInfo: result.errInfo,
op: batch.operations[0]
};
bulkResult.writeErrors.push(new WriteError(writeError));
return;
}
else if (result.ok === 0 && bulkResult.ok === 0) {
return;
}
// If we have an insert Batch type
if (isInsertBatch(batch) && result.n) {
bulkResult.nInserted = bulkResult.nInserted + result.n;
}
// If we have a delete Batch type
if (isDeleteBatch(batch) && result.n) {
bulkResult.nRemoved = bulkResult.nRemoved + result.n;
}
let nUpserted = 0;
// We have an array of upserted values, we need to rewrite the indexes
if (Array.isArray(result.upserted)) {
nUpserted = result.upserted.length;
for (let i = 0; i < result.upserted.length; i++) {
bulkResult.upserted.push({
index: result.upserted[i].index + batch.originalZeroIndex,
_id: result.upserted[i]._id
});
}
}
else if (result.upserted) {
nUpserted = 1;
bulkResult.upserted.push({
index: batch.originalZeroIndex,
_id: result.upserted
});
}
// If we have an update Batch type
if (isUpdateBatch(batch) && result.n) {
const nModified = result.nModified;
bulkResult.nUpserted = bulkResult.nUpserted + nUpserted;
bulkResult.nMatched = bulkResult.nMatched + (result.n - nUpserted);
if (typeof nModified === 'number') {
bulkResult.nModified = bulkResult.nModified + nModified;
}
else {
bulkResult.nModified = 0;
}
}
if (Array.isArray(result.writeErrors)) {
for (let i = 0; i < result.writeErrors.length; i++) {
const writeError = {
index: batch.originalIndexes[result.writeErrors[i].index],
code: result.writeErrors[i].code,
errmsg: result.writeErrors[i].errmsg,
errInfo: result.writeErrors[i].errInfo,
op: batch.operations[result.writeErrors[i].index]
};
bulkResult.writeErrors.push(new WriteError(writeError));
}
}
if (result.writeConcernError) {
bulkResult.writeConcernErrors.push(new WriteConcernError(result.writeConcernError));
}
}
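// Executes each accumulated batch in order as an insert/update/delete command,
// merging server results into the shared bulkResult; write concern failures and
// driver errors surface as a MongoBulkWriteError carrying the partial BulkWriteResult.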
async function executeCommands(bulkOperation, options) {
if (bulkOperation.s.batches.length === 0) {
return new BulkWriteResult(bulkOperation.s.bulkResult, bulkOperation.isOrdered);
}
for (const batch of bulkOperation.s.batches) {
const finalOptions = (0, utils_1.resolveOptions)(bulkOperation, {
...options,
ordered: bulkOperation.isOrdered
});
if (finalOptions.bypassDocumentValidation !== true) {
delete finalOptions.bypassDocumentValidation;
}
// Apply the bypassDocumentValidation option if set on the bulk operation
if (bulkOperation.s.bypassDocumentValidation === true) {
finalOptions.bypassDocumentValidation = true;
}
// Is the checkKeys option disabled
if (bulkOperation.s.checkKeys === false) {
finalOptions.checkKeys = false;
}
if (bulkOperation.retryWrites) {
if (isUpdateBatch(batch)) {
bulkOperation.retryWrites =
bulkOperation.retryWrites && !batch.operations.some(op => op.multi);
}
if (isDeleteBatch(batch)) {
bulkOperation.retryWrites =
bulkOperation.retryWrites && !batch.operations.some(op => op.limit === 0);
}
}
const operation = isInsertBatch(batch)
? new insert_1.InsertOperation(bulkOperation.s.namespace, batch.operations, finalOptions)
: isUpdateBatch(batch)
? new update_1.UpdateOperation(bulkOperation.s.namespace, batch.operations, finalOptions)
: isDeleteBatch(batch)
? new delete_1.DeleteOperation(bulkOperation.s.namespace, batch.operations, finalOptions)
: null;
if (operation == null)
throw new error_1.MongoRuntimeError(`Unknown batchType: ${batch.batchType}`);
let thrownError = null;
let result;
try {
result = await (0, execute_operation_1.executeOperation)(bulkOperation.s.collection.client, operation, finalOptions.timeoutContext);
}
catch (error) {
thrownError = error;
}
if (thrownError != null) {
if (thrownError instanceof error_1.MongoWriteConcernError) {
mergeBatchResults(batch, bulkOperation.s.bulkResult, thrownError, result);
const writeResult = new BulkWriteResult(bulkOperation.s.bulkResult, bulkOperation.isOrdered);
throw new MongoBulkWriteError({
message: thrownError.result.writeConcernError.errmsg,
code: thrownError.result.writeConcernError.code
}, writeResult);
}
else {
// Error is a driver-related error, not a bulk op error; return early
throw new MongoBulkWriteError(thrownError, new BulkWriteResult(bulkOperation.s.bulkResult, bulkOperation.isOrdered));
}
}
mergeBatchResults(batch, bulkOperation.s.bulkResult, thrownError, result);
const writeResult = new BulkWriteResult(bulkOperation.s.bulkResult, bulkOperation.isOrdered);
bulkOperation.handleWriteError(writeResult);
}
bulkOperation.s.batches.length = 0;
const writeResult = new BulkWriteResult(bulkOperation.s.bulkResult, bulkOperation.isOrdered);
bulkOperation.handleWriteError(writeResult);
return writeResult;
}
/**
* An error indicating an unsuccessful Bulk Write
* @public
* @category Error
*/
class MongoBulkWriteError extends error_1.MongoServerError {
/**
* **Do not use this constructor!**
*
* Meant for internal use only.
*
* @remarks
* This class is only meant to be constructed within the driver. This constructor is
* not subject to semantic versioning compatibility guarantees and may change at any time.
*
* @public
**/
constructor(error, result) {
super(error);
this.writeErrors = [];
if (error instanceof WriteConcernError)
this.err = error;
else if (!(error instanceof Error)) {
this.message = error.message;
this.code = error.code;
this.writeErrors = error.writeErrors ?? [];
}
this.result = result;
Object.assign(this, error);
}
get name() {
return 'MongoBulkWriteError';
}
/** Number of documents inserted. */
get insertedCount() {
return this.result.insertedCount;
}
/** Number of documents matched for update. */
get matchedCount() {
return this.result.matchedCount;
}
/** Number of documents modified. */
get modifiedCount() {
return this.result.modifiedCount;
}
/** Number of documents deleted. */
get deletedCount() {
return this.result.deletedCount;
}
/** Number of documents upserted. */
get upsertedCount() {
return this.result.upsertedCount;
}
/** Inserted document generated ids; the hash key is the index of the originating operation */
get insertedIds() {
return this.result.insertedIds;
}
/** Upserted document generated ids; the hash key is the index of the originating operation */
get upsertedIds() {
return this.result.upsertedIds;
}
}
exports.MongoBulkWriteError = MongoBulkWriteError;
/**
* A builder object that is returned from {@link BulkOperationBase#find}.
* Is used to build a write operation that involves a query filter.
*
* @public
*/
class FindOperators {
/**
* Creates a new FindOperators object.
* @internal
*/
constructor(bulkOperation) {
this.bulkOperation = bulkOperation;
}
/** Add a multiple update operation to the bulk operation */
update(updateDocument) {
const currentOp = buildCurrentOp(this.bulkOperation);
return this.bulkOperation.addToOperationsList(exports.BatchType.UPDATE, (0, update_1.makeUpdateStatement)(currentOp.selector, updateDocument, {
...currentOp,
multi: true
}));
}
/** Add a single update operation to the bulk operation */
updateOne(updateDocument) {
if (!(0, utils_1.hasAtomicOperators)(updateDocument, this.bulkOperation.bsonOptions)) {
throw new error_1.MongoInvalidArgumentError('Update document requires atomic operators');
}
const currentOp = buildCurrentOp(this.bulkOperation);
return this.bulkOperation.addToOperationsList(exports.BatchType.UPDATE, (0, update_1.makeUpdateStatement)(currentOp.selector, updateDocument, { ...currentOp, multi: false }));
}
/** Add a replace one operation to the bulk operation */
replaceOne(replacement) {
if ((0, utils_1.hasAtomicOperators)(replacement)) {
throw new error_1.MongoInvalidArgumentError('Replacement document must not use atomic operators');
}
const currentOp = buildCurrentOp(this.bulkOperation);
return this.bulkOperation.addToOperationsList(exports.BatchType.UPDATE, (0, update_1.makeUpdateStatement)(currentOp.selector, replacement, { ...currentOp, multi: false }));
}
/** Add a delete one operation to the bulk operation */
deleteOne() {
const currentOp = buildCurrentOp(this.bulkOperation);
return this.bulkOperation.addToOperationsList(exports.BatchType.DELETE, (0, delete_1.makeDeleteStatement)(currentOp.selector, { ...currentOp, limit: 1 }));
}
/** Add a delete many operation to the bulk operation */
delete() {
const currentOp = buildCurrentOp(this.bulkOperation);
return this.bulkOperation.addToOperationsList(exports.BatchType.DELETE, (0, delete_1.makeDeleteStatement)(currentOp.selector, { ...currentOp, limit: 0 }));
}
/** Upsert modifier for update bulk operation, noting that this operation is an upsert. */
upsert() {
if (!this.bulkOperation.s.currentOp) {
this.bulkOperation.s.currentOp = {};
}
this.bulkOperation.s.currentOp.upsert = true;
return this;
}
/** Specifies the collation for the query condition. */
collation(collation) {
if (!this.bulkOperation.s.currentOp) {
this.bulkOperation.s.currentOp = {};
}
this.bulkOperation.s.currentOp.collation = collation;
return this;
}
/** Specifies arrayFilters for UpdateOne or UpdateMany bulk operations. */
arrayFilters(arrayFilters) {
if (!this.bulkOperation.s.currentOp) {
this.bulkOperation.s.currentOp = {};
}
this.bulkOperation.s.currentOp.arrayFilters = arrayFilters;
return this;
}
/** Specifies hint for the bulk operation. */
hint(hint) {
if (!this.bulkOperation.s.currentOp) {
this.bulkOperation.s.currentOp = {};
}
this.bulkOperation.s.currentOp.hint = hint;
return this;
}
}
exports.FindOperators = FindOperators;
/** @public */
class BulkOperationBase {
/**
* Create a new OrderedBulkOperation or UnorderedBulkOperation instance
* @internal
*/
constructor(collection, options, isOrdered) {
this.collection = collection;
this.retryWrites = collection.db.options?.retryWrites;
// determine whether bulkOperation is ordered or unordered
this.isOrdered = isOrdered;
const topology = (0, utils_1.getTopology)(collection);
options = options == null ? {} : options;
// TODO Bring from driver information in hello
// Get the namespace for the write operations
const namespace = collection.s.namespace;
// Used to mark operation as executed
const executed = false;
// Current item
const currentOp = undefined;
// Set max byte size
const hello = topology.lastHello();
// If we have autoEncryption on, batch-splitting must be done on 2mb chunks, but single documents
// over 2mb are still allowed
const usingAutoEncryption = !!(topology.s.options && topology.s.options.autoEncrypter);
const maxBsonObjectSize = hello && hello.maxBsonObjectSize ? hello.maxBsonObjectSize : 1024 * 1024 * 16;
const maxBatchSizeBytes = usingAutoEncryption ? 1024 * 1024 * 2 : maxBsonObjectSize;
const maxWriteBatchSize = hello && hello.maxWriteBatchSize ? hello.maxWriteBatchSize : 1000;
        // Calculates the largest possible size of an Array key, represented as a BSON string
        // element. The calculation sums:
        //   1 byte for the BSON type
        //   + the length of the string representation of (maxWriteBatchSize - 1)
        //   + 1 byte for the null terminator
const maxKeySize = (maxWriteBatchSize - 1).toString(10).length + 2;
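        // Worked example: with the default maxWriteBatchSize of 1000 the largest
        // index is "999" (3 characters), so maxKeySize = 3 + 2 = 5 bytes per op.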
// Final results
const bulkResult = {
ok: 1,
writeErrors: [],
writeConcernErrors: [],
insertedIds: [],
nInserted: 0,
nUpserted: 0,
nMatched: 0,
nModified: 0,
nRemoved: 0,
upserted: []
};
// Internal state
this.s = {
// Final result
bulkResult,
// Current batch state
currentBatch: undefined,
currentIndex: 0,
// ordered specific
currentBatchSize: 0,
currentBatchSizeBytes: 0,
// unordered specific
currentInsertBatch: undefined,
currentUpdateBatch: undefined,
currentRemoveBatch: undefined,
batches: [],
// Write concern
writeConcern: write_concern_1.WriteConcern.fromOptions(options),
// Max batch size options
maxBsonObjectSize,
maxBatchSizeBytes,
maxWriteBatchSize,
maxKeySize,
// Namespace
namespace,
// Topology
topology,
// Options
options: options,
// BSON options
bsonOptions: (0, bson_1.resolveBSONOptions)(options),
// Current operation
currentOp,
// Executed
executed,
// Collection
collection,
// Fundamental error
err: undefined,
// check keys
checkKeys: typeof options.checkKeys === 'boolean' ? options.checkKeys : false
};
// bypass Validation
if (options.bypassDocumentValidation === true) {
this.s.bypassDocumentValidation = true;
}
}
/**
* Add a single insert document to the bulk operation
*
* @example
* ```ts
* const bulkOp = collection.initializeOrderedBulkOp();
*
* // Adds three inserts to the bulkOp.
* bulkOp
* .insert({ a: 1 })
* .insert({ b: 2 })
* .insert({ c: 3 });
* await bulkOp.execute();
* ```
*/
insert(document) {
(0, utils_1.maybeAddIdToDocuments)(this.collection, document, {
forceServerObjectId: this.shouldForceServerObjectId()
});
return this.addToOperationsList(exports.BatchType.INSERT, document);
}
/**
* Builds a find operation for an update/updateOne/delete/deleteOne/replaceOne.
* Returns a builder object used to complete the definition of the operation.
*
* @example
* ```ts
* const bulkOp = collection.initializeOrderedBulkOp();
*
* // Add an updateOne to the bulkOp
* bulkOp.find({ a: 1 }).updateOne({ $set: { b: 2 } });
*
* // Add an updateMany to the bulkOp
* bulkOp.find({ c: 3 }).update({ $set: { d: 4 } });
*
* // Add an upsert
* bulkOp.find({ e: 5 }).upsert().updateOne({ $set: { f: 6 } });
*
* // Add a deletion
* bulkOp.find({ g: 7 }).deleteOne();
*
* // Add a multi deletion
* bulkOp.find({ h: 8 }).delete();
*
* // Add a replaceOne
     * bulkOp.find({ i: 9 }).replaceOne({ i: 9, j: 10 });
*
     * // Update using a pipeline (requires MongoDB 4.2 or higher)
     * bulkOp.find({ k: 11, y: { $exists: true }, z: { $exists: true } }).updateOne([
* { $set: { total: { $sum: [ '$y', '$z' ] } } }
* ]);
*
* // All of the ops will now be executed
* await bulkOp.execute();
* ```
*/
find(selector) {
if (!selector) {
throw new error_1.MongoInvalidArgumentError('Bulk find operation must specify a selector');
}
// Save a current selector
this.s.currentOp = {
selector: selector
};
return new FindOperators(this);
}
/** Specifies a raw operation to perform in the bulk write. */
raw(op) {
if (op == null || typeof op !== 'object') {
throw new error_1.MongoInvalidArgumentError('Operation must be an object with an operation key');
}
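        // Each branch below converts a CRUD-style description such as
        // { updateOne: { filter, update } } into a wire-format statement.
        // Server wire-format documents (carrying a `q` field) are rejected,
        // so the two shapes cannot be mixed.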
if ('insertOne' in op) {
const forceServerObjectId = this.shouldForceServerObjectId();
const document = op.insertOne && op.insertOne.document == null
            ? // TODO(NODE-6003): remove support for omitting the `document` subdocument in bulk inserts
op.insertOne
: op.insertOne.document;
(0, utils_1.maybeAddIdToDocuments)(this.collection, document, { forceServerObjectId });
return this.addToOperationsList(exports.BatchType.INSERT, document);
}
if ('replaceOne' in op || 'updateOne' in op || 'updateMany' in op) {
if ('replaceOne' in op) {
if ('q' in op.replaceOne) {
throw new error_1.MongoInvalidArgumentError('Raw operations are not allowed');
}
const updateStatement = (0, update_1.makeUpdateStatement)(op.replaceOne.filter, op.replaceOne.replacement, { ...op.replaceOne, multi: false });
if ((0, utils_1.hasAtomicOperators)(updateStatement.u)) {
throw new error_1.MongoInvalidArgumentError('Replacement document must not use atomic operators');
}
return this.addToOperationsList(exports.BatchType.UPDATE, updateStatement);
}
if ('updateOne' in op) {
if ('q' in op.updateOne) {
throw new error_1.MongoInvalidArgumentError('Raw operations are not allowed');
}
const updateStatement = (0, update_1.makeUpdateStatement)(op.updateOne.filter, op.updateOne.update, {
...op.updateOne,
multi: false
});
if (!(0, utils_1.hasAtomicOperators)(updateStatement.u, this.bsonOptions)) {
throw new error_1.MongoInvalidArgumentError('Update document requires atomic operators');
}
return this.addToOperationsList(exports.BatchType.UPDATE, updateStatement);
}
if ('updateMany' in op) {
if ('q' in op.updateMany) {
throw new error_1.MongoInvalidArgumentError('Raw operations are not allowed');
}
const updateStatement = (0, update_1.makeUpdateStatement)(op.updateMany.filter, op.updateMany.update, {
...op.updateMany,
multi: true
});
if (!(0, utils_1.hasAtomicOperators)(updateStatement.u, this.bsonOptions)) {
throw new error_1.MongoInvalidArgumentError('Update document requires atomic operators');
}
return this.addToOperationsList(exports.BatchType.UPDATE, updateStatement);
}
}
if ('deleteOne' in op) {
if ('q' in op.deleteOne) {
throw new error_1.MongoInvalidArgumentError('Raw operations are not allowed');
}
return this.addToOperationsList(exports.BatchType.DELETE, (0, delete_1.makeDeleteStatement)(op.deleteOne.filter, { ...op.deleteOne, limit: 1 }));
}
if ('deleteMany' in op) {
if ('q' in op.deleteMany) {
throw new error_1.MongoInvalidArgumentError('Raw operations are not allowed');
}
return this.addToOperationsList(exports.BatchType.DELETE, (0, delete_1.makeDeleteStatement)(op.deleteMany.filter, { ...op.deleteMany, limit: 0 }));
}
// otherwise an unknown operation was provided
throw new error_1.MongoInvalidArgumentError('bulkWrite only supports insertOne, updateOne, updateMany, deleteOne, deleteMany');
}
get length() {
return this.s.currentIndex;
}
get bsonOptions() {
return this.s.bsonOptions;
}
get writeConcern() {
return this.s.writeConcern;
}
get batches() {
const batches = [...this.s.batches];
if (this.isOrdered) {
if (this.s.currentBatch)
batches.push(this.s.currentBatch);
}
else {
if (this.s.currentInsertBatch)
batches.push(this.s.currentInsertBatch);
if (this.s.currentUpdateBatch)
batches.push(this.s.currentUpdateBatch);
if (this.s.currentRemoveBatch)
batches.push(this.s.currentRemoveBatch);
}
return batches;
}
async execute(options = {}) {
if (this.s.executed) {
throw new error_1.MongoBatchReExecutionError();
}
const writeConcern = write_concern_1.WriteConcern.fromOptions(options);
if (writeConcern) {
this.s.writeConcern = writeConcern;
}
// If we have current batch
if (this.isOrdered) {
if (this.s.currentBatch)
this.s.batches.push(this.s.currentBatch);
}
else {
if (this.s.currentInsertBatch)
this.s.batches.push(this.s.currentInsertBatch);
if (this.s.currentUpdateBatch)
this.s.batches.push(this.s.currentUpdateBatch);
if (this.s.currentRemoveBatch)
this.s.batches.push(this.s.currentRemoveBatch);
}
// If we have no operations in the bulk raise an error
if (this.s.batches.length === 0) {
throw new error_1.MongoInvalidArgumentError('Invalid BulkOperation, Batch cannot be empty');
}
this.s.executed = true;
const finalOptions = (0, utils_1.resolveOptions)(this.collection, { ...this.s.options, ...options });
// if there is no timeoutContext provided, create a timeoutContext and use it for
// all batches in the bulk operation
finalOptions.timeoutContext ??= timeout_1.TimeoutContext.create({
session: finalOptions.session,
timeoutMS: finalOptions.timeoutMS,
serverSelectionTimeoutMS: this.collection.client.s.options.serverSelectionTimeoutMS,
waitQueueTimeoutMS: this.collection.client.s.options.waitQueueTimeoutMS
});
if (finalOptions.session == null) {
// if there is not an explicit session provided to `execute()`, create
// an implicit session and use that for all batches in the bulk operation
return await this.collection.client.withSession({ explicit: false }, async (session) => {
return await executeCommands(this, { ...finalOptions, session });
});
}
return await executeCommands(this, { ...finalOptions });
}
/**
* Handles the write error before executing commands
* @internal
*/
handleWriteError(writeResult) {
if (this.s.bulkResult.writeErrors.length > 0) {
const msg = this.s.bulkResult.writeErrors[0].errmsg
? this.s.bulkResult.writeErrors[0].errmsg
: 'write operation failed';
throw new MongoBulkWriteError({
message: msg,
code: this.s.bulkResult.writeErrors[0].code,
writeErrors: this.s.bulkResult.writeErrors
}, writeResult);
}
const writeConcernError = writeResult.getWriteConcernError();
if (writeConcernError) {
throw new MongoBulkWriteError(writeConcernError, writeResult);
}
}
shouldForceServerObjectId() {
return (this.s.options.forceServerObjectId === true ||
this.s.collection.db.options?.forceServerObjectId === true);
}
}
exports.BulkOperationBase = BulkOperationBase;
function isInsertBatch(batch) {
return batch.batchType === exports.BatchType.INSERT;
}
function isUpdateBatch(batch) {
return batch.batchType === exports.BatchType.UPDATE;
}
function isDeleteBatch(batch) {
return batch.batchType === exports.BatchType.DELETE;
}
function buildCurrentOp(bulkOp) {
let { currentOp } = bulkOp.s;
bulkOp.s.currentOp = undefined;
if (!currentOp)
currentOp = {};
return currentOp;
}
//# sourceMappingURL=common.js.map
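A minimal usage sketch for the FindOperators builder defined above, assuming a connected `collection` whose documents carry a `comments` array; all field names are illustrative:

const bulkOp = collection.initializeUnorderedBulkOp();

// Modifiers are chained onto the pending op before the terminal call.
bulkOp
  .find({ status: 'active' })
  .collation({ locale: 'en', strength: 2 })
  .arrayFilters([{ 'elem.flagged': true }])
  .updateOne({ $set: { 'comments.$[elem].hidden': true } });

// upsert() marks the pending op; with no match it becomes an insert.
bulkOp
  .find({ _id: 42 })
  .upsert()
  .updateOne({ $setOnInsert: { createdAt: new Date() } });

await bulkOp.execute();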

1
node_modules/mongodb/lib/bulk/common.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

67
node_modules/mongodb/lib/bulk/ordered.js generated vendored Normal file
View file

@ -0,0 +1,67 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.OrderedBulkOperation = void 0;
const BSON = require("../bson");
const error_1 = require("../error");
const common_1 = require("./common");
/** @public */
class OrderedBulkOperation extends common_1.BulkOperationBase {
/** @internal */
constructor(collection, options) {
super(collection, options, true);
}
addToOperationsList(batchType, document) {
// Get the bsonSize
const bsonSize = BSON.calculateObjectSize(document, {
checkKeys: false,
// Since we don't know what the user selected for BSON options here,
// err on the safe side, and check the size with ignoreUndefined: false.
ignoreUndefined: false
});
// Throw error if the doc is bigger than the max BSON size
if (bsonSize >= this.s.maxBsonObjectSize)
// TODO(NODE-3483): Change this to MongoBSONError
throw new error_1.MongoInvalidArgumentError(`Document is larger than the maximum size ${this.s.maxBsonObjectSize}`);
// Create a new batch object if we don't have a current one
if (this.s.currentBatch == null) {
this.s.currentBatch = new common_1.Batch(batchType, this.s.currentIndex);
}
const maxKeySize = this.s.maxKeySize;
// Check if we need to create a new batch
if (
// New batch if we exceed the max batch op size
this.s.currentBatchSize + 1 >= this.s.maxWriteBatchSize ||
// New batch if we exceed the maxBatchSizeBytes. Only matters if batch already has a doc,
        // since we can't send an empty batch
(this.s.currentBatchSize > 0 &&
this.s.currentBatchSizeBytes + maxKeySize + bsonSize >= this.s.maxBatchSizeBytes) ||
// New batch if the new op does not have the same op type as the current batch
this.s.currentBatch.batchType !== batchType) {
// Save the batch to the execution stack
this.s.batches.push(this.s.currentBatch);
// Create a new batch
this.s.currentBatch = new common_1.Batch(batchType, this.s.currentIndex);
// Reset the current size trackers
this.s.currentBatchSize = 0;
this.s.currentBatchSizeBytes = 0;
}
if (batchType === common_1.BatchType.INSERT) {
this.s.bulkResult.insertedIds.push({
index: this.s.currentIndex,
_id: document._id
});
}
// We have an array of documents
if (Array.isArray(document)) {
throw new error_1.MongoInvalidArgumentError('Operation passed in cannot be an Array');
}
this.s.currentBatch.originalIndexes.push(this.s.currentIndex);
this.s.currentBatch.operations.push(document);
this.s.currentBatchSize += 1;
this.s.currentBatchSizeBytes += maxKeySize + bsonSize;
this.s.currentIndex += 1;
return this;
}
}
exports.OrderedBulkOperation = OrderedBulkOperation;
//# sourceMappingURL=ordered.js.map
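A short sketch of the batch-splitting rule above: an ordered bulk must preserve execution order, so changing the operation type always starts a new batch. Assumes a connected `collection`:

const bulkOp = collection.initializeOrderedBulkOp();
bulkOp.insert({ a: 1 });
bulkOp.find({ a: 1 }).updateOne({ $set: { b: 2 } });
bulkOp.insert({ a: 3 });

// Three batches (INSERT, UPDATE, INSERT): the second insert cannot join the
// first batch without reordering the writes.
console.log(bulkOp.batches.length); // 3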

1
node_modules/mongodb/lib/bulk/ordered.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"ordered.js","sourceRoot":"","sources":["../../src/bulk/ordered.ts"],"names":[],"mappings":";;;AACA,gCAAgC;AAEhC,oCAAqD;AAGrD,qCAAsF;AAEtF,cAAc;AACd,MAAa,oBAAqB,SAAQ,0BAAiB;IACzD,gBAAgB;IAChB,YAAY,UAAsB,EAAE,OAAyB;QAC3D,KAAK,CAAC,UAAU,EAAE,OAAO,EAAE,IAAI,CAAC,CAAC;IACnC,CAAC;IAED,mBAAmB,CACjB,SAAoB,EACpB,QAAsD;QAEtD,mBAAmB;QACnB,MAAM,QAAQ,GAAG,IAAI,CAAC,mBAAmB,CAAC,QAAQ,EAAE;YAClD,SAAS,EAAE,KAAK;YAChB,oEAAoE;YACpE,wEAAwE;YACxE,eAAe,EAAE,KAAK;SAChB,CAAC,CAAC;QAEV,0DAA0D;QAC1D,IAAI,QAAQ,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB;YACtC,iDAAiD;YACjD,MAAM,IAAI,iCAAyB,CACjC,4CAA4C,IAAI,CAAC,CAAC,CAAC,iBAAiB,EAAE,CACvE,CAAC;QAEJ,2DAA2D;QAC3D,IAAI,IAAI,CAAC,CAAC,CAAC,YAAY,IAAI,IAAI,EAAE,CAAC;YAChC,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,cAAK,CAAC,SAAS,EAAE,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;QAClE,CAAC;QAED,MAAM,UAAU,GAAG,IAAI,CAAC,CAAC,CAAC,UAAU,CAAC;QAErC,yCAAyC;QACzC;QACE,+CAA+C;QAC/C,IAAI,CAAC,CAAC,CAAC,gBAAgB,GAAG,CAAC,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB;YACvD,yFAAyF;YACzF,qCAAqC;YACrC,CAAC,IAAI,CAAC,CAAC,CAAC,gBAAgB,GAAG,CAAC;gBAC1B,IAAI,CAAC,CAAC,CAAC,qBAAqB,GAAG,UAAU,GAAG,QAAQ,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB,CAAC;YACnF,8EAA8E;YAC9E,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,SAAS,KAAK,SAAS,EAC3C,CAAC;YACD,wCAAwC;YACxC,IAAI,CAAC,CAAC,CAAC,OAAO,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;YAEzC,qBAAqB;YACrB,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,cAAK,CAAC,SAAS,EAAE,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;YAEhE,kCAAkC;YAClC,IAAI,CAAC,CAAC,CAAC,gBAAgB,GAAG,CAAC,CAAC;YAC5B,IAAI,CAAC,CAAC,CAAC,qBAAqB,GAAG,CAAC,CAAC;QACnC,CAAC;QAED,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE,CAAC;YACnC,IAAI,CAAC,CAAC,CAAC,UAAU,CAAC,WAAW,CAAC,IAAI,CAAC;gBACjC,KAAK,EAAE,IAAI,CAAC,CAAC,CAAC,YAAY;gBAC1B,GAAG,EAAG,QAAqB,CAAC,GAAG;aAChC,CAAC,CAAC;QACL,CAAC;QAED,gCAAgC;QAChC,IAAI,KAAK,CAAC,OAAO,CAAC,QAAQ,CAAC,EAAE,CAAC;YAC5B,MAAM,IAAI,iCAAyB,CAAC,wCAAwC,CAAC,CAAC;QAChF,CAAC;QAED,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,eAAe,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;QAC9D,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,UAAU,CAAC,IAAI,CAAC,QAAQ,CAAC,CAAC;QAC9C,IAAI,CAAC,CAAC,CAAC,gBAAgB,IAAI,CAAC,CAAC;QAC7B,IAAI,CAAC,CAAC,CAAC,qBAAqB,IAAI,UAAU,GAAG,QAAQ,CAAC;QACtD,IAAI,CAAC,CAAC,CAAC,YAAY,IAAI,CAAC,CAAC;QACzB,OAAO,IAAI,CAAC;IACd,CAAC;CACF;AAzED,oDAyEC"}

92
node_modules/mongodb/lib/bulk/unordered.js generated vendored Normal file
View file

@ -0,0 +1,92 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.UnorderedBulkOperation = void 0;
const BSON = require("../bson");
const error_1 = require("../error");
const common_1 = require("./common");
/** @public */
class UnorderedBulkOperation extends common_1.BulkOperationBase {
/** @internal */
constructor(collection, options) {
super(collection, options, false);
}
handleWriteError(writeResult) {
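        // An unordered bulk keeps executing after individual failures, so write
        // errors are suppressed while batches are still pending; the base class
        // check runs once no batches remain.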
if (this.s.batches.length) {
return;
}
return super.handleWriteError(writeResult);
}
addToOperationsList(batchType, document) {
// Get the bsonSize
const bsonSize = BSON.calculateObjectSize(document, {
checkKeys: false,
// Since we don't know what the user selected for BSON options here,
// err on the safe side, and check the size with ignoreUndefined: false.
ignoreUndefined: false
});
// Throw error if the doc is bigger than the max BSON size
if (bsonSize >= this.s.maxBsonObjectSize) {
// TODO(NODE-3483): Change this to MongoBSONError
throw new error_1.MongoInvalidArgumentError(`Document is larger than the maximum size ${this.s.maxBsonObjectSize}`);
}
// Holds the current batch
this.s.currentBatch = undefined;
// Get the right type of batch
if (batchType === common_1.BatchType.INSERT) {
this.s.currentBatch = this.s.currentInsertBatch;
}
else if (batchType === common_1.BatchType.UPDATE) {
this.s.currentBatch = this.s.currentUpdateBatch;
}
else if (batchType === common_1.BatchType.DELETE) {
this.s.currentBatch = this.s.currentRemoveBatch;
}
const maxKeySize = this.s.maxKeySize;
// Create a new batch object if we don't have a current one
if (this.s.currentBatch == null) {
this.s.currentBatch = new common_1.Batch(batchType, this.s.currentIndex);
}
// Check if we need to create a new batch
if (
// New batch if we exceed the max batch op size
this.s.currentBatch.size + 1 >= this.s.maxWriteBatchSize ||
// New batch if we exceed the maxBatchSizeBytes. Only matters if batch already has a doc,
        // since we can't send an empty batch
(this.s.currentBatch.size > 0 &&
this.s.currentBatch.sizeBytes + maxKeySize + bsonSize >= this.s.maxBatchSizeBytes) ||
// New batch if the new op does not have the same op type as the current batch
this.s.currentBatch.batchType !== batchType) {
// Save the batch to the execution stack
this.s.batches.push(this.s.currentBatch);
// Create a new batch
this.s.currentBatch = new common_1.Batch(batchType, this.s.currentIndex);
}
// We have an array of documents
if (Array.isArray(document)) {
throw new error_1.MongoInvalidArgumentError('Operation passed in cannot be an Array');
}
this.s.currentBatch.operations.push(document);
this.s.currentBatch.originalIndexes.push(this.s.currentIndex);
this.s.currentIndex = this.s.currentIndex + 1;
// Save back the current Batch to the right type
if (batchType === common_1.BatchType.INSERT) {
this.s.currentInsertBatch = this.s.currentBatch;
this.s.bulkResult.insertedIds.push({
index: this.s.bulkResult.insertedIds.length,
_id: document._id
});
}
else if (batchType === common_1.BatchType.UPDATE) {
this.s.currentUpdateBatch = this.s.currentBatch;
}
else if (batchType === common_1.BatchType.DELETE) {
this.s.currentRemoveBatch = this.s.currentBatch;
}
// Update current batch size
this.s.currentBatch.size += 1;
this.s.currentBatch.sizeBytes += maxKeySize + bsonSize;
return this;
}
}
exports.UnorderedBulkOperation = UnorderedBulkOperation;
//# sourceMappingURL=unordered.js.map
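The same three operations as in the ordered sketch above collapse into two batches here, because an unordered bulk keeps one running batch per operation type rather than per position:

const bulkOp = collection.initializeUnorderedBulkOp();
bulkOp.insert({ a: 1 });
bulkOp.find({ a: 1 }).updateOne({ $set: { b: 2 } });
bulkOp.insert({ a: 3 });

// Two batches: one INSERT batch holding both inserts, one UPDATE batch.
console.log(bulkOp.batches.length); // 2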

1
node_modules/mongodb/lib/bulk/unordered.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"unordered.js","sourceRoot":"","sources":["../../src/bulk/unordered.ts"],"names":[],"mappings":";;;AACA,gCAAgC;AAEhC,oCAAqD;AAGrD,qCAMkB;AAElB,cAAc;AACd,MAAa,sBAAuB,SAAQ,0BAAiB;IAC3D,gBAAgB;IAChB,YAAY,UAAsB,EAAE,OAAyB;QAC3D,KAAK,CAAC,UAAU,EAAE,OAAO,EAAE,KAAK,CAAC,CAAC;IACpC,CAAC;IAEQ,gBAAgB,CAAC,WAA4B;QACpD,IAAI,IAAI,CAAC,CAAC,CAAC,OAAO,CAAC,MAAM,EAAE,CAAC;YAC1B,OAAO;QACT,CAAC;QAED,OAAO,KAAK,CAAC,gBAAgB,CAAC,WAAW,CAAC,CAAC;IAC7C,CAAC;IAED,mBAAmB,CACjB,SAAoB,EACpB,QAAsD;QAEtD,mBAAmB;QACnB,MAAM,QAAQ,GAAG,IAAI,CAAC,mBAAmB,CAAC,QAAQ,EAAE;YAClD,SAAS,EAAE,KAAK;YAEhB,oEAAoE;YACpE,wEAAwE;YACxE,eAAe,EAAE,KAAK;SAChB,CAAC,CAAC;QAEV,0DAA0D;QAC1D,IAAI,QAAQ,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB,EAAE,CAAC;YACzC,iDAAiD;YACjD,MAAM,IAAI,iCAAyB,CACjC,4CAA4C,IAAI,CAAC,CAAC,CAAC,iBAAiB,EAAE,CACvE,CAAC;QACJ,CAAC;QAED,0BAA0B;QAC1B,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,SAAS,CAAC;QAChC,8BAA8B;QAC9B,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE,CAAC;YACnC,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,CAAC,CAAC,CAAC,kBAAkB,CAAC;QAClD,CAAC;aAAM,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE,CAAC;YAC1C,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,CAAC,CAAC,CAAC,kBAAkB,CAAC;QAClD,CAAC;aAAM,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE,CAAC;YAC1C,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,CAAC,CAAC,CAAC,kBAAkB,CAAC;QAClD,CAAC;QAED,MAAM,UAAU,GAAG,IAAI,CAAC,CAAC,CAAC,UAAU,CAAC;QAErC,2DAA2D;QAC3D,IAAI,IAAI,CAAC,CAAC,CAAC,YAAY,IAAI,IAAI,EAAE,CAAC;YAChC,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,cAAK,CAAC,SAAS,EAAE,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;QAClE,CAAC;QAED,yCAAyC;QACzC;QACE,+CAA+C;QAC/C,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,IAAI,GAAG,CAAC,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB;YACxD,yFAAyF;YACzF,qCAAqC;YACrC,CAAC,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,IAAI,GAAG,CAAC;gBAC3B,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,SAAS,GAAG,UAAU,GAAG,QAAQ,IAAI,IAAI,CAAC,CAAC,CAAC,iBAAiB,CAAC;YACpF,8EAA8E;YAC9E,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,SAAS,KAAK,SAAS,EAC3C,CAAC;YACD,wCAAwC;YACxC,IAAI,CAAC,CAAC,CAAC,OAAO,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;YAEzC,qBAAqB;YACrB,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,cAAK,CAAC,SAAS,EAAE,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;QAClE,CAAC;QAED,gCAAgC;QAChC,IAAI,KAAK,CAAC,OAAO,CAAC,QAAQ,CAAC,EAAE,CAAC;YAC5B,MAAM,IAAI,iCAAyB,CAAC,wCAAwC,CAAC,CAAC;QAChF,CAAC;QAED,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,UAAU,CAAC,IAAI,CAAC,QAAQ,CAAC,CAAC;QAC9C,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,eAAe,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,CAAC;QAC9D,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,IAAI,CAAC,CAAC,CAAC,YAAY,GAAG,CAAC,CAAC;QAE9C,gDAAgD;QAChD,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE,CAAC;YACnC,IAAI,CAAC,CAAC,CAAC,kBAAkB,GAAG,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC;YAChD,IAAI,CAAC,CAAC,CAAC,UAAU,CAAC,WAAW,CAAC,IAAI,CAAC;gBACjC,KAAK,EAAE,IAAI,CAAC,CAAC,CAAC,UAAU,CAAC,WAAW,CAAC,MAAM;gBAC3C,GAAG,EAAG,QAAqB,CAAC,GAAG;aAChC,CAAC,CAAC;QACL,CAAC;aAAM,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE,CAAC;YAC1C,IAAI,CAAC,CAAC,CAAC,kBAAkB,GAAG,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC;QAClD,CAAC;aAAM,IAAI,SAAS,KAAK,kBAAS,CAAC,MAAM,EAAE,CAAC;YAC1C,IAAI,CAAC,CAAC,CAAC,kBAAkB,GAAG,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC;QAClD,CAAC;QAED,4BAA4B;QAC5B,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,IAAI,IAAI,CAAC,CAAC;QAC9B,IAAI,CAAC,CAAC,CAAC,YAAY,CAAC,SAAS,IAAI,UAAU,GAAG,QAAQ,CAAC;QAEvD,OAAO,IAAI,CAAC;IACd,CAAC;CACF;AAnGD,wDAmGC"}

511
node_modules/mongodb/lib/change_stream.js generated vendored Normal file
View file

@ -0,0 +1,511 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ChangeStream = void 0;
exports.filterOutOptions = filterOutOptions;
const collection_1 = require("./collection");
const constants_1 = require("./constants");
const abstract_cursor_1 = require("./cursor/abstract_cursor");
const change_stream_cursor_1 = require("./cursor/change_stream_cursor");
const db_1 = require("./db");
const error_1 = require("./error");
const mongo_client_1 = require("./mongo_client");
const mongo_types_1 = require("./mongo_types");
const timeout_1 = require("./timeout");
const utils_1 = require("./utils");
const CHANGE_DOMAIN_TYPES = {
COLLECTION: Symbol('Collection'),
DATABASE: Symbol('Database'),
CLUSTER: Symbol('Cluster')
};
const CHANGE_STREAM_EVENTS = [constants_1.RESUME_TOKEN_CHANGED, constants_1.END, constants_1.CLOSE];
const NO_RESUME_TOKEN_ERROR = 'A change stream document has been received that lacks a resume token (_id).';
const CHANGESTREAM_CLOSED_ERROR = 'ChangeStream is closed';
const INVALID_STAGE_OPTIONS = buildDisallowedChangeStreamOptions();
function filterOutOptions(options) {
return Object.fromEntries(Object.entries(options).filter(([k, _]) => !INVALID_STAGE_OPTIONS.has(k)));
}
/**
* Creates a new Change Stream instance. Normally created using {@link Collection#watch|Collection.watch()}.
* @public
*/
class ChangeStream extends mongo_types_1.TypedEventEmitter {
/**
* @experimental
* An alias for {@link ChangeStream.close|ChangeStream.close()}.
*/
async [Symbol.asyncDispose]() {
await this.close();
}
/** @event */
static { this.RESPONSE = constants_1.RESPONSE; }
/** @event */
static { this.MORE = constants_1.MORE; }
/** @event */
static { this.INIT = constants_1.INIT; }
/** @event */
static { this.CLOSE = constants_1.CLOSE; }
/**
* Fired for each new matching change in the specified namespace. Attaching a `change`
* event listener to a Change Stream will switch the stream into flowing mode. Data will
* then be passed as soon as it is available.
* @event
*/
static { this.CHANGE = constants_1.CHANGE; }
/** @event */
static { this.END = constants_1.END; }
/** @event */
static { this.ERROR = constants_1.ERROR; }
/**
* Emitted each time the change stream stores a new resume token.
* @event
*/
static { this.RESUME_TOKEN_CHANGED = constants_1.RESUME_TOKEN_CHANGED; }
/**
* @internal
*
* @param parent - The parent object that created this change stream
* @param pipeline - An array of {@link https://www.mongodb.com/docs/manual/reference/operator/aggregation-pipeline/|aggregation pipeline stages} through which to pass change stream documents
*/
constructor(parent, pipeline = [], options = {}) {
super();
this.pipeline = pipeline;
this.options = { ...options };
let serverSelectionTimeoutMS;
delete this.options.writeConcern;
if (parent instanceof collection_1.Collection) {
this.type = CHANGE_DOMAIN_TYPES.COLLECTION;
serverSelectionTimeoutMS = parent.s.db.client.options.serverSelectionTimeoutMS;
}
else if (parent instanceof db_1.Db) {
this.type = CHANGE_DOMAIN_TYPES.DATABASE;
serverSelectionTimeoutMS = parent.client.options.serverSelectionTimeoutMS;
}
else if (parent instanceof mongo_client_1.MongoClient) {
this.type = CHANGE_DOMAIN_TYPES.CLUSTER;
serverSelectionTimeoutMS = parent.options.serverSelectionTimeoutMS;
}
else {
throw new error_1.MongoChangeStreamError('Parent provided to ChangeStream constructor must be an instance of Collection, Db, or MongoClient');
}
this.contextOwner = Symbol();
this.parent = parent;
this.namespace = parent.s.namespace;
if (!this.options.readPreference && parent.readPreference) {
this.options.readPreference = parent.readPreference;
}
// Create contained Change Stream cursor
this.cursor = this._createChangeStreamCursor(options);
this.isClosed = false;
this.mode = false;
// Listen for any `change` listeners being added to ChangeStream
this.on('newListener', eventName => {
if (eventName === 'change' && this.cursor && this.listenerCount('change') === 0) {
this._streamEvents(this.cursor);
}
});
this.on('removeListener', eventName => {
if (eventName === 'change' && this.listenerCount('change') === 0 && this.cursor) {
this.cursorStream?.removeAllListeners('data');
}
});
if (this.options.timeoutMS != null) {
this.timeoutContext = new timeout_1.CSOTTimeoutContext({
timeoutMS: this.options.timeoutMS,
serverSelectionTimeoutMS
});
}
}
/** The cached resume token that is used to resume after the most recently returned change. */
get resumeToken() {
return this.cursor?.resumeToken;
}
/** Check if there is any document still available in the Change Stream */
async hasNext() {
this._setIsIterator();
// Change streams must resume indefinitely while each resume event succeeds.
// This loop continues until either a change event is received or until a resume attempt
// fails.
this.timeoutContext?.refresh();
try {
while (true) {
try {
const hasNext = await this.cursor.hasNext();
return hasNext;
}
catch (error) {
try {
await this._processErrorIteratorMode(error, this.cursor.id != null);
}
catch (error) {
if (error instanceof error_1.MongoOperationTimeoutError && this.cursor.id == null) {
throw error;
}
try {
await this.close();
}
catch (error) {
(0, utils_1.squashError)(error);
}
throw error;
}
}
}
}
finally {
this.timeoutContext?.clear();
}
}
/** Get the next available document from the Change Stream. */
async next() {
this._setIsIterator();
// Change streams must resume indefinitely while each resume event succeeds.
// This loop continues until either a change event is received or until a resume attempt
// fails.
this.timeoutContext?.refresh();
try {
while (true) {
try {
const change = await this.cursor.next();
const processedChange = this._processChange(change ?? null);
return processedChange;
}
catch (error) {
try {
await this._processErrorIteratorMode(error, this.cursor.id != null);
}
catch (error) {
if (error instanceof error_1.MongoOperationTimeoutError && this.cursor.id == null) {
throw error;
}
try {
await this.close();
}
catch (error) {
(0, utils_1.squashError)(error);
}
throw error;
}
}
}
}
finally {
this.timeoutContext?.clear();
}
}
/**
* Try to get the next available document from the Change Stream's cursor or `null` if an empty batch is returned
*/
async tryNext() {
this._setIsIterator();
// Change streams must resume indefinitely while each resume event succeeds.
// This loop continues until either a change event is received or until a resume attempt
// fails.
this.timeoutContext?.refresh();
try {
while (true) {
try {
const change = await this.cursor.tryNext();
if (!change) {
return null;
}
const processedChange = this._processChange(change);
return processedChange;
}
catch (error) {
try {
await this._processErrorIteratorMode(error, this.cursor.id != null);
}
catch (error) {
if (error instanceof error_1.MongoOperationTimeoutError && this.cursor.id == null)
throw error;
try {
await this.close();
}
catch (error) {
(0, utils_1.squashError)(error);
}
throw error;
}
}
}
}
finally {
this.timeoutContext?.clear();
}
}
async *[Symbol.asyncIterator]() {
if (this.closed) {
return;
}
try {
// Change streams run indefinitely as long as errors are resumable
// So the only loop breaking condition is if `next()` throws
while (true) {
yield await this.next();
}
}
finally {
try {
await this.close();
}
catch (error) {
(0, utils_1.squashError)(error);
}
}
}
/** Is the cursor closed */
get closed() {
return this.isClosed || this.cursor.closed;
}
/**
* Frees the internal resources used by the change stream.
*/
async close() {
this.timeoutContext?.clear();
this.timeoutContext = undefined;
this.isClosed = true;
const cursor = this.cursor;
try {
await cursor.close();
}
finally {
this._endStream();
}
}
/**
* Return a modified Readable stream including a possible transform method.
*
* NOTE: When using a Stream to process change stream events, the stream will
* NOT automatically resume in the case a resumable error is encountered.
*
* @throws MongoChangeStreamError if the underlying cursor or the change stream is closed
*/
stream() {
if (this.closed) {
throw new error_1.MongoChangeStreamError(CHANGESTREAM_CLOSED_ERROR);
}
return this.cursor.stream();
}
/** @internal */
_setIsEmitter() {
if (this.mode === 'iterator') {
// TODO(NODE-3485): Replace with MongoChangeStreamModeError
throw new error_1.MongoAPIError('ChangeStream cannot be used as an EventEmitter after being used as an iterator');
}
this.mode = 'emitter';
}
/** @internal */
_setIsIterator() {
if (this.mode === 'emitter') {
// TODO(NODE-3485): Replace with MongoChangeStreamModeError
throw new error_1.MongoAPIError('ChangeStream cannot be used as an iterator after being used as an EventEmitter');
}
this.mode = 'iterator';
}
/**
* Create a new change stream cursor based on self's configuration
* @internal
*/
_createChangeStreamCursor(options) {
const changeStreamStageOptions = filterOutOptions(options);
if (this.type === CHANGE_DOMAIN_TYPES.CLUSTER) {
changeStreamStageOptions.allChangesForCluster = true;
}
const pipeline = [{ $changeStream: changeStreamStageOptions }, ...this.pipeline];
const client = this.type === CHANGE_DOMAIN_TYPES.CLUSTER
? this.parent
: this.type === CHANGE_DOMAIN_TYPES.DATABASE
? this.parent.client
: this.type === CHANGE_DOMAIN_TYPES.COLLECTION
? this.parent.client
: null;
if (client == null) {
// This should never happen because of the assertion in the constructor
throw new error_1.MongoRuntimeError(`Changestream type should only be one of cluster, database, collection. Found ${this.type.toString()}`);
}
const changeStreamCursor = new change_stream_cursor_1.ChangeStreamCursor(client, this.namespace, pipeline, {
...options,
timeoutContext: this.timeoutContext
? new abstract_cursor_1.CursorTimeoutContext(this.timeoutContext, this.contextOwner)
: undefined
});
for (const event of CHANGE_STREAM_EVENTS) {
changeStreamCursor.on(event, e => this.emit(event, e));
}
if (this.listenerCount(ChangeStream.CHANGE) > 0) {
this._streamEvents(changeStreamCursor);
}
return changeStreamCursor;
}
/** @internal */
_closeEmitterModeWithError(error) {
this.emit(ChangeStream.ERROR, error);
this.close().then(undefined, utils_1.squashError);
}
/** @internal */
_streamEvents(cursor) {
this._setIsEmitter();
const stream = this.cursorStream ?? cursor.stream();
this.cursorStream = stream;
stream.on('data', change => {
try {
const processedChange = this._processChange(change);
this.emit(ChangeStream.CHANGE, processedChange);
}
catch (error) {
this.emit(ChangeStream.ERROR, error);
}
this.timeoutContext?.refresh();
});
stream.on('error', error => this._processErrorStreamMode(error, this.cursor.id != null));
}
/** @internal */
_endStream() {
this.cursorStream?.removeAllListeners('data');
this.cursorStream?.removeAllListeners('close');
this.cursorStream?.removeAllListeners('end');
this.cursorStream?.destroy();
this.cursorStream = undefined;
}
/** @internal */
_processChange(change) {
if (this.isClosed) {
// TODO(NODE-3485): Replace with MongoChangeStreamClosedError
throw new error_1.MongoAPIError(CHANGESTREAM_CLOSED_ERROR);
}
// a null change means the cursor has been notified, implicitly closing the change stream
if (change == null) {
// TODO(NODE-3485): Replace with MongoChangeStreamClosedError
throw new error_1.MongoRuntimeError(CHANGESTREAM_CLOSED_ERROR);
}
if (change && !change._id) {
throw new error_1.MongoChangeStreamError(NO_RESUME_TOKEN_ERROR);
}
// cache the resume token
this.cursor.cacheResumeToken(change._id);
// wipe the startAtOperationTime if there was one so that there won't be a conflict
// between resumeToken and startAtOperationTime if we need to reconnect the cursor
this.options.startAtOperationTime = undefined;
return change;
}
/** @internal */
_processErrorStreamMode(changeStreamError, cursorInitialized) {
// If the change stream has been closed explicitly, do not process error.
if (this.isClosed)
return;
if (cursorInitialized &&
((0, error_1.isResumableError)(changeStreamError, this.cursor.maxWireVersion) ||
changeStreamError instanceof error_1.MongoOperationTimeoutError)) {
this._endStream();
this.cursor
.close()
.then(() => this._resume(changeStreamError), e => {
(0, utils_1.squashError)(e);
return this._resume(changeStreamError);
})
.then(() => {
if (changeStreamError instanceof error_1.MongoOperationTimeoutError)
this.emit(ChangeStream.ERROR, changeStreamError);
}, () => this._closeEmitterModeWithError(changeStreamError));
}
else {
this._closeEmitterModeWithError(changeStreamError);
}
}
/** @internal */
async _processErrorIteratorMode(changeStreamError, cursorInitialized) {
if (this.isClosed) {
// TODO(NODE-3485): Replace with MongoChangeStreamClosedError
throw new error_1.MongoAPIError(CHANGESTREAM_CLOSED_ERROR);
}
if (cursorInitialized &&
((0, error_1.isResumableError)(changeStreamError, this.cursor.maxWireVersion) ||
changeStreamError instanceof error_1.MongoOperationTimeoutError)) {
try {
await this.cursor.close();
}
catch (error) {
(0, utils_1.squashError)(error);
}
await this._resume(changeStreamError);
if (changeStreamError instanceof error_1.MongoOperationTimeoutError)
throw changeStreamError;
}
else {
try {
await this.close();
}
catch (error) {
(0, utils_1.squashError)(error);
}
throw changeStreamError;
}
}
async _resume(changeStreamError) {
this.timeoutContext?.refresh();
const topology = (0, utils_1.getTopology)(this.parent);
try {
await topology.selectServer(this.cursor.readPreference, {
operationName: 'reconnect topology in change stream',
timeoutContext: this.timeoutContext
});
this.cursor = this._createChangeStreamCursor(this.cursor.resumeOptions);
}
catch {
// if the topology can't reconnect, close the stream
await this.close();
throw changeStreamError;
}
}
}
exports.ChangeStream = ChangeStream;
/**
* This function returns a list of options that are *not* supported by the $changeStream
* aggregation stage. This is best-effort - it uses the options "officially supported" by the driver
* to derive a list of known, unsupported options for the $changeStream stage.
*
* Notably, at runtime, users can still provide options unknown to the driver and the driver will
* *not* filter them out of the options object (see NODE-5510).
*/
function buildDisallowedChangeStreamOptions() {
const denyList = {
allowDiskUse: '',
authdb: '',
batchSize: '',
bsonRegExp: '',
bypassDocumentValidation: '',
bypassPinningCheck: '',
checkKeys: '',
collation: '',
comment: '',
cursor: '',
dbName: '',
enableUtf8Validation: '',
explain: '',
fieldsAsRaw: '',
hint: '',
ignoreUndefined: '',
let: '',
maxAwaitTimeMS: '',
maxTimeMS: '',
omitMaxTimeMS: '',
out: '',
promoteBuffers: '',
promoteLongs: '',
promoteValues: '',
raw: '',
rawData: '',
readConcern: '',
readPreference: '',
serializeFunctions: '',
session: '',
timeoutContext: '',
timeoutMS: '',
timeoutMode: '',
useBigInt64: '',
willRetryWrite: '',
writeConcern: ''
};
return new Set(Object.keys(denyList));
}
//# sourceMappingURL=change_stream.js.map
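A brief sketch of the two consumption modes, assuming a connected `collection`. Per the _setIsIterator/_setIsEmitter guards above, a single ChangeStream instance may be used either as an async iterator or as an event emitter, never both:

// Iterator mode: hasNext()/next()/tryNext() or for-await; resumable errors
// are retried internally until a change arrives or a resume attempt fails.
const iteratorStream = collection.watch([{ $match: { operationType: 'insert' } }]);
for await (const change of iteratorStream) {
  console.log(change.fullDocument);
  break; // leaving the loop closes the stream (see Symbol.asyncIterator above)
}

// Emitter mode: attaching a 'change' listener switches a fresh instance into
// flowing mode; non-resumable errors surface on the 'error' event.
const emitterStream = collection.watch();
emitterStream.on('change', change => console.log(change.operationType));
emitterStream.on('error', error => console.error(error));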

1
node_modules/mongodb/lib/change_stream.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

272
node_modules/mongodb/lib/client-side-encryption/auto_encrypter.js generated vendored Normal file
View file

@ -0,0 +1,272 @@
"use strict";
var _a;
Object.defineProperty(exports, "__esModule", { value: true });
exports.AutoEncrypter = exports.AutoEncryptionLoggerLevel = void 0;
const net = require("net");
const bson_1 = require("../bson");
const constants_1 = require("../constants");
const deps_1 = require("../deps");
const error_1 = require("../error");
const mongo_client_1 = require("../mongo_client");
const utils_1 = require("../utils");
const client_encryption_1 = require("./client_encryption");
const errors_1 = require("./errors");
const mongocryptd_manager_1 = require("./mongocryptd_manager");
const providers_1 = require("./providers");
const state_machine_1 = require("./state_machine");
/** @public */
exports.AutoEncryptionLoggerLevel = Object.freeze({
FatalError: 0,
Error: 1,
Warning: 2,
Info: 3,
Trace: 4
});
/**
* @internal An internal class to be used by the driver for auto encryption
* **NOTE**: Not meant to be instantiated directly, this is for internal use only.
*/
class AutoEncrypter {
static { _a = constants_1.kDecorateResult; }
/** @internal */
static getMongoCrypt() {
const encryption = (0, deps_1.getMongoDBClientEncryption)();
if ('kModuleError' in encryption) {
throw encryption.kModuleError;
}
return encryption.MongoCrypt;
}
/**
* Create an AutoEncrypter
*
* **Note**: Do not instantiate this class directly. Rather, supply the relevant options to a MongoClient
*
* **Note**: Supplying `options.schemaMap` provides more security than relying on JSON Schemas obtained from the server.
* It protects against a malicious server advertising a false JSON Schema, which could trick the client into sending unencrypted data that should be encrypted.
* Schemas supplied in the schemaMap only apply to configuring automatic encryption for Client-Side Field Level Encryption.
* Other validation rules in the JSON schema will not be enforced by the driver and will result in an error.
*
* @example <caption>Create an AutoEncrypter that makes use of mongocryptd</caption>
* ```ts
* // Enabling autoEncryption via a MongoClient using mongocryptd
* const { MongoClient } = require('mongodb');
* const client = new MongoClient(URL, {
* autoEncryption: {
* kmsProviders: {
* aws: {
* accessKeyId: AWS_ACCESS_KEY,
* secretAccessKey: AWS_SECRET_KEY
* }
* }
* }
* });
     *
     * await client.connect();
     * // From here on, the client will be encrypting / decrypting automatically
     * ```
     *
* @example <caption>Create an AutoEncrypter that makes use of libmongocrypt's CSFLE shared library</caption>
* ```ts
* // Enabling autoEncryption via a MongoClient using CSFLE shared library
* const { MongoClient } = require('mongodb');
* const client = new MongoClient(URL, {
* autoEncryption: {
* kmsProviders: {
* aws: {}
* },
* extraOptions: {
* cryptSharedLibPath: '/path/to/local/crypt/shared/lib',
* cryptSharedLibRequired: true
* }
* }
* });
     *
     * await client.connect();
     * // From here on, the client will be encrypting / decrypting automatically
     * ```
*/
constructor(client, options) {
/**
* Used by devtools to enable decorating decryption results.
*
* When set and enabled, `decrypt` will automatically recursively
* traverse a decrypted document and if a field has been decrypted,
* it will mark it as decrypted. Compass uses this to determine which
* fields were decrypted.
*/
this[_a] = false;
this._client = client;
this._bypassEncryption = options.bypassAutoEncryption === true;
this._keyVaultNamespace = options.keyVaultNamespace || 'admin.datakeys';
this._keyVaultClient = options.keyVaultClient || client;
this._metaDataClient = options.metadataClient || client;
this._proxyOptions = options.proxyOptions || {};
this._tlsOptions = options.tlsOptions || {};
this._kmsProviders = options.kmsProviders || {};
this._credentialProviders = options.credentialProviders;
if (options.credentialProviders?.aws && !(0, providers_1.isEmptyCredentials)('aws', this._kmsProviders)) {
throw new errors_1.MongoCryptInvalidArgumentError('Can only provide a custom AWS credential provider when the state machine is configured for automatic AWS credential fetching');
}
const mongoCryptOptions = {
errorWrapper: errors_1.defaultErrorWrapper
};
if (options.schemaMap) {
mongoCryptOptions.schemaMap = Buffer.isBuffer(options.schemaMap)
? options.schemaMap
: (0, bson_1.serialize)(options.schemaMap);
}
if (options.encryptedFieldsMap) {
mongoCryptOptions.encryptedFieldsMap = Buffer.isBuffer(options.encryptedFieldsMap)
? options.encryptedFieldsMap
: (0, bson_1.serialize)(options.encryptedFieldsMap);
}
mongoCryptOptions.kmsProviders = !Buffer.isBuffer(this._kmsProviders)
? (0, bson_1.serialize)(this._kmsProviders)
: this._kmsProviders;
if (options.options?.logger) {
mongoCryptOptions.logger = options.options.logger;
}
if (options.extraOptions && options.extraOptions.cryptSharedLibPath) {
mongoCryptOptions.cryptSharedLibPath = options.extraOptions.cryptSharedLibPath;
}
if (options.bypassQueryAnalysis) {
mongoCryptOptions.bypassQueryAnalysis = options.bypassQueryAnalysis;
}
if (options.keyExpirationMS != null) {
mongoCryptOptions.keyExpirationMS = options.keyExpirationMS;
}
this._bypassMongocryptdAndCryptShared = this._bypassEncryption || !!options.bypassQueryAnalysis;
if (options.extraOptions && options.extraOptions.cryptSharedLibSearchPaths) {
// Only for driver testing
mongoCryptOptions.cryptSharedLibSearchPaths = options.extraOptions.cryptSharedLibSearchPaths;
}
else if (!this._bypassMongocryptdAndCryptShared) {
mongoCryptOptions.cryptSharedLibSearchPaths = ['$SYSTEM'];
}
const MongoCrypt = AutoEncrypter.getMongoCrypt();
this._mongocrypt = new MongoCrypt(mongoCryptOptions);
this._contextCounter = 0;
if (options.extraOptions &&
options.extraOptions.cryptSharedLibRequired &&
!this.cryptSharedLibVersionInfo) {
throw new errors_1.MongoCryptInvalidArgumentError('`cryptSharedLibRequired` set but no crypt_shared library loaded');
}
// Only instantiate mongocryptd manager/client once we know for sure
// that we are not using the CSFLE shared library.
if (!this._bypassMongocryptdAndCryptShared && !this.cryptSharedLibVersionInfo) {
this._mongocryptdManager = new mongocryptd_manager_1.MongocryptdManager(options.extraOptions);
const clientOptions = {
serverSelectionTimeoutMS: 10000
};
if ((options.extraOptions == null || typeof options.extraOptions.mongocryptdURI !== 'string') &&
!net.getDefaultAutoSelectFamily) {
// Only set family if autoSelectFamily options are not supported.
clientOptions.family = 4;
}
// eslint-disable-next-line @typescript-eslint/ban-ts-comment
// @ts-ignore: TS complains as this always returns true on versions where it is present.
if (net.getDefaultAutoSelectFamily) {
// AutoEncrypter is made inside of MongoClient constructor while options are being parsed,
// we do not have access to the options that are in progress.
// TODO(NODE-6449): AutoEncrypter does not use client options for autoSelectFamily
Object.assign(clientOptions, (0, client_encryption_1.autoSelectSocketOptions)(this._client.s?.options ?? {}));
}
this._mongocryptdClient = new mongo_client_1.MongoClient(this._mongocryptdManager.uri, clientOptions);
}
}
/**
* Initializes the auto encrypter by spawning a mongocryptd and connecting to it.
*
* This function is a no-op when bypassSpawn is set or the crypt shared library is used.
*/
async init() {
if (this._bypassMongocryptdAndCryptShared || this.cryptSharedLibVersionInfo) {
return;
}
if (!this._mongocryptdManager) {
throw new error_1.MongoRuntimeError('Reached impossible state: mongocryptdManager is undefined when neither bypassSpawn nor the shared lib are specified.');
}
if (!this._mongocryptdClient) {
throw new error_1.MongoRuntimeError('Reached impossible state: mongocryptdClient is undefined when neither bypassSpawn nor the shared lib are specified.');
}
if (!this._mongocryptdManager.bypassSpawn) {
await this._mongocryptdManager.spawn();
}
try {
const client = await this._mongocryptdClient.connect();
return client;
}
catch (error) {
throw new error_1.MongoRuntimeError('Unable to connect to `mongocryptd`, please make sure it is running or in your PATH for auto-spawn', { cause: error });
}
}
/**
* Cleans up the `_mongocryptdClient`, if present.
*/
async close() {
await this._mongocryptdClient?.close();
}
/**
* Encrypt a command for a given namespace.
*/
async encrypt(ns, cmd, options = {}) {
options.signal?.throwIfAborted();
if (this._bypassEncryption) {
// If `bypassAutoEncryption` has been specified, don't encrypt
return cmd;
}
const commandBuffer = Buffer.isBuffer(cmd) ? cmd : (0, bson_1.serialize)(cmd, options);
const context = this._mongocrypt.makeEncryptionContext(utils_1.MongoDBCollectionNamespace.fromString(ns).db, commandBuffer);
context.id = this._contextCounter++;
context.ns = ns;
context.document = cmd;
const stateMachine = new state_machine_1.StateMachine({
promoteValues: false,
promoteLongs: false,
proxyOptions: this._proxyOptions,
tlsOptions: this._tlsOptions,
socketOptions: (0, client_encryption_1.autoSelectSocketOptions)(this._client.s.options)
});
return (0, bson_1.deserialize)(await stateMachine.execute(this, context, options), {
promoteValues: false,
promoteLongs: false
});
}
/**
* Decrypt a command response
*/
async decrypt(response, options = {}) {
options.signal?.throwIfAborted();
const context = this._mongocrypt.makeDecryptionContext(response);
context.id = this._contextCounter++;
const stateMachine = new state_machine_1.StateMachine({
...options,
proxyOptions: this._proxyOptions,
tlsOptions: this._tlsOptions,
socketOptions: (0, client_encryption_1.autoSelectSocketOptions)(this._client.s.options)
});
return await stateMachine.execute(this, context, options);
}
/**
* Ask the user for KMS credentials.
*
* This returns anything that looks like the kmsProviders original input
* option. It can be empty, and any provider specified here will override
* the original ones.
*/
async askForKMSCredentials() {
return await (0, providers_1.refreshKMSCredentials)(this._kmsProviders, this._credentialProviders);
}
/**
* Return the current libmongocrypt's CSFLE shared library version
* as `{ version: bigint, versionStr: string }`, or `null` if no CSFLE
* shared library was loaded.
*/
get cryptSharedLibVersionInfo() {
return this._mongocrypt.cryptSharedLibVersionInfo;
}
static get libmongocryptVersion() {
return AutoEncrypter.getMongoCrypt().libmongocryptVersion;
}
}
exports.AutoEncrypter = AutoEncrypter;
//# sourceMappingURL=auto_encrypter.js.map
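A hedged configuration sketch for the crypt_shared path handled above: when cryptSharedLibPath resolves (or the library is found via the $SYSTEM search path), the mongocryptd manager and client are never instantiated. The library path, key vault namespace, and master key variable below are illustrative:

const { MongoClient } = require('mongodb');

const client = new MongoClient(process.env.MONGODB_URL, {
  autoEncryption: {
    keyVaultNamespace: 'encryption.__keyVault', // illustrative namespace
    kmsProviders: { local: { key: localMasterKey } }, // a 96-byte Buffer
    extraOptions: {
      cryptSharedLibPath: '/opt/mongodb/lib/mongo_crypt_v1.so', // illustrative path
      cryptSharedLibRequired: true // throw instead of falling back to mongocryptd
    }
  }
});
await client.connect();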

1
node_modules/mongodb/lib/client-side-encryption/auto_encrypter.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

609
node_modules/mongodb/lib/client-side-encryption/client_encryption.js generated vendored Normal file
View file

@ -0,0 +1,609 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ClientEncryption = void 0;
exports.autoSelectSocketOptions = autoSelectSocketOptions;
const bson_1 = require("../bson");
const deps_1 = require("../deps");
const timeout_1 = require("../timeout");
const utils_1 = require("../utils");
const errors_1 = require("./errors");
const index_1 = require("./providers/index");
const state_machine_1 = require("./state_machine");
/**
* @public
* The public interface for explicit in-use encryption
*/
class ClientEncryption {
/** @internal */
static getMongoCrypt() {
const encryption = (0, deps_1.getMongoDBClientEncryption)();
if ('kModuleError' in encryption) {
throw encryption.kModuleError;
}
return encryption.MongoCrypt;
}
/**
* Create a new encryption instance
*
* @example
* ```ts
* new ClientEncryption(mongoClient, {
* keyVaultNamespace: 'client.encryption',
* kmsProviders: {
* local: {
* key: masterKey // The master key used for encryption/decryption. A 96-byte long Buffer
* }
* }
* });
* ```
*
* @example
* ```ts
* new ClientEncryption(mongoClient, {
* keyVaultNamespace: 'client.encryption',
* kmsProviders: {
* aws: {
* accessKeyId: AWS_ACCESS_KEY,
* secretAccessKey: AWS_SECRET_KEY
* }
* }
* });
* ```
*/
constructor(client, options) {
this._client = client;
this._proxyOptions = options.proxyOptions ?? {};
this._tlsOptions = options.tlsOptions ?? {};
this._kmsProviders = options.kmsProviders || {};
const { timeoutMS } = (0, utils_1.resolveTimeoutOptions)(client, options);
this._timeoutMS = timeoutMS;
this._credentialProviders = options.credentialProviders;
if (options.credentialProviders?.aws && !(0, index_1.isEmptyCredentials)('aws', this._kmsProviders)) {
throw new errors_1.MongoCryptInvalidArgumentError('Can only provide a custom AWS credential provider when the state machine is configured for automatic AWS credential fetching');
}
if (options.keyVaultNamespace == null) {
throw new errors_1.MongoCryptInvalidArgumentError('Missing required option `keyVaultNamespace`');
}
const mongoCryptOptions = {
...options,
kmsProviders: !Buffer.isBuffer(this._kmsProviders)
? (0, bson_1.serialize)(this._kmsProviders)
: this._kmsProviders,
errorWrapper: errors_1.defaultErrorWrapper
};
this._keyVaultNamespace = options.keyVaultNamespace;
this._keyVaultClient = options.keyVaultClient || client;
const MongoCrypt = ClientEncryption.getMongoCrypt();
this._mongoCrypt = new MongoCrypt(mongoCryptOptions);
}
/**
* Creates a data key used for explicit encryption and inserts it into the key vault namespace
*
* @example
* ```ts
* // Using async/await to create a local key
* const dataKeyId = await clientEncryption.createDataKey('local');
* ```
*
* @example
* ```ts
* // Using async/await to create an aws key
* const dataKeyId = await clientEncryption.createDataKey('aws', {
* masterKey: {
* region: 'us-east-1',
* key: 'xxxxxxxxxxxxxx' // CMK ARN here
* }
* });
* ```
*
* @example
* ```ts
* // Using async/await to create an aws key with a keyAltName
* const dataKeyId = await clientEncryption.createDataKey('aws', {
* masterKey: {
* region: 'us-east-1',
* key: 'xxxxxxxxxxxxxx' // CMK ARN here
* },
* keyAltNames: [ 'mySpecialKey' ]
* });
* ```
*/
async createDataKey(provider, options = {}) {
if (options.keyAltNames && !Array.isArray(options.keyAltNames)) {
throw new errors_1.MongoCryptInvalidArgumentError(`Option "keyAltNames" must be an array of strings, but was of type ${typeof options.keyAltNames}.`);
}
let keyAltNames = undefined;
if (options.keyAltNames && options.keyAltNames.length > 0) {
keyAltNames = options.keyAltNames.map((keyAltName, i) => {
if (typeof keyAltName !== 'string') {
throw new errors_1.MongoCryptInvalidArgumentError(`Option "keyAltNames" must be an array of strings, but item at index ${i} was of type ${typeof keyAltName}`);
}
return (0, bson_1.serialize)({ keyAltName });
});
}
let keyMaterial = undefined;
if (options.keyMaterial) {
keyMaterial = (0, bson_1.serialize)({ keyMaterial: options.keyMaterial });
}
const dataKeyBson = (0, bson_1.serialize)({
provider,
...options.masterKey
});
const context = this._mongoCrypt.makeDataKeyContext(dataKeyBson, {
keyAltNames,
keyMaterial
});
const stateMachine = new state_machine_1.StateMachine({
proxyOptions: this._proxyOptions,
tlsOptions: this._tlsOptions,
socketOptions: autoSelectSocketOptions(this._client.s.options)
});
const timeoutContext = options?.timeoutContext ??
timeout_1.TimeoutContext.create((0, utils_1.resolveTimeoutOptions)(this._client, { timeoutMS: this._timeoutMS }));
const dataKey = (0, bson_1.deserialize)(await stateMachine.execute(this, context, { timeoutContext }));
const { db: dbName, collection: collectionName } = utils_1.MongoDBCollectionNamespace.fromString(this._keyVaultNamespace);
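        // Persist the new key to the key vault with majority write concern so that
        // subsequent encrypt/decrypt operations can reliably read it back.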
const { insertedId } = await this._keyVaultClient
.db(dbName)
.collection(collectionName)
.insertOne(dataKey, {
writeConcern: { w: 'majority' },
timeoutMS: timeoutContext?.csotEnabled()
? timeoutContext?.getRemainingTimeMSOrThrow()
: undefined
});
return insertedId;
}
/**
* Searches the keyvault for any data keys matching the provided filter. If there are matches, rewrapManyDataKey then attempts to re-wrap the data keys using the provided options.
*
* If no matches are found, then no bulk write is performed.
*
* @example
* ```ts
     * // rewrapping all data keys (using a filter that matches all documents)
* const filter = {};
*
* const result = await clientEncryption.rewrapManyDataKey(filter);
* if (result.bulkWriteResult != null) {
     * // keys were re-wrapped; results are available on the bulkWriteResult object.
* }
* ```
*
* @example
* ```ts
* // attempting to rewrap all data keys with no matches
* const filter = { _id: new Binary() } // assume _id matches no documents in the database
* const result = await clientEncryption.rewrapManyDataKey(filter);
*
* if (result.bulkWriteResult == null) {
* // no keys matched, `bulkWriteResult` does not exist on the result object
* }
* ```
*/
async rewrapManyDataKey(filter, options) {
let keyEncryptionKeyBson = undefined;
if (options) {
const keyEncryptionKey = Object.assign({ provider: options.provider }, options.masterKey);
keyEncryptionKeyBson = (0, bson_1.serialize)(keyEncryptionKey);
}
const filterBson = (0, bson_1.serialize)(filter);
const context = this._mongoCrypt.makeRewrapManyDataKeyContext(filterBson, keyEncryptionKeyBson);
const stateMachine = new state_machine_1.StateMachine({
proxyOptions: this._proxyOptions,
tlsOptions: this._tlsOptions,
socketOptions: autoSelectSocketOptions(this._client.s.options)
});
const timeoutContext = timeout_1.TimeoutContext.create((0, utils_1.resolveTimeoutOptions)(this._client, { timeoutMS: this._timeoutMS }));
const { v: dataKeys } = (0, bson_1.deserialize)(await stateMachine.execute(this, context, { timeoutContext }));
if (dataKeys.length === 0) {
return {};
}
const { db: dbName, collection: collectionName } = utils_1.MongoDBCollectionNamespace.fromString(this._keyVaultNamespace);
const replacements = dataKeys.map((key) => ({
updateOne: {
filter: { _id: key._id },
update: {
$set: {
masterKey: key.masterKey,
keyMaterial: key.keyMaterial
},
$currentDate: {
updateDate: true
}
}
}
}));
const result = await this._keyVaultClient
.db(dbName)
.collection(collectionName)
.bulkWrite(replacements, {
writeConcern: { w: 'majority' },
timeoutMS: timeoutContext.csotEnabled() ? timeoutContext?.remainingTimeMS : undefined
});
return { bulkWriteResult: result };
}
/**
* Deletes the key with the provided id from the keyvault, if it exists.
*
* @example
* ```ts
* // delete a key by _id
* const id = new Binary(); // id is a bson binary subtype 4 object
* const { deletedCount } = await clientEncryption.deleteKey(id);
*
* if (deletedCount != null && deletedCount > 0) {
* // successful deletion
* }
* ```
*
*/
async deleteKey(_id) {
const { db: dbName, collection: collectionName } = utils_1.MongoDBCollectionNamespace.fromString(this._keyVaultNamespace);
return await this._keyVaultClient
.db(dbName)
.collection(collectionName)
.deleteOne({ _id }, { writeConcern: { w: 'majority' }, timeoutMS: this._timeoutMS });
}
/**
* Finds all the keys currently stored in the keyvault.
*
* This method will not throw.
*
* @returns a FindCursor over all keys in the keyvault.
* @example
* ```ts
* // fetching all keys
* const keys = await clientEncryption.getKeys().toArray();
* ```
*/
getKeys() {
const { db: dbName, collection: collectionName } = utils_1.MongoDBCollectionNamespace.fromString(this._keyVaultNamespace);
return this._keyVaultClient
.db(dbName)
.collection(collectionName)
.find({}, { readConcern: { level: 'majority' }, timeoutMS: this._timeoutMS });
}
/**
* Finds a key in the keyvault with the specified _id.
*
* Returns a promise that either resolves to a {@link DataKey} if a document matches the key or null if no documents
* match the id. The promise rejects with an error if an error is thrown.
* @example
* ```ts
* // getting a key by id
* const id = new Binary(); // id is a bson binary subtype 4 object
* const key = await clientEncryption.getKey(id);
* if (!key) {
* // key is null if there was no matching key
* }
* ```
*/
async getKey(_id) {
const { db: dbName, collection: collectionName } = utils_1.MongoDBCollectionNamespace.fromString(this._keyVaultNamespace);
return await this._keyVaultClient
.db(dbName)
.collection(collectionName)
.findOne({ _id }, { readConcern: { level: 'majority' }, timeoutMS: this._timeoutMS });
}
/**
* Finds a key in the keyvault which has the specified keyAltName.
*
* @param keyAltName - a keyAltName to search for a key
* @returns Returns a promise that either resolves to a {@link DataKey} if a document matches the key or null if no documents
* match the keyAltName. The promise rejects with an error if an error is thrown.
* @example
* ```ts
* // get a key by alt name
* const keyAltName = 'keyAltName';
* const key = await clientEncryption.getKeyByAltName(keyAltName);
* if (!key) {
* // key is null if there is no matching key
* }
* ```
*/
async getKeyByAltName(keyAltName) {
const { db: dbName, collection: collectionName } = utils_1.MongoDBCollectionNamespace.fromString(this._keyVaultNamespace);
return await this._keyVaultClient
.db(dbName)
.collection(collectionName)
.findOne({ keyAltNames: keyAltName }, { readConcern: { level: 'majority' }, timeoutMS: this._timeoutMS });
}
/**
* Adds a keyAltName to a key identified by the provided _id.
*
* This method resolves to the *old* key value (prior to adding the new keyAltName).
*
* @param _id - The id of the document to update.
* @param keyAltName - the keyAltName to add to the matched key
* @returns Returns a promise that either resolves to a {@link DataKey} if a document matches the key or null if no documents
* match the id. The promise rejects with an error if an error is thrown.
* @example
* ```ts
* // adding a keyAltName to a data key
* const id = new Binary(); // id is a bson binary subtype 4 object
* const keyAltName = 'keyAltName';
* const oldKey = await clientEncryption.addKeyAltName(id, keyAltName);
* if (!oldKey) {
* // null is returned if there is no matching document with an id matching the supplied id
* }
* ```
*/
async addKeyAltName(_id, keyAltName) {
const { db: dbName, collection: collectionName } = utils_1.MongoDBCollectionNamespace.fromString(this._keyVaultNamespace);
const value = await this._keyVaultClient
.db(dbName)
.collection(collectionName)
.findOneAndUpdate({ _id }, { $addToSet: { keyAltNames: keyAltName } }, { writeConcern: { w: 'majority' }, returnDocument: 'before', timeoutMS: this._timeoutMS });
return value;
}
/**
* Removes a keyAltName from a key identified by the provided _id.
*
* This method resolves to the *old* key value (prior to removing the keyAltName).
*
* If the removed keyAltName is the last keyAltName for that key, the `keyAltNames` property is unset from the document.
*
* @param _id - The id of the document to update.
* @param keyAltName - the keyAltName to remove from the matched key
* @returns Returns a promise that either resolves to a {@link DataKey} if a document matches the key or null if no documents
* match the id. The promise rejects with an error if an error is thrown.
* @example
* ```ts
* // removing a key alt name from a data key
* const id = new Binary(); // id is a bson binary subtype 4 object
* const keyAltName = 'keyAltName';
* const oldKey = await clientEncryption.removeKeyAltName(id, keyAltName);
*
* if (!oldKey) {
* // null is returned if there is no matching document with an id matching the supplied id
* }
* ```
*/
async removeKeyAltName(_id, keyAltName) {
const { db: dbName, collection: collectionName } = utils_1.MongoDBCollectionNamespace.fromString(this._keyVaultNamespace);
const pipeline = [
{
$set: {
keyAltNames: {
$cond: [
{
$eq: ['$keyAltNames', [keyAltName]]
},
'$$REMOVE',
{
$filter: {
input: '$keyAltNames',
cond: {
$ne: ['$$this', keyAltName]
}
}
}
]
}
}
}
];
const value = await this._keyVaultClient
.db(dbName)
.collection(collectionName)
.findOneAndUpdate({ _id }, pipeline, {
writeConcern: { w: 'majority' },
returnDocument: 'before',
timeoutMS: this._timeoutMS
});
return value;
}
/**
* A convenience method for creating an encrypted collection.
* This method will create data keys for any encryptedFields that do not have a `keyId` defined
* and then create a new collection with the full set of encryptedFields.
*
* @param db - A Node.js driver Db object with which to create the collection
* @param name - The name of the collection to be created
* @param options - Options for createDataKey and for createCollection
* @returns created collection and generated encryptedFields
* @throws MongoCryptCreateDataKeyError - If a createDataKey invocation fails part way through the process, the returned promise rejects with an error carrying the partial `encryptedFields` that were created.
* @throws MongoCryptCreateEncryptedCollectionError - If creating the collection fails, the returned promise rejects with an error carrying the full `encryptedFields` that were created.
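*
* @example
* An illustrative sketch; the database, collection, and field names are
* assumptions for the example, not requirements of this API:
* ```ts
* const { collection, encryptedFields } = await clientEncryption.createEncryptedCollection(
*   client.db('app'),
*   'users',
*   {
*     provider: 'local',
*     createCollectionOptions: {
*       encryptedFields: {
*         fields: [{ path: 'ssn', bsonType: 'string', keyId: null }]
*       }
*     }
*   }
* );
* // any field without a keyId has a data key created for it automatically
* ```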
*/
async createEncryptedCollection(db, name, options) {
const { provider, masterKey, createCollectionOptions: { encryptedFields: { ...encryptedFields }, ...createCollectionOptions } } = options;
const timeoutContext = this._timeoutMS != null
? timeout_1.TimeoutContext.create((0, utils_1.resolveTimeoutOptions)(this._client, { timeoutMS: this._timeoutMS }))
: undefined;
if (Array.isArray(encryptedFields.fields)) {
const createDataKeyPromises = encryptedFields.fields.map(async (field) => field == null || typeof field !== 'object' || field.keyId != null
? field
: {
...field,
keyId: await this.createDataKey(provider, {
masterKey,
// clone the timeoutContext
// in order to avoid sharing the same timeout for server selection and connection checkout across different concurrent operations
timeoutContext: timeoutContext?.csotEnabled() ? timeoutContext?.clone() : undefined
})
});
const createDataKeyResolutions = await Promise.allSettled(createDataKeyPromises);
encryptedFields.fields = createDataKeyResolutions.map((resolution, index) => resolution.status === 'fulfilled' ? resolution.value : encryptedFields.fields[index]);
const rejection = createDataKeyResolutions.find((result) => result.status === 'rejected');
if (rejection != null) {
throw new errors_1.MongoCryptCreateDataKeyError(encryptedFields, { cause: rejection.reason });
}
}
try {
const collection = await db.createCollection(name, {
...createCollectionOptions,
encryptedFields,
timeoutMS: timeoutContext?.csotEnabled()
? timeoutContext?.getRemainingTimeMSOrThrow()
: undefined
});
return { collection, encryptedFields };
}
catch (cause) {
throw new errors_1.MongoCryptCreateEncryptedCollectionError(encryptedFields, { cause });
}
}
/**
* Explicitly encrypt a provided value. Note that either `options.keyId` or `options.keyAltName` must
* be specified. Specifying both `options.keyId` and `options.keyAltName` is considered an error.
*
* @param value - The value that you wish to serialize. Must be of a type that can be serialized into BSON
* @param options -
* @returns a Promise that either resolves with the encrypted value, or rejects with an error.
*
* @example
* ```ts
* // Encryption with async/await api
* async function encryptMyData(value) {
* const keyId = await clientEncryption.createDataKey('local');
* return clientEncryption.encrypt(value, { keyId, algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic' });
* }
* ```
*
* @example
* ```ts
* // Encryption using a keyAltName
* async function encryptMyData(value) {
* await clientEncryption.createDataKey('local', { keyAltNames: 'mySpecialKey' });
* return clientEncryption.encrypt(value, { keyAltName: 'mySpecialKey', algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic' });
* }
* ```
*/
async encrypt(value, options) {
return await this._encrypt(value, false, options);
}
/**
* Encrypts a Match Expression or Aggregate Expression to query a range index.
*
* Only supported when queryType is "range" and algorithm is "Range".
*
* @param expression - a BSON document of one of the following forms:
* 1. A Match Expression of this form:
* `{$and: [{<field>: {$gt: <value1>}}, {<field>: {$lt: <value2> }}]}`
* 2. An Aggregate Expression of this form:
* `{$and: [{$gt: [<fieldpath>, <value1>]}, {$lt: [<fieldpath>, <value2>]}]}`
*
* `$gt` may also be `$gte`. `$lt` may also be `$lte`.
*
* @param options -
* @returns Returns a Promise that either resolves with the encrypted value or rejects with an error.
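*
* @example
* A sketch of encrypting a range Match Expression; the field name, bounds,
* and range option values below are assumptions for the example:
* ```ts
* const encryptedFilter = await clientEncryption.encryptExpression(
*   { $and: [{ salary: { $gt: 30000 } }, { salary: { $lt: 60000 } }] },
*   {
*     keyId,
*     algorithm: 'Range',
*     queryType: 'range',
*     contentionFactor: 0,
*     rangeOptions: { min: 0, max: 1000000, sparsity: new Long(1) } // Long from 'bson'
*   }
* );
* ```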
*/
async encryptExpression(expression, options) {
return await this._encrypt(expression, true, options);
}
/**
* Explicitly decrypt a provided encrypted value
*
* @param value - An encrypted value
* @returns a Promise that either resolves with the decrypted value, or rejects with an error
*
* @example
* ```ts
* // Decrypting value with async/await API
* async function decryptMyValue(value) {
* return clientEncryption.decrypt(value);
* }
* ```
*/
async decrypt(value) {
const valueBuffer = (0, bson_1.serialize)({ v: value });
const context = this._mongoCrypt.makeExplicitDecryptionContext(valueBuffer);
const stateMachine = new state_machine_1.StateMachine({
proxyOptions: this._proxyOptions,
tlsOptions: this._tlsOptions,
socketOptions: autoSelectSocketOptions(this._client.s.options)
});
const timeoutContext = this._timeoutMS != null
? timeout_1.TimeoutContext.create((0, utils_1.resolveTimeoutOptions)(this._client, { timeoutMS: this._timeoutMS }))
: undefined;
const { v } = (0, bson_1.deserialize)(await stateMachine.execute(this, context, { timeoutContext }));
return v;
}
/**
* @internal
* Ask the user for KMS credentials.
*
* This returns an object shaped like the original kmsProviders input
* option. It can be empty, and any provider specified here overrides
* the original ones.
*/
async askForKMSCredentials() {
return await (0, index_1.refreshKMSCredentials)(this._kmsProviders, this._credentialProviders);
}
static get libmongocryptVersion() {
return ClientEncryption.getMongoCrypt().libmongocryptVersion;
}
/**
* @internal
* A helper that performs explicit encryption of values and expressions.
* Explicitly encrypt a provided value. Note that either `options.keyId` or `options.keyAltName` must
* be specified. Specifying both `options.keyId` and `options.keyAltName` is considered an error.
*
* @param value - The value that you wish to encrypt. Must be of a type that can be serialized into BSON
* @param expressionMode - a boolean that indicates whether or not to encrypt the value as an expression
* @param options - options to pass to encrypt
* @returns the raw result of the call to stateMachine.execute(). When expressionMode is set to true, the return
* value will be a BSON document. When false, the value will be a BSON Binary.
*
*/
async _encrypt(value, expressionMode, options) {
const { algorithm, keyId, keyAltName, contentionFactor, queryType, rangeOptions, textOptions } = options;
const contextOptions = {
expressionMode,
algorithm
};
if (keyId) {
contextOptions.keyId = keyId.buffer;
}
if (keyAltName) {
if (keyId) {
throw new errors_1.MongoCryptInvalidArgumentError(`"options" cannot contain both "keyId" and "keyAltName"`);
}
if (typeof keyAltName !== 'string') {
throw new errors_1.MongoCryptInvalidArgumentError(`"options.keyAltName" must be of type string, but was of type ${typeof keyAltName}`);
}
contextOptions.keyAltName = (0, bson_1.serialize)({ keyAltName });
}
if (typeof contentionFactor === 'number' || typeof contentionFactor === 'bigint') {
contextOptions.contentionFactor = contentionFactor;
}
if (typeof queryType === 'string') {
contextOptions.queryType = queryType;
}
if (typeof rangeOptions === 'object') {
contextOptions.rangeOptions = (0, bson_1.serialize)(rangeOptions);
}
if (typeof textOptions === 'object') {
contextOptions.textOptions = (0, bson_1.serialize)(textOptions);
}
const valueBuffer = (0, bson_1.serialize)({ v: value });
const stateMachine = new state_machine_1.StateMachine({
proxyOptions: this._proxyOptions,
tlsOptions: this._tlsOptions,
socketOptions: autoSelectSocketOptions(this._client.s.options)
});
const context = this._mongoCrypt.makeExplicitEncryptionContext(valueBuffer, contextOptions);
const timeoutContext = this._timeoutMS != null
? timeout_1.TimeoutContext.create((0, utils_1.resolveTimeoutOptions)(this._client, { timeoutMS: this._timeoutMS }))
: undefined;
const { v } = (0, bson_1.deserialize)(await stateMachine.execute(this, context, { timeoutContext }));
return v;
}
}
exports.ClientEncryption = ClientEncryption;
/**
* Get the socket options from the client.
* @param baseOptions - The mongo client options.
* @returns ClientEncryptionSocketOptions
*/
function autoSelectSocketOptions(baseOptions) {
const options = { autoSelectFamily: true };
if ('autoSelectFamily' in baseOptions) {
options.autoSelectFamily = baseOptions.autoSelectFamily;
}
if ('autoSelectFamilyAttemptTimeout' in baseOptions) {
options.autoSelectFamilyAttemptTimeout = baseOptions.autoSelectFamilyAttemptTimeout;
}
return options;
}
//# sourceMappingURL=client_encryption.js.map

File diff suppressed because one or more lines are too long

@ -0,0 +1,138 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.MongoCryptKMSRequestNetworkTimeoutError = exports.MongoCryptAzureKMSRequestError = exports.MongoCryptCreateEncryptedCollectionError = exports.MongoCryptCreateDataKeyError = exports.MongoCryptInvalidArgumentError = exports.defaultErrorWrapper = exports.MongoCryptError = void 0;
const error_1 = require("../error");
/**
* @public
* An error indicating that something went wrong specifically with MongoDB Client Encryption
*/
class MongoCryptError extends error_1.MongoError {
/**
* **Do not use this constructor!**
*
* Meant for internal use only.
*
* @remarks
* This class is only meant to be constructed within the driver. This constructor is
* not subject to semantic versioning compatibility guarantees and may change at any time.
*
* @public
**/
constructor(message, options = {}) {
super(message, options);
}
get name() {
return 'MongoCryptError';
}
}
exports.MongoCryptError = MongoCryptError;
const defaultErrorWrapper = (error) => new MongoCryptError(error.message, { cause: error });
exports.defaultErrorWrapper = defaultErrorWrapper;
/**
* @public
*
* An error indicating an invalid argument was provided to an encryption API.
*/
class MongoCryptInvalidArgumentError extends MongoCryptError {
/**
* **Do not use this constructor!**
*
* Meant for internal use only.
*
* @remarks
* This class is only meant to be constructed within the driver. This constructor is
* not subject to semantic versioning compatibility guarantees and may change at any time.
*
* @public
**/
constructor(message) {
super(message);
}
get name() {
return 'MongoCryptInvalidArgumentError';
}
}
exports.MongoCryptInvalidArgumentError = MongoCryptInvalidArgumentError;
/**
* @public
* An error indicating that `ClientEncryption.createEncryptedCollection()` failed to create data keys
*/
class MongoCryptCreateDataKeyError extends MongoCryptError {
/**
* **Do not use this constructor!**
*
* Meant for internal use only.
*
* @remarks
* This class is only meant to be constructed within the driver. This constructor is
* not subject to semantic versioning compatibility guarantees and may change at any time.
*
* @public
**/
constructor(encryptedFields, { cause }) {
super(`Unable to complete creating data keys: ${cause.message}`, { cause });
this.encryptedFields = encryptedFields;
}
get name() {
return 'MongoCryptCreateDataKeyError';
}
}
exports.MongoCryptCreateDataKeyError = MongoCryptCreateDataKeyError;
/**
* @public
* An error indicating that `ClientEncryption.createEncryptedCollection()` failed to create a collection
*/
class MongoCryptCreateEncryptedCollectionError extends MongoCryptError {
/**
* **Do not use this constructor!**
*
* Meant for internal use only.
*
* @remarks
* This class is only meant to be constructed within the driver. This constructor is
* not subject to semantic versioning compatibility guarantees and may change at any time.
*
* @public
**/
constructor(encryptedFields, { cause }) {
super(`Unable to create collection: ${cause.message}`, { cause });
this.encryptedFields = encryptedFields;
}
get name() {
return 'MongoCryptCreateEncryptedCollectionError';
}
}
exports.MongoCryptCreateEncryptedCollectionError = MongoCryptCreateEncryptedCollectionError;
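/*
 * Illustrative sketch (not part of the driver source): both creation errors
 * expose the `encryptedFields` involved, so callers can inspect or clean up
 * any data keys that were created before the failure.
 *
 * ```ts
 * try {
 *   await clientEncryption.createEncryptedCollection(db, 'users', options);
 * } catch (error) {
 *   if (error.name === 'MongoCryptCreateDataKeyError' ||
 *       error.name === 'MongoCryptCreateEncryptedCollectionError') {
 *     console.log(error.encryptedFields); // includes any keyIds created so far
 *   }
 * }
 * ```
 */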
/**
* @public
* An error indicating that mongodb-client-encryption failed to auto-refresh Azure KMS credentials.
*/
class MongoCryptAzureKMSRequestError extends MongoCryptError {
/**
* **Do not use this constructor!**
*
* Meant for internal use only.
*
* @remarks
* This class is only meant to be constructed within the driver. This constructor is
* not subject to semantic versioning compatibility guarantees and may change at any time.
*
* @public
**/
constructor(message, body) {
super(message);
this.body = body;
}
get name() {
return 'MongoCryptAzureKMSRequestError';
}
}
exports.MongoCryptAzureKMSRequestError = MongoCryptAzureKMSRequestError;
/** @public */
class MongoCryptKMSRequestNetworkTimeoutError extends MongoCryptError {
get name() {
return 'MongoCryptKMSRequestNetworkTimeoutError';
}
}
exports.MongoCryptKMSRequestNetworkTimeoutError = MongoCryptKMSRequestNetworkTimeoutError;
//# sourceMappingURL=errors.js.map

@ -0,0 +1 @@
{"version":3,"file":"errors.js","sourceRoot":"","sources":["../../src/client-side-encryption/errors.ts"],"names":[],"mappings":";;;AACA,oCAAsC;AAEtC;;;GAGG;AACH,MAAa,eAAgB,SAAQ,kBAAU;IAC7C;;;;;;;;;;QAUI;IACJ,YAAY,OAAe,EAAE,UAA6B,EAAE;QAC1D,KAAK,CAAC,OAAO,EAAE,OAAO,CAAC,CAAC;IAC1B,CAAC;IAED,IAAa,IAAI;QACf,OAAO,iBAAiB,CAAC;IAC3B,CAAC;CACF;AAnBD,0CAmBC;AAEM,MAAM,mBAAmB,GAAG,CAAC,KAAY,EAAE,EAAE,CAClD,IAAI,eAAe,CAAC,KAAK,CAAC,OAAO,EAAE,EAAE,KAAK,EAAE,KAAK,EAAE,CAAC,CAAC;AAD1C,QAAA,mBAAmB,uBACuB;AAEvD;;;;GAIG;AACH,MAAa,8BAA+B,SAAQ,eAAe;IACjE;;;;;;;;;;QAUI;IACJ,YAAY,OAAe;QACzB,KAAK,CAAC,OAAO,CAAC,CAAC;IACjB,CAAC;IAED,IAAa,IAAI;QACf,OAAO,gCAAgC,CAAC;IAC1C,CAAC;CACF;AAnBD,wEAmBC;AACD;;;GAGG;AACH,MAAa,4BAA6B,SAAQ,eAAe;IAE/D;;;;;;;;;;QAUI;IACJ,YAAY,eAAyB,EAAE,EAAE,KAAK,EAAoB;QAChE,KAAK,CAAC,0CAA0C,KAAK,CAAC,OAAO,EAAE,EAAE,EAAE,KAAK,EAAE,CAAC,CAAC;QAC5E,IAAI,CAAC,eAAe,GAAG,eAAe,CAAC;IACzC,CAAC;IAED,IAAa,IAAI;QACf,OAAO,8BAA8B,CAAC;IACxC,CAAC;CACF;AArBD,oEAqBC;AAED;;;GAGG;AACH,MAAa,wCAAyC,SAAQ,eAAe;IAE3E;;;;;;;;;;QAUI;IACJ,YAAY,eAAyB,EAAE,EAAE,KAAK,EAAoB;QAChE,KAAK,CAAC,gCAAgC,KAAK,CAAC,OAAO,EAAE,EAAE,EAAE,KAAK,EAAE,CAAC,CAAC;QAClE,IAAI,CAAC,eAAe,GAAG,eAAe,CAAC;IACzC,CAAC;IAED,IAAa,IAAI;QACf,OAAO,0CAA0C,CAAC;IACpD,CAAC;CACF;AArBD,4FAqBC;AAED;;;GAGG;AACH,MAAa,8BAA+B,SAAQ,eAAe;IAGjE;;;;;;;;;;QAUI;IACJ,YAAY,OAAe,EAAE,IAAe;QAC1C,KAAK,CAAC,OAAO,CAAC,CAAC;QACf,IAAI,CAAC,IAAI,GAAG,IAAI,CAAC;IACnB,CAAC;IAED,IAAa,IAAI;QACf,OAAO,gCAAgC,CAAC;IAC1C,CAAC;CACF;AAtBD,wEAsBC;AAED,cAAc;AACd,MAAa,uCAAwC,SAAQ,eAAe;IAC1E,IAAa,IAAI;QACf,OAAO,yCAAyC,CAAC;IACnD,CAAC;CACF;AAJD,0FAIC"}

@ -0,0 +1,85 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.MongocryptdManager = void 0;
const error_1 = require("../error");
/**
* @internal
* An internal class that handles spawning a mongocryptd.
*/
class MongocryptdManager {
static { this.DEFAULT_MONGOCRYPTD_URI = 'mongodb://localhost:27020'; }
constructor(extraOptions = {}) {
this.spawnPath = '';
this.spawnArgs = [];
this.uri =
typeof extraOptions.mongocryptdURI === 'string' && extraOptions.mongocryptdURI.length > 0
? extraOptions.mongocryptdURI
: MongocryptdManager.DEFAULT_MONGOCRYPTD_URI;
this.bypassSpawn = !!extraOptions.mongocryptdBypassSpawn;
if (Object.hasOwn(extraOptions, 'mongocryptdSpawnPath') && extraOptions.mongocryptdSpawnPath) {
this.spawnPath = extraOptions.mongocryptdSpawnPath;
}
if (Object.hasOwn(extraOptions, 'mongocryptdSpawnArgs') &&
Array.isArray(extraOptions.mongocryptdSpawnArgs)) {
this.spawnArgs = this.spawnArgs.concat(extraOptions.mongocryptdSpawnArgs);
}
if (this.spawnArgs
.filter(arg => typeof arg === 'string')
.every(arg => arg.indexOf('--idleShutdownTimeoutSecs') < 0)) {
this.spawnArgs.push('--idleShutdownTimeoutSecs', '60');
}
}
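    /*
     * Illustrative sketch (values are assumptions for the example): the
     * constructor merges user-supplied extraOptions with defaults and always
     * ensures an idle shutdown timeout is present in the spawn args.
     *
     * ```ts
     * const manager = new MongocryptdManager({
     *   mongocryptdURI: 'mongodb://localhost:27021',
     *   mongocryptdSpawnPath: '/usr/local/bin/mongocryptd',
     *   mongocryptdSpawnArgs: ['--port=27021']
     * });
     * // manager.spawnArgs === ['--port=27021', '--idleShutdownTimeoutSecs', '60']
     * ```
     */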
/**
* Attempts to spawn a mongocryptd in a detached process and unrefs it so the
* child can outlive the parent. Connectivity is verified afterwards via server
* selection on the mongocryptd client, per the FLE spec comment below.
*/
async spawn() {
const cmdName = this.spawnPath || 'mongocryptd';
// eslint-disable-next-line @typescript-eslint/no-require-imports
const { spawn } = require('child_process');
// Spawned with stdio: ignore and detached: true
// to ensure child can outlive parent.
this._child = spawn(cmdName, this.spawnArgs, {
stdio: 'ignore',
detached: true
});
this._child.on('error', () => {
// From the FLE spec:
// "The stdout and stderr of the spawned process MUST not be exposed in the driver
// (e.g. redirect to /dev/null). Users can pass the argument --logpath to
// extraOptions.mongocryptdSpawnArgs if they need to inspect mongocryptd logs.
// If spawning is necessary, the driver MUST spawn mongocryptd whenever server
// selection on the MongoClient to mongocryptd fails. If the MongoClient fails to
// connect after spawning, the server selection error is propagated to the user."
// The AutoEncrypter and MongoCryptdManager should work together to spawn
// mongocryptd whenever necessary. Additionally, the `mongocryptd` intentionally
// shuts down after 60s and gets respawned when necessary. We rely on server
// selection timeouts when connecting to the `mongocryptd` to inform users that something
// has been configured incorrectly. For those reasons, we suppress stderr from
// the `mongocryptd` process and immediately unref the process.
});
// unref child to remove handle from event loop
this._child.unref();
}
/**
* Runs `fn`; on a server selection timeout (MongoNetworkTimeoutError, when
* spawning is not bypassed) it respawns the mongocryptd and retries `fn` once.
*
* @returns the result of `fn` or rejects with an error.
*/
async withRespawn(fn) {
try {
const result = await fn();
return result;
}
catch (err) {
// If we are not bypassing spawning, then we should retry once on a MongoNetworkTimeoutError (server selection error)
const shouldSpawn = err instanceof error_1.MongoNetworkTimeoutError && !this.bypassSpawn;
if (!shouldSpawn) {
throw err;
}
}
await this.spawn();
const result = await fn();
return result;
}
}
exports.MongocryptdManager = MongocryptdManager;
//# sourceMappingURL=mongocryptd_manager.js.map

@ -0,0 +1 @@
{"version":3,"file":"mongocryptd_manager.js","sourceRoot":"","sources":["../../src/client-side-encryption/mongocryptd_manager.ts"],"names":[],"mappings":";;;AAEA,oCAAoD;AAGpD;;;GAGG;AACH,MAAa,kBAAkB;aACtB,4BAAuB,GAAG,2BAA2B,AAA9B,CAA+B;IAQ7D,YAAY,eAA2C,EAAE;QAJzD,cAAS,GAAG,EAAE,CAAC;QACf,cAAS,GAAkB,EAAE,CAAC;QAI5B,IAAI,CAAC,GAAG;YACN,OAAO,YAAY,CAAC,cAAc,KAAK,QAAQ,IAAI,YAAY,CAAC,cAAc,CAAC,MAAM,GAAG,CAAC;gBACvF,CAAC,CAAC,YAAY,CAAC,cAAc;gBAC7B,CAAC,CAAC,kBAAkB,CAAC,uBAAuB,CAAC;QAEjD,IAAI,CAAC,WAAW,GAAG,CAAC,CAAC,YAAY,CAAC,sBAAsB,CAAC;QAEzD,IAAI,MAAM,CAAC,MAAM,CAAC,YAAY,EAAE,sBAAsB,CAAC,IAAI,YAAY,CAAC,oBAAoB,EAAE,CAAC;YAC7F,IAAI,CAAC,SAAS,GAAG,YAAY,CAAC,oBAAoB,CAAC;QACrD,CAAC;QACD,IACE,MAAM,CAAC,MAAM,CAAC,YAAY,EAAE,sBAAsB,CAAC;YACnD,KAAK,CAAC,OAAO,CAAC,YAAY,CAAC,oBAAoB,CAAC,EAChD,CAAC;YACD,IAAI,CAAC,SAAS,GAAG,IAAI,CAAC,SAAS,CAAC,MAAM,CAAC,YAAY,CAAC,oBAAoB,CAAC,CAAC;QAC5E,CAAC;QACD,IACE,IAAI,CAAC,SAAS;aACX,MAAM,CAAC,GAAG,CAAC,EAAE,CAAC,OAAO,GAAG,KAAK,QAAQ,CAAC;aACtC,KAAK,CAAC,GAAG,CAAC,EAAE,CAAC,GAAG,CAAC,OAAO,CAAC,2BAA2B,CAAC,GAAG,CAAC,CAAC,EAC7D,CAAC;YACD,IAAI,CAAC,SAAS,CAAC,IAAI,CAAC,2BAA2B,EAAE,IAAI,CAAC,CAAC;QACzD,CAAC;IACH,CAAC;IAED;;;OAGG;IACH,KAAK,CAAC,KAAK;QACT,MAAM,OAAO,GAAG,IAAI,CAAC,SAAS,IAAI,aAAa,CAAC;QAEhD,iEAAiE;QACjE,MAAM,EAAE,KAAK,EAAE,GAAG,OAAO,CAAC,eAAe,CAAmC,CAAC;QAE7E,gDAAgD;QAChD,sCAAsC;QACtC,IAAI,CAAC,MAAM,GAAG,KAAK,CAAC,OAAO,EAAE,IAAI,CAAC,SAAS,EAAE;YAC3C,KAAK,EAAE,QAAQ;YACf,QAAQ,EAAE,IAAI;SACf,CAAC,CAAC;QAEH,IAAI,CAAC,MAAM,CAAC,EAAE,CAAC,OAAO,EAAE,GAAG,EAAE;YAC3B,qBAAqB;YACrB,kFAAkF;YAClF,yEAAyE;YACzE,8EAA8E;YAC9E,8EAA8E;YAC9E,iFAAiF;YACjF,iFAAiF;YACjF,yEAAyE;YACzE,iFAAiF;YACjF,6EAA6E;YAC7E,yFAAyF;YACzF,+EAA+E;YAC/E,+DAA+D;QACjE,CAAC,CAAC,CAAC;QAEH,+CAA+C;QAC/C,IAAI,CAAC,MAAM,CAAC,KAAK,EAAE,CAAC;IACtB,CAAC;IAED;;OAEG;IACH,KAAK,CAAC,WAAW,CAAI,EAAoB;QACvC,IAAI,CAAC;YACH,MAAM,MAAM,GAAG,MAAM,EAAE,EAAE,CAAC;YAC1B,OAAO,MAAM,CAAC;QAChB,CAAC;QAAC,OAAO,GAAG,EAAE,CAAC;YACb,8GAA8G;YAC9G,MAAM,WAAW,GAAG,GAAG,YAAY,gCAAwB,IAAI,CAAC,IAAI,CAAC,WAAW,CAAC;YACjF,IAAI,CAAC,WAAW,EAAE,CAAC;gBACjB,MAAM,GAAG,CAAC;YACZ,CAAC;QACH,CAAC;QACD,MAAM,IAAI,CAAC,KAAK,EAAE,CAAC;QACnB,MAAM,MAAM,GAAG,MAAM,EAAE,EAAE,CAAC;QAC1B,OAAO,MAAM,CAAC;IAChB,CAAC;;AAzFH,gDA0FC"}

@ -0,0 +1,23 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.loadAWSCredentials = loadAWSCredentials;
const aws_temporary_credentials_1 = require("../../cmap/auth/aws_temporary_credentials");
/**
* @internal
*/
async function loadAWSCredentials(kmsProviders, provider) {
const credentialProvider = new aws_temporary_credentials_1.AWSSDKCredentialProvider(provider);
// We shouldn't ever receive a response from the AWS SDK that doesn't have a `SecretAccessKey`
// or `AccessKeyId`. However, TS says these fields are optional. We provide empty strings
// and let libmongocrypt error if we're unable to fetch the required keys.
const { SecretAccessKey = '', AccessKeyId = '', Token } = await credentialProvider.getCredentials();
const aws = {
secretAccessKey: SecretAccessKey,
accessKeyId: AccessKeyId
};
// the AWS session token is only required for temporary credentials so only attach it to the
// result if it's present in the response from the aws sdk
Token != null && (aws.sessionToken = Token);
return { ...kmsProviders, aws };
}
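/*
 * Illustrative sketch (not part of the driver source): an empty `aws` entry is
 * replaced with credentials fetched from the AWS SDK provider chain; the
 * session token is attached only when the SDK returns one.
 *
 * ```ts
 * const providers = await loadAWSCredentials({ aws: {} });
 * // => { aws: { accessKeyId, secretAccessKey, sessionToken? } }
 * ```
 */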
//# sourceMappingURL=aws.js.map

@ -0,0 +1 @@
{"version":3,"file":"aws.js","sourceRoot":"","sources":["../../../src/client-side-encryption/providers/aws.ts"],"names":[],"mappings":";;AASA,gDAuBC;AAhCD,yFAGmD;AAGnD;;GAEG;AACI,KAAK,UAAU,kBAAkB,CACtC,YAA0B,EAC1B,QAAgC;IAEhC,MAAM,kBAAkB,GAAG,IAAI,oDAAwB,CAAC,QAAQ,CAAC,CAAC;IAElE,8FAA8F;IAC9F,2FAA2F;IAC3F,0EAA0E;IAC1E,MAAM,EACJ,eAAe,GAAG,EAAE,EACpB,WAAW,GAAG,EAAE,EAChB,KAAK,EACN,GAAG,MAAM,kBAAkB,CAAC,cAAc,EAAE,CAAC;IAC9C,MAAM,GAAG,GAAqC;QAC5C,eAAe,EAAE,eAAe;QAChC,WAAW,EAAE,WAAW;KACzB,CAAC;IACF,4FAA4F;IAC5F,0DAA0D;IAC1D,KAAK,IAAI,IAAI,IAAI,CAAC,GAAG,CAAC,YAAY,GAAG,KAAK,CAAC,CAAC;IAE5C,OAAO,EAAE,GAAG,YAAY,EAAE,GAAG,EAAE,CAAC;AAClC,CAAC"}

@ -0,0 +1,132 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.tokenCache = exports.AzureCredentialCache = exports.AZURE_BASE_URL = void 0;
exports.addAzureParams = addAzureParams;
exports.prepareRequest = prepareRequest;
exports.fetchAzureKMSToken = fetchAzureKMSToken;
exports.loadAzureCredentials = loadAzureCredentials;
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const errors_1 = require("../errors");
const MINIMUM_TOKEN_REFRESH_IN_MILLISECONDS = 6000;
/** Base URL for getting Azure tokens. */
exports.AZURE_BASE_URL = 'http://169.254.169.254/metadata/identity/oauth2/token?';
/**
* @internal
*/
class AzureCredentialCache {
constructor() {
this.cachedToken = null;
}
async getToken() {
if (this.cachedToken == null || this.needsRefresh(this.cachedToken)) {
this.cachedToken = await this._getToken();
}
return { accessToken: this.cachedToken.accessToken };
}
needsRefresh(token) {
const timeUntilExpirationMS = token.expiresOnTimestamp - Date.now();
return timeUntilExpirationMS <= MINIMUM_TOKEN_REFRESH_IN_MILLISECONDS;
}
/**
* exposed for testing
*/
resetCache() {
this.cachedToken = null;
}
/**
* exposed for testing
*/
_getToken() {
return fetchAzureKMSToken();
}
}
exports.AzureCredentialCache = AzureCredentialCache;
/** @internal */
exports.tokenCache = new AzureCredentialCache();
/** @internal */
async function parseResponse(response) {
const { status, body: rawBody } = response;
const body = (() => {
try {
return JSON.parse(rawBody);
}
catch {
throw new errors_1.MongoCryptAzureKMSRequestError('Malformed JSON body in GET request.');
}
})();
if (status !== 200) {
throw new errors_1.MongoCryptAzureKMSRequestError('Unable to complete request.', body);
}
if (!body.access_token) {
throw new errors_1.MongoCryptAzureKMSRequestError('Malformed response body - missing field `access_token`.');
}
if (!body.expires_in) {
throw new errors_1.MongoCryptAzureKMSRequestError('Malformed response body - missing field `expires_in`.');
}
const expiresInMS = Number(body.expires_in) * 1000;
if (Number.isNaN(expiresInMS)) {
throw new errors_1.MongoCryptAzureKMSRequestError('Malformed response body - unable to parse int from `expires_in` field.');
}
return {
accessToken: body.access_token,
expiresOnTimestamp: Date.now() + expiresInMS
};
}
/**
* @internal
* Get the Azure endpoint URL.
*/
function addAzureParams(url, resource, username) {
url.searchParams.append('api-version', '2018-02-01');
url.searchParams.append('resource', resource);
if (username) {
url.searchParams.append('client_id', username);
}
return url;
}
/**
* @internal
*
* parses any options provided by prose tests to `fetchAzureKMSToken` and merges them with
* the default values for headers and the request url.
*/
function prepareRequest(options) {
const url = new URL(options.url?.toString() ?? exports.AZURE_BASE_URL);
addAzureParams(url, 'https://vault.azure.net');
const headers = { ...options.headers, 'Content-Type': 'application/json', Metadata: true };
return { headers, url };
}
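/*
 * Illustrative sketch, derived directly from the code above: with no overrides
 * the request targets the default Azure IMDS endpoint with the vault resource
 * and api-version appended.
 *
 * ```ts
 * const { url, headers } = prepareRequest({});
 * // url.href === 'http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net'
 * // headers include Metadata: true and 'Content-Type': 'application/json'
 * ```
 */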
/**
* @internal
*
* `AzureKMSRequestOptions` allows prose tests to modify the HTTP request sent to the IMDS
* servers. This is required to simulate different server conditions. No options are expected to
* be set outside of tests.
*
* exposed for CSFLE
* [prose test 18](https://github.com/mongodb/specifications/tree/master/source/client-side-encryption/tests#azure-imds-credentials)
*/
async function fetchAzureKMSToken(options = {}) {
const { headers, url } = prepareRequest(options);
try {
const response = await (0, utils_1.get)(url, { headers });
return await parseResponse(response);
}
catch (error) {
if (error instanceof error_1.MongoNetworkTimeoutError) {
throw new errors_1.MongoCryptAzureKMSRequestError(`[Azure KMS] ${error.message}`);
}
throw error;
}
}
/**
* @internal
*
* @throws Will reject with a `MongoCryptError` if the http request fails or the http response is malformed.
*/
async function loadAzureCredentials(kmsProviders) {
const azure = await exports.tokenCache.getToken();
return { ...kmsProviders, azure };
}
//# sourceMappingURL=azure.js.map

@ -0,0 +1 @@
{"version":3,"file":"azure.js","sourceRoot":"","sources":["../../../src/client-side-encryption/providers/azure.ts"],"names":[],"mappings":";;;AA0HA,wCAOC;AAQD,wCAQC;AAYD,gDAaC;AAOD,oDAGC;AAnLD,uCAAuD;AACvD,uCAAkC;AAClC,sCAA2D;AAG3D,MAAM,qCAAqC,GAAG,IAAI,CAAC;AACnD,yCAAyC;AAC5B,QAAA,cAAc,GAAG,wDAAwD,CAAC;AAkBvF;;GAEG;AACH,MAAa,oBAAoB;IAAjC;QACE,gBAAW,GAAgC,IAAI,CAAC;IA4BlD,CAAC;IA1BC,KAAK,CAAC,QAAQ;QACZ,IAAI,IAAI,CAAC,WAAW,IAAI,IAAI,IAAI,IAAI,CAAC,YAAY,CAAC,IAAI,CAAC,WAAW,CAAC,EAAE,CAAC;YACpE,IAAI,CAAC,WAAW,GAAG,MAAM,IAAI,CAAC,SAAS,EAAE,CAAC;QAC5C,CAAC;QAED,OAAO,EAAE,WAAW,EAAE,IAAI,CAAC,WAAW,CAAC,WAAW,EAAE,CAAC;IACvD,CAAC;IAED,YAAY,CAAC,KAA2B;QACtC,MAAM,qBAAqB,GAAG,KAAK,CAAC,kBAAkB,GAAG,IAAI,CAAC,GAAG,EAAE,CAAC;QACpE,OAAO,qBAAqB,IAAI,qCAAqC,CAAC;IACxE,CAAC;IAED;;OAEG;IACH,UAAU;QACR,IAAI,CAAC,WAAW,GAAG,IAAI,CAAC;IAC1B,CAAC;IAED;;OAEG;IACH,SAAS;QACP,OAAO,kBAAkB,EAAE,CAAC;IAC9B,CAAC;CACF;AA7BD,oDA6BC;AAED,gBAAgB;AACH,QAAA,UAAU,GAAG,IAAI,oBAAoB,EAAE,CAAC;AAErD,gBAAgB;AAChB,KAAK,UAAU,aAAa,CAAC,QAG5B;IACC,MAAM,EAAE,MAAM,EAAE,IAAI,EAAE,OAAO,EAAE,GAAG,QAAQ,CAAC;IAE3C,MAAM,IAAI,GAAmD,CAAC,GAAG,EAAE;QACjE,IAAI,CAAC;YACH,OAAO,IAAI,CAAC,KAAK,CAAC,OAAO,CAAC,CAAC;QAC7B,CAAC;QAAC,MAAM,CAAC;YACP,MAAM,IAAI,uCAA8B,CAAC,qCAAqC,CAAC,CAAC;QAClF,CAAC;IACH,CAAC,CAAC,EAAE,CAAC;IAEL,IAAI,MAAM,KAAK,GAAG,EAAE,CAAC;QACnB,MAAM,IAAI,uCAA8B,CAAC,6BAA6B,EAAE,IAAI,CAAC,CAAC;IAChF,CAAC;IAED,IAAI,CAAC,IAAI,CAAC,YAAY,EAAE,CAAC;QACvB,MAAM,IAAI,uCAA8B,CACtC,yDAAyD,CAC1D,CAAC;IACJ,CAAC;IAED,IAAI,CAAC,IAAI,CAAC,UAAU,EAAE,CAAC;QACrB,MAAM,IAAI,uCAA8B,CACtC,uDAAuD,CACxD,CAAC;IACJ,CAAC;IAED,MAAM,WAAW,GAAG,MAAM,CAAC,IAAI,CAAC,UAAU,CAAC,GAAG,IAAI,CAAC;IACnD,IAAI,MAAM,CAAC,KAAK,CAAC,WAAW,CAAC,EAAE,CAAC;QAC9B,MAAM,IAAI,uCAA8B,CACtC,wEAAwE,CACzE,CAAC;IACJ,CAAC;IAED,OAAO;QACL,WAAW,EAAE,IAAI,CAAC,YAAY;QAC9B,kBAAkB,EAAE,IAAI,CAAC,GAAG,EAAE,GAAG,WAAW;KAC7C,CAAC;AACJ,CAAC;AAaD;;;GAGG;AACH,SAAgB,cAAc,CAAC,GAAQ,EAAE,QAAgB,EAAE,QAAiB;IAC1E,GAAG,CAAC,YAAY,CAAC,MAAM,CAAC,aAAa,EAAE,YAAY,CAAC,CAAC;IACrD,GAAG,CAAC,YAAY,CAAC,MAAM,CAAC,UAAU,EAAE,QAAQ,CAAC,CAAC;IAC9C,IAAI,QAAQ,EAAE,CAAC;QACb,GAAG,CAAC,YAAY,CAAC,MAAM,CAAC,WAAW,EAAE,QAAQ,CAAC,CAAC;IACjD,CAAC;IACD,OAAO,GAAG,CAAC;AACb,CAAC;AAED;;;;;GAKG;AACH,SAAgB,cAAc,CAAC,OAA+B;IAI5D,MAAM,GAAG,GAAG,IAAI,GAAG,CAAC,OAAO,CAAC,GAAG,EAAE,QAAQ,EAAE,IAAI,sBAAc,CAAC,CAAC;IAC/D,cAAc,CAAC,GAAG,EAAE,yBAAyB,CAAC,CAAC;IAC/C,MAAM,OAAO,GAAG,EAAE,GAAG,OAAO,CAAC,OAAO,EAAE,cAAc,EAAE,kBAAkB,EAAE,QAAQ,EAAE,IAAI,EAAE,CAAC;IAC3F,OAAO,EAAE,OAAO,EAAE,GAAG,EAAE,CAAC;AAC1B,CAAC;AAED;;;;;;;;;GASG;AACI,KAAK,UAAU,kBAAkB,CACtC,UAAkC,EAAE;IAEpC,MAAM,EAAE,OAAO,EAAE,GAAG,EAAE,GAAG,cAAc,CAAC,OAAO,CAAC,CAAC;IACjD,IAAI,CAAC;QACH,MAAM,QAAQ,GAAG,MAAM,IAAA,WAAG,EAAC,GAAG,EAAE,EAAE,OAAO,EAAE,CAAC,CAAC;QAC7C,OAAO,MAAM,aAAa,CAAC,QAAQ,CAAC,CAAC;IACvC,CAAC;IAAC,OAAO,KAAK,EAAE,CAAC;QACf,IAAI,KAAK,YAAY,gCAAwB,EAAE,CAAC;YAC9C,MAAM,IAAI,uCAA8B,CAAC,eAAe,KAAK,CAAC,OAAO,EAAE,CAAC,CAAC;QAC3E,CAAC;QACD,MAAM,KAAK,CAAC;IACd,CAAC;AACH,CAAC;AAED;;;;GAIG;AACI,KAAK,UAAU,oBAAoB,CAAC,YAA0B;IACnE,MAAM,KAAK,GAAG,MAAM,kBAAU,CAAC,QAAQ,EAAE,CAAC;IAC1C,OAAO,EAAE,GAAG,YAAY,EAAE,KAAK,EAAE,CAAC;AACpC,CAAC"}

@ -0,0 +1,16 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.loadGCPCredentials = loadGCPCredentials;
const deps_1 = require("../../deps");
/** @internal */
async function loadGCPCredentials(kmsProviders) {
const gcpMetadata = (0, deps_1.getGcpMetadata)();
if ('kModuleError' in gcpMetadata) {
return kmsProviders;
}
const { access_token: accessToken } = await gcpMetadata.instance({
property: 'service-accounts/default/token'
});
return { ...kmsProviders, gcp: { accessToken } };
}
//# sourceMappingURL=gcp.js.map

@ -0,0 +1 @@
{"version":3,"file":"gcp.js","sourceRoot":"","sources":["../../../src/client-side-encryption/providers/gcp.ts"],"names":[],"mappings":";;AAIA,gDAWC;AAfD,qCAA4C;AAG5C,gBAAgB;AACT,KAAK,UAAU,kBAAkB,CAAC,YAA0B;IACjE,MAAM,WAAW,GAAG,IAAA,qBAAc,GAAE,CAAC;IAErC,IAAI,cAAc,IAAI,WAAW,EAAE,CAAC;QAClC,OAAO,YAAY,CAAC;IACtB,CAAC;IAED,MAAM,EAAE,YAAY,EAAE,WAAW,EAAE,GAAG,MAAM,WAAW,CAAC,QAAQ,CAA2B;QACzF,QAAQ,EAAE,gCAAgC;KAC3C,CAAC,CAAC;IACH,OAAO,EAAE,GAAG,YAAY,EAAE,GAAG,EAAE,EAAE,WAAW,EAAE,EAAE,CAAC;AACnD,CAAC"}

@ -0,0 +1,43 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.isEmptyCredentials = isEmptyCredentials;
exports.refreshKMSCredentials = refreshKMSCredentials;
const aws_1 = require("./aws");
const azure_1 = require("./azure");
const gcp_1 = require("./gcp");
/**
* Auto credential fetching should only occur when the provider is defined on the kmsProviders map
* and the settings are an empty object.
*
* This is distinct from a nullish provider key.
*
* @internal - exposed for testing purposes only
*/
function isEmptyCredentials(providerName, kmsProviders) {
const provider = kmsProviders[providerName];
if (provider == null) {
return false;
}
return typeof provider === 'object' && Object.keys(provider).length === 0;
}
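/*
 * Illustrative sketch of the distinction documented above, derived directly
 * from the code:
 *
 * ```ts
 * isEmptyCredentials('aws', { aws: {} });                   // true - request auto-fetch
 * isEmptyCredentials('aws', { aws: { accessKeyId: 'x' } }); // false - user supplied
 * isEmptyCredentials('aws', {});                            // false - provider not configured
 * ```
 */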
/**
* Load cloud provider credentials for the user provided KMS providers.
* Credentials will only attempt to get loaded if they do not exist
* and no existing credentials will get overwritten.
*
* @internal
*/
async function refreshKMSCredentials(kmsProviders, credentialProviders) {
let finalKMSProviders = kmsProviders;
if (isEmptyCredentials('aws', kmsProviders)) {
finalKMSProviders = await (0, aws_1.loadAWSCredentials)(finalKMSProviders, credentialProviders?.aws);
}
if (isEmptyCredentials('gcp', kmsProviders)) {
finalKMSProviders = await (0, gcp_1.loadGCPCredentials)(finalKMSProviders);
}
if (isEmptyCredentials('azure', kmsProviders)) {
finalKMSProviders = await (0, azure_1.loadAzureCredentials)(finalKMSProviders);
}
return finalKMSProviders;
}
//# sourceMappingURL=index.js.map

@ -0,0 +1 @@
{"version":3,"file":"index.js","sourceRoot":"","sources":["../../../src/client-side-encryption/providers/index.ts"],"names":[],"mappings":";;AA0KA,gDASC;AASD,sDAkBC;AA5MD,+BAA2C;AAC3C,mCAA+C;AAC/C,+BAA2C;AA8J3C;;;;;;;GAOG;AACH,SAAgB,kBAAkB,CAChC,YAA6C,EAC7C,YAA0B;IAE1B,MAAM,QAAQ,GAAG,YAAY,CAAC,YAAY,CAAC,CAAC;IAC5C,IAAI,QAAQ,IAAI,IAAI,EAAE,CAAC;QACrB,OAAO,KAAK,CAAC;IACf,CAAC;IACD,OAAO,OAAO,QAAQ,KAAK,QAAQ,IAAI,MAAM,CAAC,IAAI,CAAC,QAAQ,CAAC,CAAC,MAAM,KAAK,CAAC,CAAC;AAC5E,CAAC;AAED;;;;;;GAMG;AACI,KAAK,UAAU,qBAAqB,CACzC,YAA0B,EAC1B,mBAAyC;IAEzC,IAAI,iBAAiB,GAAG,YAAY,CAAC;IAErC,IAAI,kBAAkB,CAAC,KAAK,EAAE,YAAY,CAAC,EAAE,CAAC;QAC5C,iBAAiB,GAAG,MAAM,IAAA,wBAAkB,EAAC,iBAAiB,EAAE,mBAAmB,EAAE,GAAG,CAAC,CAAC;IAC5F,CAAC;IAED,IAAI,kBAAkB,CAAC,KAAK,EAAE,YAAY,CAAC,EAAE,CAAC;QAC5C,iBAAiB,GAAG,MAAM,IAAA,wBAAkB,EAAC,iBAAiB,CAAC,CAAC;IAClE,CAAC;IAED,IAAI,kBAAkB,CAAC,OAAO,EAAE,YAAY,CAAC,EAAE,CAAC;QAC9C,iBAAiB,GAAG,MAAM,IAAA,4BAAoB,EAAC,iBAAiB,CAAC,CAAC;IACpE,CAAC;IACD,OAAO,iBAAiB,CAAC;AAC3B,CAAC"}

@ -0,0 +1,426 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.StateMachine = void 0;
const fs = require("fs/promises");
const net = require("net");
const tls = require("tls");
const bson_1 = require("../bson");
const abstract_cursor_1 = require("../cursor/abstract_cursor");
const deps_1 = require("../deps");
const error_1 = require("../error");
const timeout_1 = require("../timeout");
const utils_1 = require("../utils");
const client_encryption_1 = require("./client_encryption");
const errors_1 = require("./errors");
let socks = null;
function loadSocks() {
if (socks == null) {
const socksImport = (0, deps_1.getSocks)();
if ('kModuleError' in socksImport) {
throw socksImport.kModuleError;
}
socks = socksImport;
}
return socks;
}
// libmongocrypt states
const MONGOCRYPT_CTX_ERROR = 0;
const MONGOCRYPT_CTX_NEED_MONGO_COLLINFO = 1;
const MONGOCRYPT_CTX_NEED_MONGO_MARKINGS = 2;
const MONGOCRYPT_CTX_NEED_MONGO_KEYS = 3;
const MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS = 7;
const MONGOCRYPT_CTX_NEED_KMS = 4;
const MONGOCRYPT_CTX_READY = 5;
const MONGOCRYPT_CTX_DONE = 6;
const HTTPS_PORT = 443;
const stateToString = new Map([
[MONGOCRYPT_CTX_ERROR, 'MONGOCRYPT_CTX_ERROR'],
[MONGOCRYPT_CTX_NEED_MONGO_COLLINFO, 'MONGOCRYPT_CTX_NEED_MONGO_COLLINFO'],
[MONGOCRYPT_CTX_NEED_MONGO_MARKINGS, 'MONGOCRYPT_CTX_NEED_MONGO_MARKINGS'],
[MONGOCRYPT_CTX_NEED_MONGO_KEYS, 'MONGOCRYPT_CTX_NEED_MONGO_KEYS'],
[MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS, 'MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS'],
[MONGOCRYPT_CTX_NEED_KMS, 'MONGOCRYPT_CTX_NEED_KMS'],
[MONGOCRYPT_CTX_READY, 'MONGOCRYPT_CTX_READY'],
[MONGOCRYPT_CTX_DONE, 'MONGOCRYPT_CTX_DONE']
]);
const INSECURE_TLS_OPTIONS = [
'tlsInsecure',
'tlsAllowInvalidCertificates',
'tlsAllowInvalidHostnames'
];
/**
* Helper function for logging. Enabled by setting the environment flag MONGODB_CRYPT_DEBUG.
* @param msg - Anything you want to be logged.
*/
function debug(msg) {
if (process.env.MONGODB_CRYPT_DEBUG) {
// eslint-disable-next-line no-console
console.error(msg);
}
}
/**
* This is kind of a hack. For `rewrapManyDataKey`, we have tests that
* guarantee that when there are no matching keys, `rewrapManyDataKey` returns
* nothing. We also have tests for auto encryption that guarantee for `encrypt`
* we return an error when there are no matching keys. This error is generated in
* subsequent iterations of the state machine.
* Some APIs (`encrypt`) throw if there are no filter matches and others (`rewrapManyDataKey`)
* do not. We set the result manually here, and let the state machine continue. `libmongocrypt`
* will inform us if we need to error by setting the state to `MONGOCRYPT_CTX_ERROR` but
* otherwise we'll return `{ v: [] }`.
*/
let EMPTY_V;
/**
* @internal
* An internal class that executes across a MongoCryptContext until either
* a finishing state or an error is reached. Do not instantiate directly.
*/
// TODO(DRIVERS-2671): clarify CSOT behavior for FLE APIs
class StateMachine {
constructor(options, bsonOptions = (0, bson_1.pluckBSONSerializeOptions)(options)) {
this.options = options;
this.bsonOptions = bsonOptions;
}
/**
* Executes the state machine according to the specification
*/
async execute(executor, context, options) {
const keyVaultNamespace = executor._keyVaultNamespace;
const keyVaultClient = executor._keyVaultClient;
const metaDataClient = executor._metaDataClient;
const mongocryptdClient = executor._mongocryptdClient;
const mongocryptdManager = executor._mongocryptdManager;
let result = null;
        // TypeScript treats getters just like properties: once you've tested one for equality
        // it cannot change - which is exactly the opposite of how we use state and status.
// Every call to at least `addMongoOperationResponse` and `finalize` can change the state.
// These wrappers let us write code more naturally and not add compiler exceptions
// to conditions checks inside the state machine.
const getStatus = () => context.status;
const getState = () => context.state;
while (getState() !== MONGOCRYPT_CTX_DONE && getState() !== MONGOCRYPT_CTX_ERROR) {
options.signal?.throwIfAborted();
debug(`[context#${context.id}] ${stateToString.get(getState()) || getState()}`);
switch (getState()) {
case MONGOCRYPT_CTX_NEED_MONGO_COLLINFO: {
const filter = (0, bson_1.deserialize)(context.nextMongoOperation());
if (!metaDataClient) {
throw new errors_1.MongoCryptError('unreachable state machine state: entered MONGOCRYPT_CTX_NEED_MONGO_COLLINFO but metadata client is undefined');
}
const collInfoCursor = this.fetchCollectionInfo(metaDataClient, context.ns, filter, options);
for await (const collInfo of collInfoCursor) {
context.addMongoOperationResponse((0, bson_1.serialize)(collInfo));
if (getState() === MONGOCRYPT_CTX_ERROR)
break;
}
if (getState() === MONGOCRYPT_CTX_ERROR)
break;
context.finishMongoOperation();
break;
}
case MONGOCRYPT_CTX_NEED_MONGO_MARKINGS: {
const command = context.nextMongoOperation();
if (getState() === MONGOCRYPT_CTX_ERROR)
break;
if (!mongocryptdClient) {
throw new errors_1.MongoCryptError('unreachable state machine state: entered MONGOCRYPT_CTX_NEED_MONGO_MARKINGS but mongocryptdClient is undefined');
}
// When we are using the shared library, we don't have a mongocryptd manager.
const markedCommand = mongocryptdManager
? await mongocryptdManager.withRespawn(this.markCommand.bind(this, mongocryptdClient, context.ns, command, options))
: await this.markCommand(mongocryptdClient, context.ns, command, options);
context.addMongoOperationResponse(markedCommand);
context.finishMongoOperation();
break;
}
case MONGOCRYPT_CTX_NEED_MONGO_KEYS: {
const filter = context.nextMongoOperation();
const keys = await this.fetchKeys(keyVaultClient, keyVaultNamespace, filter, options);
if (keys.length === 0) {
// See docs on EMPTY_V
result = EMPTY_V ??= (0, bson_1.serialize)({ v: [] });
}
for (const key of keys) {
context.addMongoOperationResponse((0, bson_1.serialize)(key));
}
context.finishMongoOperation();
break;
}
case MONGOCRYPT_CTX_NEED_KMS_CREDENTIALS: {
const kmsProviders = await executor.askForKMSCredentials();
context.provideKMSProviders((0, bson_1.serialize)(kmsProviders));
break;
}
case MONGOCRYPT_CTX_NEED_KMS: {
await Promise.all(this.requests(context, options));
context.finishKMSRequests();
break;
}
case MONGOCRYPT_CTX_READY: {
const finalizedContext = context.finalize();
if (getState() === MONGOCRYPT_CTX_ERROR) {
const message = getStatus().message || 'Finalization error';
throw new errors_1.MongoCryptError(message);
}
result = finalizedContext;
break;
}
default:
throw new errors_1.MongoCryptError(`Unknown state: ${getState()}`);
}
}
if (getState() === MONGOCRYPT_CTX_ERROR || result == null) {
const message = getStatus().message;
if (!message) {
debug(`unidentifiable error in MongoCrypt - received an error status from \`libmongocrypt\` but received no error message.`);
}
throw new errors_1.MongoCryptError(message ??
'unidentifiable error in MongoCrypt - received an error status from `libmongocrypt` but received no error message.');
}
return result;
}
/**
* Handles the request to the KMS service. Exposed for testing purposes. Do not directly invoke.
* @param kmsContext - A C++ KMS context returned from the bindings
* @returns A promise that resolves when the KMS reply has been fully parsed
*/
async kmsRequest(request, options) {
const parsedUrl = request.endpoint.split(':');
const port = parsedUrl[1] != null ? Number.parseInt(parsedUrl[1], 10) : HTTPS_PORT;
const socketOptions = {
host: parsedUrl[0],
servername: parsedUrl[0],
port,
...(0, client_encryption_1.autoSelectSocketOptions)(this.options.socketOptions || {})
};
const message = request.message;
const buffer = new utils_1.BufferPool();
let netSocket;
let socket;
function destroySockets() {
for (const sock of [socket, netSocket]) {
if (sock) {
sock.destroy();
}
}
}
function onerror(cause) {
return new errors_1.MongoCryptError('KMS request failed', { cause });
}
function onclose() {
return new errors_1.MongoCryptError('KMS request closed');
}
const tlsOptions = this.options.tlsOptions;
if (tlsOptions) {
const kmsProvider = request.kmsProvider;
const providerTlsOptions = tlsOptions[kmsProvider];
if (providerTlsOptions) {
const error = this.validateTlsOptions(kmsProvider, providerTlsOptions);
if (error) {
throw error;
}
try {
await this.setTlsOptions(providerTlsOptions, socketOptions);
}
catch (err) {
throw onerror(err);
}
}
}
let abortListener;
try {
if (this.options.proxyOptions && this.options.proxyOptions.proxyHost) {
netSocket = new net.Socket();
const { promise: willConnect, reject: rejectOnNetSocketError, resolve: resolveOnNetSocketConnect } = (0, utils_1.promiseWithResolvers)();
netSocket
.once('error', err => rejectOnNetSocketError(onerror(err)))
.once('close', () => rejectOnNetSocketError(onclose()))
.once('connect', () => resolveOnNetSocketConnect());
const netSocketOptions = {
...socketOptions,
host: this.options.proxyOptions.proxyHost,
port: this.options.proxyOptions.proxyPort || 1080
};
netSocket.connect(netSocketOptions);
await willConnect;
try {
socks ??= loadSocks();
socketOptions.socket = (await socks.SocksClient.createConnection({
existing_socket: netSocket,
command: 'connect',
destination: { host: socketOptions.host, port: socketOptions.port },
proxy: {
// host and port are ignored because we pass existing_socket
host: 'iLoveJavaScript',
port: 0,
type: 5,
userId: this.options.proxyOptions.proxyUsername,
password: this.options.proxyOptions.proxyPassword
}
})).socket;
}
catch (err) {
throw onerror(err);
}
}
socket = tls.connect(socketOptions, () => {
socket.write(message);
});
const { promise: willResolveKmsRequest, reject: rejectOnTlsSocketError, resolve } = (0, utils_1.promiseWithResolvers)();
abortListener = (0, utils_1.addAbortListener)(options?.signal, function () {
destroySockets();
rejectOnTlsSocketError(this.reason);
});
socket
.once('error', err => rejectOnTlsSocketError(onerror(err)))
.once('close', () => rejectOnTlsSocketError(onclose()))
.on('data', data => {
buffer.append(data);
while (request.bytesNeeded > 0 && buffer.length) {
const bytesNeeded = Math.min(request.bytesNeeded, buffer.length);
request.addResponse(buffer.read(bytesNeeded));
}
if (request.bytesNeeded <= 0) {
resolve();
}
});
await (options?.timeoutContext?.csotEnabled()
? Promise.all([
willResolveKmsRequest,
timeout_1.Timeout.expires(options.timeoutContext?.remainingTimeMS)
])
: willResolveKmsRequest);
}
catch (error) {
if (error instanceof timeout_1.TimeoutError)
throw new error_1.MongoOperationTimeoutError('KMS request timed out');
throw error;
}
finally {
// There's no need for any more activity on this socket at this point.
destroySockets();
abortListener?.[utils_1.kDispose]();
}
}
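    /**
     * Drains all outstanding KMS requests from the context, yielding one
     * in-flight `kmsRequest` promise per request so the caller can await them
     * together (e.g. with `Promise.all`).
     */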
*requests(context, options) {
for (let request = context.nextKMSRequest(); request != null; request = context.nextKMSRequest()) {
yield this.kmsRequest(request, options);
}
}
/**
* Validates the provided TLS options are secure.
*
* @param kmsProvider - The KMS provider name.
* @param tlsOptions - The client TLS options for the provider.
*
* @returns An error if any option is invalid.
*/
validateTlsOptions(kmsProvider, tlsOptions) {
const tlsOptionNames = Object.keys(tlsOptions);
for (const option of INSECURE_TLS_OPTIONS) {
if (tlsOptionNames.includes(option)) {
return new errors_1.MongoCryptError(`Insecure TLS options prohibited for ${kmsProvider}: ${option}`);
}
}
}
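    /*
     * Illustrative sketch: any of the insecure options listed in
     * INSECURE_TLS_OPTIONS is rejected per provider.
     *
     * ```ts
     * stateMachine.validateTlsOptions('aws', { tlsAllowInvalidCertificates: true });
     * // => MongoCryptError('Insecure TLS options prohibited for aws: tlsAllowInvalidCertificates')
     * ```
     */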
/**
* Sets only the valid secure TLS options.
*
* @param tlsOptions - The client TLS options for the provider.
* @param options - The existing connection options.
*/
async setTlsOptions(tlsOptions, options) {
// If a secureContext is provided, ensure it is set.
if (tlsOptions.secureContext) {
options.secureContext = tlsOptions.secureContext;
}
if (tlsOptions.tlsCertificateKeyFile) {
const cert = await fs.readFile(tlsOptions.tlsCertificateKeyFile);
options.cert = options.key = cert;
}
if (tlsOptions.tlsCAFile) {
options.ca = await fs.readFile(tlsOptions.tlsCAFile);
}
if (tlsOptions.tlsCertificateKeyFilePassword) {
options.passphrase = tlsOptions.tlsCertificateKeyFilePassword;
}
}
/**
* Fetches collection info for a provided namespace, when libmongocrypt
* enters the `MONGOCRYPT_CTX_NEED_MONGO_COLLINFO` state. The result is
* used to inform libmongocrypt of the schema associated with this
* namespace. Exposed for testing purposes. Do not directly invoke.
*
* @param client - A MongoClient connected to the topology
* @param ns - The namespace to list collections from
* @param filter - A filter for the listCollections command
* @returns a cursor over the info of the requested collection
*/
fetchCollectionInfo(client, ns, filter, options) {
const { db } = utils_1.MongoDBCollectionNamespace.fromString(ns);
const cursor = client.db(db).listCollections(filter, {
promoteLongs: false,
promoteValues: false,
timeoutContext: options?.timeoutContext && new abstract_cursor_1.CursorTimeoutContext(options?.timeoutContext, Symbol()),
signal: options?.signal,
nameOnly: false
});
return cursor;
}
/**
* Calls to the mongocryptd to provide markings for a command.
* Exposed for testing purposes. Do not directly invoke.
* @param client - A MongoClient connected to a mongocryptd
* @param ns - The namespace (database.collection) the command is being executed on
* @param command - The command to execute.
* @returns a promise resolving to the serialized and marked BSON command
*/
async markCommand(client, ns, command, options) {
const { db } = utils_1.MongoDBCollectionNamespace.fromString(ns);
const bsonOptions = { promoteLongs: false, promoteValues: false };
const rawCommand = (0, bson_1.deserialize)(command, bsonOptions);
const commandOptions = {
timeoutMS: undefined,
signal: undefined
};
if (options?.timeoutContext?.csotEnabled()) {
commandOptions.timeoutMS = options.timeoutContext.remainingTimeMS;
}
if (options?.signal) {
commandOptions.signal = options.signal;
}
const response = await client.db(db).command(rawCommand, {
...bsonOptions,
...commandOptions
});
return (0, bson_1.serialize)(response, this.bsonOptions);
}
/**
* Requests keys from the keyVault collection on the topology.
* Exposed for testing purposes. Do not directly invoke.
* @param client - A MongoClient connected to the topology
* @param keyVaultNamespace - The namespace (database.collection) of the keyVault Collection
* @param filter - The filter for the find query against the keyVault Collection
* @returns a promise resolving to the found keys
*/
fetchKeys(client, keyVaultNamespace, filter, options) {
const { db: dbName, collection: collectionName } = utils_1.MongoDBCollectionNamespace.fromString(keyVaultNamespace);
const commandOptions = {
timeoutContext: undefined,
signal: undefined
};
if (options?.timeoutContext != null) {
commandOptions.timeoutContext = new abstract_cursor_1.CursorTimeoutContext(options.timeoutContext, Symbol());
}
if (options?.signal != null) {
commandOptions.signal = options.signal;
}
return client
.db(dbName)
.collection(collectionName, { readConcern: { level: 'majority' } })
.find((0, bson_1.deserialize)(filter), commandOptions)
.toArray();
}
}
exports.StateMachine = StateMachine;
//# sourceMappingURL=state_machine.js.map

File diff suppressed because one or more lines are too long

51
node_modules/mongodb/lib/cmap/auth/auth_provider.js generated vendored Normal file

@ -0,0 +1,51 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.AuthProvider = exports.AuthContext = void 0;
const error_1 = require("../../error");
/**
* Context used during authentication
* @internal
*/
class AuthContext {
constructor(connection, credentials, options) {
/** If the context is for reauthentication. */
this.reauthenticating = false;
this.connection = connection;
this.credentials = credentials;
this.options = options;
}
}
exports.AuthContext = AuthContext;
/**
* Provider used during authentication.
* @internal
*/
class AuthProvider {
/**
* Prepare the handshake document before the initial handshake.
*
* @param handshakeDoc - The document used for the initial handshake on a connection
* @param authContext - Context for authentication flow
*/
async prepare(handshakeDoc, _authContext) {
return handshakeDoc;
}
/**
* Reauthenticate.
* @param context - The shared auth context.
*/
async reauth(context) {
if (context.reauthenticating) {
throw new error_1.MongoRuntimeError('Reauthentication already in progress.');
}
try {
context.reauthenticating = true;
await this.auth(context);
}
finally {
context.reauthenticating = false;
}
}
}
exports.AuthProvider = AuthProvider;
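/*
 * Illustrative sketch (hypothetical subclass, not part of the driver source):
 * concrete providers implement `auth`, while the base class's `reauth` guards
 * against re-entrant reauthentication.
 *
 * ```ts
 * class ExampleProvider extends AuthProvider {
 *   async auth(context) {
 *     // send the mechanism-specific handshake over context.connection
 *   }
 * }
 * ```
 */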
//# sourceMappingURL=auth_provider.js.map

@ -0,0 +1 @@
{"version":3,"file":"auth_provider.js","sourceRoot":"","sources":["../../../src/cmap/auth/auth_provider.ts"],"names":[],"mappings":";;;AACA,uCAAgD;AAKhD;;;GAGG;AACH,MAAa,WAAW;IAetB,YACE,UAAsB,EACtB,WAAyC,EACzC,OAA0B;QAb5B,8CAA8C;QAC9C,qBAAgB,GAAG,KAAK,CAAC;QAcvB,IAAI,CAAC,UAAU,GAAG,UAAU,CAAC;QAC7B,IAAI,CAAC,WAAW,GAAG,WAAW,CAAC;QAC/B,IAAI,CAAC,OAAO,GAAG,OAAO,CAAC;IACzB,CAAC;CACF;AAxBD,kCAwBC;AAED;;;GAGG;AACH,MAAsB,YAAY;IAChC;;;;;OAKG;IACH,KAAK,CAAC,OAAO,CACX,YAA+B,EAC/B,YAAyB;QAEzB,OAAO,YAAY,CAAC;IACtB,CAAC;IASD;;;OAGG;IACH,KAAK,CAAC,MAAM,CAAC,OAAoB;QAC/B,IAAI,OAAO,CAAC,gBAAgB,EAAE,CAAC;YAC7B,MAAM,IAAI,yBAAiB,CAAC,uCAAuC,CAAC,CAAC;QACvE,CAAC;QACD,IAAI,CAAC;YACH,OAAO,CAAC,gBAAgB,GAAG,IAAI,CAAC;YAChC,MAAM,IAAI,CAAC,IAAI,CAAC,OAAO,CAAC,CAAC;QAC3B,CAAC;gBAAS,CAAC;YACT,OAAO,CAAC,gBAAgB,GAAG,KAAK,CAAC;QACnC,CAAC;IACH,CAAC;CACF;AApCD,oCAoCC"}

@ -0,0 +1,102 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.AWSSDKCredentialProvider = void 0;
const deps_1 = require("../../deps");
const error_1 = require("../../error");
/** @internal */
class AWSSDKCredentialProvider {
/**
* Create the SDK credentials provider.
* @param credentialsProvider - The credentials provider.
*/
constructor(credentialsProvider) {
if (credentialsProvider) {
this._provider = credentialsProvider;
}
}
static get awsSDK() {
AWSSDKCredentialProvider._awsSDK ??= (0, deps_1.getAwsCredentialProvider)();
return AWSSDKCredentialProvider._awsSDK;
}
/**
* The AWS SDK caches credentials automatically and handles refresh when the credentials have expired.
* To ensure this occurs, we need to cache the `provider` returned by the AWS sdk and re-use it when fetching credentials.
*/
get provider() {
if ('kModuleError' in AWSSDKCredentialProvider.awsSDK) {
throw AWSSDKCredentialProvider.awsSDK.kModuleError;
}
if (this._provider) {
return this._provider;
}
let { AWS_STS_REGIONAL_ENDPOINTS = '', AWS_REGION = '' } = process.env;
AWS_STS_REGIONAL_ENDPOINTS = AWS_STS_REGIONAL_ENDPOINTS.toLowerCase();
AWS_REGION = AWS_REGION.toLowerCase();
/** The option setting should work only for users who have explicit settings in their environment, the driver should not encode "defaults" */
const awsRegionSettingsExist = AWS_REGION.length !== 0 && AWS_STS_REGIONAL_ENDPOINTS.length !== 0;
/**
* The following regions use the global AWS STS endpoint, sts.amazonaws.com, by default
* https://docs.aws.amazon.com/sdkref/latest/guide/feature-sts-regionalized-endpoints.html
*/
const LEGACY_REGIONS = new Set([
'ap-northeast-1',
'ap-south-1',
'ap-southeast-1',
'ap-southeast-2',
'aws-global',
'ca-central-1',
'eu-central-1',
'eu-north-1',
'eu-west-1',
'eu-west-2',
'eu-west-3',
'sa-east-1',
'us-east-1',
'us-east-2',
'us-west-1',
'us-west-2'
]);
/**
* If AWS_STS_REGIONAL_ENDPOINTS is set to regional, users are opting into the new behavior of respecting the region settings
*
* If AWS_STS_REGIONAL_ENDPOINTS is set to legacy, then "old" regions need to keep using the global setting.
* Technically the SDK gets this wrong: it reaches out to 'sts.us-east-1.amazonaws.com' when it should be 'sts.amazonaws.com'.
* That is not our bug to fix here. We leave that up to the SDK.
*/
const useRegionalSts = AWS_STS_REGIONAL_ENDPOINTS === 'regional' ||
(AWS_STS_REGIONAL_ENDPOINTS === 'legacy' && !LEGACY_REGIONS.has(AWS_REGION));
this._provider =
awsRegionSettingsExist && useRegionalSts
? AWSSDKCredentialProvider.awsSDK.fromNodeProviderChain({
clientConfig: { region: AWS_REGION }
})
: AWSSDKCredentialProvider.awsSDK.fromNodeProviderChain();
return this._provider;
}
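    /*
     * Illustrative sketch of the region selection above, derived from the
     * logic in this getter (the region values are example assumptions):
     *
     * ```ts
     * // AWS_STS_REGIONAL_ENDPOINTS=regional, AWS_REGION=ap-south-1
     * //   -> fromNodeProviderChain({ clientConfig: { region: 'ap-south-1' } })
     * // AWS_STS_REGIONAL_ENDPOINTS=legacy, AWS_REGION=ap-south-1
     * //   -> fromNodeProviderChain()  // legacy region keeps the global STS endpoint
     * // neither variable set -> fromNodeProviderChain()
     * ```
     */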
async getCredentials() {
/*
* Creates a credential provider that will attempt to find credentials from the
* following sources (listed in order of precedence):
*
* - Environment variables exposed via process.env
* - SSO credentials from token cache
* - Web identity token credentials
* - Shared credentials and config ini files
* - The EC2/ECS Instance Metadata Service
*/
try {
const creds = await this.provider();
return {
AccessKeyId: creds.accessKeyId,
SecretAccessKey: creds.secretAccessKey,
Token: creds.sessionToken,
Expiration: creds.expiration
};
}
catch (error) {
throw new error_1.MongoAWSError(error.message, { cause: error });
}
}
}
exports.AWSSDKCredentialProvider = AWSSDKCredentialProvider;
//# sourceMappingURL=aws_temporary_credentials.js.map
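
A minimal sketch of the regional-STS decision implemented in the `provider` getter above, restated as a standalone helper. `decideRegionalSts` is hypothetical (not a driver API); the environment-variable names and the LEGACY_REGIONS semantics come from the code above.

// Hypothetical re-statement of the decision in `get provider()` above.
const LEGACY_REGIONS = new Set(['us-east-1', 'eu-west-1' /* ...full list above */]);

function decideRegionalSts(env) {
  const endpoints = (env.AWS_STS_REGIONAL_ENDPOINTS ?? '').toLowerCase();
  const region = (env.AWS_REGION ?? '').toLowerCase();
  // Both variables must be explicitly set; the driver never encodes defaults.
  const settingsExist = region.length !== 0 && endpoints.length !== 0;
  const useRegionalSts =
    endpoints === 'regional' || (endpoints === 'legacy' && !LEGACY_REGIONS.has(region));
  // true => fromNodeProviderChain({ clientConfig: { region } }); false => fromNodeProviderChain()
  return settingsExist && useRegionalSts;
}

console.log(decideRegionalSts({ AWS_STS_REGIONAL_ENDPOINTS: 'regional', AWS_REGION: 'eu-central-1' })); // true
console.log(decideRegionalSts({ AWS_STS_REGIONAL_ENDPOINTS: 'legacy', AWS_REGION: 'us-east-1' }));      // false (legacy region)
console.log(decideRegionalSts({}));                                                                     // false (nothing explicit)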

1
node_modules/mongodb/lib/cmap/auth/aws_temporary_credentials.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"aws_temporary_credentials.js","sourceRoot":"","sources":["../../../src/cmap/auth/aws_temporary_credentials.ts"],"names":[],"mappings":";;;AAAA,qCAA2E;AAC3E,uCAA4C;AAoB5C,gBAAgB;AAChB,MAAa,wBAAwB;IAInC;;;OAGG;IACH,YAAY,mBAA2C;QACrD,IAAI,mBAAmB,EAAE,CAAC;YACxB,IAAI,CAAC,SAAS,GAAG,mBAAmB,CAAC;QACvC,CAAC;IACH,CAAC;IAED,MAAM,KAAK,MAAM;QACf,wBAAwB,CAAC,OAAO,KAAK,IAAA,+BAAwB,GAAE,CAAC;QAChE,OAAO,wBAAwB,CAAC,OAAO,CAAC;IAC1C,CAAC;IAED;;;OAGG;IACH,IAAY,QAAQ;QAClB,IAAI,cAAc,IAAI,wBAAwB,CAAC,MAAM,EAAE,CAAC;YACtD,MAAM,wBAAwB,CAAC,MAAM,CAAC,YAAY,CAAC;QACrD,CAAC;QACD,IAAI,IAAI,CAAC,SAAS,EAAE,CAAC;YACnB,OAAO,IAAI,CAAC,SAAS,CAAC;QACxB,CAAC;QACD,IAAI,EAAE,0BAA0B,GAAG,EAAE,EAAE,UAAU,GAAG,EAAE,EAAE,GAAG,OAAO,CAAC,GAAG,CAAC;QACvE,0BAA0B,GAAG,0BAA0B,CAAC,WAAW,EAAE,CAAC;QACtE,UAAU,GAAG,UAAU,CAAC,WAAW,EAAE,CAAC;QAEtC,6IAA6I;QAC7I,MAAM,sBAAsB,GAC1B,UAAU,CAAC,MAAM,KAAK,CAAC,IAAI,0BAA0B,CAAC,MAAM,KAAK,CAAC,CAAC;QAErE;;;WAGG;QACH,MAAM,cAAc,GAAG,IAAI,GAAG,CAAC;YAC7B,gBAAgB;YAChB,YAAY;YACZ,gBAAgB;YAChB,gBAAgB;YAChB,YAAY;YACZ,cAAc;YACd,cAAc;YACd,YAAY;YACZ,WAAW;YACX,WAAW;YACX,WAAW;YACX,WAAW;YACX,WAAW;YACX,WAAW;YACX,WAAW;YACX,WAAW;SACZ,CAAC,CAAC;QACH;;;;;;WAMG;QACH,MAAM,cAAc,GAClB,0BAA0B,KAAK,UAAU;YACzC,CAAC,0BAA0B,KAAK,QAAQ,IAAI,CAAC,cAAc,CAAC,GAAG,CAAC,UAAU,CAAC,CAAC,CAAC;QAE/E,IAAI,CAAC,SAAS;YACZ,sBAAsB,IAAI,cAAc;gBACtC,CAAC,CAAC,wBAAwB,CAAC,MAAM,CAAC,qBAAqB,CAAC;oBACpD,YAAY,EAAE,EAAE,MAAM,EAAE,UAAU,EAAE;iBACrC,CAAC;gBACJ,CAAC,CAAC,wBAAwB,CAAC,MAAM,CAAC,qBAAqB,EAAE,CAAC;QAE9D,OAAO,IAAI,CAAC,SAAS,CAAC;IACxB,CAAC;IAED,KAAK,CAAC,cAAc;QAClB;;;;;;;;;WASG;QACH,IAAI,CAAC;YACH,MAAM,KAAK,GAAG,MAAM,IAAI,CAAC,QAAQ,EAAE,CAAC;YACpC,OAAO;gBACL,WAAW,EAAE,KAAK,CAAC,WAAW;gBAC9B,eAAe,EAAE,KAAK,CAAC,eAAe;gBACtC,KAAK,EAAE,KAAK,CAAC,YAAY;gBACzB,UAAU,EAAE,KAAK,CAAC,UAAU;aAC7B,CAAC;QACJ,CAAC;QAAC,OAAO,KAAK,EAAE,CAAC;YACf,MAAM,IAAI,qBAAa,CAAC,KAAK,CAAC,OAAO,EAAE,EAAE,KAAK,EAAE,KAAK,EAAE,CAAC,CAAC;QAC3D,CAAC;IACH,CAAC;CACF;AAxGD,4DAwGC"}

154
node_modules/mongodb/lib/cmap/auth/gssapi.js generated vendored Normal file
View file

@ -0,0 +1,154 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.GSSAPI = exports.GSSAPICanonicalizationValue = void 0;
exports.performGSSAPICanonicalizeHostName = performGSSAPICanonicalizeHostName;
exports.resolveCname = resolveCname;
const dns = require("dns");
const deps_1 = require("../../deps");
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const auth_provider_1 = require("./auth_provider");
/** @public */
exports.GSSAPICanonicalizationValue = Object.freeze({
on: true,
off: false,
none: 'none',
forward: 'forward',
forwardAndReverse: 'forwardAndReverse'
});
async function externalCommand(connection, command) {
const response = await connection.command((0, utils_1.ns)('$external.$cmd'), command);
return response;
}
let krb;
class GSSAPI extends auth_provider_1.AuthProvider {
async auth(authContext) {
const { connection, credentials } = authContext;
if (credentials == null) {
throw new error_1.MongoMissingCredentialsError('Credentials required for GSSAPI authentication');
}
const { username } = credentials;
const client = await makeKerberosClient(authContext);
const payload = await client.step('');
const saslStartResponse = await externalCommand(connection, saslStart(payload));
const negotiatedPayload = await negotiate(client, 10, saslStartResponse.payload);
const saslContinueResponse = await externalCommand(connection, saslContinue(negotiatedPayload, saslStartResponse.conversationId));
const finalizePayload = await finalize(client, username, saslContinueResponse.payload);
await externalCommand(connection, {
saslContinue: 1,
conversationId: saslContinueResponse.conversationId,
payload: finalizePayload
});
}
}
exports.GSSAPI = GSSAPI;
async function makeKerberosClient(authContext) {
const { hostAddress } = authContext.options;
const { credentials } = authContext;
if (!hostAddress || typeof hostAddress.host !== 'string' || !credentials) {
throw new error_1.MongoInvalidArgumentError('Connection must have host and port and credentials defined.');
}
loadKrb();
if ('kModuleError' in krb) {
throw krb['kModuleError'];
}
const { initializeClient } = krb;
const { username, password } = credentials;
const mechanismProperties = credentials.mechanismProperties;
const serviceName = mechanismProperties.SERVICE_NAME ?? 'mongodb';
const host = await performGSSAPICanonicalizeHostName(hostAddress.host, mechanismProperties);
const initOptions = {};
if (password != null) {
// TODO(NODE-5139): These do not match the typescript options in initializeClient
Object.assign(initOptions, { user: username, password: password });
}
const spnHost = mechanismProperties.SERVICE_HOST ?? host;
let spn = `${serviceName}${process.platform === 'win32' ? '/' : '@'}${spnHost}`;
if ('SERVICE_REALM' in mechanismProperties) {
spn = `${spn}@${mechanismProperties.SERVICE_REALM}`;
}
return await initializeClient(spn, initOptions);
}
function saslStart(payload) {
return {
saslStart: 1,
mechanism: 'GSSAPI',
payload,
autoAuthorize: 1
};
}
function saslContinue(payload, conversationId) {
return {
saslContinue: 1,
conversationId,
payload
};
}
async function negotiate(client, retries, payload) {
try {
const response = await client.step(payload);
return response || '';
}
catch (error) {
if (retries === 0) {
// Retries exhausted, raise error
throw error;
}
// Adjust number of retries and call step again
return await negotiate(client, retries - 1, payload);
}
}
async function finalize(client, user, payload) {
// GSS Client Unwrap
const response = await client.unwrap(payload);
return await client.wrap(response || '', { user });
}
async function performGSSAPICanonicalizeHostName(host, mechanismProperties) {
const mode = mechanismProperties.CANONICALIZE_HOST_NAME;
if (!mode || mode === exports.GSSAPICanonicalizationValue.none) {
return host;
}
// If the mode is forwardAndReverse or true ('on')
if (mode === exports.GSSAPICanonicalizationValue.on ||
mode === exports.GSSAPICanonicalizationValue.forwardAndReverse) {
// Perform the lookup of the IP address.
const { address } = await dns.promises.lookup(host);
try {
// Perform a reverse PTR lookup on the IP address.
const results = await dns.promises.resolvePtr(address);
// If the PTR lookup did not error but had no results, return the host.
return results.length > 0 ? results[0] : host;
}
catch {
// This can error as PTR records may not exist for all IPs. In this case
// fall back to a CNAME lookup, as dns.lookup() does not return the
// CNAME.
return await resolveCname(host);
}
}
else {
// The case for forward is just to resolve the CNAME, as dns.lookup()
// will not return it.
return await resolveCname(host);
}
}
async function resolveCname(host) {
// Attempt to resolve the host name
try {
const results = await dns.promises.resolveCname(host);
// Return the first resolved CNAME, falling back to the original host
return results.length > 0 ? results[0] : host;
}
catch {
return host;
}
}
/**
* Load the Kerberos library.
*/
function loadKrb() {
if (!krb) {
krb = (0, deps_1.getKerberos)();
}
}
//# sourceMappingURL=gssapi.js.map
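
A short sketch exercising the exported canonicalization helper above. The require path assumes this vendored tree and db.example.com is a hypothetical host; with mode 'none' the input host is returned unchanged, and 'forward' resolves only the CNAME (falling back to the input on error), per the code above.

const { performGSSAPICanonicalizeHostName, GSSAPICanonicalizationValue } =
  require('./node_modules/mongodb/lib/cmap/auth/gssapi');

async function demo() {
  // 'none' (or an unset mode) short-circuits and returns the host as-is.
  console.log(await performGSSAPICanonicalizeHostName('db.example.com', {
    CANONICALIZE_HOST_NAME: GSSAPICanonicalizationValue.none
  }));
  // 'forward' tries a CNAME resolution only; 'forwardAndReverse' (or true)
  // attempts a PTR lookup first and falls back to the CNAME path.
  console.log(await performGSSAPICanonicalizeHostName('db.example.com', {
    CANONICALIZE_HOST_NAME: GSSAPICanonicalizationValue.forward
  }));
}
demo().catch(console.error);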

1
node_modules/mongodb/lib/cmap/auth/gssapi.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"gssapi.js","sourceRoot":"","sources":["../../../src/cmap/auth/gssapi.ts"],"names":[],"mappings":";;;AAoJA,8EAiCC;AAED,oCASC;AAhMD,2BAA2B;AAE3B,qCAA6E;AAC7E,uCAAsF;AACtF,uCAAiC;AAEjC,mDAAiE;AAEjE,cAAc;AACD,QAAA,2BAA2B,GAAG,MAAM,CAAC,MAAM,CAAC;IACvD,EAAE,EAAE,IAAI;IACR,GAAG,EAAE,KAAK;IACV,IAAI,EAAE,MAAM;IACZ,OAAO,EAAE,SAAS;IAClB,iBAAiB,EAAE,mBAAmB;CAC9B,CAAC,CAAC;AAaZ,KAAK,UAAU,eAAe,CAC5B,UAAsB,EACtB,OAAuE;IAEvE,MAAM,QAAQ,GAAG,MAAM,UAAU,CAAC,OAAO,CAAC,IAAA,UAAE,EAAC,gBAAgB,CAAC,EAAE,OAAO,CAAC,CAAC;IACzE,OAAO,QAAuD,CAAC;AACjE,CAAC;AAED,IAAI,GAAa,CAAC;AAElB,MAAa,MAAO,SAAQ,4BAAY;IAC7B,KAAK,CAAC,IAAI,CAAC,WAAwB;QAC1C,MAAM,EAAE,UAAU,EAAE,WAAW,EAAE,GAAG,WAAW,CAAC;QAChD,IAAI,WAAW,IAAI,IAAI,EAAE,CAAC;YACxB,MAAM,IAAI,oCAA4B,CAAC,gDAAgD,CAAC,CAAC;QAC3F,CAAC;QAED,MAAM,EAAE,QAAQ,EAAE,GAAG,WAAW,CAAC;QAEjC,MAAM,MAAM,GAAG,MAAM,kBAAkB,CAAC,WAAW,CAAC,CAAC;QAErD,MAAM,OAAO,GAAG,MAAM,MAAM,CAAC,IAAI,CAAC,EAAE,CAAC,CAAC;QAEtC,MAAM,iBAAiB,GAAG,MAAM,eAAe,CAAC,UAAU,EAAE,SAAS,CAAC,OAAO,CAAC,CAAC,CAAC;QAEhF,MAAM,iBAAiB,GAAG,MAAM,SAAS,CAAC,MAAM,EAAE,EAAE,EAAE,iBAAiB,CAAC,OAAO,CAAC,CAAC;QAEjF,MAAM,oBAAoB,GAAG,MAAM,eAAe,CAChD,UAAU,EACV,YAAY,CAAC,iBAAiB,EAAE,iBAAiB,CAAC,cAAc,CAAC,CAClE,CAAC;QAEF,MAAM,eAAe,GAAG,MAAM,QAAQ,CAAC,MAAM,EAAE,QAAQ,EAAE,oBAAoB,CAAC,OAAO,CAAC,CAAC;QAEvF,MAAM,eAAe,CAAC,UAAU,EAAE;YAChC,YAAY,EAAE,CAAC;YACf,cAAc,EAAE,oBAAoB,CAAC,cAAc;YACnD,OAAO,EAAE,eAAe;SACzB,CAAC,CAAC;IACL,CAAC;CACF;AA9BD,wBA8BC;AAED,KAAK,UAAU,kBAAkB,CAAC,WAAwB;IACxD,MAAM,EAAE,WAAW,EAAE,GAAG,WAAW,CAAC,OAAO,CAAC;IAC5C,MAAM,EAAE,WAAW,EAAE,GAAG,WAAW,CAAC;IACpC,IAAI,CAAC,WAAW,IAAI,OAAO,WAAW,CAAC,IAAI,KAAK,QAAQ,IAAI,CAAC,WAAW,EAAE,CAAC;QACzE,MAAM,IAAI,iCAAyB,CACjC,6DAA6D,CAC9D,CAAC;IACJ,CAAC;IAED,OAAO,EAAE,CAAC;IACV,IAAI,cAAc,IAAI,GAAG,EAAE,CAAC;QAC1B,MAAM,GAAG,CAAC,cAAc,CAAC,CAAC;IAC5B,CAAC;IACD,MAAM,EAAE,gBAAgB,EAAE,GAAG,GAAG,CAAC;IAEjC,MAAM,EAAE,QAAQ,EAAE,QAAQ,EAAE,GAAG,WAAW,CAAC;IAC3C,MAAM,mBAAmB,GAAG,WAAW,CAAC,mBAA0C,CAAC;IAEnF,MAAM,WAAW,GAAG,mBAAmB,CAAC,YAAY,IAAI,SAAS,CAAC;IAElE,MAAM,IAAI,GAAG,MAAM,iCAAiC,CAAC,WAAW,CAAC,IAAI,EAAE,mBAAmB,CAAC,CAAC;IAE5F,MAAM,WAAW,GAAG,EAAE,CAAC;IACvB,IAAI,QAAQ,IAAI,IAAI,EAAE,CAAC;QACrB,iFAAiF;QACjF,MAAM,CAAC,MAAM,CAAC,WAAW,EAAE,EAAE,IAAI,EAAE,QAAQ,EAAE,QAAQ,EAAE,QAAQ,EAAE,CAAC,CAAC;IACrE,CAAC;IAED,MAAM,OAAO,GAAG,mBAAmB,CAAC,YAAY,IAAI,IAAI,CAAC;IACzD,IAAI,GAAG,GAAG,GAAG,WAAW,GAAG,OAAO,CAAC,QAAQ,KAAK,OAAO,CAAC,CAAC,CAAC,GAAG,CAAC,CAAC,CAAC,GAAG,GAAG,OAAO,EAAE,CAAC;IAChF,IAAI,eAAe,IAAI,mBAAmB,EAAE,CAAC;QAC3C,GAAG,GAAG,GAAG,GAAG,IAAI,mBAAmB,CAAC,aAAa,EAAE,CAAC;IACtD,CAAC;IAED,OAAO,MAAM,gBAAgB,CAAC,GAAG,EAAE,WAAW,CAAC,CAAC;AAClD,CAAC;AAED,SAAS,SAAS,CAAC,OAAe;IAChC,OAAO;QACL,SAAS,EAAE,CAAC;QACZ,SAAS,EAAE,QAAQ;QACnB,OAAO;QACP,aAAa,EAAE,CAAC;KACR,CAAC;AACb,CAAC;AAED,SAAS,YAAY,CAAC,OAAe,EAAE,cAAsB;IAC3D,OAAO;QACL,YAAY,EAAE,CAAC;QACf,cAAc;QACd,OAAO;KACC,CAAC;AACb,CAAC;AAED,KAAK,UAAU,SAAS,CACtB,MAAsB,EACtB,OAAe,EACf,OAAe;IAEf,IAAI,CAAC;QACH,MAAM,QAAQ,GAAG,MAAM,MAAM,CAAC,IAAI,CAAC,OAAO,CAAC,CAAC;QAC5C,OAAO,QAAQ,IAAI,EAAE,CAAC;IACxB,CAAC;IAAC,OAAO,KAAK,EAAE,CAAC;QACf,IAAI,OAAO,KAAK,CAAC,EAAE,CAAC;YAClB,iCAAiC;YACjC,MAAM,KAAK,CAAC;QACd,CAAC;QACD,+CAA+C;QAC/C,OAAO,MAAM,SAAS,CAAC,MAAM,EAAE,OAAO,GAAG,CAAC,EAAE,OAAO,CAAC,CAAC;IACvD,CAAC;AACH,CAAC;AAED,KAAK,UAAU,QAAQ,CAAC,MAAsB,EAAE,IAAY,EAAE,OAAe;IAC3E,oBAAoB;IACpB,MAAM,QAAQ,GAAG,MAAM,MAAM,CAAC,MAAM,CAAC,OAAO,CAAC,CAAC;IAC9C,OAAO,MAAM,MAAM,CAAC,IAAI,CAAC,QAAQ,IAAI,EAAE,EAAE,EAAE,IAAI,EAAE,CAAC,CAAC;AACrD,CAAC;AAEM,KAAK,UAAU,iCAAiC,CACrD,IAAY,EACZ,mBAAwC;IAExC,MAAM,IAAI,GAAG,mBAAmB,CAAC,sBAAsB,CAAC;IACxD,IAAI,CAAC,IAAI,IAAI,IAAI,KAAK,mCAA2B,CAAC,IAA
I,EAAE,CAAC;QACvD,OAAO,IAAI,CAAC;IACd,CAAC;IAED,iCAAiC;IACjC,IACE,IAAI,KAAK,mCAA2B,CAAC,EAAE;QACvC,IAAI,KAAK,mCAA2B,CAAC,iBAAiB,EACtD,CAAC;QACD,wCAAwC;QACxC,MAAM,EAAE,OAAO,EAAE,GAAG,MAAM,GAAG,CAAC,QAAQ,CAAC,MAAM,CAAC,IAAI,CAAC,CAAC;QAEpD,IAAI,CAAC;YACH,kDAAkD;YAClD,MAAM,OAAO,GAAG,MAAM,GAAG,CAAC,QAAQ,CAAC,UAAU,CAAC,OAAO,CAAC,CAAC;YACvD,gEAAgE;YAChE,OAAO,OAAO,CAAC,MAAM,GAAG,CAAC,CAAC,CAAC,CAAC,OAAO,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,IAAI,CAAC;QAChD,CAAC;QAAC,MAAM,CAAC;YACP,wEAAwE;YACxE,iEAAiE;YACjE,SAAS;YACT,OAAO,MAAM,YAAY,CAAC,IAAI,CAAC,CAAC;QAClC,CAAC;IACH,CAAC;SAAM,CAAC;QACN,oEAAoE;QACpE,sBAAsB;QACtB,OAAO,MAAM,YAAY,CAAC,IAAI,CAAC,CAAC;IAClC,CAAC;AACH,CAAC;AAEM,KAAK,UAAU,YAAY,CAAC,IAAY;IAC7C,mCAAmC;IACnC,IAAI,CAAC;QACH,MAAM,OAAO,GAAG,MAAM,GAAG,CAAC,QAAQ,CAAC,YAAY,CAAC,IAAI,CAAC,CAAC;QACtD,iCAAiC;QACjC,OAAO,OAAO,CAAC,MAAM,GAAG,CAAC,CAAC,CAAC,CAAC,OAAO,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,IAAI,CAAC;IAChD,CAAC;IAAC,MAAM,CAAC;QACP,OAAO,IAAI,CAAC;IACd,CAAC;AACH,CAAC;AAED;;GAEG;AACH,SAAS,OAAO;IACd,IAAI,CAAC,GAAG,EAAE,CAAC;QACT,GAAG,GAAG,IAAA,kBAAW,GAAE,CAAC;IACtB,CAAC;AACH,CAAC"}

168
node_modules/mongodb/lib/cmap/auth/mongo_credentials.js generated vendored Normal file
View file

@ -0,0 +1,168 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.MongoCredentials = exports.DEFAULT_ALLOWED_HOSTS = void 0;
const error_1 = require("../../error");
const gssapi_1 = require("./gssapi");
const providers_1 = require("./providers");
/**
* @see https://github.com/mongodb/specifications/blob/master/source/auth/auth.md
*/
function getDefaultAuthMechanism(hello) {
if (hello) {
// If hello contains saslSupportedMechs, use SCRAM-SHA-256
// if it is available, else fall back to SCRAM-SHA-1
if (Array.isArray(hello.saslSupportedMechs)) {
return hello.saslSupportedMechs.includes(providers_1.AuthMechanism.MONGODB_SCRAM_SHA256)
? providers_1.AuthMechanism.MONGODB_SCRAM_SHA256
: providers_1.AuthMechanism.MONGODB_SCRAM_SHA1;
}
}
// Default auth mechanism for 4.0 and higher.
return providers_1.AuthMechanism.MONGODB_SCRAM_SHA256;
}
const ALLOWED_ENVIRONMENT_NAMES = [
'test',
'azure',
'gcp',
'k8s'
];
const ALLOWED_HOSTS_ERROR = 'Auth mechanism property ALLOWED_HOSTS must be an array of strings.';
/** @internal */
exports.DEFAULT_ALLOWED_HOSTS = [
'*.mongodb.net',
'*.mongodb-qa.net',
'*.mongodb-dev.net',
'*.mongodbgov.net',
'localhost',
'127.0.0.1',
'::1'
];
/** Error for when the token audience is missing in the environment. */
const TOKEN_RESOURCE_MISSING_ERROR = 'TOKEN_RESOURCE must be set in the auth mechanism properties when ENVIRONMENT is azure or gcp.';
/**
* A representation of the credentials used by MongoDB
* @public
*/
class MongoCredentials {
constructor(options) {
this.username = options.username ?? '';
this.password = options.password;
this.source = options.source;
if (!this.source && options.db) {
this.source = options.db;
}
this.mechanism = options.mechanism || providers_1.AuthMechanism.MONGODB_DEFAULT;
this.mechanismProperties = options.mechanismProperties || {};
if (this.mechanism === providers_1.AuthMechanism.MONGODB_OIDC && !this.mechanismProperties.ALLOWED_HOSTS) {
this.mechanismProperties = {
...this.mechanismProperties,
ALLOWED_HOSTS: exports.DEFAULT_ALLOWED_HOSTS
};
}
Object.freeze(this.mechanismProperties);
Object.freeze(this);
}
/** Determines if two MongoCredentials objects are equivalent */
equals(other) {
return (this.mechanism === other.mechanism &&
this.username === other.username &&
this.password === other.password &&
this.source === other.source);
}
/**
* If the authentication mechanism is set to "default", resolves the authMechanism
* based on the server version and server supported sasl mechanisms.
*
* @param hello - A hello response from the server
*/
resolveAuthMechanism(hello) {
// If the mechanism is not "default", then it does not need to be resolved
if (this.mechanism.match(/DEFAULT/i)) {
return new MongoCredentials({
username: this.username,
password: this.password,
source: this.source,
mechanism: getDefaultAuthMechanism(hello),
mechanismProperties: this.mechanismProperties
});
}
return this;
}
validate() {
if ((this.mechanism === providers_1.AuthMechanism.MONGODB_GSSAPI ||
this.mechanism === providers_1.AuthMechanism.MONGODB_PLAIN ||
this.mechanism === providers_1.AuthMechanism.MONGODB_SCRAM_SHA1 ||
this.mechanism === providers_1.AuthMechanism.MONGODB_SCRAM_SHA256) &&
!this.username) {
throw new error_1.MongoMissingCredentialsError(`Username required for mechanism '${this.mechanism}'`);
}
if (this.mechanism === providers_1.AuthMechanism.MONGODB_OIDC) {
if (this.username &&
this.mechanismProperties.ENVIRONMENT &&
this.mechanismProperties.ENVIRONMENT !== 'azure') {
throw new error_1.MongoInvalidArgumentError(`username and ENVIRONMENT '${this.mechanismProperties.ENVIRONMENT}' may not be used together for mechanism '${this.mechanism}'.`);
}
if (this.username && this.password) {
throw new error_1.MongoInvalidArgumentError(`No password is allowed in ENVIRONMENT '${this.mechanismProperties.ENVIRONMENT}' for '${this.mechanism}'.`);
}
if ((this.mechanismProperties.ENVIRONMENT === 'azure' ||
this.mechanismProperties.ENVIRONMENT === 'gcp') &&
!this.mechanismProperties.TOKEN_RESOURCE) {
throw new error_1.MongoInvalidArgumentError(TOKEN_RESOURCE_MISSING_ERROR);
}
if (this.mechanismProperties.ENVIRONMENT &&
!ALLOWED_ENVIRONMENT_NAMES.includes(this.mechanismProperties.ENVIRONMENT)) {
throw new error_1.MongoInvalidArgumentError(`Currently only an ENVIRONMENT in ${ALLOWED_ENVIRONMENT_NAMES.join(',')} is supported for mechanism '${this.mechanism}'.`);
}
if (!this.mechanismProperties.ENVIRONMENT &&
!this.mechanismProperties.OIDC_CALLBACK &&
!this.mechanismProperties.OIDC_HUMAN_CALLBACK) {
throw new error_1.MongoInvalidArgumentError(`Either an ENVIRONMENT, OIDC_CALLBACK, or OIDC_HUMAN_CALLBACK must be specified for mechanism '${this.mechanism}'.`);
}
if (this.mechanismProperties.ALLOWED_HOSTS) {
const hosts = this.mechanismProperties.ALLOWED_HOSTS;
if (!Array.isArray(hosts)) {
throw new error_1.MongoInvalidArgumentError(ALLOWED_HOSTS_ERROR);
}
for (const host of hosts) {
if (typeof host !== 'string') {
throw new error_1.MongoInvalidArgumentError(ALLOWED_HOSTS_ERROR);
}
}
}
}
if (providers_1.AUTH_MECHS_AUTH_SRC_EXTERNAL.has(this.mechanism)) {
if (this.source != null && this.source !== '$external') {
// TODO(NODE-3485): Replace this with a MongoAuthValidationError
throw new error_1.MongoAPIError(`Invalid source '${this.source}' for mechanism '${this.mechanism}' specified.`);
}
}
if (this.mechanism === providers_1.AuthMechanism.MONGODB_PLAIN && this.source == null) {
// TODO(NODE-3485): Replace this with a MongoAuthValidationError
throw new error_1.MongoAPIError('PLAIN Authentication Mechanism needs an auth source');
}
if (this.mechanism === providers_1.AuthMechanism.MONGODB_X509 && this.password != null) {
if (this.password === '') {
Reflect.set(this, 'password', undefined);
return;
}
// TODO(NODE-3485): Replace this with a MongoAuthValidationError
throw new error_1.MongoAPIError(`Password not allowed for mechanism MONGODB-X509`);
}
const canonicalization = this.mechanismProperties.CANONICALIZE_HOST_NAME ?? false;
if (!Object.values(gssapi_1.GSSAPICanonicalizationValue).includes(canonicalization)) {
throw new error_1.MongoAPIError(`Invalid CANONICALIZE_HOST_NAME value: ${canonicalization}`);
}
}
static merge(creds, options) {
return new MongoCredentials({
username: options.username ?? creds?.username ?? '',
password: options.password ?? creds?.password ?? '',
mechanism: options.mechanism ?? creds?.mechanism ?? providers_1.AuthMechanism.MONGODB_DEFAULT,
mechanismProperties: options.mechanismProperties ?? creds?.mechanismProperties ?? {},
source: options.source ?? options.db ?? creds?.source ?? 'admin'
});
}
}
exports.MongoCredentials = MongoCredentials;
//# sourceMappingURL=mongo_credentials.js.map
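
A minimal sketch of default-mechanism resolution using the class above. The require path assumes this vendored tree; the username and password values are hypothetical, and the mechanism strings are the driver's AuthMechanism constants.

const { MongoCredentials } = require('./node_modules/mongodb/lib/cmap/auth/mongo_credentials');

const creds = new MongoCredentials({
  username: 'app',        // hypothetical
  password: 'secret',     // hypothetical
  source: 'admin',
  mechanism: 'DEFAULT',
  mechanismProperties: {}
});

// When the hello response advertises saslSupportedMechs, SCRAM-SHA-256 wins if offered:
const resolved = creds.resolveAuthMechanism({ saslSupportedMechs: ['SCRAM-SHA-1', 'SCRAM-SHA-256'] });
console.log(resolved.mechanism); // 'SCRAM-SHA-256'

// With no hello document at all, the default is also SCRAM-SHA-256:
console.log(creds.resolveAuthMechanism().mechanism); // 'SCRAM-SHA-256'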

node_modules/mongodb/lib/cmap/auth/mongo_credentials.js.map generated vendored Normal file
File diff suppressed because one or more lines are too long

133
node_modules/mongodb/lib/cmap/auth/mongodb_aws.js generated vendored Normal file
View file

@ -0,0 +1,133 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.MongoDBAWS = void 0;
const BSON = require("../../bson");
const deps_1 = require("../../deps");
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const auth_provider_1 = require("./auth_provider");
const aws_temporary_credentials_1 = require("./aws_temporary_credentials");
const mongo_credentials_1 = require("./mongo_credentials");
const providers_1 = require("./providers");
const ASCII_N = 110;
const bsonOptions = {
useBigInt64: false,
promoteLongs: true,
promoteValues: true,
promoteBuffers: false,
bsonRegExp: false
};
class MongoDBAWS extends auth_provider_1.AuthProvider {
constructor(credentialProvider) {
super();
this.credentialFetcher = new aws_temporary_credentials_1.AWSSDKCredentialProvider(credentialProvider);
}
async auth(authContext) {
const { connection } = authContext;
if (!authContext.credentials) {
throw new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.');
}
if ('kModuleError' in deps_1.aws4) {
throw deps_1.aws4['kModuleError'];
}
const { sign } = deps_1.aws4;
if ((0, utils_1.maxWireVersion)(connection) < 9) {
throw new error_1.MongoCompatibilityError('MONGODB-AWS authentication requires MongoDB version 4.4 or later');
}
authContext.credentials = await makeTempCredentials(authContext.credentials, this.credentialFetcher);
const { credentials } = authContext;
const accessKeyId = credentials.username;
const secretAccessKey = credentials.password;
// Allow the user to specify an AWS session token for authentication with temporary credentials.
const sessionToken = credentials.mechanismProperties.AWS_SESSION_TOKEN;
// If all three are defined, include the session token; else if both keys are defined, include username and password only; otherwise pass no credentials.
const awsCredentials = accessKeyId && secretAccessKey && sessionToken
? { accessKeyId, secretAccessKey, sessionToken }
: accessKeyId && secretAccessKey
? { accessKeyId, secretAccessKey }
: undefined;
const db = credentials.source;
const nonce = await (0, utils_1.randomBytes)(32);
// All messages between MongoDB clients and servers are sent as BSON objects
// in the payload field of saslStart and saslContinue.
const saslStart = {
saslStart: 1,
mechanism: 'MONGODB-AWS',
payload: BSON.serialize({ r: nonce, p: ASCII_N }, bsonOptions)
};
const saslStartResponse = await connection.command((0, utils_1.ns)(`${db}.$cmd`), saslStart, undefined);
const serverResponse = BSON.deserialize(saslStartResponse.payload.buffer, bsonOptions);
const host = serverResponse.h;
const serverNonce = serverResponse.s.buffer;
if (serverNonce.length !== 64) {
// TODO(NODE-3483)
throw new error_1.MongoRuntimeError(`Invalid server nonce length ${serverNonce.length}, expected 64`);
}
if (!utils_1.ByteUtils.equals(serverNonce.subarray(0, nonce.byteLength), nonce)) {
// throw because the serverNonce's leading 32 bytes must equal the client nonce's 32 bytes
// https://github.com/mongodb/specifications/blob/master/source/auth/auth.md#conversation-5
// TODO(NODE-3483)
throw new error_1.MongoRuntimeError('Server nonce does not begin with client nonce');
}
if (host.length < 1 || host.length > 255 || host.indexOf('..') !== -1) {
// TODO(NODE-3483)
throw new error_1.MongoRuntimeError(`Server returned an invalid host: "${host}"`);
}
const body = 'Action=GetCallerIdentity&Version=2011-06-15';
const options = sign({
method: 'POST',
host,
region: deriveRegion(serverResponse.h),
service: 'sts',
headers: {
'Content-Type': 'application/x-www-form-urlencoded',
'Content-Length': body.length,
'X-MongoDB-Server-Nonce': utils_1.ByteUtils.toBase64(serverNonce),
'X-MongoDB-GS2-CB-Flag': 'n'
},
path: '/',
body
}, awsCredentials);
const payload = {
a: options.headers.Authorization,
d: options.headers['X-Amz-Date']
};
if (sessionToken) {
payload.t = sessionToken;
}
const saslContinue = {
saslContinue: 1,
conversationId: saslStartResponse.conversationId,
payload: BSON.serialize(payload, bsonOptions)
};
await connection.command((0, utils_1.ns)(`${db}.$cmd`), saslContinue, undefined);
}
}
exports.MongoDBAWS = MongoDBAWS;
async function makeTempCredentials(credentials, awsCredentialFetcher) {
function makeMongoCredentialsFromAWSTemp(creds) {
// The AWS session token (creds.Token) may or may not be set.
if (!creds.AccessKeyId || !creds.SecretAccessKey) {
throw new error_1.MongoMissingCredentialsError('Could not obtain temporary MONGODB-AWS credentials');
}
return new mongo_credentials_1.MongoCredentials({
username: creds.AccessKeyId,
password: creds.SecretAccessKey,
source: credentials.source,
mechanism: providers_1.AuthMechanism.MONGODB_AWS,
mechanismProperties: {
AWS_SESSION_TOKEN: creds.Token
}
});
}
const temporaryCredentials = await awsCredentialFetcher.getCredentials();
return makeMongoCredentialsFromAWSTemp(temporaryCredentials);
}
function deriveRegion(host) {
const parts = host.split('.');
if (parts.length === 1 || parts[1] === 'amazonaws') {
return 'us-east-1';
}
return parts[1];
}
//# sourceMappingURL=mongodb_aws.js.map
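
The region derivation at the end of this file is module-internal, so the following is a hypothetical copy for illustration; the sample hosts show how the server-returned STS host maps to a signing region.

function deriveRegion(host) {
  const parts = host.split('.');
  if (parts.length === 1 || parts[1] === 'amazonaws') {
    return 'us-east-1';
  }
  return parts[1];
}

console.log(deriveRegion('sts.amazonaws.com'));              // 'us-east-1' (global endpoint)
console.log(deriveRegion('sts.eu-central-1.amazonaws.com')); // 'eu-central-1'
console.log(deriveRegion('localhost'));                      // 'us-east-1' (single-label host)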

1
node_modules/mongodb/lib/cmap/auth/mongodb_aws.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"mongodb_aws.js","sourceRoot":"","sources":["../../../src/cmap/auth/mongodb_aws.ts"],"names":[],"mappings":";;;AACA,mCAAmC;AACnC,qCAAkC;AAClC,uCAIqB;AACrB,uCAAyE;AACzE,mDAAiE;AACjE,2EAIqC;AACrC,2DAAuD;AACvD,2CAA4C;AAE5C,MAAM,OAAO,GAAG,GAAG,CAAC;AACpB,MAAM,WAAW,GAAyB;IACxC,WAAW,EAAE,KAAK;IAClB,YAAY,EAAE,IAAI;IAClB,aAAa,EAAE,IAAI;IACnB,cAAc,EAAE,KAAK;IACrB,UAAU,EAAE,KAAK;CAClB,CAAC;AAQF,MAAa,UAAW,SAAQ,4BAAY;IAG1C,YAAY,kBAA0C;QACpD,KAAK,EAAE,CAAC;QACR,IAAI,CAAC,iBAAiB,GAAG,IAAI,oDAAwB,CAAC,kBAAkB,CAAC,CAAC;IAC5E,CAAC;IAEQ,KAAK,CAAC,IAAI,CAAC,WAAwB;QAC1C,MAAM,EAAE,UAAU,EAAE,GAAG,WAAW,CAAC;QACnC,IAAI,CAAC,WAAW,CAAC,WAAW,EAAE,CAAC;YAC7B,MAAM,IAAI,oCAA4B,CAAC,uCAAuC,CAAC,CAAC;QAClF,CAAC;QAED,IAAI,cAAc,IAAI,WAAI,EAAE,CAAC;YAC3B,MAAM,WAAI,CAAC,cAAc,CAAC,CAAC;QAC7B,CAAC;QACD,MAAM,EAAE,IAAI,EAAE,GAAG,WAAI,CAAC;QAEtB,IAAI,IAAA,sBAAc,EAAC,UAAU,CAAC,GAAG,CAAC,EAAE,CAAC;YACnC,MAAM,IAAI,+BAAuB,CAC/B,kEAAkE,CACnE,CAAC;QACJ,CAAC;QAED,WAAW,CAAC,WAAW,GAAG,MAAM,mBAAmB,CACjD,WAAW,CAAC,WAAW,EACvB,IAAI,CAAC,iBAAiB,CACvB,CAAC;QAEF,MAAM,EAAE,WAAW,EAAE,GAAG,WAAW,CAAC;QAEpC,MAAM,WAAW,GAAG,WAAW,CAAC,QAAQ,CAAC;QACzC,MAAM,eAAe,GAAG,WAAW,CAAC,QAAQ,CAAC;QAC7C,gGAAgG;QAChG,MAAM,YAAY,GAAG,WAAW,CAAC,mBAAmB,CAAC,iBAAiB,CAAC;QAEvE,kGAAkG;QAClG,MAAM,cAAc,GAClB,WAAW,IAAI,eAAe,IAAI,YAAY;YAC5C,CAAC,CAAC,EAAE,WAAW,EAAE,eAAe,EAAE,YAAY,EAAE;YAChD,CAAC,CAAC,WAAW,IAAI,eAAe;gBAC9B,CAAC,CAAC,EAAE,WAAW,EAAE,eAAe,EAAE;gBAClC,CAAC,CAAC,SAAS,CAAC;QAElB,MAAM,EAAE,GAAG,WAAW,CAAC,MAAM,CAAC;QAC9B,MAAM,KAAK,GAAG,MAAM,IAAA,mBAAW,EAAC,EAAE,CAAC,CAAC;QAEpC,4EAA4E;QAC5E,sDAAsD;QACtD,MAAM,SAAS,GAAG;YAChB,SAAS,EAAE,CAAC;YACZ,SAAS,EAAE,aAAa;YACxB,OAAO,EAAE,IAAI,CAAC,SAAS,CAAC,EAAE,CAAC,EAAE,KAAK,EAAE,CAAC,EAAE,OAAO,EAAE,EAAE,WAAW,CAAC;SAC/D,CAAC;QAEF,MAAM,iBAAiB,GAAG,MAAM,UAAU,CAAC,OAAO,CAAC,IAAA,UAAE,EAAC,GAAG,EAAE,OAAO,CAAC,EAAE,SAAS,EAAE,SAAS,CAAC,CAAC;QAE3F,MAAM,cAAc,GAAG,IAAI,CAAC,WAAW,CAAC,iBAAiB,CAAC,OAAO,CAAC,MAAM,EAAE,WAAW,CAGpF,CAAC;QACF,MAAM,IAAI,GAAG,cAAc,CAAC,CAAC,CAAC;QAC9B,MAAM,WAAW,GAAG,cAAc,CAAC,CAAC,CAAC,MAAM,CAAC;QAC5C,IAAI,WAAW,CAAC,MAAM,KAAK,EAAE,EAAE,CAAC;YAC9B,kBAAkB;YAClB,MAAM,IAAI,yBAAiB,CAAC,+BAA+B,WAAW,CAAC,MAAM,eAAe,CAAC,CAAC;QAChG,CAAC;QAED,IAAI,CAAC,iBAAS,CAAC,MAAM,CAAC,WAAW,CAAC,QAAQ,CAAC,CAAC,EAAE,KAAK,CAAC,UAAU,CAAC,EAAE,KAAK,CAAC,EAAE,CAAC;YACxE,0FAA0F;YAC1F,2FAA2F;YAE3F,kBAAkB;YAClB,MAAM,IAAI,yBAAiB,CAAC,+CAA+C,CAAC,CAAC;QAC/E,CAAC;QAED,IAAI,IAAI,CAAC,MAAM,GAAG,CAAC,IAAI,IAAI,CAAC,MAAM,GAAG,GAAG,IAAI,IAAI,CAAC,OAAO,CAAC,IAAI,CAAC,KAAK,CAAC,CAAC,EAAE,CAAC;YACtE,kBAAkB;YAClB,MAAM,IAAI,yBAAiB,CAAC,qCAAqC,IAAI,GAAG,CAAC,CAAC;QAC5E,CAAC;QAED,MAAM,IAAI,GAAG,6CAA6C,CAAC;QAC3D,MAAM,OAAO,GAAG,IAAI,CAClB;YACE,MAAM,EAAE,MAAM;YACd,IAAI;YACJ,MAAM,EAAE,YAAY,CAAC,cAAc,CAAC,CAAC,CAAC;YACtC,OAAO,EAAE,KAAK;YACd,OAAO,EAAE;gBACP,cAAc,EAAE,mCAAmC;gBACnD,gBAAgB,EAAE,IAAI,CAAC,MAAM;gBAC7B,wBAAwB,EAAE,iBAAS,CAAC,QAAQ,CAAC,WAAW,CAAC;gBACzD,uBAAuB,EAAE,GAAG;aAC7B;YACD,IAAI,EAAE,GAAG;YACT,IAAI;SACL,EACD,cAAc,CACf,CAAC;QAEF,MAAM,OAAO,GAA2B;YACtC,CAAC,EAAE,OAAO,CAAC,OAAO,CAAC,aAAa;YAChC,CAAC,EAAE,OAAO,CAAC,OAAO,CAAC,YAAY,CAAC;SACjC,CAAC;QAEF,IAAI,YAAY,EAAE,CAAC;YACjB,OAAO,CAAC,CAAC,GAAG,YAAY,CAAC;QAC3B,CAAC;QAED,MAAM,YAAY,GAAG;YACnB,YAAY,EAAE,CAAC;YACf,cAAc,EAAE,iBAAiB,CAAC,cAAc;YAChD,OAAO,EAAE,IAAI,CAAC,SAAS,CAAC,OAAO,EAAE,WAAW,CAAC;SAC9C,CAAC;QAEF,MAAM,UAAU,CAAC,OAAO,CAAC,IAAA,UAAE,EAAC,GAAG,EAAE,OAAO,CAAC,EAAE,YAAY,EAAE,SAAS,CAAC,CAAC;IACtE,CAAC;CACF;AAtHD,gCAsHC;AAED,KAAK,UAAU,mBAAmB,CAChC,WAA6B,EAC7B,oBAA8C;IAE9C,SAAS,+BAA+B,CAAC,KAAyB;QAChE,6DAA6D;QAC7D,IAAI,CAAC,KAAK,CAAC,WAAW,IAAI,CAAC,KAAK,CAAC,eAAe,EAAE,CAAC;YACjD,MAAM
,IAAI,oCAA4B,CAAC,oDAAoD,CAAC,CAAC;QAC/F,CAAC;QAED,OAAO,IAAI,oCAAgB,CAAC;YAC1B,QAAQ,EAAE,KAAK,CAAC,WAAW;YAC3B,QAAQ,EAAE,KAAK,CAAC,eAAe;YAC/B,MAAM,EAAE,WAAW,CAAC,MAAM;YAC1B,SAAS,EAAE,yBAAa,CAAC,WAAW;YACpC,mBAAmB,EAAE;gBACnB,iBAAiB,EAAE,KAAK,CAAC,KAAK;aAC/B;SACF,CAAC,CAAC;IACL,CAAC;IACD,MAAM,oBAAoB,GAAG,MAAM,oBAAoB,CAAC,cAAc,EAAE,CAAC;IAEzE,OAAO,+BAA+B,CAAC,oBAAoB,CAAC,CAAC;AAC/D,CAAC;AAED,SAAS,YAAY,CAAC,IAAY;IAChC,MAAM,KAAK,GAAG,IAAI,CAAC,KAAK,CAAC,GAAG,CAAC,CAAC;IAC9B,IAAI,KAAK,CAAC,MAAM,KAAK,CAAC,IAAI,KAAK,CAAC,CAAC,CAAC,KAAK,WAAW,EAAE,CAAC;QACnD,OAAO,WAAW,CAAC;IACrB,CAAC;IAED,OAAO,KAAK,CAAC,CAAC,CAAC,CAAC;AAClB,CAAC"}

73
node_modules/mongodb/lib/cmap/auth/mongodb_oidc.js generated vendored Normal file
View file

@ -0,0 +1,73 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.MongoDBOIDC = exports.OIDC_WORKFLOWS = exports.OIDC_VERSION = void 0;
const error_1 = require("../../error");
const auth_provider_1 = require("./auth_provider");
const automated_callback_workflow_1 = require("./mongodb_oidc/automated_callback_workflow");
const azure_machine_workflow_1 = require("./mongodb_oidc/azure_machine_workflow");
const gcp_machine_workflow_1 = require("./mongodb_oidc/gcp_machine_workflow");
const k8s_machine_workflow_1 = require("./mongodb_oidc/k8s_machine_workflow");
const token_cache_1 = require("./mongodb_oidc/token_cache");
const token_machine_workflow_1 = require("./mongodb_oidc/token_machine_workflow");
/** Error when credentials are missing. */
const MISSING_CREDENTIALS_ERROR = 'AuthContext must provide credentials.';
/** The current version of OIDC implementation. */
exports.OIDC_VERSION = 1;
/** @internal */
exports.OIDC_WORKFLOWS = new Map();
exports.OIDC_WORKFLOWS.set('test', () => new automated_callback_workflow_1.AutomatedCallbackWorkflow(new token_cache_1.TokenCache(), token_machine_workflow_1.callback));
exports.OIDC_WORKFLOWS.set('azure', () => new automated_callback_workflow_1.AutomatedCallbackWorkflow(new token_cache_1.TokenCache(), azure_machine_workflow_1.callback));
exports.OIDC_WORKFLOWS.set('gcp', () => new automated_callback_workflow_1.AutomatedCallbackWorkflow(new token_cache_1.TokenCache(), gcp_machine_workflow_1.callback));
exports.OIDC_WORKFLOWS.set('k8s', () => new automated_callback_workflow_1.AutomatedCallbackWorkflow(new token_cache_1.TokenCache(), k8s_machine_workflow_1.callback));
/**
* OIDC auth provider.
*/
class MongoDBOIDC extends auth_provider_1.AuthProvider {
/**
* Instantiate the auth provider.
*/
constructor(workflow) {
super();
if (!workflow) {
throw new error_1.MongoInvalidArgumentError('No workflow provided to the OIDC auth provider.');
}
this.workflow = workflow;
}
/**
* Authenticate using OIDC
*/
async auth(authContext) {
const { connection, reauthenticating, response } = authContext;
if (response?.speculativeAuthenticate?.done && !reauthenticating) {
return;
}
const credentials = getCredentials(authContext);
if (reauthenticating) {
await this.workflow.reauthenticate(connection, credentials);
}
else {
await this.workflow.execute(connection, credentials, response);
}
}
/**
* Add the speculative auth for the initial handshake.
*/
async prepare(handshakeDoc, authContext) {
const { connection } = authContext;
const credentials = getCredentials(authContext);
const result = await this.workflow.speculativeAuth(connection, credentials);
return { ...handshakeDoc, ...result };
}
}
exports.MongoDBOIDC = MongoDBOIDC;
/**
* Get credentials from the auth context, throwing if they do not exist.
*/
function getCredentials(authContext) {
const { credentials } = authContext;
if (!credentials) {
throw new error_1.MongoMissingCredentialsError(MISSING_CREDENTIALS_ERROR);
}
return credentials;
}
//# sourceMappingURL=mongodb_oidc.js.map
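
A minimal sketch of wiring an ENVIRONMENT value to a workflow via the exported map above. The require path assumes this vendored tree, and OIDC_WORKFLOWS is marked @internal, so this is illustrative only.

const { OIDC_WORKFLOWS, MongoDBOIDC } = require('./node_modules/mongodb/lib/cmap/auth/mongodb_oidc');

const makeWorkflow = OIDC_WORKFLOWS.get('azure'); // keys: 'test' | 'azure' | 'gcp' | 'k8s'
if (!makeWorkflow) {
  throw new Error('Unsupported ENVIRONMENT'); // hypothetical handling
}
// The constructor throws MongoInvalidArgumentError if no workflow is passed.
const provider = new MongoDBOIDC(makeWorkflow());
console.log(provider.workflow.constructor.name); // 'AutomatedCallbackWorkflow'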

1
node_modules/mongodb/lib/cmap/auth/mongodb_oidc.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"mongodb_oidc.js","sourceRoot":"","sources":["../../../src/cmap/auth/mongodb_oidc.ts"],"names":[],"mappings":";;;AACA,uCAAsF;AAGtF,mDAAiE;AAEjE,4FAAuF;AACvF,kFAAkF;AAClF,8EAA8E;AAC9E,8EAA8E;AAC9E,4DAAwD;AACxD,kFAAiF;AAEjF,0CAA0C;AAC1C,MAAM,yBAAyB,GAAG,uCAAuC,CAAC;AA6E1E,kDAAkD;AACrC,QAAA,YAAY,GAAG,CAAC,CAAC;AA6B9B,gBAAgB;AACH,QAAA,cAAc,GAAyC,IAAI,GAAG,EAAE,CAAC;AAC9E,sBAAc,CAAC,GAAG,CAAC,MAAM,EAAE,GAAG,EAAE,CAAC,IAAI,uDAAyB,CAAC,IAAI,wBAAU,EAAE,EAAE,iCAAY,CAAC,CAAC,CAAC;AAChG,sBAAc,CAAC,GAAG,CAAC,OAAO,EAAE,GAAG,EAAE,CAAC,IAAI,uDAAyB,CAAC,IAAI,wBAAU,EAAE,EAAE,iCAAa,CAAC,CAAC,CAAC;AAClG,sBAAc,CAAC,GAAG,CAAC,KAAK,EAAE,GAAG,EAAE,CAAC,IAAI,uDAAyB,CAAC,IAAI,wBAAU,EAAE,EAAE,+BAAW,CAAC,CAAC,CAAC;AAC9F,sBAAc,CAAC,GAAG,CAAC,KAAK,EAAE,GAAG,EAAE,CAAC,IAAI,uDAAyB,CAAC,IAAI,wBAAU,EAAE,EAAE,+BAAW,CAAC,CAAC,CAAC;AAE9F;;GAEG;AACH,MAAa,WAAY,SAAQ,4BAAY;IAG3C;;OAEG;IACH,YAAY,QAAmB;QAC7B,KAAK,EAAE,CAAC;QACR,IAAI,CAAC,QAAQ,EAAE,CAAC;YACd,MAAM,IAAI,iCAAyB,CAAC,iDAAiD,CAAC,CAAC;QACzF,CAAC;QACD,IAAI,CAAC,QAAQ,GAAG,QAAQ,CAAC;IAC3B,CAAC;IAED;;OAEG;IACM,KAAK,CAAC,IAAI,CAAC,WAAwB;QAC1C,MAAM,EAAE,UAAU,EAAE,gBAAgB,EAAE,QAAQ,EAAE,GAAG,WAAW,CAAC;QAC/D,IAAI,QAAQ,EAAE,uBAAuB,EAAE,IAAI,IAAI,CAAC,gBAAgB,EAAE,CAAC;YACjE,OAAO;QACT,CAAC;QACD,MAAM,WAAW,GAAG,cAAc,CAAC,WAAW,CAAC,CAAC;QAChD,IAAI,gBAAgB,EAAE,CAAC;YACrB,MAAM,IAAI,CAAC,QAAQ,CAAC,cAAc,CAAC,UAAU,EAAE,WAAW,CAAC,CAAC;QAC9D,CAAC;aAAM,CAAC;YACN,MAAM,IAAI,CAAC,QAAQ,CAAC,OAAO,CAAC,UAAU,EAAE,WAAW,EAAE,QAAQ,CAAC,CAAC;QACjE,CAAC;IACH,CAAC;IAED;;OAEG;IACM,KAAK,CAAC,OAAO,CACpB,YAA+B,EAC/B,WAAwB;QAExB,MAAM,EAAE,UAAU,EAAE,GAAG,WAAW,CAAC;QACnC,MAAM,WAAW,GAAG,cAAc,CAAC,WAAW,CAAC,CAAC;QAChD,MAAM,MAAM,GAAG,MAAM,IAAI,CAAC,QAAQ,CAAC,eAAe,CAAC,UAAU,EAAE,WAAW,CAAC,CAAC;QAC5E,OAAO,EAAE,GAAG,YAAY,EAAE,GAAG,MAAM,EAAE,CAAC;IACxC,CAAC;CACF;AA1CD,kCA0CC;AAED;;GAEG;AACH,SAAS,cAAc,CAAC,WAAwB;IAC9C,MAAM,EAAE,WAAW,EAAE,GAAG,WAAW,CAAC;IACpC,IAAI,CAAC,WAAW,EAAE,CAAC;QACjB,MAAM,IAAI,oCAA4B,CAAC,yBAAyB,CAAC,CAAC;IACpE,CAAC;IACD,OAAO,WAAW,CAAC;AACrB,CAAC"}

84
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/automated_callback_workflow.js generated vendored Normal file
View file

@ -0,0 +1,84 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.AutomatedCallbackWorkflow = void 0;
const error_1 = require("../../../error");
const timeout_1 = require("../../../timeout");
const mongodb_oidc_1 = require("../mongodb_oidc");
const callback_workflow_1 = require("./callback_workflow");
/**
* Class implementing behaviour for the non-human (automated) callback workflow.
* @internal
*/
class AutomatedCallbackWorkflow extends callback_workflow_1.CallbackWorkflow {
/**
* Instantiate the automated callback workflow.
*/
constructor(cache, callback) {
super(cache, callback);
}
/**
* Execute the OIDC callback workflow.
*/
async execute(connection, credentials) {
// If there is a cached access token, try to authenticate with it. If
// authentication fails with an Authentication error (18),
// invalidate the access token, fetch a new access token, and try
// to authenticate again.
// If the server fails for any other reason, do not clear the cache.
if (this.cache.hasAccessToken) {
const token = this.cache.getAccessToken();
if (!connection.accessToken) {
connection.accessToken = token;
}
try {
return await this.finishAuthentication(connection, credentials, token);
}
catch (error) {
if (error instanceof error_1.MongoError &&
error.code === error_1.MONGODB_ERROR_CODES.AuthenticationFailed) {
this.cache.removeAccessToken();
return await this.execute(connection, credentials);
}
else {
throw error;
}
}
}
const response = await this.fetchAccessToken(credentials);
this.cache.put(response);
connection.accessToken = response.accessToken;
await this.finishAuthentication(connection, credentials, response.accessToken);
}
/**
* Fetches the access token using the callback.
*/
async fetchAccessToken(credentials) {
const controller = new AbortController();
const params = {
timeoutContext: controller.signal,
version: mongodb_oidc_1.OIDC_VERSION
};
if (credentials.username) {
params.username = credentials.username;
}
if (credentials.mechanismProperties.TOKEN_RESOURCE) {
params.tokenAudience = credentials.mechanismProperties.TOKEN_RESOURCE;
}
const timeout = timeout_1.Timeout.expires(callback_workflow_1.AUTOMATED_TIMEOUT_MS);
try {
return await Promise.race([this.executeAndValidateCallback(params), timeout]);
}
catch (error) {
if (timeout_1.TimeoutError.is(error)) {
controller.abort();
throw new error_1.MongoOIDCError(`OIDC callback timed out after ${callback_workflow_1.AUTOMATED_TIMEOUT_MS}ms.`);
}
throw error;
}
finally {
timeout.clear();
}
}
}
exports.AutomatedCallbackWorkflow = AutomatedCallbackWorkflow;
//# sourceMappingURL=automated_callback_workflow.js.map
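
A sketch of the callback shape this workflow invokes. The params fields mirror those built in fetchAccessToken() above; readTokenSomehow and its return value are hypothetical stand-ins.

const readTokenSomehow = async () => 'header.payload.signature'; // hypothetical token source

const oidcCallback = async (params) => {
  // params.timeoutContext is the AbortSignal wired up by fetchAccessToken().
  if (params.timeoutContext?.aborted) {
    throw new Error('OIDC callback aborted');
  }
  const accessToken = await readTokenSomehow(params.tokenAudience, params.username);
  // The result must be an object whose only keys are accessToken,
  // expiresInSeconds, and refreshToken, or validation rejects it.
  return { accessToken, expiresInSeconds: 300 };
};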

1
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/automated_callback_workflow.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"automated_callback_workflow.js","sourceRoot":"","sources":["../../../../src/cmap/auth/mongodb_oidc/automated_callback_workflow.ts"],"names":[],"mappings":";;;AAAA,0CAAiF;AACjF,8CAAyD;AAGzD,kDAKyB;AACzB,2DAA6E;AAG7E;;;GAGG;AACH,MAAa,yBAA0B,SAAQ,oCAAgB;IAC7D;;OAEG;IACH,YAAY,KAAiB,EAAE,QAA8B;QAC3D,KAAK,CAAC,KAAK,EAAE,QAAQ,CAAC,CAAC;IACzB,CAAC;IAED;;OAEG;IACH,KAAK,CAAC,OAAO,CAAC,UAAsB,EAAE,WAA6B;QACjE,qEAAqE;QACrE,0DAA0D;QAC1D,iEAAiE;QACjE,yBAAyB;QACzB,oEAAoE;QACpE,IAAI,IAAI,CAAC,KAAK,CAAC,cAAc,EAAE,CAAC;YAC9B,MAAM,KAAK,GAAG,IAAI,CAAC,KAAK,CAAC,cAAc,EAAE,CAAC;YAC1C,IAAI,CAAC,UAAU,CAAC,WAAW,EAAE,CAAC;gBAC5B,UAAU,CAAC,WAAW,GAAG,KAAK,CAAC;YACjC,CAAC;YACD,IAAI,CAAC;gBACH,OAAO,MAAM,IAAI,CAAC,oBAAoB,CAAC,UAAU,EAAE,WAAW,EAAE,KAAK,CAAC,CAAC;YACzE,CAAC;YAAC,OAAO,KAAK,EAAE,CAAC;gBACf,IACE,KAAK,YAAY,kBAAU;oBAC3B,KAAK,CAAC,IAAI,KAAK,2BAAmB,CAAC,oBAAoB,EACvD,CAAC;oBACD,IAAI,CAAC,KAAK,CAAC,iBAAiB,EAAE,CAAC;oBAC/B,OAAO,MAAM,IAAI,CAAC,OAAO,CAAC,UAAU,EAAE,WAAW,CAAC,CAAC;gBACrD,CAAC;qBAAM,CAAC;oBACN,MAAM,KAAK,CAAC;gBACd,CAAC;YACH,CAAC;QACH,CAAC;QACD,MAAM,QAAQ,GAAG,MAAM,IAAI,CAAC,gBAAgB,CAAC,WAAW,CAAC,CAAC;QAC1D,IAAI,CAAC,KAAK,CAAC,GAAG,CAAC,QAAQ,CAAC,CAAC;QACzB,UAAU,CAAC,WAAW,GAAG,QAAQ,CAAC,WAAW,CAAC;QAC9C,MAAM,IAAI,CAAC,oBAAoB,CAAC,UAAU,EAAE,WAAW,EAAE,QAAQ,CAAC,WAAW,CAAC,CAAC;IACjF,CAAC;IAED;;OAEG;IACO,KAAK,CAAC,gBAAgB,CAAC,WAA6B;QAC5D,MAAM,UAAU,GAAG,IAAI,eAAe,EAAE,CAAC;QACzC,MAAM,MAAM,GAAuB;YACjC,cAAc,EAAE,UAAU,CAAC,MAAM;YACjC,OAAO,EAAE,2BAAY;SACtB,CAAC;QACF,IAAI,WAAW,CAAC,QAAQ,EAAE,CAAC;YACzB,MAAM,CAAC,QAAQ,GAAG,WAAW,CAAC,QAAQ,CAAC;QACzC,CAAC;QACD,IAAI,WAAW,CAAC,mBAAmB,CAAC,cAAc,EAAE,CAAC;YACnD,MAAM,CAAC,aAAa,GAAG,WAAW,CAAC,mBAAmB,CAAC,cAAc,CAAC;QACxE,CAAC;QACD,MAAM,OAAO,GAAG,iBAAO,CAAC,OAAO,CAAC,wCAAoB,CAAC,CAAC;QACtD,IAAI,CAAC;YACH,OAAO,MAAM,OAAO,CAAC,IAAI,CAAC,CAAC,IAAI,CAAC,0BAA0B,CAAC,MAAM,CAAC,EAAE,OAAO,CAAC,CAAC,CAAC;QAChF,CAAC;QAAC,OAAO,KAAK,EAAE,CAAC;YACf,IAAI,sBAAY,CAAC,EAAE,CAAC,KAAK,CAAC,EAAE,CAAC;gBAC3B,UAAU,CAAC,KAAK,EAAE,CAAC;gBACnB,MAAM,IAAI,sBAAc,CAAC,iCAAiC,wCAAoB,KAAK,CAAC,CAAC;YACvF,CAAC;YACD,MAAM,KAAK,CAAC;QACd,CAAC;gBAAS,CAAC;YACT,OAAO,CAAC,KAAK,EAAE,CAAC;QAClB,CAAC;IACH,CAAC;CACF;AAtED,8DAsEC"}

62
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/azure_machine_workflow.js generated vendored Normal file
View file

@ -0,0 +1,62 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.callback = void 0;
const azure_1 = require("../../../client-side-encryption/providers/azure");
const error_1 = require("../../../error");
const utils_1 = require("../../../utils");
/** Azure request headers. */
const AZURE_HEADERS = Object.freeze({ Metadata: 'true', Accept: 'application/json' });
/** Invalid endpoint result error. */
const ENDPOINT_RESULT_ERROR = 'Azure endpoint did not return a value with only access_token and expires_in properties';
/** Error for when the token audience is missing in the environment. */
const TOKEN_RESOURCE_MISSING_ERROR = 'TOKEN_RESOURCE must be set in the auth mechanism properties when ENVIRONMENT is azure.';
/**
* The callback function to be used in the automated callback workflow.
* @param params - The OIDC callback parameters.
* @returns The OIDC response.
*/
const callback = async (params) => {
const tokenAudience = params.tokenAudience;
const username = params.username;
if (!tokenAudience) {
throw new error_1.MongoAzureError(TOKEN_RESOURCE_MISSING_ERROR);
}
const response = await getAzureTokenData(tokenAudience, username);
if (!isEndpointResultValid(response)) {
throw new error_1.MongoAzureError(ENDPOINT_RESULT_ERROR);
}
return response;
};
exports.callback = callback;
/**
* Hit the Azure endpoint to get the token data.
*/
async function getAzureTokenData(tokenAudience, username) {
const url = new URL(azure_1.AZURE_BASE_URL);
(0, azure_1.addAzureParams)(url, tokenAudience, username);
const response = await (0, utils_1.get)(url, {
headers: AZURE_HEADERS
});
if (response.status !== 200) {
throw new error_1.MongoAzureError(`Status code ${response.status} returned from the Azure endpoint. Response body: ${response.body}`);
}
const result = JSON.parse(response.body);
return {
accessToken: result.access_token,
expiresInSeconds: Number(result.expires_in)
};
}
/**
* Determines if a result returned from the endpoint is valid.
* This means the result is not nullish and contains the required accessToken
* and expiresInSeconds fields (mapped from the endpoint's access_token and expires_in).
*/
function isEndpointResultValid(token) {
if (token == null || typeof token !== 'object')
return false;
return ('accessToken' in token &&
typeof token.accessToken === 'string' &&
'expiresInSeconds' in token &&
typeof token.expiresInSeconds === 'number');
}
//# sourceMappingURL=azure_machine_workflow.js.map
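
A worked example of the translation getAzureTokenData() applies to a 200 response; the body values are hypothetical.

const body = '{"access_token":"eyJ0eXAi...","expires_in":"3599"}'; // hypothetical endpoint body
const result = JSON.parse(body);
const token = { accessToken: result.access_token, expiresInSeconds: Number(result.expires_in) };
console.log(token); // { accessToken: 'eyJ0eXAi...', expiresInSeconds: 3599 }
// A non-200 status, or a payload failing isEndpointResultValid(), raises MongoAzureError instead.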

1
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/azure_machine_workflow.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"azure_machine_workflow.js","sourceRoot":"","sources":["../../../../src/cmap/auth/mongodb_oidc/azure_machine_workflow.ts"],"names":[],"mappings":";;;AAAA,2EAAiG;AACjG,0CAAiD;AACjD,0CAAqC;AAGrC,6BAA6B;AAC7B,MAAM,aAAa,GAAG,MAAM,CAAC,MAAM,CAAC,EAAE,QAAQ,EAAE,MAAM,EAAE,MAAM,EAAE,kBAAkB,EAAE,CAAC,CAAC;AAEtF,qCAAqC;AACrC,MAAM,qBAAqB,GACzB,wFAAwF,CAAC;AAE3F,uEAAuE;AACvE,MAAM,4BAA4B,GAChC,wFAAwF,CAAC;AAE3F;;;;GAIG;AACI,MAAM,QAAQ,GAAyB,KAAK,EACjD,MAA0B,EACH,EAAE;IACzB,MAAM,aAAa,GAAG,MAAM,CAAC,aAAa,CAAC;IAC3C,MAAM,QAAQ,GAAG,MAAM,CAAC,QAAQ,CAAC;IACjC,IAAI,CAAC,aAAa,EAAE,CAAC;QACnB,MAAM,IAAI,uBAAe,CAAC,4BAA4B,CAAC,CAAC;IAC1D,CAAC;IACD,MAAM,QAAQ,GAAG,MAAM,iBAAiB,CAAC,aAAa,EAAE,QAAQ,CAAC,CAAC;IAClE,IAAI,CAAC,qBAAqB,CAAC,QAAQ,CAAC,EAAE,CAAC;QACrC,MAAM,IAAI,uBAAe,CAAC,qBAAqB,CAAC,CAAC;IACnD,CAAC;IACD,OAAO,QAAQ,CAAC;AAClB,CAAC,CAAC;AAbW,QAAA,QAAQ,YAanB;AAEF;;GAEG;AACH,KAAK,UAAU,iBAAiB,CAAC,aAAqB,EAAE,QAAiB;IACvE,MAAM,GAAG,GAAG,IAAI,GAAG,CAAC,sBAAc,CAAC,CAAC;IACpC,IAAA,sBAAc,EAAC,GAAG,EAAE,aAAa,EAAE,QAAQ,CAAC,CAAC;IAC7C,MAAM,QAAQ,GAAG,MAAM,IAAA,WAAG,EAAC,GAAG,EAAE;QAC9B,OAAO,EAAE,aAAa;KACvB,CAAC,CAAC;IACH,IAAI,QAAQ,CAAC,MAAM,KAAK,GAAG,EAAE,CAAC;QAC5B,MAAM,IAAI,uBAAe,CACvB,eAAe,QAAQ,CAAC,MAAM,qDAAqD,QAAQ,CAAC,IAAI,EAAE,CACnG,CAAC;IACJ,CAAC;IACD,MAAM,MAAM,GAAG,IAAI,CAAC,KAAK,CAAC,QAAQ,CAAC,IAAI,CAAC,CAAC;IACzC,OAAO;QACL,WAAW,EAAE,MAAM,CAAC,YAAY;QAChC,gBAAgB,EAAE,MAAM,CAAC,MAAM,CAAC,UAAU,CAAC;KAC5C,CAAC;AACJ,CAAC;AAED;;;;GAIG;AACH,SAAS,qBAAqB,CAC5B,KAAc;IAEd,IAAI,KAAK,IAAI,IAAI,IAAI,OAAO,KAAK,KAAK,QAAQ;QAAE,OAAO,KAAK,CAAC;IAC7D,OAAO,CACL,aAAa,IAAI,KAAK;QACtB,OAAO,KAAK,CAAC,WAAW,KAAK,QAAQ;QACrC,kBAAkB,IAAI,KAAK;QAC3B,OAAO,KAAK,CAAC,gBAAgB,KAAK,QAAQ,CAC3C,CAAC;AACJ,CAAC"}

141
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/callback_workflow.js generated vendored Normal file
View file

@ -0,0 +1,141 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.CallbackWorkflow = exports.AUTOMATED_TIMEOUT_MS = exports.HUMAN_TIMEOUT_MS = void 0;
const promises_1 = require("timers/promises");
const error_1 = require("../../../error");
const utils_1 = require("../../../utils");
const command_builders_1 = require("./command_builders");
/** 5 minutes in milliseconds */
exports.HUMAN_TIMEOUT_MS = 300000;
/** 1 minute in milliseconds */
exports.AUTOMATED_TIMEOUT_MS = 60000;
/** Properties allowed on results of callbacks. */
const RESULT_PROPERTIES = ['accessToken', 'expiresInSeconds', 'refreshToken'];
/** Error message when the callback result is invalid. */
const CALLBACK_RESULT_ERROR = 'User provided OIDC callbacks must return a valid object with an accessToken.';
/** The time to throttle callback calls. */
const THROTTLE_MS = 100;
/**
* OIDC implementation of a callback based workflow.
* @internal
*/
class CallbackWorkflow {
/**
* Instantiate the callback workflow.
*/
constructor(cache, callback) {
this.cache = cache;
this.callback = this.withLock(callback);
this.lastExecutionTime = Date.now() - THROTTLE_MS;
}
/**
* Get the document to add for speculative authentication. This also needs
* to add a db field from the credentials source.
*/
async speculativeAuth(connection, credentials) {
// Check if the Client Cache has an access token.
// If it does, cache the access token in the Connection Cache and send a JwtStepRequest
// with the cached access token in the speculative authentication SASL payload.
if (this.cache.hasAccessToken) {
const accessToken = this.cache.getAccessToken();
connection.accessToken = accessToken;
const document = (0, command_builders_1.finishCommandDocument)(accessToken);
document.db = credentials.source;
return { speculativeAuthenticate: document };
}
return {};
}
/**
* Reauthenticate the callback workflow. For this we invalidate the access token
* in the cache and run the authentication steps again. No initial handshake needs
* to be sent.
*/
async reauthenticate(connection, credentials) {
if (this.cache.hasAccessToken) {
// Reauthentication implies the token has expired.
if (connection.accessToken === this.cache.getAccessToken()) {
// If connection's access token is the same as the cache's, remove
// the token from the cache and connection.
this.cache.removeAccessToken();
delete connection.accessToken;
}
else {
// If the connection's access token is different from the cache's, set
// the cache's token on the connection and do not remove from the
// cache.
connection.accessToken = this.cache.getAccessToken();
}
}
await this.execute(connection, credentials);
}
/**
* Starts the callback authentication process. If there is a speculative
* authentication document from the initial handshake, then we will use that
* value to get the issuer; otherwise we will send the saslStart command.
*/
async startAuthentication(connection, credentials, response) {
let result;
if (response?.speculativeAuthenticate) {
result = response.speculativeAuthenticate;
}
else {
result = await connection.command((0, utils_1.ns)(credentials.source), (0, command_builders_1.startCommandDocument)(credentials), undefined);
}
return result;
}
/**
* Finishes the callback authentication process.
*/
async finishAuthentication(connection, credentials, token, conversationId) {
await connection.command((0, utils_1.ns)(credentials.source), (0, command_builders_1.finishCommandDocument)(token, conversationId), undefined);
}
/**
* Executes the callback and validates the output.
*/
async executeAndValidateCallback(params) {
const result = await this.callback(params);
// Validate that the result returned by the callback is acceptable. If it is
// not, raise an error so the invalid result never reaches the cache.
if (isCallbackResultInvalid(result)) {
throw new error_1.MongoMissingCredentialsError(CALLBACK_RESULT_ERROR);
}
return result;
}
/**
* Ensures only one callback executes at a time and throttles calls so that
* at most one runs every 100ms.
*/
withLock(callback) {
let lock = Promise.resolve();
return async (params) => {
// We do this to ensure that we never return the result of the previous
// lock; only the current callback's value gets returned.
await lock;
lock = lock
.catch(() => null)
.then(async () => {
const difference = Date.now() - this.lastExecutionTime;
if (difference <= THROTTLE_MS) {
await (0, promises_1.setTimeout)(THROTTLE_MS - difference, { signal: params.timeoutContext });
}
this.lastExecutionTime = Date.now();
return await callback(params);
});
return await lock;
};
}
}
exports.CallbackWorkflow = CallbackWorkflow;
/**
* Determines if a result returned from a request or refresh callback
* function is invalid. This means the result is nullish, is missing the
* required accessToken field, or contains properties beyond accessToken,
* expiresInSeconds, and refreshToken.
*/
function isCallbackResultInvalid(tokenResult) {
if (tokenResult == null || typeof tokenResult !== 'object')
return true;
if (!('accessToken' in tokenResult))
return true;
return !Object.getOwnPropertyNames(tokenResult).every(prop => RESULT_PROPERTIES.includes(prop));
}
//# sourceMappingURL=callback_workflow.js.map
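
A standalone sketch of the serialize-and-throttle pattern used by withLock() above (hypothetical helper; the driver's version additionally threads the callback params' AbortSignal into the sleep).

const { setTimeout: sleep } = require('timers/promises');

function withLock(fn, throttleMs = 100) {
  let lock = Promise.resolve();
  let last = Date.now() - throttleMs;
  return async (...args) => {
    await lock; // never surface a previous caller's result to this caller
    lock = lock
      .catch(() => null) // a rejected run must not poison later runs
      .then(async () => {
        const elapsed = Date.now() - last;
        if (elapsed <= throttleMs) {
          await sleep(throttleMs - elapsed);
        }
        last = Date.now();
        return await fn(...args);
      });
    return await lock;
  };
}

// Usage: const throttled = withLock(async () => fetchSomething()); // fetchSomething is hypothetical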

1
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/callback_workflow.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"callback_workflow.js","sourceRoot":"","sources":["../../../../src/cmap/auth/mongodb_oidc/callback_workflow.ts"],"names":[],"mappings":";;;AAAA,8CAA6C;AAG7C,0CAA8D;AAC9D,0CAAoC;AASpC,yDAAiF;AAGjF,gCAAgC;AACnB,QAAA,gBAAgB,GAAG,MAAM,CAAC;AACvC,+BAA+B;AAClB,QAAA,oBAAoB,GAAG,KAAK,CAAC;AAE1C,kDAAkD;AAClD,MAAM,iBAAiB,GAAG,CAAC,aAAa,EAAE,kBAAkB,EAAE,cAAc,CAAC,CAAC;AAE9E,yDAAyD;AACzD,MAAM,qBAAqB,GACzB,8EAA8E,CAAC;AAEjF,2CAA2C;AAC3C,MAAM,WAAW,GAAG,GAAG,CAAC;AAExB;;;GAGG;AACH,MAAsB,gBAAgB;IAKpC;;OAEG;IACH,YAAY,KAAiB,EAAE,QAA8B;QAC3D,IAAI,CAAC,KAAK,GAAG,KAAK,CAAC;QACnB,IAAI,CAAC,QAAQ,GAAG,IAAI,CAAC,QAAQ,CAAC,QAAQ,CAAC,CAAC;QACxC,IAAI,CAAC,iBAAiB,GAAG,IAAI,CAAC,GAAG,EAAE,GAAG,WAAW,CAAC;IACpD,CAAC;IAED;;;OAGG;IACH,KAAK,CAAC,eAAe,CAAC,UAAsB,EAAE,WAA6B;QACzE,iDAAiD;QACjD,uFAAuF;QACvF,+EAA+E;QAC/E,IAAI,IAAI,CAAC,KAAK,CAAC,cAAc,EAAE,CAAC;YAC9B,MAAM,WAAW,GAAG,IAAI,CAAC,KAAK,CAAC,cAAc,EAAE,CAAC;YAChD,UAAU,CAAC,WAAW,GAAG,WAAW,CAAC;YACrC,MAAM,QAAQ,GAAG,IAAA,wCAAqB,EAAC,WAAW,CAAC,CAAC;YACpD,QAAQ,CAAC,EAAE,GAAG,WAAW,CAAC,MAAM,CAAC;YACjC,OAAO,EAAE,uBAAuB,EAAE,QAAQ,EAAE,CAAC;QAC/C,CAAC;QACD,OAAO,EAAE,CAAC;IACZ,CAAC;IAED;;;;OAIG;IACH,KAAK,CAAC,cAAc,CAAC,UAAsB,EAAE,WAA6B;QACxE,IAAI,IAAI,CAAC,KAAK,CAAC,cAAc,EAAE,CAAC;YAC9B,kDAAkD;YAClD,IAAI,UAAU,CAAC,WAAW,KAAK,IAAI,CAAC,KAAK,CAAC,cAAc,EAAE,EAAE,CAAC;gBAC3D,kEAAkE;gBAClE,2CAA2C;gBAC3C,IAAI,CAAC,KAAK,CAAC,iBAAiB,EAAE,CAAC;gBAC/B,OAAO,UAAU,CAAC,WAAW,CAAC;YAChC,CAAC;iBAAM,CAAC;gBACN,sEAAsE;gBACtE,iEAAiE;gBACjE,SAAS;gBACT,UAAU,CAAC,WAAW,GAAG,IAAI,CAAC,KAAK,CAAC,cAAc,EAAE,CAAC;YACvD,CAAC;QACH,CAAC;QACD,MAAM,IAAI,CAAC,OAAO,CAAC,UAAU,EAAE,WAAW,CAAC,CAAC;IAC9C,CAAC;IAWD;;;;OAIG;IACO,KAAK,CAAC,mBAAmB,CACjC,UAAsB,EACtB,WAA6B,EAC7B,QAAmB;QAEnB,IAAI,MAAM,CAAC;QACX,IAAI,QAAQ,EAAE,uBAAuB,EAAE,CAAC;YACtC,MAAM,GAAG,QAAQ,CAAC,uBAAuB,CAAC;QAC5C,CAAC;aAAM,CAAC;YACN,MAAM,GAAG,MAAM,UAAU,CAAC,OAAO,CAC/B,IAAA,UAAE,EAAC,WAAW,CAAC,MAAM,CAAC,EACtB,IAAA,uCAAoB,EAAC,WAAW,CAAC,EACjC,SAAS,CACV,CAAC;QACJ,CAAC;QACD,OAAO,MAAM,CAAC;IAChB,CAAC;IAED;;OAEG;IACO,KAAK,CAAC,oBAAoB,CAClC,UAAsB,EACtB,WAA6B,EAC7B,KAAa,EACb,cAAuB;QAEvB,MAAM,UAAU,CAAC,OAAO,CACtB,IAAA,UAAE,EAAC,WAAW,CAAC,MAAM,CAAC,EACtB,IAAA,wCAAqB,EAAC,KAAK,EAAE,cAAc,CAAC,EAC5C,SAAS,CACV,CAAC;IACJ,CAAC;IAED;;OAEG;IACO,KAAK,CAAC,0BAA0B,CAAC,MAA0B;QACnE,MAAM,MAAM,GAAG,MAAM,IAAI,CAAC,QAAQ,CAAC,MAAM,CAAC,CAAC;QAC3C,gFAAgF;QAChF,iDAAiD;QACjD,IAAI,uBAAuB,CAAC,MAAM,CAAC,EAAE,CAAC;YACpC,MAAM,IAAI,oCAA4B,CAAC,qBAAqB,CAAC,CAAC;QAChE,CAAC;QACD,OAAO,MAAM,CAAC;IAChB,CAAC;IAED;;;OAGG;IACO,QAAQ,CAAC,QAA8B;QAC/C,IAAI,IAAI,GAAiB,OAAO,CAAC,OAAO,EAAE,CAAC;QAC3C,OAAO,KAAK,EAAE,MAA0B,EAAyB,EAAE;YACjE,oEAAoE;YACpE,uEAAuE;YACvE,MAAM,IAAI,CAAC;YACX,IAAI,GAAG,IAAI;iBAER,KAAK,CAAC,GAAG,EAAE,CAAC,IAAI,CAAC;iBAEjB,IAAI,CAAC,KAAK,IAAI,EAAE;gBACf,MAAM,UAAU,GAAG,IAAI,CAAC,GAAG,EAAE,GAAG,IAAI,CAAC,iBAAiB,CAAC;gBACvD,IAAI,UAAU,IAAI,WAAW,EAAE,CAAC;oBAC9B,MAAM,IAAA,qBAAU,EAAC,WAAW,GAAG,UAAU,EAAE,EAAE,MAAM,EAAE,MAAM,CAAC,cAAc,EAAE,CAAC,CAAC;gBAChF,CAAC;gBACD,IAAI,CAAC,iBAAiB,GAAG,IAAI,CAAC,GAAG,EAAE,CAAC;gBACpC,OAAO,MAAM,QAAQ,CAAC,MAAM,CAAC,CAAC;YAChC,CAAC,CAAC,CAAC;YACL,OAAO,MAAM,IAAI,CAAC;QACpB,CAAC,CAAC;IACJ,CAAC;CACF;AA7ID,4CA6IC;AAED;;;;GAIG;AACH,SAAS,uBAAuB,CAAC,WAAoB;IACnD,IAAI,WAAW,IAAI,IAAI,IAAI,OAAO,WAAW,KAAK,QAAQ;QAAE,OAAO,IAAI,CAAC;IACxE,IAAI,CAAC,CAAC,aAAa,IAAI,WAAW,CAAC;QAAE,OAAO,IAAI,CAAC;IACjD,OAAO,CAAC,MAAM,CAAC,mBAAmB,CAAC,WAAW,CAAC,CAAC,KAAK,CAAC,IAAI,CAAC,EAAE,CAAC,iBAAiB,CAAC,QAAQ,CAAC,IAAI,CAAC,CAAC,CAAC;AAClG,CAAC"}

44
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/command_builders.js generated vendored Normal file
View file

@ -0,0 +1,44 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.finishCommandDocument = finishCommandDocument;
exports.startCommandDocument = startCommandDocument;
const bson_1 = require("../../../bson");
const providers_1 = require("../providers");
/**
* Generate the finishing command document for authentication. Will be a
* saslStart or saslContinue depending on the presence of a conversation id.
*/
function finishCommandDocument(token, conversationId) {
if (conversationId != null) {
return {
saslContinue: 1,
conversationId: conversationId,
payload: new bson_1.Binary(bson_1.BSON.serialize({ jwt: token }))
};
}
// saslContinue requires a conversationId in the command to be valid. In this
// case the server allows "step two" to actually be a saslStart with the token
// as the jwt, since the use of a cached value has no correlating conversation
// on the particular connection.
return {
saslStart: 1,
mechanism: providers_1.AuthMechanism.MONGODB_OIDC,
payload: new bson_1.Binary(bson_1.BSON.serialize({ jwt: token }))
};
}
/**
* Generate the saslStart command document.
*/
function startCommandDocument(credentials) {
const payload = {};
if (credentials.username) {
payload.n = credentials.username;
}
return {
saslStart: 1,
autoAuthorize: 1,
mechanism: providers_1.AuthMechanism.MONGODB_OIDC,
payload: new bson_1.Binary(bson_1.BSON.serialize(payload))
};
}
//# sourceMappingURL=command_builders.js.map
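
A quick look at the documents the builder above produces (require path assumes this vendored tree; the token string is hypothetical).

const { finishCommandDocument } =
  require('./node_modules/mongodb/lib/cmap/auth/mongodb_oidc/command_builders');

// With a conversationId: a saslContinue whose payload is BSON { jwt: <token> }.
console.log(finishCommandDocument('eyJhbGci...', 1));
// { saslContinue: 1, conversationId: 1, payload: Binary(...) }

// Without one (cached token, no in-flight conversation): a fresh saslStart.
console.log(finishCommandDocument('eyJhbGci...'));
// { saslStart: 1, mechanism: 'MONGODB-OIDC', payload: Binary(...) }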

1
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/command_builders.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"command_builders.js","sourceRoot":"","sources":["../../../../src/cmap/auth/mongodb_oidc/command_builders.ts"],"names":[],"mappings":";;AAmBA,sDAiBC;AAKD,oDAWC;AApDD,wCAA4D;AAE5D,4CAA6C;AAa7C;;;GAGG;AACH,SAAgB,qBAAqB,CAAC,KAAa,EAAE,cAAuB;IAC1E,IAAI,cAAc,IAAI,IAAI,EAAE,CAAC;QAC3B,OAAO;YACL,YAAY,EAAE,CAAC;YACf,cAAc,EAAE,cAAc;YAC9B,OAAO,EAAE,IAAI,aAAM,CAAC,WAAI,CAAC,SAAS,CAAC,EAAE,GAAG,EAAE,KAAK,EAAE,CAAC,CAAC;SACpD,CAAC;IACJ,CAAC;IACD,+EAA+E;IAC/E,8EAA8E;IAC9E,+EAA+E;IAC/E,gCAAgC;IAChC,OAAO;QACL,SAAS,EAAE,CAAC;QACZ,SAAS,EAAE,yBAAa,CAAC,YAAY;QACrC,OAAO,EAAE,IAAI,aAAM,CAAC,WAAI,CAAC,SAAS,CAAC,EAAE,GAAG,EAAE,KAAK,EAAE,CAAC,CAAC;KACpD,CAAC;AACJ,CAAC;AAED;;GAEG;AACH,SAAgB,oBAAoB,CAAC,WAA6B;IAChE,MAAM,OAAO,GAAa,EAAE,CAAC;IAC7B,IAAI,WAAW,CAAC,QAAQ,EAAE,CAAC;QACzB,OAAO,CAAC,CAAC,GAAG,WAAW,CAAC,QAAQ,CAAC;IACnC,CAAC;IACD,OAAO;QACL,SAAS,EAAE,CAAC;QACZ,aAAa,EAAE,CAAC;QAChB,SAAS,EAAE,yBAAa,CAAC,YAAY;QACrC,OAAO,EAAE,IAAI,aAAM,CAAC,WAAI,CAAC,SAAS,CAAC,OAAO,CAAC,CAAC;KAC7C,CAAC;AACJ,CAAC"}

39
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/gcp_machine_workflow.js generated vendored Normal file
View file

@ -0,0 +1,39 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.callback = void 0;
const error_1 = require("../../../error");
const utils_1 = require("../../../utils");
/** GCP base URL. */
const GCP_BASE_URL = 'http://metadata/computeMetadata/v1/instance/service-accounts/default/identity';
/** GCP request headers. */
const GCP_HEADERS = Object.freeze({ 'Metadata-Flavor': 'Google' });
/** Error for when the token audience is missing in the environment. */
const TOKEN_RESOURCE_MISSING_ERROR = 'TOKEN_RESOURCE must be set in the auth mechanism properties when ENVIRONMENT is gcp.';
/**
* The callback function to be used in the automated callback workflow.
* @param params - The OIDC callback parameters.
* @returns The OIDC response.
*/
const callback = async (params) => {
const tokenAudience = params.tokenAudience;
if (!tokenAudience) {
throw new error_1.MongoGCPError(TOKEN_RESOURCE_MISSING_ERROR);
}
return await getGcpTokenData(tokenAudience);
};
exports.callback = callback;
/**
* Hit the GCP endpoint to get the token data.
*/
async function getGcpTokenData(tokenAudience) {
const url = new URL(GCP_BASE_URL);
url.searchParams.append('audience', tokenAudience);
const response = await (0, utils_1.get)(url, {
headers: GCP_HEADERS
});
if (response.status !== 200) {
throw new error_1.MongoGCPError(`Status code ${response.status} returned from the GCP endpoint. Response body: ${response.body}`);
}
return { accessToken: response.body };
}
//# sourceMappingURL=gcp_machine_workflow.js.map
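
A sketch of the request getGcpTokenData() issues; the audience value is hypothetical.

const url = new URL('http://metadata/computeMetadata/v1/instance/service-accounts/default/identity');
url.searchParams.append('audience', 'mongodb://my-cluster'); // hypothetical TOKEN_RESOURCE
console.log(url.href);
// http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience=mongodb%3A%2F%2Fmy-cluster
// The request carries the 'Metadata-Flavor: Google' header; on a 200, the raw
// response body is used directly as the access token.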

1
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/gcp_machine_workflow.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"gcp_machine_workflow.js","sourceRoot":"","sources":["../../../../src/cmap/auth/mongodb_oidc/gcp_machine_workflow.ts"],"names":[],"mappings":";;;AAAA,0CAA+C;AAC/C,0CAAqC;AAGrC,oBAAoB;AACpB,MAAM,YAAY,GAChB,+EAA+E,CAAC;AAElF,2BAA2B;AAC3B,MAAM,WAAW,GAAG,MAAM,CAAC,MAAM,CAAC,EAAE,iBAAiB,EAAE,QAAQ,EAAE,CAAC,CAAC;AAEnE,uEAAuE;AACvE,MAAM,4BAA4B,GAChC,sFAAsF,CAAC;AAEzF;;;;GAIG;AACI,MAAM,QAAQ,GAAyB,KAAK,EACjD,MAA0B,EACH,EAAE;IACzB,MAAM,aAAa,GAAG,MAAM,CAAC,aAAa,CAAC;IAC3C,IAAI,CAAC,aAAa,EAAE,CAAC;QACnB,MAAM,IAAI,qBAAa,CAAC,4BAA4B,CAAC,CAAC;IACxD,CAAC;IACD,OAAO,MAAM,eAAe,CAAC,aAAa,CAAC,CAAC;AAC9C,CAAC,CAAC;AARW,QAAA,QAAQ,YAQnB;AAEF;;GAEG;AACH,KAAK,UAAU,eAAe,CAAC,aAAqB;IAClD,MAAM,GAAG,GAAG,IAAI,GAAG,CAAC,YAAY,CAAC,CAAC;IAClC,GAAG,CAAC,YAAY,CAAC,MAAM,CAAC,UAAU,EAAE,aAAa,CAAC,CAAC;IACnD,MAAM,QAAQ,GAAG,MAAM,IAAA,WAAG,EAAC,GAAG,EAAE;QAC9B,OAAO,EAAE,WAAW;KACrB,CAAC,CAAC;IACH,IAAI,QAAQ,CAAC,MAAM,KAAK,GAAG,EAAE,CAAC;QAC5B,MAAM,IAAI,qBAAa,CACrB,eAAe,QAAQ,CAAC,MAAM,mDAAmD,QAAQ,CAAC,IAAI,EAAE,CACjG,CAAC;IACJ,CAAC;IACD,OAAO,EAAE,WAAW,EAAE,QAAQ,CAAC,IAAI,EAAE,CAAC;AACxC,CAAC"}

122
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/human_callback_workflow.js generated vendored Normal file
View file

@ -0,0 +1,122 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.HumanCallbackWorkflow = void 0;
const bson_1 = require("../../../bson");
const error_1 = require("../../../error");
const timeout_1 = require("../../../timeout");
const mongodb_oidc_1 = require("../mongodb_oidc");
const callback_workflow_1 = require("./callback_workflow");
/**
* Class implementing behaviour for the human callback workflow.
* @internal
*/
class HumanCallbackWorkflow extends callback_workflow_1.CallbackWorkflow {
/**
* Instantiate the human callback workflow.
*/
constructor(cache, callback) {
super(cache, callback);
}
/**
* Execute the OIDC human callback workflow.
*/
async execute(connection, credentials) {
// Check if the Client Cache has an access token.
// If it does, cache the access token in the Connection Cache and perform a One-Step SASL conversation
// using the access token. If the server returns an Authentication error (18),
        // invalidate the access token from the Client Cache, clear the Connection Cache,
// and restart the authentication flow. Raise any other errors to the user. On success, exit the algorithm.
if (this.cache.hasAccessToken) {
const token = this.cache.getAccessToken();
connection.accessToken = token;
try {
return await this.finishAuthentication(connection, credentials, token);
}
catch (error) {
if (error instanceof error_1.MongoError &&
error.code === error_1.MONGODB_ERROR_CODES.AuthenticationFailed) {
this.cache.removeAccessToken();
delete connection.accessToken;
return await this.execute(connection, credentials);
}
else {
throw error;
}
}
}
// Check if the Client Cache has a refresh token.
// If it does, call the OIDC Human Callback with the cached refresh token and IdpInfo to get a
// new access token. Cache the new access token in the Client Cache and Connection Cache.
        // Perform a One-Step SASL conversation using the new access token. If the server returns
// an Authentication error (18), clear the refresh token, invalidate the access token from the
// Client Cache, clear the Connection Cache, and restart the authentication flow. Raise any other
// errors to the user. On success, exit the algorithm.
if (this.cache.hasRefreshToken) {
const refreshToken = this.cache.getRefreshToken();
const result = await this.fetchAccessToken(this.cache.getIdpInfo(), credentials, refreshToken);
this.cache.put(result);
connection.accessToken = result.accessToken;
try {
return await this.finishAuthentication(connection, credentials, result.accessToken);
}
catch (error) {
if (error instanceof error_1.MongoError &&
error.code === error_1.MONGODB_ERROR_CODES.AuthenticationFailed) {
this.cache.removeRefreshToken();
delete connection.accessToken;
return await this.execute(connection, credentials);
}
else {
throw error;
}
}
}
// Start a new Two-Step SASL conversation.
// Run a PrincipalStepRequest to get the IdpInfo.
// Call the OIDC Human Callback with the new IdpInfo to get a new access token and optional refresh
// token. Drivers MUST NOT pass a cached refresh token to the callback when performing
// a new Two-Step conversation. Cache the new IdpInfo and refresh token in the Client Cache and the
// new access token in the Client Cache and Connection Cache.
// Attempt to authenticate using a JwtStepRequest with the new access token. Raise any errors to the user.
const startResponse = await this.startAuthentication(connection, credentials);
const conversationId = startResponse.conversationId;
const idpInfo = bson_1.BSON.deserialize(startResponse.payload.buffer);
const callbackResponse = await this.fetchAccessToken(idpInfo, credentials);
this.cache.put(callbackResponse, idpInfo);
connection.accessToken = callbackResponse.accessToken;
return await this.finishAuthentication(connection, credentials, callbackResponse.accessToken, conversationId);
}
/**
* Fetches an access token using the callback.
*/
async fetchAccessToken(idpInfo, credentials, refreshToken) {
const controller = new AbortController();
const params = {
timeoutContext: controller.signal,
version: mongodb_oidc_1.OIDC_VERSION,
idpInfo: idpInfo
};
if (credentials.username) {
params.username = credentials.username;
}
if (refreshToken) {
params.refreshToken = refreshToken;
}
const timeout = timeout_1.Timeout.expires(callback_workflow_1.HUMAN_TIMEOUT_MS);
try {
return await Promise.race([this.executeAndValidateCallback(params), timeout]);
}
catch (error) {
if (timeout_1.TimeoutError.is(error)) {
controller.abort();
throw new error_1.MongoOIDCError(`OIDC callback timed out after ${callback_workflow_1.HUMAN_TIMEOUT_MS}ms.`);
}
throw error;
}
finally {
timeout.clear();
}
}
}
exports.HumanCallbackWorkflow = HumanCallbackWorkflow;
//# sourceMappingURL=human_callback_workflow.js.map

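A hedged sketch of how an application wires a human callback into the client: the OIDC_HUMAN_CALLBACK property name follows the driver's OIDC options, and runDeviceFlow is a hypothetical stand-in for a real interactive IdP login:

const { MongoClient } = require('mongodb');

// Hypothetical stand-in for a real device/browser login flow.
async function runDeviceFlow(idpInfo, refreshToken) {
  return { accessToken: 'eyJ...', refreshToken: 'rt-demo' };
}

const client = new MongoClient('mongodb://localhost:27017/?authMechanism=MONGODB-OIDC', {
  authMechanismProperties: {
    OIDC_HUMAN_CALLBACK: async ({ idpInfo, refreshToken }) => {
      // idpInfo comes from the principal step above; a cached refreshToken is
      // passed back in on re-authentication, per the workflow's second branch.
      const tokens = await runDeviceFlow(idpInfo, refreshToken);
      return { accessToken: tokens.accessToken, refreshToken: tokens.refreshToken };
    }
  }
});
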
1
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/human_callback_workflow.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"human_callback_workflow.js","sourceRoot":"","sources":["../../../../src/cmap/auth/mongodb_oidc/human_callback_workflow.ts"],"names":[],"mappings":";;;AAAA,wCAAqC;AACrC,0CAAiF;AACjF,8CAAyD;AAGzD,kDAMyB;AACzB,2DAAyE;AAGzE;;;GAGG;AACH,MAAa,qBAAsB,SAAQ,oCAAgB;IACzD;;OAEG;IACH,YAAY,KAAiB,EAAE,QAA8B;QAC3D,KAAK,CAAC,KAAK,EAAE,QAAQ,CAAC,CAAC;IACzB,CAAC;IAED;;OAEG;IACH,KAAK,CAAC,OAAO,CAAC,UAAsB,EAAE,WAA6B;QACjE,iDAAiD;QACjD,sGAAsG;QACtG,8EAA8E;QAC9E,uFAAuF;QACvF,2GAA2G;QAC3G,IAAI,IAAI,CAAC,KAAK,CAAC,cAAc,EAAE,CAAC;YAC9B,MAAM,KAAK,GAAG,IAAI,CAAC,KAAK,CAAC,cAAc,EAAE,CAAC;YAC1C,UAAU,CAAC,WAAW,GAAG,KAAK,CAAC;YAC/B,IAAI,CAAC;gBACH,OAAO,MAAM,IAAI,CAAC,oBAAoB,CAAC,UAAU,EAAE,WAAW,EAAE,KAAK,CAAC,CAAC;YACzE,CAAC;YAAC,OAAO,KAAK,EAAE,CAAC;gBACf,IACE,KAAK,YAAY,kBAAU;oBAC3B,KAAK,CAAC,IAAI,KAAK,2BAAmB,CAAC,oBAAoB,EACvD,CAAC;oBACD,IAAI,CAAC,KAAK,CAAC,iBAAiB,EAAE,CAAC;oBAC/B,OAAO,UAAU,CAAC,WAAW,CAAC;oBAC9B,OAAO,MAAM,IAAI,CAAC,OAAO,CAAC,UAAU,EAAE,WAAW,CAAC,CAAC;gBACrD,CAAC;qBAAM,CAAC;oBACN,MAAM,KAAK,CAAC;gBACd,CAAC;YACH,CAAC;QACH,CAAC;QACD,iDAAiD;QACjD,8FAA8F;QAC9F,yFAAyF;QACzF,6FAA6F;QAC7F,8FAA8F;QAC9F,iGAAiG;QACjG,sDAAsD;QACtD,IAAI,IAAI,CAAC,KAAK,CAAC,eAAe,EAAE,CAAC;YAC/B,MAAM,YAAY,GAAG,IAAI,CAAC,KAAK,CAAC,eAAe,EAAE,CAAC;YAClD,MAAM,MAAM,GAAG,MAAM,IAAI,CAAC,gBAAgB,CACxC,IAAI,CAAC,KAAK,CAAC,UAAU,EAAE,EACvB,WAAW,EACX,YAAY,CACb,CAAC;YACF,IAAI,CAAC,KAAK,CAAC,GAAG,CAAC,MAAM,CAAC,CAAC;YACvB,UAAU,CAAC,WAAW,GAAG,MAAM,CAAC,WAAW,CAAC;YAC5C,IAAI,CAAC;gBACH,OAAO,MAAM,IAAI,CAAC,oBAAoB,CAAC,UAAU,EAAE,WAAW,EAAE,MAAM,CAAC,WAAW,CAAC,CAAC;YACtF,CAAC;YAAC,OAAO,KAAK,EAAE,CAAC;gBACf,IACE,KAAK,YAAY,kBAAU;oBAC3B,KAAK,CAAC,IAAI,KAAK,2BAAmB,CAAC,oBAAoB,EACvD,CAAC;oBACD,IAAI,CAAC,KAAK,CAAC,kBAAkB,EAAE,CAAC;oBAChC,OAAO,UAAU,CAAC,WAAW,CAAC;oBAC9B,OAAO,MAAM,IAAI,CAAC,OAAO,CAAC,UAAU,EAAE,WAAW,CAAC,CAAC;gBACrD,CAAC;qBAAM,CAAC;oBACN,MAAM,KAAK,CAAC;gBACd,CAAC;YACH,CAAC;QACH,CAAC;QAED,0CAA0C;QAC1C,iDAAiD;QACjD,mGAAmG;QACnG,sFAAsF;QACtF,mGAAmG;QACnG,6DAA6D;QAC7D,0GAA0G;QAC1G,MAAM,aAAa,GAAG,MAAM,IAAI,CAAC,mBAAmB,CAAC,UAAU,EAAE,WAAW,CAAC,CAAC;QAC9E,MAAM,cAAc,GAAG,aAAa,CAAC,cAAc,CAAC;QACpD,MAAM,OAAO,GAAG,WAAI,CAAC,WAAW,CAAC,aAAa,CAAC,OAAO,CAAC,MAAM,CAAY,CAAC;QAC1E,MAAM,gBAAgB,GAAG,MAAM,IAAI,CAAC,gBAAgB,CAAC,OAAO,EAAE,WAAW,CAAC,CAAC;QAC3E,IAAI,CAAC,KAAK,CAAC,GAAG,CAAC,gBAAgB,EAAE,OAAO,CAAC,CAAC;QAC1C,UAAU,CAAC,WAAW,GAAG,gBAAgB,CAAC,WAAW,CAAC;QACtD,OAAO,MAAM,IAAI,CAAC,oBAAoB,CACpC,UAAU,EACV,WAAW,EACX,gBAAgB,CAAC,WAAW,EAC5B,cAAc,CACf,CAAC;IACJ,CAAC;IAED;;OAEG;IACK,KAAK,CAAC,gBAAgB,CAC5B,OAAgB,EAChB,WAA6B,EAC7B,YAAqB;QAErB,MAAM,UAAU,GAAG,IAAI,eAAe,EAAE,CAAC;QACzC,MAAM,MAAM,GAAuB;YACjC,cAAc,EAAE,UAAU,CAAC,MAAM;YACjC,OAAO,EAAE,2BAAY;YACrB,OAAO,EAAE,OAAO;SACjB,CAAC;QACF,IAAI,WAAW,CAAC,QAAQ,EAAE,CAAC;YACzB,MAAM,CAAC,QAAQ,GAAG,WAAW,CAAC,QAAQ,CAAC;QACzC,CAAC;QACD,IAAI,YAAY,EAAE,CAAC;YACjB,MAAM,CAAC,YAAY,GAAG,YAAY,CAAC;QACrC,CAAC;QACD,MAAM,OAAO,GAAG,iBAAO,CAAC,OAAO,CAAC,oCAAgB,CAAC,CAAC;QAClD,IAAI,CAAC;YACH,OAAO,MAAM,OAAO,CAAC,IAAI,CAAC,CAAC,IAAI,CAAC,0BAA0B,CAAC,MAAM,CAAC,EAAE,OAAO,CAAC,CAAC,CAAC;QAChF,CAAC;QAAC,OAAO,KAAK,EAAE,CAAC;YACf,IAAI,sBAAY,CAAC,EAAE,CAAC,KAAK,CAAC,EAAE,CAAC;gBAC3B,UAAU,CAAC,KAAK,EAAE,CAAC;gBACnB,MAAM,IAAI,sBAAc,CAAC,iCAAiC,oCAAgB,KAAK,CAAC,CAAC;YACnF,CAAC;YACD,MAAM,KAAK,CAAC;QACd,CAAC;gBAAS,CAAC;YACT,OAAO,CAAC,KAAK,EAAE,CAAC;QAClB,CAAC;IACH,CAAC;CACF;AAzHD,sDAyHC"}

31
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/k8s_machine_workflow.js generated vendored Normal file

@@ -0,0 +1,31 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.callback = void 0;
const promises_1 = require("fs/promises");
/** The fallback file name */
const FALLBACK_FILENAME = '/var/run/secrets/kubernetes.io/serviceaccount/token';
/** The azure environment variable for the file name. */
const AZURE_FILENAME = 'AZURE_FEDERATED_TOKEN_FILE';
/** The AWS environment variable for the file name. */
const AWS_FILENAME = 'AWS_WEB_IDENTITY_TOKEN_FILE';
/**
* The callback function to be used in the automated callback workflow.
* @param params - The OIDC callback parameters.
* @returns The OIDC response.
*/
const callback = async () => {
let filename;
if (process.env[AZURE_FILENAME]) {
filename = process.env[AZURE_FILENAME];
}
else if (process.env[AWS_FILENAME]) {
filename = process.env[AWS_FILENAME];
}
else {
filename = FALLBACK_FILENAME;
}
const token = await (0, promises_1.readFile)(filename, 'utf8');
return { accessToken: token };
};
exports.callback = callback;
//# sourceMappingURL=k8s_machine_workflow.js.map

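Selecting this workflow is done from the connection string; a sketch with a hypothetical host, assuming the ENVIRONMENT:k8s mechanism property routes OIDC auth to the callback above:

const { MongoClient } = require('mongodb');

const client = new MongoClient(
  'mongodb+srv://cluster0.example.net/?authMechanism=MONGODB-OIDC' +
    '&authMechanismProperties=ENVIRONMENT:k8s' // reads the mounted token file
);
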
1
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/k8s_machine_workflow.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"k8s_machine_workflow.js","sourceRoot":"","sources":["../../../../src/cmap/auth/mongodb_oidc/k8s_machine_workflow.ts"],"names":[],"mappings":";;;AAAA,0CAAuC;AAIvC,6BAA6B;AAC7B,MAAM,iBAAiB,GAAG,qDAAqD,CAAC;AAEhF,wDAAwD;AACxD,MAAM,cAAc,GAAG,4BAA4B,CAAC;AAEpD,sDAAsD;AACtD,MAAM,YAAY,GAAG,6BAA6B,CAAC;AAEnD;;;;GAIG;AACI,MAAM,QAAQ,GAAyB,KAAK,IAA2B,EAAE;IAC9E,IAAI,QAAgB,CAAC;IACrB,IAAI,OAAO,CAAC,GAAG,CAAC,cAAc,CAAC,EAAE,CAAC;QAChC,QAAQ,GAAG,OAAO,CAAC,GAAG,CAAC,cAAc,CAAC,CAAC;IACzC,CAAC;SAAM,IAAI,OAAO,CAAC,GAAG,CAAC,YAAY,CAAC,EAAE,CAAC;QACrC,QAAQ,GAAG,OAAO,CAAC,GAAG,CAAC,YAAY,CAAC,CAAC;IACvC,CAAC;SAAM,CAAC;QACN,QAAQ,GAAG,iBAAiB,CAAC;IAC/B,CAAC;IACD,MAAM,KAAK,GAAG,MAAM,IAAA,mBAAQ,EAAC,QAAQ,EAAE,MAAM,CAAC,CAAC;IAC/C,OAAO,EAAE,WAAW,EAAE,KAAK,EAAE,CAAC;AAChC,CAAC,CAAC;AAXW,QAAA,QAAQ,YAWnB"}

52
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/token_cache.js generated vendored Normal file

@@ -0,0 +1,52 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.TokenCache = void 0;
const error_1 = require("../../../error");
class MongoOIDCError extends error_1.MongoDriverError {
}
/** @internal */
class TokenCache {
get hasAccessToken() {
return !!this.accessToken;
}
get hasRefreshToken() {
return !!this.refreshToken;
}
get hasIdpInfo() {
return !!this.idpInfo;
}
getAccessToken() {
if (!this.accessToken) {
throw new MongoOIDCError('Attempted to get an access token when none exists.');
}
return this.accessToken;
}
getRefreshToken() {
if (!this.refreshToken) {
throw new MongoOIDCError('Attempted to get a refresh token when none exists.');
}
return this.refreshToken;
}
getIdpInfo() {
if (!this.idpInfo) {
throw new MongoOIDCError('Attempted to get IDP information when none exists.');
}
return this.idpInfo;
}
put(response, idpInfo) {
this.accessToken = response.accessToken;
this.refreshToken = response.refreshToken;
this.expiresInSeconds = response.expiresInSeconds;
if (idpInfo) {
this.idpInfo = idpInfo;
}
}
removeAccessToken() {
this.accessToken = undefined;
}
removeRefreshToken() {
this.refreshToken = undefined;
}
}
exports.TokenCache = TokenCache;
//# sourceMappingURL=token_cache.js.map

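An illustration of the cache contract: the has* getters guard the get* calls, which throw when empty. The deep-import path is internal and may change between releases:

const { TokenCache } = require('mongodb/lib/cmap/auth/mongodb_oidc/token_cache');

const cache = new TokenCache();
cache.put({ accessToken: 'eyJ...', refreshToken: 'rt-1', expiresInSeconds: 3600 });
if (cache.hasAccessToken) {
  console.log(cache.getAccessToken()); // 'eyJ...'
}
cache.removeAccessToken();
console.log(cache.hasAccessToken); // false; getAccessToken() would now throw
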
1
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/token_cache.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"token_cache.js","sourceRoot":"","sources":["../../../../src/cmap/auth/mongodb_oidc/token_cache.ts"],"names":[],"mappings":";;;AAAA,0CAAkD;AAGlD,MAAM,cAAe,SAAQ,wBAAgB;CAAG;AAEhD,gBAAgB;AAChB,MAAa,UAAU;IAMrB,IAAI,cAAc;QAChB,OAAO,CAAC,CAAC,IAAI,CAAC,WAAW,CAAC;IAC5B,CAAC;IAED,IAAI,eAAe;QACjB,OAAO,CAAC,CAAC,IAAI,CAAC,YAAY,CAAC;IAC7B,CAAC;IAED,IAAI,UAAU;QACZ,OAAO,CAAC,CAAC,IAAI,CAAC,OAAO,CAAC;IACxB,CAAC;IAED,cAAc;QACZ,IAAI,CAAC,IAAI,CAAC,WAAW,EAAE,CAAC;YACtB,MAAM,IAAI,cAAc,CAAC,oDAAoD,CAAC,CAAC;QACjF,CAAC;QACD,OAAO,IAAI,CAAC,WAAW,CAAC;IAC1B,CAAC;IAED,eAAe;QACb,IAAI,CAAC,IAAI,CAAC,YAAY,EAAE,CAAC;YACvB,MAAM,IAAI,cAAc,CAAC,oDAAoD,CAAC,CAAC;QACjF,CAAC;QACD,OAAO,IAAI,CAAC,YAAY,CAAC;IAC3B,CAAC;IAED,UAAU;QACR,IAAI,CAAC,IAAI,CAAC,OAAO,EAAE,CAAC;YAClB,MAAM,IAAI,cAAc,CAAC,oDAAoD,CAAC,CAAC;QACjF,CAAC;QACD,OAAO,IAAI,CAAC,OAAO,CAAC;IACtB,CAAC;IAED,GAAG,CAAC,QAAsB,EAAE,OAAiB;QAC3C,IAAI,CAAC,WAAW,GAAG,QAAQ,CAAC,WAAW,CAAC;QACxC,IAAI,CAAC,YAAY,GAAG,QAAQ,CAAC,YAAY,CAAC;QAC1C,IAAI,CAAC,gBAAgB,GAAG,QAAQ,CAAC,gBAAgB,CAAC;QAClD,IAAI,OAAO,EAAE,CAAC;YACZ,IAAI,CAAC,OAAO,GAAG,OAAO,CAAC;QACzB,CAAC;IACH,CAAC;IAED,iBAAiB;QACf,IAAI,CAAC,WAAW,GAAG,SAAS,CAAC;IAC/B,CAAC;IAED,kBAAkB;QAChB,IAAI,CAAC,YAAY,GAAG,SAAS,CAAC;IAChC,CAAC;CACF;AAvDD,gCAuDC"}

22
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/token_machine_workflow.js generated vendored Normal file

@@ -0,0 +1,22 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.callback = void 0;
const fs = require("fs");
const error_1 = require("../../../error");
/** Error for when the token is missing in the environment. */
const TOKEN_MISSING_ERROR = 'OIDC_TOKEN_FILE must be set in the environment.';
/**
* The callback function to be used in the automated callback workflow.
* @param params - The OIDC callback parameters.
* @returns The OIDC response.
*/
const callback = async () => {
const tokenFile = process.env.OIDC_TOKEN_FILE;
if (!tokenFile) {
throw new error_1.MongoAWSError(TOKEN_MISSING_ERROR);
}
const token = await fs.promises.readFile(tokenFile, 'utf8');
return { accessToken: token };
};
exports.callback = callback;
//# sourceMappingURL=token_machine_workflow.js.map

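Equivalent standalone logic, for illustration only; the workflow amounts to reading the file named by OIDC_TOKEN_FILE (the demo fallback path is hypothetical):

const { readFile } = require('fs/promises');

(async () => {
  const tokenFile = process.env.OIDC_TOKEN_FILE ?? '/tmp/demo-oidc-token';
  const accessToken = await readFile(tokenFile, 'utf8');
  console.log({ accessToken });
})();
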
1
node_modules/mongodb/lib/cmap/auth/mongodb_oidc/token_machine_workflow.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"token_machine_workflow.js","sourceRoot":"","sources":["../../../../src/cmap/auth/mongodb_oidc/token_machine_workflow.ts"],"names":[],"mappings":";;;AAAA,yBAAyB;AAEzB,0CAA+C;AAG/C,8DAA8D;AAC9D,MAAM,mBAAmB,GAAG,iDAAiD,CAAC;AAE9E;;;;GAIG;AACI,MAAM,QAAQ,GAAyB,KAAK,IAA2B,EAAE;IAC9E,MAAM,SAAS,GAAG,OAAO,CAAC,GAAG,CAAC,eAAe,CAAC;IAC9C,IAAI,CAAC,SAAS,EAAE,CAAC;QACf,MAAM,IAAI,qBAAa,CAAC,mBAAmB,CAAC,CAAC;IAC/C,CAAC;IACD,MAAM,KAAK,GAAG,MAAM,EAAE,CAAC,QAAQ,CAAC,QAAQ,CAAC,SAAS,EAAE,MAAM,CAAC,CAAC;IAC5D,OAAO,EAAE,WAAW,EAAE,KAAK,EAAE,CAAC;AAChC,CAAC,CAAC;AAPW,QAAA,QAAQ,YAOnB"}

26
node_modules/mongodb/lib/cmap/auth/plain.js generated vendored Normal file

@@ -0,0 +1,26 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.Plain = void 0;
const bson_1 = require("../../bson");
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const auth_provider_1 = require("./auth_provider");
class Plain extends auth_provider_1.AuthProvider {
async auth(authContext) {
const { connection, credentials } = authContext;
if (!credentials) {
throw new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.');
}
const { username, password } = credentials;
const payload = new bson_1.Binary(Buffer.from(`\x00${username}\x00${password}`));
const command = {
saslStart: 1,
mechanism: 'PLAIN',
payload: payload,
autoAuthorize: 1
};
await connection.command((0, utils_1.ns)('$external.$cmd'), command, undefined);
}
}
exports.Plain = Plain;
//# sourceMappingURL=plain.js.map

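The PLAIN SASL message is authzid NUL authcid NUL password (RFC 4616); the driver sends an empty authzid, so the payload begins with a NUL byte. A quick check with hypothetical credentials:

const payload = Buffer.from('\x00alice\x00secret');
console.log(payload.toString('hex')); // 00616c69636500736563726574
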
1
node_modules/mongodb/lib/cmap/auth/plain.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"plain.js","sourceRoot":"","sources":["../../../src/cmap/auth/plain.ts"],"names":[],"mappings":";;;AAAA,qCAAoC;AACpC,uCAA2D;AAC3D,uCAAiC;AACjC,mDAAiE;AAEjE,MAAa,KAAM,SAAQ,4BAAY;IAC5B,KAAK,CAAC,IAAI,CAAC,WAAwB;QAC1C,MAAM,EAAE,UAAU,EAAE,WAAW,EAAE,GAAG,WAAW,CAAC;QAChD,IAAI,CAAC,WAAW,EAAE,CAAC;YACjB,MAAM,IAAI,oCAA4B,CAAC,uCAAuC,CAAC,CAAC;QAClF,CAAC;QAED,MAAM,EAAE,QAAQ,EAAE,QAAQ,EAAE,GAAG,WAAW,CAAC;QAE3C,MAAM,OAAO,GAAG,IAAI,aAAM,CAAC,MAAM,CAAC,IAAI,CAAC,OAAO,QAAQ,OAAO,QAAQ,EAAE,CAAC,CAAC,CAAC;QAC1E,MAAM,OAAO,GAAG;YACd,SAAS,EAAE,CAAC;YACZ,SAAS,EAAE,OAAO;YAClB,OAAO,EAAE,OAAO;YAChB,aAAa,EAAE,CAAC;SACjB,CAAC;QAEF,MAAM,UAAU,CAAC,OAAO,CAAC,IAAA,UAAE,EAAC,gBAAgB,CAAC,EAAE,OAAO,EAAE,SAAS,CAAC,CAAC;IACrE,CAAC;CACF;AAnBD,sBAmBC"}

22
node_modules/mongodb/lib/cmap/auth/providers.js generated vendored Normal file

@@ -0,0 +1,22 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.AUTH_MECHS_AUTH_SRC_EXTERNAL = exports.AuthMechanism = void 0;
/** @public */
exports.AuthMechanism = Object.freeze({
MONGODB_AWS: 'MONGODB-AWS',
MONGODB_DEFAULT: 'DEFAULT',
MONGODB_GSSAPI: 'GSSAPI',
MONGODB_PLAIN: 'PLAIN',
MONGODB_SCRAM_SHA1: 'SCRAM-SHA-1',
MONGODB_SCRAM_SHA256: 'SCRAM-SHA-256',
MONGODB_X509: 'MONGODB-X509',
MONGODB_OIDC: 'MONGODB-OIDC'
});
/** @internal */
exports.AUTH_MECHS_AUTH_SRC_EXTERNAL = new Set([
exports.AuthMechanism.MONGODB_GSSAPI,
exports.AuthMechanism.MONGODB_AWS,
exports.AuthMechanism.MONGODB_OIDC,
exports.AuthMechanism.MONGODB_X509
]);
//# sourceMappingURL=providers.js.map

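AuthMechanism is part of the driver's public exports (it is marked @public above), so the canonical mechanism strings can be checked directly; a small sketch:

const { AuthMechanism } = require('mongodb');

console.log(AuthMechanism.MONGODB_SCRAM_SHA256); // 'SCRAM-SHA-256'
console.log(Object.isFrozen(AuthMechanism)); // true
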
1
node_modules/mongodb/lib/cmap/auth/providers.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"providers.js","sourceRoot":"","sources":["../../../src/cmap/auth/providers.ts"],"names":[],"mappings":";;;AAAA,cAAc;AACD,QAAA,aAAa,GAAG,MAAM,CAAC,MAAM,CAAC;IACzC,WAAW,EAAE,aAAa;IAC1B,eAAe,EAAE,SAAS;IAC1B,cAAc,EAAE,QAAQ;IACxB,aAAa,EAAE,OAAO;IACtB,kBAAkB,EAAE,aAAa;IACjC,oBAAoB,EAAE,eAAe;IACrC,YAAY,EAAE,cAAc;IAC5B,YAAY,EAAE,cAAc;CACpB,CAAC,CAAC;AAKZ,gBAAgB;AACH,QAAA,4BAA4B,GAAG,IAAI,GAAG,CAAgB;IACjE,qBAAa,CAAC,cAAc;IAC5B,qBAAa,CAAC,WAAW;IACzB,qBAAa,CAAC,YAAY;IAC1B,qBAAa,CAAC,YAAY;CAC3B,CAAC,CAAC"}

254
node_modules/mongodb/lib/cmap/auth/scram.js generated vendored Normal file

@@ -0,0 +1,254 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ScramSHA256 = exports.ScramSHA1 = void 0;
const saslprep_1 = require("@mongodb-js/saslprep");
const crypto = require("crypto");
const bson_1 = require("../../bson");
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const auth_provider_1 = require("./auth_provider");
const providers_1 = require("./providers");
class ScramSHA extends auth_provider_1.AuthProvider {
constructor(cryptoMethod) {
super();
this.cryptoMethod = cryptoMethod || 'sha1';
}
async prepare(handshakeDoc, authContext) {
const cryptoMethod = this.cryptoMethod;
const credentials = authContext.credentials;
if (!credentials) {
throw new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.');
}
const nonce = await (0, utils_1.randomBytes)(24);
// store the nonce for later use
authContext.nonce = nonce;
const request = {
...handshakeDoc,
speculativeAuthenticate: {
...makeFirstMessage(cryptoMethod, credentials, nonce),
db: credentials.source
}
};
return request;
}
async auth(authContext) {
const { reauthenticating, response } = authContext;
if (response?.speculativeAuthenticate && !reauthenticating) {
return await continueScramConversation(this.cryptoMethod, response.speculativeAuthenticate, authContext);
}
return await executeScram(this.cryptoMethod, authContext);
}
}
function cleanUsername(username) {
return username.replace('=', '=3D').replace(',', '=2C');
}
function clientFirstMessageBare(username, nonce) {
// NOTE: This is done b/c Javascript uses UTF-16, but the server is hashing in UTF-8.
// Since the username is not sasl-prep-d, we need to do this here.
return Buffer.concat([
Buffer.from('n=', 'utf8'),
Buffer.from(username, 'utf8'),
Buffer.from(',r=', 'utf8'),
Buffer.from(nonce.toString('base64'), 'utf8')
]);
}
function makeFirstMessage(cryptoMethod, credentials, nonce) {
const username = cleanUsername(credentials.username);
const mechanism = cryptoMethod === 'sha1' ? providers_1.AuthMechanism.MONGODB_SCRAM_SHA1 : providers_1.AuthMechanism.MONGODB_SCRAM_SHA256;
// NOTE: This is done b/c Javascript uses UTF-16, but the server is hashing in UTF-8.
// Since the username is not sasl-prep-d, we need to do this here.
return {
saslStart: 1,
mechanism,
payload: new bson_1.Binary(Buffer.concat([Buffer.from('n,,', 'utf8'), clientFirstMessageBare(username, nonce)])),
autoAuthorize: 1,
options: { skipEmptyExchange: true }
};
}
async function executeScram(cryptoMethod, authContext) {
const { connection, credentials } = authContext;
if (!credentials) {
throw new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.');
}
if (!authContext.nonce) {
throw new error_1.MongoInvalidArgumentError('AuthContext must contain a valid nonce property');
}
const nonce = authContext.nonce;
const db = credentials.source;
const saslStartCmd = makeFirstMessage(cryptoMethod, credentials, nonce);
const response = await connection.command((0, utils_1.ns)(`${db}.$cmd`), saslStartCmd, undefined);
await continueScramConversation(cryptoMethod, response, authContext);
}
async function continueScramConversation(cryptoMethod, response, authContext) {
const connection = authContext.connection;
const credentials = authContext.credentials;
if (!credentials) {
throw new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.');
}
if (!authContext.nonce) {
throw new error_1.MongoInvalidArgumentError('Unable to continue SCRAM without valid nonce');
}
const nonce = authContext.nonce;
const db = credentials.source;
const username = cleanUsername(credentials.username);
const password = credentials.password;
const processedPassword = cryptoMethod === 'sha256' ? (0, saslprep_1.saslprep)(password) : passwordDigest(username, password);
const payload = Buffer.isBuffer(response.payload)
? new bson_1.Binary(response.payload)
: response.payload;
const dict = parsePayload(payload);
const iterations = parseInt(dict.i, 10);
if (iterations && iterations < 4096) {
// TODO(NODE-3483)
throw new error_1.MongoRuntimeError(`Server returned an invalid iteration count ${iterations}`);
}
const salt = dict.s;
const rnonce = dict.r;
if (rnonce.startsWith('nonce')) {
// TODO(NODE-3483)
throw new error_1.MongoRuntimeError(`Server returned an invalid nonce: ${rnonce}`);
}
// Set up start of proof
const withoutProof = `c=biws,r=${rnonce}`;
const saltedPassword = HI(processedPassword, Buffer.from(salt, 'base64'), iterations, cryptoMethod);
const clientKey = HMAC(cryptoMethod, saltedPassword, 'Client Key');
const serverKey = HMAC(cryptoMethod, saltedPassword, 'Server Key');
const storedKey = H(cryptoMethod, clientKey);
const authMessage = [
clientFirstMessageBare(username, nonce),
payload.toString('utf8'),
withoutProof
].join(',');
const clientSignature = HMAC(cryptoMethod, storedKey, authMessage);
const clientProof = `p=${xor(clientKey, clientSignature)}`;
const clientFinal = [withoutProof, clientProof].join(',');
const serverSignature = HMAC(cryptoMethod, serverKey, authMessage);
const saslContinueCmd = {
saslContinue: 1,
conversationId: response.conversationId,
payload: new bson_1.Binary(Buffer.from(clientFinal))
};
const r = await connection.command((0, utils_1.ns)(`${db}.$cmd`), saslContinueCmd, undefined);
const parsedResponse = parsePayload(r.payload);
if (!compareDigest(Buffer.from(parsedResponse.v, 'base64'), serverSignature)) {
throw new error_1.MongoRuntimeError('Server returned an invalid signature');
}
if (r.done !== false) {
// If the server sends r.done === true we can save one RTT
return;
}
const retrySaslContinueCmd = {
saslContinue: 1,
conversationId: r.conversationId,
payload: Buffer.alloc(0)
};
await connection.command((0, utils_1.ns)(`${db}.$cmd`), retrySaslContinueCmd, undefined);
}
function parsePayload(payload) {
const payloadStr = payload.toString('utf8');
const dict = {};
const parts = payloadStr.split(',');
for (let i = 0; i < parts.length; i++) {
const valueParts = (parts[i].match(/^([^=]*)=(.*)$/) ?? []).slice(1);
dict[valueParts[0]] = valueParts[1];
}
return dict;
}
function passwordDigest(username, password) {
if (typeof username !== 'string') {
throw new error_1.MongoInvalidArgumentError('Username must be a string');
}
if (typeof password !== 'string') {
throw new error_1.MongoInvalidArgumentError('Password must be a string');
}
if (password.length === 0) {
throw new error_1.MongoInvalidArgumentError('Password cannot be empty');
}
let md5;
try {
md5 = crypto.createHash('md5');
}
catch (err) {
if (crypto.getFips()) {
// This error is (slightly) more helpful than what comes from OpenSSL directly, e.g.
// 'Error: error:060800C8:digital envelope routines:EVP_DigestInit_ex:disabled for FIPS'
throw new Error('Auth mechanism SCRAM-SHA-1 is not supported in FIPS mode');
}
throw err;
}
md5.update(`${username}:mongo:${password}`, 'utf8');
return md5.digest('hex');
}
// XOR two buffers
function xor(a, b) {
if (!Buffer.isBuffer(a)) {
a = Buffer.from(a);
}
if (!Buffer.isBuffer(b)) {
b = Buffer.from(b);
}
const length = Math.max(a.length, b.length);
const res = [];
for (let i = 0; i < length; i += 1) {
res.push(a[i] ^ b[i]);
}
return Buffer.from(res).toString('base64');
}
function H(method, text) {
return crypto.createHash(method).update(text).digest();
}
function HMAC(method, key, text) {
return crypto.createHmac(method, key).update(text).digest();
}
let _hiCache = {};
let _hiCacheCount = 0;
function _hiCachePurge() {
_hiCache = {};
_hiCacheCount = 0;
}
const hiLengthMap = {
sha256: 32,
sha1: 20
};
function HI(data, salt, iterations, cryptoMethod) {
// omit the work if already generated
const key = [data, salt.toString('base64'), iterations].join('_');
if (_hiCache[key] != null) {
return _hiCache[key];
}
    // derive the salted password
const saltedData = crypto.pbkdf2Sync(data, salt, iterations, hiLengthMap[cryptoMethod], cryptoMethod);
// cache a copy to speed up the next lookup, but prevent unbounded cache growth
if (_hiCacheCount >= 200) {
_hiCachePurge();
}
_hiCache[key] = saltedData;
_hiCacheCount += 1;
return saltedData;
}
function compareDigest(lhs, rhs) {
if (lhs.length !== rhs.length) {
return false;
}
if (typeof crypto.timingSafeEqual === 'function') {
return crypto.timingSafeEqual(lhs, rhs);
}
let result = 0;
for (let i = 0; i < lhs.length; i++) {
result |= lhs[i] ^ rhs[i];
}
return result === 0;
}
class ScramSHA1 extends ScramSHA {
constructor() {
super('sha1');
}
}
exports.ScramSHA1 = ScramSHA1;
class ScramSHA256 extends ScramSHA {
constructor() {
super('sha256');
}
}
exports.ScramSHA256 = ScramSHA256;
//# sourceMappingURL=scram.js.map

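A hedged sketch of the RFC 5802 proof math that HI, HMAC, H, and xor implement above: SaltedPassword = PBKDF2(password, salt, i), ClientKey = HMAC(SaltedPassword, "Client Key"), StoredKey = H(ClientKey), ClientProof = ClientKey XOR HMAC(StoredKey, AuthMessage). The password, salt, and AuthMessage below are illustrative, not a real conversation:

const crypto = require('crypto');

const salted = crypto.pbkdf2Sync(
  'pencil', Buffer.from('QSXCR+Q6sek8bf92', 'base64'), 4096, 32, 'sha256'
);
const clientKey = crypto.createHmac('sha256', salted).update('Client Key').digest();
const storedKey = crypto.createHash('sha256').update(clientKey).digest();
const authMessage = 'n=user,r=abc,s=QSXCR+Q6sek8bf92,i=4096,c=biws,r=abc'; // abbreviated
const clientSig = crypto.createHmac('sha256', storedKey).update(authMessage).digest();
const proof = Buffer.from(clientKey.map((b, i) => b ^ clientSig[i])).toString('base64');
console.log(`p=${proof}`);
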
1
node_modules/mongodb/lib/cmap/auth/scram.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

36
node_modules/mongodb/lib/cmap/auth/x509.js generated vendored Normal file

@@ -0,0 +1,36 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.X509 = void 0;
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const auth_provider_1 = require("./auth_provider");
class X509 extends auth_provider_1.AuthProvider {
async prepare(handshakeDoc, authContext) {
const { credentials } = authContext;
if (!credentials) {
throw new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.');
}
return { ...handshakeDoc, speculativeAuthenticate: x509AuthenticateCommand(credentials) };
}
async auth(authContext) {
const connection = authContext.connection;
const credentials = authContext.credentials;
if (!credentials) {
throw new error_1.MongoMissingCredentialsError('AuthContext must provide credentials.');
}
const response = authContext.response;
if (response?.speculativeAuthenticate) {
return;
}
await connection.command((0, utils_1.ns)('$external.$cmd'), x509AuthenticateCommand(credentials), undefined);
}
}
exports.X509 = X509;
function x509AuthenticateCommand(credentials) {
const command = { authenticate: 1, mechanism: 'MONGODB-X509' };
if (credentials.username) {
command.user = credentials.username;
}
return command;
}
//# sourceMappingURL=x509.js.map

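MONGODB-X509 takes the identity from the client TLS certificate, which is why the command above usually omits user. A connection sketch with a hypothetical host and certificate path:

const { MongoClient } = require('mongodb');

const client = new MongoClient(
  'mongodb://db.example.net:27017/?authMechanism=MONGODB-X509&tls=true',
  { tlsCertificateKeyFile: '/etc/ssl/mongo-client.pem' }
);
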
1
node_modules/mongodb/lib/cmap/auth/x509.js.map generated vendored Normal file

@@ -0,0 +1 @@
{"version":3,"file":"x509.js","sourceRoot":"","sources":["../../../src/cmap/auth/x509.ts"],"names":[],"mappings":";;;AACA,uCAA2D;AAC3D,uCAAiC;AAEjC,mDAAiE;AAGjE,MAAa,IAAK,SAAQ,4BAAY;IAC3B,KAAK,CAAC,OAAO,CACpB,YAA+B,EAC/B,WAAwB;QAExB,MAAM,EAAE,WAAW,EAAE,GAAG,WAAW,CAAC;QACpC,IAAI,CAAC,WAAW,EAAE,CAAC;YACjB,MAAM,IAAI,oCAA4B,CAAC,uCAAuC,CAAC,CAAC;QAClF,CAAC;QACD,OAAO,EAAE,GAAG,YAAY,EAAE,uBAAuB,EAAE,uBAAuB,CAAC,WAAW,CAAC,EAAE,CAAC;IAC5F,CAAC;IAEQ,KAAK,CAAC,IAAI,CAAC,WAAwB;QAC1C,MAAM,UAAU,GAAG,WAAW,CAAC,UAAU,CAAC;QAC1C,MAAM,WAAW,GAAG,WAAW,CAAC,WAAW,CAAC;QAC5C,IAAI,CAAC,WAAW,EAAE,CAAC;YACjB,MAAM,IAAI,oCAA4B,CAAC,uCAAuC,CAAC,CAAC;QAClF,CAAC;QACD,MAAM,QAAQ,GAAG,WAAW,CAAC,QAAQ,CAAC;QAEtC,IAAI,QAAQ,EAAE,uBAAuB,EAAE,CAAC;YACtC,OAAO;QACT,CAAC;QAED,MAAM,UAAU,CAAC,OAAO,CAAC,IAAA,UAAE,EAAC,gBAAgB,CAAC,EAAE,uBAAuB,CAAC,WAAW,CAAC,EAAE,SAAS,CAAC,CAAC;IAClG,CAAC;CACF;AA1BD,oBA0BC;AAED,SAAS,uBAAuB,CAAC,WAA6B;IAC5D,MAAM,OAAO,GAAa,EAAE,YAAY,EAAE,CAAC,EAAE,SAAS,EAAE,cAAc,EAAE,CAAC;IACzE,IAAI,WAAW,CAAC,QAAQ,EAAE,CAAC;QACzB,OAAO,CAAC,IAAI,GAAG,WAAW,CAAC,QAAQ,CAAC;IACtC,CAAC;IAED,OAAO,OAAO,CAAC;AACjB,CAAC"}

223
node_modules/mongodb/lib/cmap/command_monitoring_events.js generated vendored Normal file

@@ -0,0 +1,223 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.SENSITIVE_COMMANDS = exports.CommandFailedEvent = exports.CommandSucceededEvent = exports.CommandStartedEvent = void 0;
const constants_1 = require("../constants");
const utils_1 = require("../utils");
const commands_1 = require("./commands");
/**
* An event indicating the start of a given command
* @public
* @category Event
*/
class CommandStartedEvent {
/**
* Create a started event
*
* @internal
     * @param connection - the connection that originated the command
* @param command - the command
*/
constructor(connection, command, serverConnectionId) {
/** @internal */
this.name = constants_1.COMMAND_STARTED;
const cmd = extractCommand(command);
const commandName = extractCommandName(cmd);
const { address, connectionId, serviceId } = extractConnectionDetails(connection);
// TODO: remove in major revision, this is not spec behavior
if (exports.SENSITIVE_COMMANDS.has(commandName)) {
this.commandObj = {};
this.commandObj[commandName] = true;
}
this.address = address;
this.connectionId = connectionId;
this.serviceId = serviceId;
this.requestId = command.requestId;
this.databaseName = command.databaseName;
this.commandName = commandName;
this.command = maybeRedact(commandName, cmd, cmd);
this.serverConnectionId = serverConnectionId;
}
/* @internal */
get hasServiceId() {
return !!this.serviceId;
}
}
exports.CommandStartedEvent = CommandStartedEvent;
/**
* An event indicating the success of a given command
* @public
* @category Event
*/
class CommandSucceededEvent {
/**
* Create a succeeded event
*
* @internal
     * @param connection - the connection that originated the command
* @param command - the command
* @param reply - the reply for this command from the server
* @param started - a high resolution tuple timestamp of when the command was first sent, to calculate duration
*/
constructor(connection, command, reply, started, serverConnectionId) {
/** @internal */
this.name = constants_1.COMMAND_SUCCEEDED;
const cmd = extractCommand(command);
const commandName = extractCommandName(cmd);
const { address, connectionId, serviceId } = extractConnectionDetails(connection);
this.address = address;
this.connectionId = connectionId;
this.serviceId = serviceId;
this.requestId = command.requestId;
this.commandName = commandName;
this.duration = (0, utils_1.calculateDurationInMs)(started);
this.reply = maybeRedact(commandName, cmd, extractReply(reply));
this.serverConnectionId = serverConnectionId;
this.databaseName = command.databaseName;
}
/* @internal */
get hasServiceId() {
return !!this.serviceId;
}
}
exports.CommandSucceededEvent = CommandSucceededEvent;
/**
* An event indicating the failure of a given command
* @public
* @category Event
*/
class CommandFailedEvent {
/**
* Create a failure event
*
* @internal
     * @param connection - the connection that originated the command
* @param command - the command
* @param error - the generated error or a server error response
* @param started - a high resolution tuple timestamp of when the command was first sent, to calculate duration
*/
constructor(connection, command, error, started, serverConnectionId) {
/** @internal */
this.name = constants_1.COMMAND_FAILED;
const cmd = extractCommand(command);
const commandName = extractCommandName(cmd);
const { address, connectionId, serviceId } = extractConnectionDetails(connection);
this.address = address;
this.connectionId = connectionId;
this.serviceId = serviceId;
this.requestId = command.requestId;
this.commandName = commandName;
this.duration = (0, utils_1.calculateDurationInMs)(started);
this.failure = maybeRedact(commandName, cmd, error);
this.serverConnectionId = serverConnectionId;
this.databaseName = command.databaseName;
}
/* @internal */
get hasServiceId() {
return !!this.serviceId;
}
}
exports.CommandFailedEvent = CommandFailedEvent;
/**
* Commands that we want to redact because of the sensitive nature of their contents
* @internal
*/
exports.SENSITIVE_COMMANDS = new Set([
'authenticate',
'saslStart',
'saslContinue',
'getnonce',
'createUser',
'updateUser',
'copydbgetnonce',
'copydbsaslstart',
'copydb'
]);
const HELLO_COMMANDS = new Set(['hello', constants_1.LEGACY_HELLO_COMMAND, constants_1.LEGACY_HELLO_COMMAND_CAMEL_CASE]);
// helper methods
const extractCommandName = (commandDoc) => Object.keys(commandDoc)[0];
const collectionName = (command) => command.ns.split('.')[1];
const maybeRedact = (commandName, commandDoc, result) => exports.SENSITIVE_COMMANDS.has(commandName) ||
(HELLO_COMMANDS.has(commandName) && commandDoc.speculativeAuthenticate)
? {}
: result;
const LEGACY_FIND_QUERY_MAP = {
$query: 'filter',
$orderby: 'sort',
$hint: 'hint',
$comment: 'comment',
$maxScan: 'maxScan',
$max: 'max',
$min: 'min',
$returnKey: 'returnKey',
$showDiskLoc: 'showRecordId',
$maxTimeMS: 'maxTimeMS',
$snapshot: 'snapshot'
};
const LEGACY_FIND_OPTIONS_MAP = {
numberToSkip: 'skip',
numberToReturn: 'batchSize',
returnFieldSelector: 'projection'
};
/** Extract the actual command from the query, possibly up-converting if it's a legacy format */
function extractCommand(command) {
if (command instanceof commands_1.OpMsgRequest) {
const cmd = { ...command.command };
// For OP_MSG with payload type 1 we need to pull the documents
// array out of the document sequence for monitoring.
if (cmd.ops instanceof commands_1.DocumentSequence) {
cmd.ops = cmd.ops.documents;
}
if (cmd.nsInfo instanceof commands_1.DocumentSequence) {
cmd.nsInfo = cmd.nsInfo.documents;
}
return cmd;
}
if (command.query?.$query) {
let result;
if (command.ns === 'admin.$cmd') {
// up-convert legacy command
result = Object.assign({}, command.query.$query);
}
else {
// up-convert legacy find command
result = { find: collectionName(command) };
Object.keys(LEGACY_FIND_QUERY_MAP).forEach(key => {
if (command.query[key] != null) {
result[LEGACY_FIND_QUERY_MAP[key]] = { ...command.query[key] };
}
});
}
Object.keys(LEGACY_FIND_OPTIONS_MAP).forEach(key => {
const legacyKey = key;
if (command[legacyKey] != null) {
result[LEGACY_FIND_OPTIONS_MAP[legacyKey]] = command[legacyKey];
}
});
return result;
}
let clonedQuery = {};
const clonedCommand = { ...command };
if (command.query) {
clonedQuery = { ...command.query };
clonedCommand.query = clonedQuery;
}
return command.query ? clonedQuery : clonedCommand;
}
function extractReply(reply) {
if (!reply) {
return reply;
}
return reply.result ? reply.result : reply;
}
function extractConnectionDetails(connection) {
let connectionId;
if ('id' in connection) {
connectionId = connection.id;
}
return {
address: connection.address,
serviceId: connection.serviceId,
connectionId
};
}
//# sourceMappingURL=command_monitoring_events.js.map

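A sketch of consuming these events from an application: monitorCommands: true enables emission, and sensitive commands such as saslStart arrive redacted to {}:

const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017', { monitorCommands: true });
client.on('commandStarted', ev => console.log('started', ev.commandName, ev.requestId));
client.on('commandSucceeded', ev => console.log('ok', ev.commandName, `${ev.duration}ms`));
client.on('commandFailed', ev => console.error('failed', ev.commandName, ev.failure));
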
1
node_modules/mongodb/lib/cmap/command_monitoring_events.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

535
node_modules/mongodb/lib/cmap/commands.js generated vendored Normal file

@@ -0,0 +1,535 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.OpCompressedRequest = exports.OpMsgResponse = exports.OpMsgRequest = exports.DocumentSequence = exports.OpReply = exports.OpQueryRequest = void 0;
const BSON = require("../bson");
const error_1 = require("../error");
const compression_1 = require("./wire_protocol/compression");
const constants_1 = require("./wire_protocol/constants");
// Incrementing request id
let _requestId = 0;
// Query flags
const OPTS_TAILABLE_CURSOR = 2;
const OPTS_SECONDARY = 4;
const OPTS_OPLOG_REPLAY = 8;
const OPTS_NO_CURSOR_TIMEOUT = 16;
const OPTS_AWAIT_DATA = 32;
const OPTS_EXHAUST = 64;
const OPTS_PARTIAL = 128;
// Response flags
const CURSOR_NOT_FOUND = 1;
const QUERY_FAILURE = 2;
const SHARD_CONFIG_STALE = 4;
const AWAIT_CAPABLE = 8;
const encodeUTF8Into = BSON.BSON.onDemand.ByteUtils.encodeUTF8Into;
/** @internal */
class OpQueryRequest {
constructor(databaseName, query, options) {
/** moreToCome is an OP_MSG only concept */
this.moreToCome = false;
// Basic options needed to be passed in
// TODO(NODE-3483): Replace with MongoCommandError
const ns = `${databaseName}.$cmd`;
if (typeof databaseName !== 'string') {
throw new error_1.MongoRuntimeError('Database name must be a string for a query');
}
// TODO(NODE-3483): Replace with MongoCommandError
if (query == null)
throw new error_1.MongoRuntimeError('A query document must be specified for query');
// Validate that we are not passing 0x00 in the collection name
if (ns.indexOf('\x00') !== -1) {
// TODO(NODE-3483): Use MongoNamespace static method
throw new error_1.MongoRuntimeError('Namespace cannot contain a null character');
}
        // Basic options
this.databaseName = databaseName;
this.query = query;
this.ns = ns;
// Additional options
this.numberToSkip = options.numberToSkip || 0;
this.numberToReturn = options.numberToReturn || 0;
this.returnFieldSelector = options.returnFieldSelector || undefined;
this.requestId = options.requestId ?? OpQueryRequest.getRequestId();
// special case for pre-3.2 find commands, delete ASAP
this.pre32Limit = options.pre32Limit;
// Serialization option
this.serializeFunctions =
typeof options.serializeFunctions === 'boolean' ? options.serializeFunctions : false;
this.ignoreUndefined =
typeof options.ignoreUndefined === 'boolean' ? options.ignoreUndefined : false;
this.maxBsonSize = options.maxBsonSize || 1024 * 1024 * 16;
this.checkKeys = typeof options.checkKeys === 'boolean' ? options.checkKeys : false;
this.batchSize = this.numberToReturn;
// Flags
this.tailable = false;
this.secondaryOk = typeof options.secondaryOk === 'boolean' ? options.secondaryOk : false;
this.oplogReplay = false;
this.noCursorTimeout = false;
this.awaitData = false;
this.exhaust = false;
this.partial = false;
}
/** Assign next request Id. */
incRequestId() {
this.requestId = _requestId++;
}
/** Peek next request Id. */
nextRequestId() {
return _requestId + 1;
}
/** Increment then return next request Id. */
static getRequestId() {
return ++_requestId;
}
// Uses a single allocated buffer for the process, avoiding multiple memory allocations
toBin() {
const buffers = [];
let projection = null;
// Set up the flags
let flags = 0;
if (this.tailable) {
flags |= OPTS_TAILABLE_CURSOR;
}
if (this.secondaryOk) {
flags |= OPTS_SECONDARY;
}
if (this.oplogReplay) {
flags |= OPTS_OPLOG_REPLAY;
}
if (this.noCursorTimeout) {
flags |= OPTS_NO_CURSOR_TIMEOUT;
}
if (this.awaitData) {
flags |= OPTS_AWAIT_DATA;
}
if (this.exhaust) {
flags |= OPTS_EXHAUST;
}
if (this.partial) {
flags |= OPTS_PARTIAL;
}
// If batchSize is different to this.numberToReturn
if (this.batchSize !== this.numberToReturn)
this.numberToReturn = this.batchSize;
// Allocate write protocol header buffer
const header = Buffer.alloc(4 * 4 + // Header
4 + // Flags
Buffer.byteLength(this.ns) +
1 + // namespace
4 + // numberToSkip
4 // numberToReturn
);
// Add header to buffers
buffers.push(header);
// Serialize the query
const query = BSON.serialize(this.query, {
checkKeys: this.checkKeys,
serializeFunctions: this.serializeFunctions,
ignoreUndefined: this.ignoreUndefined
});
// Add query document
buffers.push(query);
if (this.returnFieldSelector && Object.keys(this.returnFieldSelector).length > 0) {
// Serialize the projection document
projection = BSON.serialize(this.returnFieldSelector, {
checkKeys: this.checkKeys,
serializeFunctions: this.serializeFunctions,
ignoreUndefined: this.ignoreUndefined
});
// Add projection document
buffers.push(projection);
}
// Total message size
const totalLength = header.length + query.length + (projection ? projection.length : 0);
// Set up the index
let index = 4;
// Write total document length
header[3] = (totalLength >> 24) & 0xff;
header[2] = (totalLength >> 16) & 0xff;
header[1] = (totalLength >> 8) & 0xff;
header[0] = totalLength & 0xff;
// Write header information requestId
header[index + 3] = (this.requestId >> 24) & 0xff;
header[index + 2] = (this.requestId >> 16) & 0xff;
header[index + 1] = (this.requestId >> 8) & 0xff;
header[index] = this.requestId & 0xff;
index = index + 4;
// Write header information responseTo
header[index + 3] = (0 >> 24) & 0xff;
header[index + 2] = (0 >> 16) & 0xff;
header[index + 1] = (0 >> 8) & 0xff;
header[index] = 0 & 0xff;
index = index + 4;
// Write header information OP_QUERY
header[index + 3] = (constants_1.OP_QUERY >> 24) & 0xff;
header[index + 2] = (constants_1.OP_QUERY >> 16) & 0xff;
header[index + 1] = (constants_1.OP_QUERY >> 8) & 0xff;
header[index] = constants_1.OP_QUERY & 0xff;
index = index + 4;
// Write header information flags
header[index + 3] = (flags >> 24) & 0xff;
header[index + 2] = (flags >> 16) & 0xff;
header[index + 1] = (flags >> 8) & 0xff;
header[index] = flags & 0xff;
index = index + 4;
// Write collection name
index = index + header.write(this.ns, index, 'utf8') + 1;
header[index - 1] = 0;
// Write header information flags numberToSkip
header[index + 3] = (this.numberToSkip >> 24) & 0xff;
header[index + 2] = (this.numberToSkip >> 16) & 0xff;
header[index + 1] = (this.numberToSkip >> 8) & 0xff;
header[index] = this.numberToSkip & 0xff;
index = index + 4;
// Write header information flags numberToReturn
header[index + 3] = (this.numberToReturn >> 24) & 0xff;
header[index + 2] = (this.numberToReturn >> 16) & 0xff;
header[index + 1] = (this.numberToReturn >> 8) & 0xff;
header[index] = this.numberToReturn & 0xff;
index = index + 4;
// Return the buffers
return buffers;
}
}
exports.OpQueryRequest = OpQueryRequest;
/** @internal */
class OpReply {
constructor(message, msgHeader, msgBody, opts) {
this.index = 0;
this.sections = [];
/** moreToCome is an OP_MSG only concept */
this.moreToCome = false;
this.parsed = false;
this.raw = message;
this.data = msgBody;
this.opts = opts ?? {
useBigInt64: false,
promoteLongs: true,
promoteValues: true,
promoteBuffers: false,
bsonRegExp: false
};
// Read the message header
this.length = msgHeader.length;
this.requestId = msgHeader.requestId;
this.responseTo = msgHeader.responseTo;
this.opCode = msgHeader.opCode;
this.fromCompressed = msgHeader.fromCompressed;
// Flag values
this.useBigInt64 = typeof this.opts.useBigInt64 === 'boolean' ? this.opts.useBigInt64 : false;
this.promoteLongs = typeof this.opts.promoteLongs === 'boolean' ? this.opts.promoteLongs : true;
this.promoteValues =
typeof this.opts.promoteValues === 'boolean' ? this.opts.promoteValues : true;
this.promoteBuffers =
typeof this.opts.promoteBuffers === 'boolean' ? this.opts.promoteBuffers : false;
this.bsonRegExp = typeof this.opts.bsonRegExp === 'boolean' ? this.opts.bsonRegExp : false;
}
isParsed() {
return this.parsed;
}
parse() {
// Don't parse again if not needed
if (this.parsed)
return this.sections[0];
// Position within OP_REPLY at which documents start
// (See https://www.mongodb.com/docs/manual/reference/mongodb-wire-protocol/#wire-op-reply)
this.index = 20;
// Read the message body
this.responseFlags = this.data.readInt32LE(0);
this.cursorId = new BSON.Long(this.data.readInt32LE(4), this.data.readInt32LE(8));
this.startingFrom = this.data.readInt32LE(12);
this.numberReturned = this.data.readInt32LE(16);
if (this.numberReturned < 0 || this.numberReturned > 2 ** 32 - 1) {
throw new RangeError(`OP_REPLY numberReturned is an invalid array length ${this.numberReturned}`);
}
this.cursorNotFound = (this.responseFlags & CURSOR_NOT_FOUND) !== 0;
this.queryFailure = (this.responseFlags & QUERY_FAILURE) !== 0;
this.shardConfigStale = (this.responseFlags & SHARD_CONFIG_STALE) !== 0;
this.awaitCapable = (this.responseFlags & AWAIT_CAPABLE) !== 0;
// Parse Body
for (let i = 0; i < this.numberReturned; i++) {
const bsonSize = this.data[this.index] |
(this.data[this.index + 1] << 8) |
(this.data[this.index + 2] << 16) |
(this.data[this.index + 3] << 24);
const section = this.data.subarray(this.index, this.index + bsonSize);
this.sections.push(section);
// Adjust the index
this.index = this.index + bsonSize;
}
// Set parsed
this.parsed = true;
return this.sections[0];
}
}
exports.OpReply = OpReply;
// Msg Flags
const OPTS_CHECKSUM_PRESENT = 1;
const OPTS_MORE_TO_COME = 2;
const OPTS_EXHAUST_ALLOWED = 1 << 16;
/** @internal */
class DocumentSequence {
/**
* Create a new document sequence for the provided field.
* @param field - The field it will replace.
*/
constructor(field, documents) {
this.field = field;
this.documents = [];
this.chunks = [];
this.serializedDocumentsLength = 0;
        // Document sequences start with type 1 at the first byte.
// Field strings must always be UTF-8.
const buffer = Buffer.allocUnsafe(1 + 4 + this.field.length + 1);
buffer[0] = 1;
// Third part is the field name at offset 5 with trailing null byte.
encodeUTF8Into(buffer, `${this.field}\0`, 5);
this.chunks.push(buffer);
this.header = buffer;
if (documents) {
for (const doc of documents) {
this.push(doc, BSON.serialize(doc));
}
}
}
/**
* Push a document to the document sequence. Will serialize the document
* as well and return the current serialized length of all documents.
* @param document - The document to add.
* @param buffer - The serialized document in raw BSON.
* @returns The new total document sequence length.
*/
push(document, buffer) {
this.serializedDocumentsLength += buffer.length;
// Push the document.
this.documents.push(document);
// Push the document raw bson.
this.chunks.push(buffer);
// Write the new length.
this.header?.writeInt32LE(4 + this.field.length + 1 + this.serializedDocumentsLength, 1);
return this.serializedDocumentsLength + this.header.length;
}
/**
* Get the fully serialized bytes for the document sequence section.
* @returns The section bytes.
*/
toBin() {
return Buffer.concat(this.chunks);
}
}
exports.DocumentSequence = DocumentSequence;
/** @internal */
class OpMsgRequest {
constructor(databaseName, command, options) {
// Basic options needed to be passed in
if (command == null)
throw new error_1.MongoInvalidArgumentError('Query document must be specified for query');
        // Basic options
this.databaseName = databaseName;
this.command = command;
this.command.$db = databaseName;
// Ensure empty options
this.options = options ?? {};
// Additional options
this.requestId = options.requestId ? options.requestId : OpMsgRequest.getRequestId();
// Serialization option
this.serializeFunctions =
typeof options.serializeFunctions === 'boolean' ? options.serializeFunctions : false;
this.ignoreUndefined =
typeof options.ignoreUndefined === 'boolean' ? options.ignoreUndefined : false;
this.checkKeys = typeof options.checkKeys === 'boolean' ? options.checkKeys : false;
this.maxBsonSize = options.maxBsonSize || 1024 * 1024 * 16;
// flags
this.checksumPresent = false;
this.moreToCome = options.moreToCome ?? command.writeConcern?.w === 0;
this.exhaustAllowed =
typeof options.exhaustAllowed === 'boolean' ? options.exhaustAllowed : false;
}
toBin() {
const buffers = [];
let flags = 0;
if (this.checksumPresent) {
flags |= OPTS_CHECKSUM_PRESENT;
}
if (this.moreToCome) {
flags |= OPTS_MORE_TO_COME;
}
if (this.exhaustAllowed) {
flags |= OPTS_EXHAUST_ALLOWED;
}
const header = Buffer.alloc(4 * 4 + // Header
4 // Flags
);
buffers.push(header);
let totalLength = header.length;
const command = this.command;
totalLength += this.makeSections(buffers, command);
header.writeInt32LE(totalLength, 0); // messageLength
header.writeInt32LE(this.requestId, 4); // requestID
header.writeInt32LE(0, 8); // responseTo
header.writeInt32LE(constants_1.OP_MSG, 12); // opCode
header.writeUInt32LE(flags, 16); // flags
return buffers;
}
/**
* Add the sections to the OP_MSG request's buffers and returns the length.
*/
makeSections(buffers, document) {
const sequencesBuffer = this.extractDocumentSequences(document);
const payloadTypeBuffer = Buffer.allocUnsafe(1);
payloadTypeBuffer[0] = 0;
const documentBuffer = this.serializeBson(document);
// First section, type 0
buffers.push(payloadTypeBuffer);
buffers.push(documentBuffer);
// Subsequent sections, type 1
buffers.push(sequencesBuffer);
return payloadTypeBuffer.length + documentBuffer.length + sequencesBuffer.length;
}
/**
* Extracts the document sequences from the command document and returns
* a buffer to be added as multiple sections after the initial type 0
* section in the message.
*/
extractDocumentSequences(document) {
        // Pull out any field in the command document whose value is a document sequence.
const chunks = [];
for (const [key, value] of Object.entries(document)) {
if (value instanceof DocumentSequence) {
chunks.push(value.toBin());
// Why are we removing the field from the command? This is because it needs to be
// removed in the OP_MSG request first section, and DocumentSequence is not a
// BSON type and is specific to the MongoDB wire protocol so there's nothing
// our BSON serializer can do about this. Since DocumentSequence is not exposed
// in the public API and only used internally, we are never mutating an original
// command provided by the user, just our own, and it's cheaper to delete from
// our own command than copying it.
delete document[key];
}
}
if (chunks.length > 0) {
return Buffer.concat(chunks);
}
// If we have no document sequences we return an empty buffer for nothing to add
// to the payload.
return Buffer.alloc(0);
}
serializeBson(document) {
return BSON.serialize(document, {
checkKeys: this.checkKeys,
serializeFunctions: this.serializeFunctions,
ignoreUndefined: this.ignoreUndefined
});
}
static getRequestId() {
_requestId = (_requestId + 1) & 0x7fffffff;
return _requestId;
}
}
exports.OpMsgRequest = OpMsgRequest;
/** @internal */
class OpMsgResponse {
constructor(message, msgHeader, msgBody, opts) {
this.index = 0;
this.sections = [];
this.parsed = false;
this.raw = message;
this.data = msgBody;
this.opts = opts ?? {
useBigInt64: false,
promoteLongs: true,
promoteValues: true,
promoteBuffers: false,
bsonRegExp: false
};
// Read the message header
this.length = msgHeader.length;
this.requestId = msgHeader.requestId;
this.responseTo = msgHeader.responseTo;
this.opCode = msgHeader.opCode;
this.fromCompressed = msgHeader.fromCompressed;
// Read response flags
this.responseFlags = msgBody.readInt32LE(0);
this.checksumPresent = (this.responseFlags & OPTS_CHECKSUM_PRESENT) !== 0;
this.moreToCome = (this.responseFlags & OPTS_MORE_TO_COME) !== 0;
this.exhaustAllowed = (this.responseFlags & OPTS_EXHAUST_ALLOWED) !== 0;
this.useBigInt64 = typeof this.opts.useBigInt64 === 'boolean' ? this.opts.useBigInt64 : false;
this.promoteLongs = typeof this.opts.promoteLongs === 'boolean' ? this.opts.promoteLongs : true;
this.promoteValues =
typeof this.opts.promoteValues === 'boolean' ? this.opts.promoteValues : true;
this.promoteBuffers =
typeof this.opts.promoteBuffers === 'boolean' ? this.opts.promoteBuffers : false;
this.bsonRegExp = typeof this.opts.bsonRegExp === 'boolean' ? this.opts.bsonRegExp : false;
}
isParsed() {
return this.parsed;
}
parse() {
// Don't parse again if not needed
if (this.parsed)
return this.sections[0];
this.index = 4;
while (this.index < this.data.length) {
const payloadType = this.data.readUInt8(this.index++);
if (payloadType === 0) {
const bsonSize = this.data.readUInt32LE(this.index);
const bin = this.data.subarray(this.index, this.index + bsonSize);
this.sections.push(bin);
this.index += bsonSize;
}
else if (payloadType === 1) {
// It was decided that no driver makes use of payload type 1
// TODO(NODE-3483): Replace with MongoDeprecationError
throw new error_1.MongoRuntimeError('OP_MSG Payload Type 1 detected unsupported protocol');
}
}
this.parsed = true;
return this.sections[0];
}
}
exports.OpMsgResponse = OpMsgResponse;
const MESSAGE_HEADER_SIZE = 16;
const COMPRESSION_DETAILS_SIZE = 9; // originalOpcode + uncompressedSize, compressorID
/**
* @internal
*
* An OP_COMPRESSED request wraps either an OP_QUERY or OP_MSG message.
*/
class OpCompressedRequest {
constructor(command, options) {
this.command = command;
this.options = {
zlibCompressionLevel: options.zlibCompressionLevel,
agreedCompressor: options.agreedCompressor
};
}
// Return whether a command contains an uncompressible command term
// Will return true if command contains no uncompressible command terms
static canCompress(command) {
const commandDoc = command instanceof OpMsgRequest ? command.command : command.query;
const commandName = Object.keys(commandDoc)[0];
return !compression_1.uncompressibleCommands.has(commandName);
}
async toBin() {
const concatenatedOriginalCommandBuffer = Buffer.concat(this.command.toBin());
// otherwise, compress the message
const messageToBeCompressed = concatenatedOriginalCommandBuffer.slice(MESSAGE_HEADER_SIZE);
// Extract information needed for OP_COMPRESSED from the uncompressed message
const originalCommandOpCode = concatenatedOriginalCommandBuffer.readInt32LE(12);
// Compress the message body
const compressedMessage = await (0, compression_1.compress)(this.options, messageToBeCompressed);
// Create the msgHeader of OP_COMPRESSED
const msgHeader = Buffer.alloc(MESSAGE_HEADER_SIZE);
msgHeader.writeInt32LE(MESSAGE_HEADER_SIZE + COMPRESSION_DETAILS_SIZE + compressedMessage.length, 0); // messageLength
msgHeader.writeInt32LE(this.command.requestId, 4); // requestID
msgHeader.writeInt32LE(0, 8); // responseTo (zero)
msgHeader.writeInt32LE(constants_1.OP_COMPRESSED, 12); // opCode
// Create the compression details of OP_COMPRESSED
const compressionDetails = Buffer.alloc(COMPRESSION_DETAILS_SIZE);
compressionDetails.writeInt32LE(originalCommandOpCode, 0); // originalOpcode
compressionDetails.writeInt32LE(messageToBeCompressed.length, 4); // Size of the uncompressed compressedMessage, excluding the MsgHeader
compressionDetails.writeUInt8(compression_1.Compressor[this.options.agreedCompressor], 8); // compressorID
return [msgHeader, compressionDetails, compressedMessage];
}
}
exports.OpCompressedRequest = OpCompressedRequest;
//# sourceMappingURL=commands.js.map

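For orientation, every request built above begins with the same 16-byte MsgHeader of little-endian int32s: messageLength, requestID, responseTo, opCode (OP_MSG is 2013, OP_QUERY 2004, OP_COMPRESSED 2012). A standalone illustration:

const header = Buffer.alloc(16);
header.writeInt32LE(26, 0);    // messageLength (illustrative total)
header.writeInt32LE(42, 4);    // requestID
header.writeInt32LE(0, 8);     // responseTo (0 for requests)
header.writeInt32LE(2013, 12); // opCode = OP_MSG
console.log(header.toString('hex'));
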
1
node_modules/mongodb/lib/cmap/commands.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

366
node_modules/mongodb/lib/cmap/connect.js generated vendored Normal file

@@ -0,0 +1,366 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.LEGAL_TCP_SOCKET_OPTIONS = exports.LEGAL_TLS_SOCKET_OPTIONS = void 0;
exports.connect = connect;
exports.makeConnection = makeConnection;
exports.performInitialHandshake = performInitialHandshake;
exports.prepareHandshakeDocument = prepareHandshakeDocument;
exports.makeSocket = makeSocket;
const net = require("net");
const tls = require("tls");
const constants_1 = require("../constants");
const deps_1 = require("../deps");
const error_1 = require("../error");
const utils_1 = require("../utils");
const auth_provider_1 = require("./auth/auth_provider");
const providers_1 = require("./auth/providers");
const connection_1 = require("./connection");
const constants_2 = require("./wire_protocol/constants");
async function connect(options) {
let connection = null;
try {
const socket = await makeSocket(options);
connection = makeConnection(options, socket);
await performInitialHandshake(connection, options);
return connection;
}
catch (error) {
connection?.destroy();
throw error;
}
}
function makeConnection(options, socket) {
let ConnectionType = options.connectionType ?? connection_1.Connection;
if (options.autoEncrypter) {
ConnectionType = connection_1.CryptoConnection;
}
return new ConnectionType(socket, options);
}
function checkSupportedServer(hello, options) {
const maxWireVersion = Number(hello.maxWireVersion);
const minWireVersion = Number(hello.minWireVersion);
const serverVersionHighEnough = !Number.isNaN(maxWireVersion) && maxWireVersion >= constants_2.MIN_SUPPORTED_WIRE_VERSION;
const serverVersionLowEnough = !Number.isNaN(minWireVersion) && minWireVersion <= constants_2.MAX_SUPPORTED_WIRE_VERSION;
if (serverVersionHighEnough) {
if (serverVersionLowEnough) {
return null;
}
const message = `Server at ${options.hostAddress} reports minimum wire version ${JSON.stringify(hello.minWireVersion)}, but this version of the Node.js Driver requires at most ${constants_2.MAX_SUPPORTED_WIRE_VERSION} (MongoDB ${constants_2.MAX_SUPPORTED_SERVER_VERSION})`;
return new error_1.MongoCompatibilityError(message);
}
const message = `Server at ${options.hostAddress} reports maximum wire version ${JSON.stringify(hello.maxWireVersion) ?? 0}, but this version of the Node.js Driver requires at least ${constants_2.MIN_SUPPORTED_WIRE_VERSION} (MongoDB ${constants_2.MIN_SUPPORTED_SERVER_VERSION})`;
return new error_1.MongoCompatibilityError(message);
}
async function performInitialHandshake(conn, options) {
const credentials = options.credentials;
if (credentials) {
if (!(credentials.mechanism === providers_1.AuthMechanism.MONGODB_DEFAULT) &&
!options.authProviders.getOrCreateProvider(credentials.mechanism, credentials.mechanismProperties)) {
throw new error_1.MongoInvalidArgumentError(`AuthMechanism '${credentials.mechanism}' not supported`);
}
}
const authContext = new auth_provider_1.AuthContext(conn, credentials, options);
conn.authContext = authContext;
const handshakeDoc = await prepareHandshakeDocument(authContext);
// @ts-expect-error: TODO(NODE-5141): The options need to be filtered properly, Connection options differ from Command options
const handshakeOptions = { ...options, raw: false };
if (typeof options.connectTimeoutMS === 'number') {
// The handshake technically is a monitoring check, so its socket timeout should be connectTimeoutMS
handshakeOptions.socketTimeoutMS = options.connectTimeoutMS;
}
const start = new Date().getTime();
const response = await executeHandshake(handshakeDoc, handshakeOptions);
if (!('isWritablePrimary' in response)) {
// Provide hello-style response document.
response.isWritablePrimary = response[constants_1.LEGACY_HELLO_COMMAND];
}
if (response.helloOk) {
conn.helloOk = true;
}
const supportedServerErr = checkSupportedServer(response, options);
if (supportedServerErr) {
throw supportedServerErr;
}
if (options.loadBalanced) {
if (!response.serviceId) {
throw new error_1.MongoCompatibilityError('Driver attempted to initialize in load balancing mode, ' +
'but the server does not support this mode.');
}
}
    // NOTE: This is metadata attached to the connection while porting away from
    // the handshake being done in the `Server` class. It should likely be
    // relocated, or at the very least restructured.
conn.hello = response;
conn.lastHelloMS = new Date().getTime() - start;
if (!response.arbiterOnly && credentials) {
// store the response on auth context
authContext.response = response;
const resolvedCredentials = credentials.resolveAuthMechanism(response);
const provider = options.authProviders.getOrCreateProvider(resolvedCredentials.mechanism, resolvedCredentials.mechanismProperties);
if (!provider) {
throw new error_1.MongoInvalidArgumentError(`No AuthProvider for ${resolvedCredentials.mechanism} defined.`);
}
try {
await provider.auth(authContext);
}
catch (error) {
if (error instanceof error_1.MongoError) {
error.addErrorLabel(error_1.MongoErrorLabel.HandshakeError);
if ((0, error_1.needsRetryableWriteLabel)(error, response.maxWireVersion, conn.description.type)) {
error.addErrorLabel(error_1.MongoErrorLabel.RetryableWriteError);
}
}
throw error;
}
}
// Connection establishment is socket creation (tcp handshake, tls handshake, MongoDB handshake (saslStart, saslContinue))
// Once connection is established, command logging can log events (if enabled)
conn.established = true;
async function executeHandshake(handshakeDoc, handshakeOptions) {
try {
const handshakeResponse = await conn.command((0, utils_1.ns)('admin.$cmd'), handshakeDoc, handshakeOptions);
return handshakeResponse;
}
catch (error) {
if (error instanceof error_1.MongoError) {
error.addErrorLabel(error_1.MongoErrorLabel.HandshakeError);
}
throw error;
}
}
}
/**
* @internal
*
* This function is only exposed for testing purposes.
*/
async function prepareHandshakeDocument(authContext) {
const options = authContext.options;
const compressors = options.compressors ? options.compressors : [];
const { serverApi } = authContext.connection;
const clientMetadata = await options.metadata;
const handshakeDoc = {
[serverApi?.version || options.loadBalanced === true ? 'hello' : constants_1.LEGACY_HELLO_COMMAND]: 1,
helloOk: true,
client: clientMetadata,
compression: compressors
};
if (options.loadBalanced === true) {
handshakeDoc.loadBalanced = true;
}
const credentials = authContext.credentials;
if (credentials) {
if (credentials.mechanism === providers_1.AuthMechanism.MONGODB_DEFAULT && credentials.username) {
handshakeDoc.saslSupportedMechs = `${credentials.source}.${credentials.username}`;
const provider = authContext.options.authProviders.getOrCreateProvider(providers_1.AuthMechanism.MONGODB_SCRAM_SHA256, credentials.mechanismProperties);
if (!provider) {
// This auth mechanism is always present.
throw new error_1.MongoInvalidArgumentError(`No AuthProvider for ${providers_1.AuthMechanism.MONGODB_SCRAM_SHA256} defined.`);
}
return await provider.prepare(handshakeDoc, authContext);
}
const provider = authContext.options.authProviders.getOrCreateProvider(credentials.mechanism, credentials.mechanismProperties);
if (!provider) {
throw new error_1.MongoInvalidArgumentError(`No AuthProvider for ${credentials.mechanism} defined.`);
}
return await provider.prepare(handshakeDoc, authContext);
}
return handshakeDoc;
}
/** @public */
exports.LEGAL_TLS_SOCKET_OPTIONS = [
'allowPartialTrustChain',
'ALPNProtocols',
'ca',
'cert',
'checkServerIdentity',
'ciphers',
'crl',
'ecdhCurve',
'key',
'minDHSize',
'passphrase',
'pfx',
'rejectUnauthorized',
'secureContext',
'secureProtocol',
'servername',
'session'
];
/** @public */
exports.LEGAL_TCP_SOCKET_OPTIONS = [
'autoSelectFamily',
'autoSelectFamilyAttemptTimeout',
'keepAliveInitialDelay',
'family',
'hints',
'localAddress',
'localPort',
'lookup'
];
function parseConnectOptions(options) {
const hostAddress = options.hostAddress;
if (!hostAddress)
throw new error_1.MongoInvalidArgumentError('Option "hostAddress" is required');
const result = {};
for (const name of exports.LEGAL_TCP_SOCKET_OPTIONS) {
if (options[name] != null) {
result[name] = options[name];
}
}
result.keepAliveInitialDelay ??= 120000;
result.keepAlive = true;
result.noDelay = options.noDelay ?? true;
if (typeof hostAddress.socketPath === 'string') {
result.path = hostAddress.socketPath;
return result;
}
else if (typeof hostAddress.host === 'string') {
result.host = hostAddress.host;
result.port = hostAddress.port;
return result;
}
else {
// This should never happen since we set up HostAddresses
// But if we don't throw here the socket could hang until timeout
// TODO(NODE-3483)
throw new error_1.MongoRuntimeError(`Unexpected HostAddress ${JSON.stringify(hostAddress)}`);
}
}
function parseSslOptions(options) {
const result = parseConnectOptions(options);
// Merge in valid SSL options
for (const name of exports.LEGAL_TLS_SOCKET_OPTIONS) {
if (options[name] != null) {
result[name] = options[name];
}
}
if (options.existingSocket) {
result.socket = options.existingSocket;
}
// Set default sni servername to be the same as host
if (result.servername == null && result.host && !net.isIP(result.host)) {
result.servername = result.host;
}
return result;
}
async function makeSocket(options) {
const useTLS = options.tls ?? false;
const connectTimeoutMS = options.connectTimeoutMS ?? 30000;
const existingSocket = options.existingSocket;
let socket;
if (options.proxyHost != null) {
// Currently, only Socks5 is supported.
return await makeSocks5Connection({
...options,
connectTimeoutMS // Should always be present for Socks5
});
}
if (useTLS) {
const tlsSocket = tls.connect(parseSslOptions(options));
if (typeof tlsSocket.disableRenegotiation === 'function') {
tlsSocket.disableRenegotiation();
}
socket = tlsSocket;
}
else if (existingSocket) {
// In the TLS case, parseSslOptions() sets options.socket to existingSocket,
// so we only need to handle the non-TLS case here (where existingSocket
// gives us all we need out of the box).
socket = existingSocket;
}
else {
socket = net.createConnection(parseConnectOptions(options));
}
socket.setTimeout(connectTimeoutMS);
let cancellationHandler = null;
const { promise: connectedSocket, resolve, reject } = (0, utils_1.promiseWithResolvers)();
if (existingSocket) {
resolve(socket);
}
else {
const start = performance.now();
const connectEvent = useTLS ? 'secureConnect' : 'connect';
socket
.once(connectEvent, () => resolve(socket))
.once('error', cause => reject(new error_1.MongoNetworkError(error_1.MongoError.buildErrorMessage(cause), { cause })))
.once('timeout', () => {
reject(new error_1.MongoNetworkTimeoutError(`Socket '${connectEvent}' timed out after ${(performance.now() - start) | 0}ms (connectTimeoutMS: ${connectTimeoutMS})`));
})
            .once('close', () => reject(new error_1.MongoNetworkError(`Socket closed after ${(performance.now() - start) | 0}ms during connection establishment`)));
if (options.cancellationToken != null) {
            cancellationHandler = () => reject(new error_1.MongoNetworkError(`Socket connection establishment was cancelled after ${(performance.now() - start) | 0}ms`));
options.cancellationToken.once('cancel', cancellationHandler);
}
}
try {
socket = await connectedSocket;
return socket;
}
catch (error) {
socket.destroy();
throw error;
}
finally {
socket.setTimeout(0);
if (cancellationHandler != null) {
options.cancellationToken?.removeListener('cancel', cancellationHandler);
}
}
}
let socks = null;
function loadSocks() {
if (socks == null) {
const socksImport = (0, deps_1.getSocks)();
if ('kModuleError' in socksImport) {
throw socksImport.kModuleError;
}
socks = socksImport;
}
return socks;
}
async function makeSocks5Connection(options) {
    const hostAddress = utils_1.HostAddress.fromHostPort(options.proxyHost ?? '', // proxyHost is guaranteed to be set here
options.proxyPort ?? 1080);
// First, connect to the proxy server itself:
const rawSocket = await makeSocket({
...options,
hostAddress,
tls: false,
proxyHost: undefined
});
const destination = parseConnectOptions(options);
if (typeof destination.host !== 'string' || typeof destination.port !== 'number') {
throw new error_1.MongoInvalidArgumentError('Can only make Socks5 connections to TCP hosts');
}
socks ??= loadSocks();
let existingSocket;
try {
// Then, establish the Socks5 proxy connection:
const connection = await socks.SocksClient.createConnection({
existing_socket: rawSocket,
timeout: options.connectTimeoutMS,
command: 'connect',
destination: {
host: destination.host,
port: destination.port
},
proxy: {
// host and port are ignored because we pass existing_socket
host: 'iLoveJavaScript',
port: 0,
type: 5,
userId: options.proxyUsername || undefined,
password: options.proxyPassword || undefined
}
});
existingSocket = connection.socket;
}
catch (cause) {
throw new error_1.MongoNetworkError(error_1.MongoError.buildErrorMessage(cause), { cause });
}
// Finally, now treat the resulting duplex stream as the
// socket over which we send and receive wire protocol messages:
return await makeSocket({ ...options, existingSocket, proxyHost: undefined });
}
//# sourceMappingURL=connect.js.map
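
`makeSocket()` above races the `connect`/`secureConnect` event against `error`, `timeout`, and `close` using manual promise resolvers, then clears the connect timeout in a `finally`. A minimal sketch of the same pattern, stripped of driver types; `connectWithTimeout` is an illustrative name, not a driver export.

const net = require('net');

function connectWithTimeout({ host, port, connectTimeoutMS }) {
  let resolve, reject;
  const connected = new Promise((res, rej) => ((resolve = res), (reject = rej)));
  const socket = net.createConnection({ host, port });
  socket.setTimeout(connectTimeoutMS); // bounds only the establishment phase
  socket
    .once('connect', () => resolve(socket))
    .once('error', cause => reject(cause))
    .once('timeout', () => reject(new Error(`socket timed out after ${connectTimeoutMS}ms`)))
    .once('close', () => reject(new Error('socket closed during connection establishment')));
  return connected
    .finally(() => socket.setTimeout(0)) // disable the establishment timeout either way
    .catch(error => {
      socket.destroy();
      throw error;
    });
}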

1
node_modules/mongodb/lib/cmap/connect.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

571
node_modules/mongodb/lib/cmap/connection.js generated vendored Normal file
View file

@ -0,0 +1,571 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.CryptoConnection = exports.SizedMessageTransform = exports.Connection = void 0;
exports.hasSessionSupport = hasSessionSupport;
const stream_1 = require("stream");
const timers_1 = require("timers");
const bson_1 = require("../bson");
const constants_1 = require("../constants");
const error_1 = require("../error");
const mongo_logger_1 = require("../mongo_logger");
const mongo_types_1 = require("../mongo_types");
const read_preference_1 = require("../read_preference");
const common_1 = require("../sdam/common");
const sessions_1 = require("../sessions");
const timeout_1 = require("../timeout");
const utils_1 = require("../utils");
const command_monitoring_events_1 = require("./command_monitoring_events");
const commands_1 = require("./commands");
const stream_description_1 = require("./stream_description");
const compression_1 = require("./wire_protocol/compression");
const on_data_1 = require("./wire_protocol/on_data");
const responses_1 = require("./wire_protocol/responses");
const shared_1 = require("./wire_protocol/shared");
/** @internal */
function hasSessionSupport(conn) {
const description = conn.description;
return description.logicalSessionTimeoutMinutes != null;
}
function streamIdentifier(stream, options) {
if (options.proxyHost) {
// If proxy options are specified, the properties of `stream` itself
// will not accurately reflect what endpoint this is connected to.
return options.hostAddress.toString();
}
const { remoteAddress, remotePort } = stream;
if (typeof remoteAddress === 'string' && typeof remotePort === 'number') {
return utils_1.HostAddress.fromHostPort(remoteAddress, remotePort).toString();
}
return (0, utils_1.uuidV4)().toString('hex');
}
/** @internal */
class Connection extends mongo_types_1.TypedEventEmitter {
/** @event */
static { this.COMMAND_STARTED = constants_1.COMMAND_STARTED; }
/** @event */
static { this.COMMAND_SUCCEEDED = constants_1.COMMAND_SUCCEEDED; }
/** @event */
static { this.COMMAND_FAILED = constants_1.COMMAND_FAILED; }
/** @event */
static { this.CLUSTER_TIME_RECEIVED = constants_1.CLUSTER_TIME_RECEIVED; }
/** @event */
static { this.CLOSE = constants_1.CLOSE; }
/** @event */
static { this.PINNED = constants_1.PINNED; }
/** @event */
static { this.UNPINNED = constants_1.UNPINNED; }
constructor(stream, options) {
super();
this.lastHelloMS = -1;
this.helloOk = false;
this.delayedTimeoutId = null;
/** Indicates that the connection (including underlying TCP socket) has been closed. */
this.closed = false;
this.clusterTime = null;
this.error = null;
this.dataEvents = null;
this.on('error', utils_1.noop);
this.socket = stream;
this.id = options.id;
this.address = streamIdentifier(stream, options);
this.socketTimeoutMS = options.socketTimeoutMS ?? 0;
this.monitorCommands = options.monitorCommands;
this.serverApi = options.serverApi;
this.mongoLogger = options.mongoLogger;
this.established = false;
this.description = new stream_description_1.StreamDescription(this.address, options);
this.generation = options.generation;
this.lastUseTime = (0, utils_1.now)();
this.messageStream = this.socket
.on('error', this.onSocketError.bind(this))
.pipe(new SizedMessageTransform({ connection: this }))
.on('error', this.onTransformError.bind(this));
this.socket.on('close', this.onClose.bind(this));
this.socket.on('timeout', this.onTimeout.bind(this));
this.messageStream.pause();
}
get hello() {
return this.description.hello;
}
// the `connect` method stores the result of the handshake hello on the connection
set hello(response) {
this.description.receiveResponse(response);
Object.freeze(this.description);
}
get serviceId() {
return this.hello?.serviceId;
}
get loadBalanced() {
return this.description.loadBalanced;
}
get idleTime() {
return (0, utils_1.calculateDurationInMs)(this.lastUseTime);
}
get hasSessionSupport() {
return this.description.logicalSessionTimeoutMinutes != null;
}
get supportsOpMsg() {
return (this.description != null &&
// TODO(NODE-6672,NODE-6287): This guard is primarily for maxWireVersion = 0
(0, utils_1.maxWireVersion)(this) >= 6 &&
!this.description.__nodejs_mock_server__);
}
get shouldEmitAndLogCommand() {
return ((this.monitorCommands ||
(this.established &&
!this.authContext?.reauthenticating &&
this.mongoLogger?.willLog(mongo_logger_1.MongoLoggableComponent.COMMAND, mongo_logger_1.SeverityLevel.DEBUG))) ??
false);
}
markAvailable() {
this.lastUseTime = (0, utils_1.now)();
}
onSocketError(cause) {
this.onError(new error_1.MongoNetworkError(cause.message, { cause }));
}
onTransformError(error) {
this.onError(error);
}
onError(error) {
this.cleanup(error);
}
onClose() {
const message = `connection ${this.id} to ${this.address} closed`;
this.cleanup(new error_1.MongoNetworkError(message));
}
onTimeout() {
this.delayedTimeoutId = (0, timers_1.setTimeout)(() => {
const message = `connection ${this.id} to ${this.address} timed out`;
const beforeHandshake = this.hello == null;
this.cleanup(new error_1.MongoNetworkTimeoutError(message, { beforeHandshake }));
}, 1).unref(); // No need for this timer to hold the event loop open
}
destroy() {
if (this.closed) {
return;
}
// load balanced mode requires that these listeners remain on the connection
// after cleanup on timeouts, errors or close so we remove them before calling
// cleanup.
this.removeAllListeners(Connection.PINNED);
this.removeAllListeners(Connection.UNPINNED);
const message = `connection ${this.id} to ${this.address} closed`;
this.cleanup(new error_1.MongoNetworkError(message));
}
    /**
     * A method that cleans up the connection: it destroys the underlying socket
     * and marks the connection closed.
     *
     * If an error is provided, any in-flight operations will be closed with the error.
     *
     * This method does nothing if the connection is already closed.
     */
cleanup(error) {
if (this.closed) {
return;
}
this.socket.destroy();
this.error = error;
this.dataEvents?.throw(error).then(undefined, utils_1.squashError);
this.closed = true;
this.emit(Connection.CLOSE);
}
prepareCommand(db, command, options) {
let cmd = { ...command };
const readPreference = (0, shared_1.getReadPreference)(options);
const session = options?.session;
let clusterTime = this.clusterTime;
if (this.serverApi) {
const { version, strict, deprecationErrors } = this.serverApi;
cmd.apiVersion = version;
if (strict != null)
cmd.apiStrict = strict;
if (deprecationErrors != null)
cmd.apiDeprecationErrors = deprecationErrors;
}
if (this.hasSessionSupport && session) {
if (session.clusterTime &&
clusterTime &&
session.clusterTime.clusterTime.greaterThan(clusterTime.clusterTime)) {
clusterTime = session.clusterTime;
}
const sessionError = (0, sessions_1.applySession)(session, cmd, options);
if (sessionError)
throw sessionError;
}
else if (session?.explicit) {
throw new error_1.MongoCompatibilityError('Current topology does not support sessions');
}
// if we have a known cluster time, gossip it
if (clusterTime) {
cmd.$clusterTime = clusterTime;
}
// For standalone, drivers MUST NOT set $readPreference.
if (this.description.type !== common_1.ServerType.Standalone) {
if (!(0, shared_1.isSharded)(this) &&
!this.description.loadBalanced &&
this.supportsOpMsg &&
options.directConnection === true &&
readPreference?.mode === 'primary') {
// For mongos and load balancers with 'primary' mode, drivers MUST NOT set $readPreference.
// For all other types with a direct connection, if the read preference is 'primary'
// (driver sets 'primary' as default if no read preference is configured),
// the $readPreference MUST be set to 'primaryPreferred'
// to ensure that any server type can handle the request.
cmd.$readPreference = read_preference_1.ReadPreference.primaryPreferred.toJSON();
}
else if ((0, shared_1.isSharded)(this) && !this.supportsOpMsg && readPreference?.mode !== 'primary') {
// When sending a read operation via OP_QUERY and the $readPreference modifier,
// the query MUST be provided using the $query modifier.
cmd = {
$query: cmd,
$readPreference: readPreference.toJSON()
};
}
else if (readPreference?.mode !== 'primary') {
// For mode 'primary', drivers MUST NOT set $readPreference.
// For all other read preference modes (i.e. 'secondary', 'primaryPreferred', ...),
// drivers MUST set $readPreference
cmd.$readPreference = readPreference.toJSON();
}
}
const commandOptions = {
numberToSkip: 0,
numberToReturn: -1,
checkKeys: false,
// This value is not overridable
secondaryOk: readPreference.secondaryOk(),
...options
};
options.timeoutContext?.addMaxTimeMSToCommand(cmd, options);
const message = this.supportsOpMsg
? new commands_1.OpMsgRequest(db, cmd, commandOptions)
: new commands_1.OpQueryRequest(db, cmd, commandOptions);
return message;
}
async *sendWire(message, options, responseType) {
this.throwIfAborted();
const timeout = options.socketTimeoutMS ??
options?.timeoutContext?.getSocketTimeoutMS() ??
this.socketTimeoutMS;
this.socket.setTimeout(timeout);
try {
await this.writeCommand(message, {
agreedCompressor: this.description.compressor ?? 'none',
zlibCompressionLevel: this.description.zlibCompressionLevel,
timeoutContext: options.timeoutContext,
signal: options.signal
});
if (message.moreToCome) {
yield responses_1.MongoDBResponse.empty;
return;
}
this.throwIfAborted();
if (options.timeoutContext?.csotEnabled() &&
options.timeoutContext.minRoundTripTime != null &&
options.timeoutContext.remainingTimeMS < options.timeoutContext.minRoundTripTime) {
throw new error_1.MongoOperationTimeoutError('Server roundtrip time is greater than the time remaining');
}
for await (const response of this.readMany(options)) {
this.socket.setTimeout(0);
const bson = response.parse();
const document = (responseType ?? responses_1.MongoDBResponse).make(bson);
yield document;
this.throwIfAborted();
this.socket.setTimeout(timeout);
}
}
finally {
this.socket.setTimeout(0);
}
}
async *sendCommand(ns, command, options, responseType) {
options?.signal?.throwIfAborted();
const message = this.prepareCommand(ns.db, command, options);
let started = 0;
if (this.shouldEmitAndLogCommand) {
started = (0, utils_1.now)();
this.emitAndLogCommand(this.monitorCommands, Connection.COMMAND_STARTED, message.databaseName, this.established, new command_monitoring_events_1.CommandStartedEvent(this, message, this.description.serverConnectionId));
}
        // If `documentsReturnedIn` is not set or raw is not enabled, use the input bson options.
        // Otherwise, support the raw flag. Raw only works for cursors that hardcode firstBatch/nextBatch fields.
const bsonOptions = options.documentsReturnedIn == null || !options.raw
? options
: {
...options,
raw: false,
fieldsAsRaw: { [options.documentsReturnedIn]: true }
};
/** MongoDBResponse instance or subclass */
let document = undefined;
/** Cached result of a toObject call */
let object = undefined;
try {
this.throwIfAborted();
for await (document of this.sendWire(message, options, responseType)) {
object = undefined;
if (options.session != null) {
(0, sessions_1.updateSessionFromResponse)(options.session, document);
}
if (document.$clusterTime) {
this.clusterTime = document.$clusterTime;
this.emit(Connection.CLUSTER_TIME_RECEIVED, document.$clusterTime);
}
if (document.ok === 0) {
if (options.timeoutContext?.csotEnabled() && document.isMaxTimeExpiredError) {
throw new error_1.MongoOperationTimeoutError('Server reported a timeout error', {
cause: new error_1.MongoServerError((object ??= document.toObject(bsonOptions)))
});
}
throw new error_1.MongoServerError((object ??= document.toObject(bsonOptions)));
}
if (this.shouldEmitAndLogCommand) {
this.emitAndLogCommand(this.monitorCommands, Connection.COMMAND_SUCCEEDED, message.databaseName, this.established, new command_monitoring_events_1.CommandSucceededEvent(this, message, message.moreToCome ? { ok: 1 } : (object ??= document.toObject(bsonOptions)), started, this.description.serverConnectionId));
}
if (responseType == null) {
yield (object ??= document.toObject(bsonOptions));
}
else {
yield document;
}
this.throwIfAborted();
}
}
catch (error) {
if (this.shouldEmitAndLogCommand) {
this.emitAndLogCommand(this.monitorCommands, Connection.COMMAND_FAILED, message.databaseName, this.established, new command_monitoring_events_1.CommandFailedEvent(this, message, error, started, this.description.serverConnectionId));
}
throw error;
}
}
async command(ns, command, options = {}, responseType) {
this.throwIfAborted();
options.signal?.throwIfAborted();
for await (const document of this.sendCommand(ns, command, options, responseType)) {
if (options.timeoutContext?.csotEnabled()) {
if (responses_1.MongoDBResponse.is(document)) {
if (document.isMaxTimeExpiredError) {
throw new error_1.MongoOperationTimeoutError('Server reported a timeout error', {
cause: new error_1.MongoServerError(document.toObject())
});
}
}
else {
if ((Array.isArray(document?.writeErrors) &&
document.writeErrors.some(error => error?.code === error_1.MONGODB_ERROR_CODES.MaxTimeMSExpired)) ||
document?.writeConcernError?.code === error_1.MONGODB_ERROR_CODES.MaxTimeMSExpired) {
throw new error_1.MongoOperationTimeoutError('Server reported a timeout error', {
cause: new error_1.MongoServerError(document)
});
}
}
}
return document;
}
throw new error_1.MongoUnexpectedServerResponseError('Unable to get response from server');
}
exhaustCommand(ns, command, options, replyListener) {
const exhaustLoop = async () => {
this.throwIfAborted();
for await (const reply of this.sendCommand(ns, command, options)) {
replyListener(undefined, reply);
this.throwIfAborted();
}
throw new error_1.MongoUnexpectedServerResponseError('Server ended moreToCome unexpectedly');
};
exhaustLoop().then(undefined, replyListener);
}
throwIfAborted() {
if (this.error)
throw this.error;
}
/**
* @internal
*
     * Writes an OP_MSG or OP_QUERY request to the socket, optionally compressing the command. If the
     * socket buffer is full, this method waits until it has emptied (the Node.js socket `drain` event has fired).
*/
async writeCommand(command, options) {
const finalCommand = options.agreedCompressor === 'none' || !commands_1.OpCompressedRequest.canCompress(command)
? command
: new commands_1.OpCompressedRequest(command, {
agreedCompressor: options.agreedCompressor ?? 'none',
zlibCompressionLevel: options.zlibCompressionLevel ?? 0
});
const buffer = Buffer.concat(await finalCommand.toBin());
if (options.timeoutContext?.csotEnabled()) {
if (options.timeoutContext.minRoundTripTime != null &&
options.timeoutContext.remainingTimeMS < options.timeoutContext.minRoundTripTime) {
throw new error_1.MongoOperationTimeoutError('Server roundtrip time is greater than the time remaining');
}
}
try {
if (this.socket.write(buffer))
return;
}
catch (writeError) {
const networkError = new error_1.MongoNetworkError('unexpected error writing to socket', {
cause: writeError
});
this.onError(networkError);
throw networkError;
}
const drainEvent = (0, utils_1.once)(this.socket, 'drain', options);
const timeout = options?.timeoutContext?.timeoutForSocketWrite;
const drained = timeout ? Promise.race([drainEvent, timeout]) : drainEvent;
try {
return await drained;
}
catch (writeError) {
if (timeout_1.TimeoutError.is(writeError)) {
const timeoutError = new error_1.MongoOperationTimeoutError('Timed out at socket write');
this.onError(timeoutError);
throw timeoutError;
}
else if (writeError === options.signal?.reason) {
this.onError(writeError);
}
throw writeError;
}
finally {
timeout?.clear();
}
}
/**
* @internal
*
* Returns an async generator that yields full wire protocol messages from the underlying socket. This function
* yields messages until `moreToCome` is false or not present in a response, or the caller cancels the request
* by calling `return` on the generator.
*
* Note that `for-await` loops call `return` automatically when the loop is exited.
*/
async *readMany(options) {
try {
this.dataEvents = (0, on_data_1.onData)(this.messageStream, options);
this.messageStream.resume();
for await (const message of this.dataEvents) {
const response = await (0, compression_1.decompressResponse)(message);
yield response;
if (!response.moreToCome) {
return;
}
}
}
catch (readError) {
if (timeout_1.TimeoutError.is(readError)) {
const timeoutError = new error_1.MongoOperationTimeoutError(`Timed out during socket read (${readError.duration}ms)`);
this.dataEvents = null;
this.onError(timeoutError);
throw timeoutError;
}
else if (readError === options.signal?.reason) {
this.onError(readError);
}
throw readError;
}
finally {
this.dataEvents = null;
this.messageStream.pause();
}
}
}
exports.Connection = Connection;
/** @internal */
class SizedMessageTransform extends stream_1.Transform {
constructor({ connection }) {
super({ writableObjectMode: false, readableObjectMode: true });
this.bufferPool = new utils_1.BufferPool();
this.connection = connection;
}
_transform(chunk, encoding, callback) {
if (this.connection.delayedTimeoutId != null) {
(0, timers_1.clearTimeout)(this.connection.delayedTimeoutId);
this.connection.delayedTimeoutId = null;
}
this.bufferPool.append(chunk);
while (this.bufferPool.length) {
// While there are any bytes in the buffer
// Try to fetch a size from the top 4 bytes
const sizeOfMessage = this.bufferPool.getInt32();
if (sizeOfMessage == null) {
// Not even an int32 worth of data. Stop the loop, we need more chunks.
break;
}
if (sizeOfMessage < 0) {
// The size in the message has a negative value, this is probably corruption, throw:
return callback(new error_1.MongoParseError(`Message size cannot be negative: ${sizeOfMessage}`));
}
if (sizeOfMessage > this.bufferPool.length) {
// We do not have enough bytes to make a sizeOfMessage chunk
break;
}
// Add a message to the stream
const message = this.bufferPool.read(sizeOfMessage);
if (!this.push(message)) {
                // We only subscribe to data events, so we should never get backpressure;
                // if we do, we have no handling for it.
return callback(new error_1.MongoRuntimeError(`SizedMessageTransform does not support backpressure`));
}
}
callback();
}
}
exports.SizedMessageTransform = SizedMessageTransform;
/** @internal */
class CryptoConnection extends Connection {
constructor(stream, options) {
super(stream, options);
this.autoEncrypter = options.autoEncrypter;
}
async command(ns, cmd, options, responseType) {
const { autoEncrypter } = this;
if (!autoEncrypter) {
throw new error_1.MongoRuntimeError('No AutoEncrypter available for encryption');
}
const serverWireVersion = (0, utils_1.maxWireVersion)(this);
if (serverWireVersion === 0) {
// This means the initial handshake hasn't happened yet
return await super.command(ns, cmd, options, responseType);
}
        // Save the sort or indexKeys fields based on the command being run.
        // The encrypt API serializes our JS objects to BSON to pass to the native code layer
        // and then deserializes the encrypted result; the protocol-level components
        // of the command (e.g. sort) are then converted back to JS objects, potentially losing
        // important key order information. These fields are never encrypted, so we can save their
        // values from before encryption and restore them after encryption has been performed.
const sort = cmd.find || cmd.findAndModify ? cmd.sort : null;
const indexKeys = cmd.createIndexes
? cmd.indexes.map((index) => index.key)
: null;
const encrypted = await autoEncrypter.encrypt(ns.toString(), cmd, options);
// Replace the saved values
if (sort != null && (cmd.find || cmd.findAndModify)) {
encrypted.sort = sort;
}
if (indexKeys != null && cmd.createIndexes) {
for (const [offset, index] of indexKeys.entries()) {
// @ts-expect-error `encrypted` is a generic "command", but we've narrowed for only `createIndexes` commands here
encrypted.indexes[offset].key = index;
}
}
const encryptedResponse = await super.command(ns, encrypted, options,
        // Eventually we want to require `responseType`, which means we would satisfy `T` as the return type.
        // In the meantime, we want encryptedResponse to always be _at least_ a MongoDBResponse, if not a more specific subclass,
        // so that we can ensure we have access to the on-demand APIs for decorating the response.
responseType ?? responses_1.MongoDBResponse);
const result = await autoEncrypter.decrypt(encryptedResponse.toBytes(), options);
const decryptedResponse = responseType?.make(result) ?? (0, bson_1.deserialize)(result, options);
if (autoEncrypter[constants_1.kDecorateResult]) {
if (responseType == null) {
(0, utils_1.decorateDecryptionResult)(decryptedResponse, encryptedResponse.toObject(), true);
}
else if (decryptedResponse instanceof responses_1.CursorResponse) {
decryptedResponse.encryptedResponse = encryptedResponse;
}
}
return decryptedResponse;
}
}
exports.CryptoConnection = CryptoConnection;
//# sourceMappingURL=connection.js.map
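
`SizedMessageTransform` above splits the inbound byte stream on the wire protocol's framing rule: every MongoDB message begins with a little-endian int32 holding its own total length (including those 4 bytes), so messages can be separated without parsing the rest of the header. A standalone sketch of that rule; `LengthPrefixedSplitter` is an illustrative name, and unlike the driver's BufferPool-backed version it re-concatenates buffers on every chunk.

const { Transform } = require('stream');

class LengthPrefixedSplitter extends Transform {
  constructor() {
    super({ readableObjectMode: true });
    this.buffered = Buffer.alloc(0);
  }
  _transform(chunk, _encoding, callback) {
    this.buffered = Buffer.concat([this.buffered, chunk]);
    // Emit every complete message currently sitting in the buffer.
    while (this.buffered.length >= 4) {
      const size = this.buffered.readInt32LE(0); // length prefix counts itself
      if (size < 0) return callback(new Error(`Message size cannot be negative: ${size}`));
      if (this.buffered.length < size) break; // partial message: wait for more chunks
      this.push(this.buffered.subarray(0, size));
      this.buffered = this.buffered.subarray(size);
    }
    callback();
  }
}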

1
node_modules/mongodb/lib/cmap/connection.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

558
node_modules/mongodb/lib/cmap/connection_pool.js generated vendored Normal file
View file

@ -0,0 +1,558 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ConnectionPool = exports.PoolState = void 0;
const timers_1 = require("timers");
const constants_1 = require("../constants");
const error_1 = require("../error");
const mongo_types_1 = require("../mongo_types");
const timeout_1 = require("../timeout");
const utils_1 = require("../utils");
const connect_1 = require("./connect");
const connection_1 = require("./connection");
const connection_pool_events_1 = require("./connection_pool_events");
const errors_1 = require("./errors");
const metrics_1 = require("./metrics");
/** @internal */
exports.PoolState = Object.freeze({
paused: 'paused',
ready: 'ready',
closed: 'closed'
});
/**
 * A pool of connections which dynamically resizes and emits events related to pool activity
* @internal
*/
class ConnectionPool extends mongo_types_1.TypedEventEmitter {
/**
* Emitted when the connection pool is created.
* @event
*/
static { this.CONNECTION_POOL_CREATED = constants_1.CONNECTION_POOL_CREATED; }
/**
* Emitted once when the connection pool is closed
* @event
*/
static { this.CONNECTION_POOL_CLOSED = constants_1.CONNECTION_POOL_CLOSED; }
/**
     * Emitted each time the connection pool is cleared and its generation incremented
* @event
*/
static { this.CONNECTION_POOL_CLEARED = constants_1.CONNECTION_POOL_CLEARED; }
/**
* Emitted each time the connection pool is marked ready
* @event
*/
static { this.CONNECTION_POOL_READY = constants_1.CONNECTION_POOL_READY; }
/**
* Emitted when a connection is created.
* @event
*/
static { this.CONNECTION_CREATED = constants_1.CONNECTION_CREATED; }
/**
* Emitted when a connection becomes established, and is ready to use
* @event
*/
static { this.CONNECTION_READY = constants_1.CONNECTION_READY; }
/**
* Emitted when a connection is closed
* @event
*/
static { this.CONNECTION_CLOSED = constants_1.CONNECTION_CLOSED; }
/**
* Emitted when an attempt to check out a connection begins
* @event
*/
static { this.CONNECTION_CHECK_OUT_STARTED = constants_1.CONNECTION_CHECK_OUT_STARTED; }
/**
* Emitted when an attempt to check out a connection fails
* @event
*/
static { this.CONNECTION_CHECK_OUT_FAILED = constants_1.CONNECTION_CHECK_OUT_FAILED; }
/**
* Emitted each time a connection is successfully checked out of the connection pool
* @event
*/
static { this.CONNECTION_CHECKED_OUT = constants_1.CONNECTION_CHECKED_OUT; }
/**
* Emitted each time a connection is successfully checked into the connection pool
* @event
*/
static { this.CONNECTION_CHECKED_IN = constants_1.CONNECTION_CHECKED_IN; }
constructor(server, options) {
super();
this.on('error', utils_1.noop);
this.options = Object.freeze({
connectionType: connection_1.Connection,
...options,
maxPoolSize: options.maxPoolSize ?? 100,
minPoolSize: options.minPoolSize ?? 0,
maxConnecting: options.maxConnecting ?? 2,
maxIdleTimeMS: options.maxIdleTimeMS ?? 0,
waitQueueTimeoutMS: options.waitQueueTimeoutMS ?? 0,
minPoolSizeCheckFrequencyMS: options.minPoolSizeCheckFrequencyMS ?? 100,
autoEncrypter: options.autoEncrypter
});
if (this.options.minPoolSize > this.options.maxPoolSize) {
throw new error_1.MongoInvalidArgumentError('Connection pool minimum size must not be greater than maximum pool size');
}
this.poolState = exports.PoolState.paused;
this.server = server;
this.connections = new utils_1.List();
this.pending = 0;
this.checkedOut = new Set();
this.minPoolSizeTimer = undefined;
this.generation = 0;
this.serviceGenerations = new Map();
this.connectionCounter = (0, utils_1.makeCounter)(1);
this.cancellationToken = new mongo_types_1.CancellationToken();
this.cancellationToken.setMaxListeners(Infinity);
this.waitQueue = new utils_1.List();
this.metrics = new metrics_1.ConnectionPoolMetrics();
this.processingWaitQueue = false;
this.mongoLogger = this.server.topology.client?.mongoLogger;
this.component = 'connection';
process.nextTick(() => {
this.emitAndLog(ConnectionPool.CONNECTION_POOL_CREATED, new connection_pool_events_1.ConnectionPoolCreatedEvent(this));
});
}
/** The address of the endpoint the pool is connected to */
get address() {
return this.options.hostAddress.toString();
}
/**
* Check if the pool has been closed
*
* TODO(NODE-3263): We can remove this property once shell no longer needs it
*/
get closed() {
return this.poolState === exports.PoolState.closed;
}
/** An integer expressing how many total connections (available + pending + in use) the pool currently has */
get totalConnectionCount() {
return (this.availableConnectionCount + this.pendingConnectionCount + this.currentCheckedOutCount);
}
/** An integer expressing how many connections are currently available in the pool. */
get availableConnectionCount() {
return this.connections.length;
}
get pendingConnectionCount() {
return this.pending;
}
get currentCheckedOutCount() {
return this.checkedOut.size;
}
get waitQueueSize() {
return this.waitQueue.length;
}
get loadBalanced() {
return this.options.loadBalanced;
}
get serverError() {
return this.server.description.error;
}
/**
* This is exposed ONLY for use in mongosh, to enable
* killing all connections if a user quits the shell with
* operations in progress.
*
* This property may be removed as a part of NODE-3263.
*/
get checkedOutConnections() {
return this.checkedOut;
}
/**
* Get the metrics information for the pool when a wait queue timeout occurs.
*/
waitQueueErrorMetrics() {
return this.metrics.info(this.options.maxPoolSize);
}
/**
* Set the pool state to "ready"
*/
ready() {
if (this.poolState !== exports.PoolState.paused) {
return;
}
this.poolState = exports.PoolState.ready;
this.emitAndLog(ConnectionPool.CONNECTION_POOL_READY, new connection_pool_events_1.ConnectionPoolReadyEvent(this));
(0, timers_1.clearTimeout)(this.minPoolSizeTimer);
this.ensureMinPoolSize();
}
/**
* Check a connection out of this pool. The connection will continue to be tracked, but no reference to it
* will be held by the pool. This means that if a connection is checked out it MUST be checked back in or
* explicitly destroyed by the new owner.
*/
async checkOut(options) {
const checkoutTime = (0, utils_1.now)();
this.emitAndLog(ConnectionPool.CONNECTION_CHECK_OUT_STARTED, new connection_pool_events_1.ConnectionCheckOutStartedEvent(this));
const { promise, resolve, reject } = (0, utils_1.promiseWithResolvers)();
const timeout = options.timeoutContext.connectionCheckoutTimeout;
const waitQueueMember = {
resolve,
reject,
cancelled: false,
checkoutTime
};
const abortListener = (0, utils_1.addAbortListener)(options.signal, function () {
waitQueueMember.cancelled = true;
reject(this.reason);
});
this.waitQueue.push(waitQueueMember);
process.nextTick(() => this.processWaitQueue());
try {
timeout?.throwIfExpired();
return await (timeout ? Promise.race([promise, timeout]) : promise);
}
catch (error) {
if (timeout_1.TimeoutError.is(error)) {
timeout?.clear();
waitQueueMember.cancelled = true;
this.emitAndLog(ConnectionPool.CONNECTION_CHECK_OUT_FAILED, new connection_pool_events_1.ConnectionCheckOutFailedEvent(this, 'timeout', waitQueueMember.checkoutTime));
const timeoutError = new errors_1.WaitQueueTimeoutError(this.loadBalanced
? this.waitQueueErrorMetrics()
: 'Timed out while checking out a connection from connection pool', this.address);
if (options.timeoutContext.csotEnabled()) {
throw new error_1.MongoOperationTimeoutError('Timed out during connection checkout', {
cause: timeoutError
});
}
throw timeoutError;
}
throw error;
}
finally {
abortListener?.[utils_1.kDispose]();
timeout?.clear();
}
}
/**
* Check a connection into the pool.
*
* @param connection - The connection to check in
*/
checkIn(connection) {
if (!this.checkedOut.has(connection)) {
return;
}
const poolClosed = this.closed;
const stale = this.connectionIsStale(connection);
const willDestroy = !!(poolClosed || stale || connection.closed);
if (!willDestroy) {
connection.markAvailable();
this.connections.unshift(connection);
}
this.checkedOut.delete(connection);
this.emitAndLog(ConnectionPool.CONNECTION_CHECKED_IN, new connection_pool_events_1.ConnectionCheckedInEvent(this, connection));
if (willDestroy) {
const reason = connection.closed ? 'error' : poolClosed ? 'poolClosed' : 'stale';
this.destroyConnection(connection, reason);
}
process.nextTick(() => this.processWaitQueue());
}
/**
* Clear the pool
*
* Pool reset is handled by incrementing the pool's generation count. Any existing connection of a
* previous generation will eventually be pruned during subsequent checkouts.
*/
clear(options = {}) {
if (this.closed) {
return;
}
// handle load balanced case
if (this.loadBalanced) {
const { serviceId } = options;
if (!serviceId) {
throw new error_1.MongoRuntimeError('ConnectionPool.clear() called in load balanced mode with no serviceId.');
}
const sid = serviceId.toHexString();
const generation = this.serviceGenerations.get(sid);
            // The generation should always be present for a known serviceId,
            // but TypeScript needs the null check.
if (generation == null) {
throw new error_1.MongoRuntimeError('Service generations are required in load balancer mode.');
}
else {
// Increment the generation for the service id.
this.serviceGenerations.set(sid, generation + 1);
}
this.emitAndLog(ConnectionPool.CONNECTION_POOL_CLEARED, new connection_pool_events_1.ConnectionPoolClearedEvent(this, { serviceId }));
return;
}
// handle non load-balanced case
const interruptInUseConnections = options.interruptInUseConnections ?? false;
const oldGeneration = this.generation;
this.generation += 1;
const alreadyPaused = this.poolState === exports.PoolState.paused;
this.poolState = exports.PoolState.paused;
this.clearMinPoolSizeTimer();
if (!alreadyPaused) {
this.emitAndLog(ConnectionPool.CONNECTION_POOL_CLEARED, new connection_pool_events_1.ConnectionPoolClearedEvent(this, {
interruptInUseConnections
}));
}
if (interruptInUseConnections) {
process.nextTick(() => this.interruptInUseConnections(oldGeneration));
}
this.processWaitQueue();
}
/**
* Closes all stale in-use connections in the pool with a resumable PoolClearedOnNetworkError.
*
* Only connections where `connection.generation <= minGeneration` are killed.
*/
interruptInUseConnections(minGeneration) {
for (const connection of this.checkedOut) {
if (connection.generation <= minGeneration) {
connection.onError(new errors_1.PoolClearedOnNetworkError(this));
}
}
}
/** For MongoClient.close() procedures */
closeCheckedOutConnections() {
for (const conn of this.checkedOut) {
conn.onError(new error_1.MongoClientClosedError());
}
}
/** Close the pool */
close() {
if (this.closed) {
return;
}
// immediately cancel any in-flight connections
this.cancellationToken.emit('cancel');
// end the connection counter
if (typeof this.connectionCounter.return === 'function') {
this.connectionCounter.return(undefined);
}
this.poolState = exports.PoolState.closed;
this.clearMinPoolSizeTimer();
this.processWaitQueue();
for (const conn of this.connections) {
this.emitAndLog(ConnectionPool.CONNECTION_CLOSED, new connection_pool_events_1.ConnectionClosedEvent(this, conn, 'poolClosed'));
conn.destroy();
}
this.connections.clear();
this.emitAndLog(ConnectionPool.CONNECTION_POOL_CLOSED, new connection_pool_events_1.ConnectionPoolClosedEvent(this));
}
/**
* @internal
* Reauthenticate a connection
*/
async reauthenticate(connection) {
const authContext = connection.authContext;
if (!authContext) {
throw new error_1.MongoRuntimeError('No auth context found on connection.');
}
const credentials = authContext.credentials;
if (!credentials) {
throw new error_1.MongoMissingCredentialsError('Connection is missing credentials when asked to reauthenticate');
}
const resolvedCredentials = credentials.resolveAuthMechanism(connection.hello);
const provider = this.server.topology.client.s.authProviders.getOrCreateProvider(resolvedCredentials.mechanism, resolvedCredentials.mechanismProperties);
if (!provider) {
throw new error_1.MongoMissingCredentialsError(`Reauthenticate failed due to no auth provider for ${credentials.mechanism}`);
}
await provider.reauth(authContext);
return;
}
/** Clear the min pool size timer */
clearMinPoolSizeTimer() {
const minPoolSizeTimer = this.minPoolSizeTimer;
if (minPoolSizeTimer) {
(0, timers_1.clearTimeout)(minPoolSizeTimer);
}
}
destroyConnection(connection, reason) {
this.emitAndLog(ConnectionPool.CONNECTION_CLOSED, new connection_pool_events_1.ConnectionClosedEvent(this, connection, reason));
// destroy the connection
connection.destroy();
}
connectionIsStale(connection) {
const serviceId = connection.serviceId;
if (this.loadBalanced && serviceId) {
const sid = serviceId.toHexString();
const generation = this.serviceGenerations.get(sid);
return connection.generation !== generation;
}
return connection.generation !== this.generation;
}
connectionIsIdle(connection) {
return !!(this.options.maxIdleTimeMS && connection.idleTime > this.options.maxIdleTimeMS);
}
/**
* Destroys a connection if the connection is perished.
*
* @returns `true` if the connection was destroyed, `false` otherwise.
*/
destroyConnectionIfPerished(connection) {
const isStale = this.connectionIsStale(connection);
const isIdle = this.connectionIsIdle(connection);
if (!isStale && !isIdle && !connection.closed) {
return false;
}
const reason = connection.closed ? 'error' : isStale ? 'stale' : 'idle';
this.destroyConnection(connection, reason);
return true;
}
createConnection(callback) {
        // Note that metadata may have changed on the client since this pool's options
        // were frozen at construction, so we always pull the metadata promise from the
        // client, no matter what options were set when the pool was constructed.
const connectOptions = {
...this.options,
id: this.connectionCounter.next().value,
generation: this.generation,
cancellationToken: this.cancellationToken,
mongoLogger: this.mongoLogger,
authProviders: this.server.topology.client.s.authProviders,
metadata: this.server.topology.client.options.metadata
};
this.pending++;
// This is our version of a "virtual" no-I/O connection as the spec requires
const connectionCreatedTime = (0, utils_1.now)();
this.emitAndLog(ConnectionPool.CONNECTION_CREATED, new connection_pool_events_1.ConnectionCreatedEvent(this, { id: connectOptions.id }));
(0, connect_1.connect)(connectOptions).then(connection => {
// The pool might have closed since we started trying to create a connection
if (this.poolState !== exports.PoolState.ready) {
this.pending--;
connection.destroy();
callback(this.closed ? new errors_1.PoolClosedError(this) : new errors_1.PoolClearedError(this));
return;
}
// forward all events from the connection to the pool
for (const event of [...constants_1.APM_EVENTS, connection_1.Connection.CLUSTER_TIME_RECEIVED]) {
connection.on(event, (e) => this.emit(event, e));
}
if (this.loadBalanced) {
connection.on(connection_1.Connection.PINNED, pinType => this.metrics.markPinned(pinType));
connection.on(connection_1.Connection.UNPINNED, pinType => this.metrics.markUnpinned(pinType));
const serviceId = connection.serviceId;
if (serviceId) {
let generation;
const sid = serviceId.toHexString();
if ((generation = this.serviceGenerations.get(sid))) {
connection.generation = generation;
}
else {
this.serviceGenerations.set(sid, 0);
connection.generation = 0;
}
}
}
connection.markAvailable();
this.emitAndLog(ConnectionPool.CONNECTION_READY, new connection_pool_events_1.ConnectionReadyEvent(this, connection, connectionCreatedTime));
this.pending--;
callback(undefined, connection);
}, error => {
this.pending--;
this.server.handleError(error);
this.emitAndLog(ConnectionPool.CONNECTION_CLOSED, new connection_pool_events_1.ConnectionClosedEvent(this, { id: connectOptions.id, serviceId: undefined }, 'error',
// TODO(NODE-5192): Remove this cast
error));
if (error instanceof error_1.MongoNetworkError || error instanceof error_1.MongoServerError) {
error.connectionGeneration = connectOptions.generation;
}
callback(error ?? new error_1.MongoRuntimeError('Connection creation failed without error'));
});
}
ensureMinPoolSize() {
const minPoolSize = this.options.minPoolSize;
if (this.poolState !== exports.PoolState.ready) {
return;
}
this.connections.prune(connection => this.destroyConnectionIfPerished(connection));
if (this.totalConnectionCount < minPoolSize &&
this.pendingConnectionCount < this.options.maxConnecting) {
// NOTE: ensureMinPoolSize should not try to get all the pending
// connection permits because that potentially delays the availability of
// the connection to a checkout request
this.createConnection((err, connection) => {
if (!err && connection) {
this.connections.push(connection);
process.nextTick(() => this.processWaitQueue());
}
if (this.poolState === exports.PoolState.ready) {
(0, timers_1.clearTimeout)(this.minPoolSizeTimer);
this.minPoolSizeTimer = (0, timers_1.setTimeout)(() => this.ensureMinPoolSize(), this.options.minPoolSizeCheckFrequencyMS);
}
});
}
else {
(0, timers_1.clearTimeout)(this.minPoolSizeTimer);
this.minPoolSizeTimer = (0, timers_1.setTimeout)(() => this.ensureMinPoolSize(), this.options.minPoolSizeCheckFrequencyMS);
}
}
processWaitQueue() {
if (this.processingWaitQueue) {
return;
}
this.processingWaitQueue = true;
while (this.waitQueueSize) {
const waitQueueMember = this.waitQueue.first();
if (!waitQueueMember) {
this.waitQueue.shift();
continue;
}
if (waitQueueMember.cancelled) {
this.waitQueue.shift();
continue;
}
if (this.poolState !== exports.PoolState.ready) {
const reason = this.closed ? 'poolClosed' : 'connectionError';
const error = this.closed ? new errors_1.PoolClosedError(this) : new errors_1.PoolClearedError(this);
this.emitAndLog(ConnectionPool.CONNECTION_CHECK_OUT_FAILED, new connection_pool_events_1.ConnectionCheckOutFailedEvent(this, reason, waitQueueMember.checkoutTime, error));
this.waitQueue.shift();
waitQueueMember.reject(error);
continue;
}
if (!this.availableConnectionCount) {
break;
}
const connection = this.connections.shift();
if (!connection) {
break;
}
if (!this.destroyConnectionIfPerished(connection)) {
this.checkedOut.add(connection);
this.emitAndLog(ConnectionPool.CONNECTION_CHECKED_OUT, new connection_pool_events_1.ConnectionCheckedOutEvent(this, connection, waitQueueMember.checkoutTime));
this.waitQueue.shift();
waitQueueMember.resolve(connection);
}
}
const { maxPoolSize, maxConnecting } = this.options;
while (this.waitQueueSize > 0 &&
this.pendingConnectionCount < maxConnecting &&
(maxPoolSize === 0 || this.totalConnectionCount < maxPoolSize)) {
const waitQueueMember = this.waitQueue.shift();
if (!waitQueueMember || waitQueueMember.cancelled) {
continue;
}
this.createConnection((err, connection) => {
if (waitQueueMember.cancelled) {
if (!err && connection) {
this.connections.push(connection);
}
}
else {
if (err) {
this.emitAndLog(ConnectionPool.CONNECTION_CHECK_OUT_FAILED,
// TODO(NODE-5192): Remove this cast
new connection_pool_events_1.ConnectionCheckOutFailedEvent(this, 'connectionError', waitQueueMember.checkoutTime, err));
waitQueueMember.reject(err);
}
else if (connection) {
this.checkedOut.add(connection);
this.emitAndLog(ConnectionPool.CONNECTION_CHECKED_OUT, new connection_pool_events_1.ConnectionCheckedOutEvent(this, connection, waitQueueMember.checkoutTime));
waitQueueMember.resolve(connection);
}
}
process.nextTick(() => this.processWaitQueue());
});
}
this.processingWaitQueue = false;
}
}
exports.ConnectionPool = ConnectionPool;
//# sourceMappingURL=connection_pool.js.map
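
The `clear()` method above resets the pool without touching any sockets: it only increments a generation counter, and connections whose generation no longer matches are lazily pruned at check-in and checkout (`connectionIsStale` / `destroyConnectionIfPerished`). A toy sketch of that bookkeeping, under the assumption of a much-simplified pool; names here do not match driver internals.

class ToyPool {
  constructor() {
    this.generation = 0;
    this.available = [];
  }
  clear() {
    this.generation += 1; // invalidates every existing connection without closing sockets
  }
  checkIn(conn) {
    if (conn.generation !== this.generation) conn.destroy(); // stale: pruned lazily
    else this.available.push(conn);
  }
}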

1
node_modules/mongodb/lib/cmap/connection_pool.js.map generated vendored Normal file

File diff suppressed because one or more lines are too long

190
node_modules/mongodb/lib/cmap/connection_pool_events.js generated vendored Normal file
View file

@ -0,0 +1,190 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ConnectionPoolClearedEvent = exports.ConnectionCheckedInEvent = exports.ConnectionCheckedOutEvent = exports.ConnectionCheckOutFailedEvent = exports.ConnectionCheckOutStartedEvent = exports.ConnectionClosedEvent = exports.ConnectionReadyEvent = exports.ConnectionCreatedEvent = exports.ConnectionPoolClosedEvent = exports.ConnectionPoolReadyEvent = exports.ConnectionPoolCreatedEvent = exports.ConnectionPoolMonitoringEvent = void 0;
const constants_1 = require("../constants");
const utils_1 = require("../utils");
/**
* The base export class for all monitoring events published from the connection pool
* @public
* @category Event
*/
class ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool) {
this.time = new Date();
this.address = pool.address;
}
}
exports.ConnectionPoolMonitoringEvent = ConnectionPoolMonitoringEvent;
/**
* An event published when a connection pool is created
* @public
* @category Event
*/
class ConnectionPoolCreatedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool) {
super(pool);
/** @internal */
this.name = constants_1.CONNECTION_POOL_CREATED;
const { maxConnecting, maxPoolSize, minPoolSize, maxIdleTimeMS, waitQueueTimeoutMS } = pool.options;
this.options = { maxConnecting, maxPoolSize, minPoolSize, maxIdleTimeMS, waitQueueTimeoutMS };
}
}
exports.ConnectionPoolCreatedEvent = ConnectionPoolCreatedEvent;
/**
* An event published when a connection pool is ready
* @public
* @category Event
*/
class ConnectionPoolReadyEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool) {
super(pool);
/** @internal */
this.name = constants_1.CONNECTION_POOL_READY;
}
}
exports.ConnectionPoolReadyEvent = ConnectionPoolReadyEvent;
/**
* An event published when a connection pool is closed
* @public
* @category Event
*/
class ConnectionPoolClosedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool) {
super(pool);
/** @internal */
this.name = constants_1.CONNECTION_POOL_CLOSED;
}
}
exports.ConnectionPoolClosedEvent = ConnectionPoolClosedEvent;
/**
* An event published when a connection pool creates a new connection
* @public
* @category Event
*/
class ConnectionCreatedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, connection) {
super(pool);
/** @internal */
this.name = constants_1.CONNECTION_CREATED;
this.connectionId = connection.id;
}
}
exports.ConnectionCreatedEvent = ConnectionCreatedEvent;
/**
* An event published when a connection is ready for use
* @public
* @category Event
*/
class ConnectionReadyEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, connection, connectionCreatedEventTime) {
super(pool);
/** @internal */
this.name = constants_1.CONNECTION_READY;
this.durationMS = (0, utils_1.now)() - connectionCreatedEventTime;
this.connectionId = connection.id;
}
}
exports.ConnectionReadyEvent = ConnectionReadyEvent;
/**
* An event published when a connection is closed
* @public
* @category Event
*/
class ConnectionClosedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, connection, reason, error) {
super(pool);
/** @internal */
this.name = constants_1.CONNECTION_CLOSED;
this.connectionId = connection.id;
this.reason = reason;
this.serviceId = connection.serviceId;
this.error = error ?? null;
}
}
exports.ConnectionClosedEvent = ConnectionClosedEvent;
/**
* An event published when a request to check a connection out begins
* @public
* @category Event
*/
class ConnectionCheckOutStartedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool) {
super(pool);
/** @internal */
this.name = constants_1.CONNECTION_CHECK_OUT_STARTED;
}
}
exports.ConnectionCheckOutStartedEvent = ConnectionCheckOutStartedEvent;
/**
* An event published when a request to check a connection out fails
* @public
* @category Event
*/
class ConnectionCheckOutFailedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, reason, checkoutTime, error) {
super(pool);
/** @internal */
this.name = constants_1.CONNECTION_CHECK_OUT_FAILED;
this.durationMS = (0, utils_1.now)() - checkoutTime;
this.reason = reason;
this.error = error;
}
}
exports.ConnectionCheckOutFailedEvent = ConnectionCheckOutFailedEvent;
/**
* An event published when a connection is checked out of the connection pool
* @public
* @category Event
*/
class ConnectionCheckedOutEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, connection, checkoutTime) {
super(pool);
/** @internal */
this.name = constants_1.CONNECTION_CHECKED_OUT;
this.durationMS = (0, utils_1.now)() - checkoutTime;
this.connectionId = connection.id;
}
}
exports.ConnectionCheckedOutEvent = ConnectionCheckedOutEvent;
/**
* An event published when a connection is checked into the connection pool
* @public
* @category Event
*/
class ConnectionCheckedInEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, connection) {
super(pool);
/** @internal */
this.name = constants_1.CONNECTION_CHECKED_IN;
this.connectionId = connection.id;
}
}
exports.ConnectionCheckedInEvent = ConnectionCheckedInEvent;
/**
* An event published when a connection pool is cleared
* @public
* @category Event
*/
class ConnectionPoolClearedEvent extends ConnectionPoolMonitoringEvent {
/** @internal */
constructor(pool, options = {}) {
super(pool);
/** @internal */
this.name = constants_1.CONNECTION_POOL_CLEARED;
this.serviceId = options.serviceId;
this.interruptInUseConnections = options.interruptInUseConnections;
}
}
exports.ConnectionPoolClearedEvent = ConnectionPoolClearedEvent;
//# sourceMappingURL=connection_pool_events.js.map
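
These CMAP monitoring events are forwarded to the `MongoClient`, so applications can observe pool behaviour without reaching into internals. A brief sketch, assuming a placeholder connection string:

const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017'); // placeholder URI
client.on('connectionPoolCreated', event => console.log('pool created for', event.address));
client.on('connectionCheckedOut', event =>
  console.log(`connection ${event.connectionId} checked out in ${event.durationMS}ms`));
client.on('connectionPoolCleared', event =>
  console.log('pool cleared, interruptInUseConnections =', event.interruptInUseConnections ?? false));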

1
node_modules/mongodb/lib/cmap/connection_pool_events.js.map generated vendored Normal file

{"version":3,"file":"connection_pool_events.js","sourceRoot":"","sources":["../../src/cmap/connection_pool_events.ts"],"names":[],"mappings":";;;AACA,4CAYsB;AAEtB,oCAA+B;AAI/B;;;;GAIG;AACH,MAAsB,6BAA6B;IAmBjD,gBAAgB;IAChB,YAAY,IAAoB;QAC9B,IAAI,CAAC,IAAI,GAAG,IAAI,IAAI,EAAE,CAAC;QACvB,IAAI,CAAC,OAAO,GAAG,IAAI,CAAC,OAAO,CAAC;IAC9B,CAAC;CACF;AAxBD,sEAwBC;AAED;;;;GAIG;AACH,MAAa,0BAA2B,SAAQ,6BAA6B;IAS3E,gBAAgB;IAChB,YAAY,IAAoB;QAC9B,KAAK,CAAC,IAAI,CAAC,CAAC;QALd,gBAAgB;QAChB,SAAI,GAAG,mCAAuB,CAAC;QAK7B,MAAM,EAAE,aAAa,EAAE,WAAW,EAAE,WAAW,EAAE,aAAa,EAAE,kBAAkB,EAAE,GAClF,IAAI,CAAC,OAAO,CAAC;QACf,IAAI,CAAC,OAAO,GAAG,EAAE,aAAa,EAAE,WAAW,EAAE,WAAW,EAAE,aAAa,EAAE,kBAAkB,EAAE,CAAC;IAChG,CAAC;CACF;AAhBD,gEAgBC;AAED;;;;GAIG;AACH,MAAa,wBAAyB,SAAQ,6BAA6B;IAIzE,gBAAgB;IAChB,YAAY,IAAoB;QAC9B,KAAK,CAAC,IAAI,CAAC,CAAC;QALd,gBAAgB;QAChB,SAAI,GAAG,iCAAqB,CAAC;IAK7B,CAAC;CACF;AARD,4DAQC;AAED;;;;GAIG;AACH,MAAa,yBAA0B,SAAQ,6BAA6B;IAI1E,gBAAgB;IAChB,YAAY,IAAoB;QAC9B,KAAK,CAAC,IAAI,CAAC,CAAC;QALd,gBAAgB;QAChB,SAAI,GAAG,kCAAsB,CAAC;IAK9B,CAAC;CACF;AARD,8DAQC;AAED;;;;GAIG;AACH,MAAa,sBAAuB,SAAQ,6BAA6B;IAMvE,gBAAgB;IAChB,YAAY,IAAoB,EAAE,UAAwC;QACxE,KAAK,CAAC,IAAI,CAAC,CAAC;QALd,gBAAgB;QAChB,SAAI,GAAG,8BAAkB,CAAC;QAKxB,IAAI,CAAC,YAAY,GAAG,UAAU,CAAC,EAAE,CAAC;IACpC,CAAC;CACF;AAXD,wDAWC;AAED;;;;GAIG;AACH,MAAa,oBAAqB,SAAQ,6BAA6B;IAkBrE,gBAAgB;IAChB,YAAY,IAAoB,EAAE,UAAsB,EAAE,0BAAkC;QAC1F,KAAK,CAAC,IAAI,CAAC,CAAC;QALd,gBAAgB;QAChB,SAAI,GAAG,4BAAgB,CAAC;QAKtB,IAAI,CAAC,UAAU,GAAG,IAAA,WAAG,GAAE,GAAG,0BAA0B,CAAC;QACrD,IAAI,CAAC,YAAY,GAAG,UAAU,CAAC,EAAE,CAAC;IACpC,CAAC;CACF;AAxBD,oDAwBC;AAED;;;;GAIG;AACH,MAAa,qBAAsB,SAAQ,6BAA6B;IAWtE,gBAAgB;IAChB,YACE,IAAoB,EACpB,UAAgD,EAChD,MAAiD,EACjD,KAAkB;QAElB,KAAK,CAAC,IAAI,CAAC,CAAC;QAZd,gBAAgB;QAChB,SAAI,GAAG,6BAAiB,CAAC;QAYvB,IAAI,CAAC,YAAY,GAAG,UAAU,CAAC,EAAE,CAAC;QAClC,IAAI,CAAC,MAAM,GAAG,MAAM,CAAC;QACrB,IAAI,CAAC,SAAS,GAAG,UAAU,CAAC,SAAS,CAAC;QACtC,IAAI,CAAC,KAAK,GAAG,KAAK,IAAI,IAAI,CAAC;IAC7B,CAAC;CACF;AAxBD,sDAwBC;AAED;;;;GAIG;AACH,MAAa,8BAA+B,SAAQ,6BAA6B;IAI/E,gBAAgB;IAChB,YAAY,IAAoB;QAC9B,KAAK,CAAC,IAAI,CAAC,CAAC;QALd,gBAAgB;QAChB,SAAI,GAAG,wCAA4B,CAAC;IAKpC,CAAC;CACF;AARD,wEAQC;AAED;;;;GAIG;AACH,MAAa,6BAA8B,SAAQ,6BAA6B;IAe9E,gBAAgB;IAChB,YACE,IAAoB,EACpB,MAAoD,EACpD,YAAoB,EACpB,KAAkB;QAElB,KAAK,CAAC,IAAI,CAAC,CAAC;QAjBd,gBAAgB;QAChB,SAAI,GAAG,uCAA2B,CAAC;QAiBjC,IAAI,CAAC,UAAU,GAAG,IAAA,WAAG,GAAE,GAAG,YAAY,CAAC;QACvC,IAAI,CAAC,MAAM,GAAG,MAAM,CAAC;QACrB,IAAI,CAAC,KAAK,GAAG,KAAK,CAAC;IACrB,CAAC;CACF;AA3BD,sEA2BC;AAED;;;;GAIG;AACH,MAAa,yBAA0B,SAAQ,6BAA6B;IAc1E,gBAAgB;IAChB,YAAY,IAAoB,EAAE,UAAsB,EAAE,YAAoB;QAC5E,KAAK,CAAC,IAAI,CAAC,CAAC;QAbd,gBAAgB;QAChB,SAAI,GAAG,kCAAsB,CAAC;QAa5B,IAAI,CAAC,UAAU,GAAG,IAAA,WAAG,GAAE,GAAG,YAAY,CAAC;QACvC,IAAI,CAAC,YAAY,GAAG,UAAU,CAAC,EAAE,CAAC;IACpC,CAAC;CACF;AApBD,8DAoBC;AAED;;;;GAIG;AACH,MAAa,wBAAyB,SAAQ,6BAA6B;IAMzE,gBAAgB;IAChB,YAAY,IAAoB,EAAE,UAAsB;QACtD,KAAK,CAAC,IAAI,CAAC,CAAC;QALd,gBAAgB;QAChB,SAAI,GAAG,iCAAqB,CAAC;QAK3B,IAAI,CAAC,YAAY,GAAG,UAAU,CAAC,EAAE,CAAC;IACpC,CAAC;CACF;AAXD,4DAWC;AAED;;;;GAIG;AACH,MAAa,0BAA2B,SAAQ,6BAA6B;IAQ3E,gBAAgB;IAChB,YACE,IAAoB,EACpB,UAAyE,EAAE;QAE3E,KAAK,CAAC,IAAI,CAAC,CAAC;QARd,gBAAgB;QAChB,SAAI,GAAG,mCAAuB,CAAC;QAQ7B,IAAI,CAAC,SAAS,GAAG,OAAO,CAAC,SAAS,CAAC;QACnC,IAAI,CAAC,yBAAyB,GAAG,OAAO,CAAC,yBAAyB,CAAC;IACrE,CAAC;CACF;AAjBD,gEAiBC"}

108
node_modules/mongodb/lib/cmap/errors.js generated vendored Normal file
View file

@ -0,0 +1,108 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.WaitQueueTimeoutError = exports.PoolClearedOnNetworkError = exports.PoolClearedError = exports.PoolClosedError = void 0;
const error_1 = require("../error");
/**
* An error indicating a connection pool is closed
* @category Error
*/
class PoolClosedError extends error_1.MongoDriverError {
/**
* **Do not use this constructor!**
*
* Meant for internal use only.
*
* @remarks
* This class is only meant to be constructed within the driver. This constructor is
* not subject to semantic versioning compatibility guarantees and may change at any time.
*
* @public
**/
constructor(pool) {
super('Attempted to check out a connection from closed connection pool');
this.address = pool.address;
}
get name() {
return 'MongoPoolClosedError';
}
}
exports.PoolClosedError = PoolClosedError;
/**
* An error indicating a connection pool is currently paused
* @category Error
*/
class PoolClearedError extends error_1.MongoNetworkError {
/**
* **Do not use this constructor!**
*
* Meant for internal use only.
*
* @remarks
* This class is only meant to be constructed within the driver. This constructor is
* not subject to semantic versioning compatibility guarantees and may change at any time.
*
* @public
**/
constructor(pool, message) {
const errorMessage = message
? message
: `Connection pool for ${pool.address} was cleared because another operation failed with: "${pool.serverError?.message}"`;
super(errorMessage, pool.serverError ? { cause: pool.serverError } : undefined);
this.address = pool.address;
this.addErrorLabel(error_1.MongoErrorLabel.PoolRequestedRetry);
}
get name() {
return 'MongoPoolClearedError';
}
}
exports.PoolClearedError = PoolClearedError;
/**
* An error indicating that a connection pool has been cleared after the monitor for that server timed out.
* @category Error
*/
class PoolClearedOnNetworkError extends PoolClearedError {
/**
* **Do not use this constructor!**
*
* Meant for internal use only.
*
* @remarks
* This class is only meant to be constructed within the driver. This constructor is
* not subject to semantic versioning compatibility guarantees and may change at any time.
*
* @public
**/
constructor(pool) {
super(pool, `Connection to ${pool.address} interrupted due to server monitor timeout`);
}
get name() {
return 'PoolClearedOnNetworkError';
}
}
exports.PoolClearedOnNetworkError = PoolClearedOnNetworkError;
/**
* An error thrown when a request to check out a connection times out
* @category Error
*/
class WaitQueueTimeoutError extends error_1.MongoDriverError {
/**
* **Do not use this constructor!**
*
* Meant for internal use only.
*
* @remarks
* This class is only meant to be constructed within the driver. This constructor is
* not subject to semantic versioning compatibility guarantees and may change at any time.
*
* @public
**/
constructor(message, address) {
super(message);
this.address = address;
}
get name() {
return 'MongoWaitQueueTimeoutError';
}
}
exports.WaitQueueTimeoutError = WaitQueueTimeoutError;
//# sourceMappingURL=errors.js.map
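
Callers usually tell these errors apart by their `name` getters and error labels. A minimal sketch, assuming `err` was thrown by some driver operation; `describePoolError` is a hypothetical helper, and `MongoErrorLabel` is the driver's exported label enum:

const { MongoErrorLabel } = require('mongodb');

// Hypothetical helper: classify a caught error using the `name` getters
// and the PoolRequestedRetry label added by PoolClearedError above.
function describePoolError(err) {
  if (err.name === 'MongoPoolClosedError') return `pool at ${err.address} is closed`;
  if (err.hasErrorLabel?.(MongoErrorLabel.PoolRequestedRetry)) {
    return `pool at ${err.address} was cleared; the operation may be retried`;
  }
  if (err.name === 'MongoWaitQueueTimeoutError') return `checkout timed out for ${err.address}`;
  return 'not a pool error';
}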

1
node_modules/mongodb/lib/cmap/errors.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"errors.js","sourceRoot":"","sources":["../../src/cmap/errors.ts"],"names":[],"mappings":";;;AAAA,oCAAgF;AAGhF;;;GAGG;AACH,MAAa,eAAgB,SAAQ,wBAAgB;IAInD;;;;;;;;;;QAUI;IACJ,YAAY,IAAoB;QAC9B,KAAK,CAAC,iEAAiE,CAAC,CAAC;QACzE,IAAI,CAAC,OAAO,GAAG,IAAI,CAAC,OAAO,CAAC;IAC9B,CAAC;IAED,IAAa,IAAI;QACf,OAAO,sBAAsB,CAAC;IAChC,CAAC;CACF;AAvBD,0CAuBC;AAED;;;GAGG;AACH,MAAa,gBAAiB,SAAQ,yBAAiB;IAIrD;;;;;;;;;;QAUI;IACJ,YAAY,IAAoB,EAAE,OAAgB;QAChD,MAAM,YAAY,GAAG,OAAO;YAC1B,CAAC,CAAC,OAAO;YACT,CAAC,CAAC,uBAAuB,IAAI,CAAC,OAAO,wDAAwD,IAAI,CAAC,WAAW,EAAE,OAAO,GAAG,CAAC;QAC5H,KAAK,CAAC,YAAY,EAAE,IAAI,CAAC,WAAW,CAAC,CAAC,CAAC,EAAE,KAAK,EAAE,IAAI,CAAC,WAAW,EAAE,CAAC,CAAC,CAAC,SAAS,CAAC,CAAC;QAChF,IAAI,CAAC,OAAO,GAAG,IAAI,CAAC,OAAO,CAAC;QAE5B,IAAI,CAAC,aAAa,CAAC,uBAAe,CAAC,kBAAkB,CAAC,CAAC;IACzD,CAAC;IAED,IAAa,IAAI;QACf,OAAO,uBAAuB,CAAC;IACjC,CAAC;CACF;AA5BD,4CA4BC;AAED;;;GAGG;AACH,MAAa,yBAA0B,SAAQ,gBAAgB;IAC7D;;;;;;;;;;QAUI;IACJ,YAAY,IAAoB;QAC9B,KAAK,CAAC,IAAI,EAAE,iBAAiB,IAAI,CAAC,OAAO,4CAA4C,CAAC,CAAC;IACzF,CAAC;IAED,IAAa,IAAI;QACf,OAAO,2BAA2B,CAAC;IACrC,CAAC;CACF;AAnBD,8DAmBC;AAED;;;GAGG;AACH,MAAa,qBAAsB,SAAQ,wBAAgB;IAIzD;;;;;;;;;;QAUI;IACJ,YAAY,OAAe,EAAE,OAAe;QAC1C,KAAK,CAAC,OAAO,CAAC,CAAC;QACf,IAAI,CAAC,OAAO,GAAG,OAAO,CAAC;IACzB,CAAC;IAED,IAAa,IAAI;QACf,OAAO,4BAA4B,CAAC;IACtC,CAAC;CACF;AAvBD,sDAuBC"}

241
node_modules/mongodb/lib/cmap/handshake/client_metadata.js generated vendored Normal file
View file

@ -0,0 +1,241 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.LimitedSizeDocument = void 0;
exports.isDriverInfoEqual = isDriverInfoEqual;
exports.makeClientMetadata = makeClientMetadata;
exports.getFAASEnv = getFAASEnv;
const os = require("os");
const process = require("process");
const bson_1 = require("../../bson");
const error_1 = require("../../error");
const utils_1 = require("../../utils");
// eslint-disable-next-line @typescript-eslint/no-require-imports
const NODE_DRIVER_VERSION = require('../../../package.json').version;
/** @internal */
function isDriverInfoEqual(info1, info2) {
/** for equality comparison, we consider "" as unset */
const nonEmptyCmp = (s1, s2) => {
s1 ||= undefined;
s2 ||= undefined;
return s1 === s2;
};
return (nonEmptyCmp(info1.name, info2.name) &&
nonEmptyCmp(info1.platform, info2.platform) &&
nonEmptyCmp(info1.version, info2.version));
}
/** @internal */
class LimitedSizeDocument {
constructor(maxSize) {
this.document = new Map();
/** BSON overhead: Int32 + Null byte */
this.documentSize = 5;
this.maxSize = maxSize;
}
/** Only adds key/value if the bsonByteLength is less than MAX_SIZE */
ifItFitsItSits(key, value) {
// The BSON byteLength of the new element is the same as serializing it to its own document
// subtracting the document size int32 and the null terminator.
const newElementSize = bson_1.BSON.serialize(new Map().set(key, value)).byteLength - 5;
if (newElementSize + this.documentSize > this.maxSize) {
return false;
}
this.documentSize += newElementSize;
this.document.set(key, value);
return true;
}
toObject() {
return bson_1.BSON.deserialize(bson_1.BSON.serialize(this.document), {
promoteLongs: false,
promoteBuffers: false,
promoteValues: false,
useBigInt64: false
});
}
}
exports.LimitedSizeDocument = LimitedSizeDocument;
/**
* From the specs:
* Implementors SHOULD cumulatively update fields in the following order until the document is under the size limit:
* 1. Omit fields from `env` except `env.name`.
* 2. Omit fields from `os` except `os.type`.
* 3. Omit the `env` document entirely.
 * 4. Truncate `platform`. (Special case: we do not truncate this field.)
*/
async function makeClientMetadata(driverInfoList, { appName = '' }) {
const metadataDocument = new LimitedSizeDocument(512);
// Add app name first, it must be sent
if (appName.length > 0) {
const name = Buffer.byteLength(appName, 'utf8') <= 128
? appName
: Buffer.from(appName, 'utf8').subarray(0, 128).toString('utf8');
metadataDocument.ifItFitsItSits('application', { name });
}
const driverInfo = {
name: 'nodejs',
version: NODE_DRIVER_VERSION
};
// This is where we handle additional driver info added after client construction.
for (const { name: n = '', version: v = '' } of driverInfoList) {
if (n.length > 0) {
driverInfo.name = `${driverInfo.name}|${n}`;
}
if (v.length > 0) {
driverInfo.version = `${driverInfo.version}|${v}`;
}
}
if (!metadataDocument.ifItFitsItSits('driver', driverInfo)) {
throw new error_1.MongoInvalidArgumentError('Unable to include driverInfo name and version, metadata cannot exceed 512 bytes');
}
let runtimeInfo = getRuntimeInfo();
// This is where we handle additional driver info added after client construction.
for (const { platform = '' } of driverInfoList) {
if (platform.length > 0) {
runtimeInfo = `${runtimeInfo}|${platform}`;
}
}
if (!metadataDocument.ifItFitsItSits('platform', runtimeInfo)) {
throw new error_1.MongoInvalidArgumentError('Unable to include driverInfo platform, metadata cannot exceed 512 bytes');
}
// Note: order matters, os.type is last so it will be removed last if we're at maxSize
const osInfo = new Map()
.set('name', process.platform)
.set('architecture', process.arch)
.set('version', os.release())
.set('type', os.type());
if (!metadataDocument.ifItFitsItSits('os', osInfo)) {
for (const key of osInfo.keys()) {
osInfo.delete(key);
if (osInfo.size === 0)
break;
if (metadataDocument.ifItFitsItSits('os', osInfo))
break;
}
}
const faasEnv = getFAASEnv();
if (faasEnv != null) {
if (!metadataDocument.ifItFitsItSits('env', faasEnv)) {
for (const key of faasEnv.keys()) {
faasEnv.delete(key);
if (faasEnv.size === 0)
break;
if (metadataDocument.ifItFitsItSits('env', faasEnv))
break;
}
}
}
return await addContainerMetadata(metadataDocument.toObject());
}
let dockerPromise;
/** @internal */
async function getContainerMetadata() {
dockerPromise ??= (0, utils_1.fileIsAccessible)('/.dockerenv');
const isDocker = await dockerPromise;
const { KUBERNETES_SERVICE_HOST = '' } = process.env;
const isKubernetes = KUBERNETES_SERVICE_HOST.length > 0 ? true : false;
const containerMetadata = {};
if (isDocker)
containerMetadata.runtime = 'docker';
if (isKubernetes)
containerMetadata.orchestrator = 'kubernetes';
return containerMetadata;
}
/**
* @internal
* Re-add each metadata value.
* Attempt to add new env container metadata, but keep old data if it does not fit.
*/
async function addContainerMetadata(originalMetadata) {
const containerMetadata = await getContainerMetadata();
if (Object.keys(containerMetadata).length === 0)
return originalMetadata;
const extendedMetadata = new LimitedSizeDocument(512);
const extendedEnvMetadata = {
...originalMetadata?.env,
container: containerMetadata
};
for (const [key, val] of Object.entries(originalMetadata)) {
if (key !== 'env') {
extendedMetadata.ifItFitsItSits(key, val);
}
else {
if (!extendedMetadata.ifItFitsItSits('env', extendedEnvMetadata)) {
// add in old data if newer / extended metadata does not fit
extendedMetadata.ifItFitsItSits('env', val);
}
}
}
if (!('env' in originalMetadata)) {
extendedMetadata.ifItFitsItSits('env', extendedEnvMetadata);
}
return extendedMetadata.toObject();
}
/**
* Collects FaaS metadata.
* - `name` MUST be the last key in the Map returned.
*/
function getFAASEnv() {
const { AWS_EXECUTION_ENV = '', AWS_LAMBDA_RUNTIME_API = '', FUNCTIONS_WORKER_RUNTIME = '', K_SERVICE = '', FUNCTION_NAME = '', VERCEL = '', AWS_LAMBDA_FUNCTION_MEMORY_SIZE = '', AWS_REGION = '', FUNCTION_MEMORY_MB = '', FUNCTION_REGION = '', FUNCTION_TIMEOUT_SEC = '', VERCEL_REGION = '' } = process.env;
const isAWSFaaS = AWS_EXECUTION_ENV.startsWith('AWS_Lambda_') || AWS_LAMBDA_RUNTIME_API.length > 0;
const isAzureFaaS = FUNCTIONS_WORKER_RUNTIME.length > 0;
const isGCPFaaS = K_SERVICE.length > 0 || FUNCTION_NAME.length > 0;
const isVercelFaaS = VERCEL.length > 0;
// Note: order matters, name must always be the last key
const faasEnv = new Map();
// When isVercelFaaS is true so is isAWSFaaS; Vercel inherits the AWS env
if (isVercelFaaS && !(isAzureFaaS || isGCPFaaS)) {
if (VERCEL_REGION.length > 0) {
faasEnv.set('region', VERCEL_REGION);
}
faasEnv.set('name', 'vercel');
return faasEnv;
}
if (isAWSFaaS && !(isAzureFaaS || isGCPFaaS || isVercelFaaS)) {
if (AWS_REGION.length > 0) {
faasEnv.set('region', AWS_REGION);
}
if (AWS_LAMBDA_FUNCTION_MEMORY_SIZE.length > 0 &&
Number.isInteger(+AWS_LAMBDA_FUNCTION_MEMORY_SIZE)) {
faasEnv.set('memory_mb', new bson_1.Int32(AWS_LAMBDA_FUNCTION_MEMORY_SIZE));
}
faasEnv.set('name', 'aws.lambda');
return faasEnv;
}
if (isAzureFaaS && !(isGCPFaaS || isAWSFaaS || isVercelFaaS)) {
faasEnv.set('name', 'azure.func');
return faasEnv;
}
if (isGCPFaaS && !(isAzureFaaS || isAWSFaaS || isVercelFaaS)) {
if (FUNCTION_REGION.length > 0) {
faasEnv.set('region', FUNCTION_REGION);
}
if (FUNCTION_MEMORY_MB.length > 0 && Number.isInteger(+FUNCTION_MEMORY_MB)) {
faasEnv.set('memory_mb', new bson_1.Int32(FUNCTION_MEMORY_MB));
}
if (FUNCTION_TIMEOUT_SEC.length > 0 && Number.isInteger(+FUNCTION_TIMEOUT_SEC)) {
faasEnv.set('timeout_sec', new bson_1.Int32(FUNCTION_TIMEOUT_SEC));
}
faasEnv.set('name', 'gcp.func');
return faasEnv;
}
return null;
}
/**
* @internal
* Get current JavaScript runtime platform
*
* NOTE: The version information fetching is intentionally written defensively
* to avoid having a released driver version that becomes incompatible
* with a future change to these global objects.
*/
function getRuntimeInfo() {
if ('Deno' in globalThis) {
const version = typeof Deno?.version?.deno === 'string' ? Deno?.version?.deno : '0.0.0-unknown';
return `Deno v${version}, ${os.endianness()}`;
}
if ('Bun' in globalThis) {
const version = typeof Bun?.version === 'string' ? Bun?.version : '0.0.0-unknown';
return `Bun v${version}, ${os.endianness()}`;
}
return `Node.js ${process.version}, ${os.endianness()}`;
}
//# sourceMappingURL=client_metadata.js.map
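
The pipe-joining above means a wrapping library's driverInfo is appended to the base `nodejs` identifiers rather than replacing them. A minimal sketch, assuming this internal module path (not covered by semver guarantees) and illustrative `myFramework`/`myApp` values:

const { makeClientMetadata } = require('mongodb/lib/cmap/handshake/client_metadata');

async function demo() {
  const metadata = await makeClientMetadata(
    [{ name: 'myFramework', version: '1.2.3', platform: 'myRuntime' }],
    { appName: 'myApp' }
  );
  // Expected shape, per the code above:
  //   metadata.application -> { name: 'myApp' }
  //   metadata.driver.name -> 'nodejs|myFramework'
  //   metadata.platform    -> '<runtime info>|myRuntime'
  console.log(metadata);
}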

node_modules/mongodb/lib/cmap/handshake/client_metadata.js.map generated vendored Normal file
File diff suppressed because one or more lines are too long

62
node_modules/mongodb/lib/cmap/metrics.js generated vendored Normal file
View file

@ -0,0 +1,62 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ConnectionPoolMetrics = void 0;
/** @internal */
class ConnectionPoolMetrics {
constructor() {
this.txnConnections = 0;
this.cursorConnections = 0;
this.otherConnections = 0;
}
static { this.TXN = 'txn'; }
static { this.CURSOR = 'cursor'; }
static { this.OTHER = 'other'; }
/**
* Mark a connection as pinned for a specific operation.
*/
markPinned(pinType) {
if (pinType === ConnectionPoolMetrics.TXN) {
this.txnConnections += 1;
}
else if (pinType === ConnectionPoolMetrics.CURSOR) {
this.cursorConnections += 1;
}
else {
this.otherConnections += 1;
}
}
/**
* Unmark a connection as pinned for an operation.
*/
markUnpinned(pinType) {
if (pinType === ConnectionPoolMetrics.TXN) {
this.txnConnections -= 1;
}
else if (pinType === ConnectionPoolMetrics.CURSOR) {
this.cursorConnections -= 1;
}
else {
this.otherConnections -= 1;
}
}
/**
* Return information about the cmap metrics as a string.
*/
info(maxPoolSize) {
return ('Timed out while checking out a connection from connection pool: ' +
`maxPoolSize: ${maxPoolSize}, ` +
`connections in use by cursors: ${this.cursorConnections}, ` +
`connections in use by transactions: ${this.txnConnections}, ` +
`connections in use by other operations: ${this.otherConnections}`);
}
/**
* Reset the metrics to the initial values.
*/
reset() {
this.txnConnections = 0;
this.cursorConnections = 0;
this.otherConnections = 0;
}
}
exports.ConnectionPoolMetrics = ConnectionPoolMetrics;
//# sourceMappingURL=metrics.js.map
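
These counters exist so the wait-queue timeout message can say which kinds of operations are holding connections. A minimal sketch using the class directly (internal module path; values illustrative):

const { ConnectionPoolMetrics } = require('mongodb/lib/cmap/metrics');

const metrics = new ConnectionPoolMetrics();
metrics.markPinned(ConnectionPoolMetrics.TXN);    // a transaction pins a connection
metrics.markPinned(ConnectionPoolMetrics.CURSOR); // a cursor pins another
metrics.markUnpinned(ConnectionPoolMetrics.TXN);  // the transaction finishes
console.log(metrics.info(100));
// -> "Timed out while checking out a connection from connection pool:
//     maxPoolSize: 100, connections in use by cursors: 1,
//     connections in use by transactions: 0, connections in use by other operations: 0"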

1
node_modules/mongodb/lib/cmap/metrics.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"metrics.js","sourceRoot":"","sources":["../../src/cmap/metrics.ts"],"names":[],"mappings":";;;AAAA,gBAAgB;AAChB,MAAa,qBAAqB;IAAlC;QAKE,mBAAc,GAAG,CAAC,CAAC;QACnB,sBAAiB,GAAG,CAAC,CAAC;QACtB,qBAAgB,GAAG,CAAC,CAAC;IAiDvB,CAAC;aAvDiB,QAAG,GAAG,KAAc,AAAjB,CAAkB;aACrB,WAAM,GAAG,QAAiB,AAApB,CAAqB;aAC3B,UAAK,GAAG,OAAgB,AAAnB,CAAoB;IAMzC;;OAEG;IACH,UAAU,CAAC,OAAe;QACxB,IAAI,OAAO,KAAK,qBAAqB,CAAC,GAAG,EAAE,CAAC;YAC1C,IAAI,CAAC,cAAc,IAAI,CAAC,CAAC;QAC3B,CAAC;aAAM,IAAI,OAAO,KAAK,qBAAqB,CAAC,MAAM,EAAE,CAAC;YACpD,IAAI,CAAC,iBAAiB,IAAI,CAAC,CAAC;QAC9B,CAAC;aAAM,CAAC;YACN,IAAI,CAAC,gBAAgB,IAAI,CAAC,CAAC;QAC7B,CAAC;IACH,CAAC;IAED;;OAEG;IACH,YAAY,CAAC,OAAe;QAC1B,IAAI,OAAO,KAAK,qBAAqB,CAAC,GAAG,EAAE,CAAC;YAC1C,IAAI,CAAC,cAAc,IAAI,CAAC,CAAC;QAC3B,CAAC;aAAM,IAAI,OAAO,KAAK,qBAAqB,CAAC,MAAM,EAAE,CAAC;YACpD,IAAI,CAAC,iBAAiB,IAAI,CAAC,CAAC;QAC9B,CAAC;aAAM,CAAC;YACN,IAAI,CAAC,gBAAgB,IAAI,CAAC,CAAC;QAC7B,CAAC;IACH,CAAC;IAED;;OAEG;IACH,IAAI,CAAC,WAAmB;QACtB,OAAO,CACL,kEAAkE;YAClE,gBAAgB,WAAW,IAAI;YAC/B,kCAAkC,IAAI,CAAC,iBAAiB,IAAI;YAC5D,uCAAuC,IAAI,CAAC,cAAc,IAAI;YAC9D,2CAA2C,IAAI,CAAC,gBAAgB,EAAE,CACnE,CAAC;IACJ,CAAC;IAED;;OAEG;IACH,KAAK;QACH,IAAI,CAAC,cAAc,GAAG,CAAC,CAAC;QACxB,IAAI,CAAC,iBAAiB,GAAG,CAAC,CAAC;QAC3B,IAAI,CAAC,gBAAgB,GAAG,CAAC,CAAC;IAC5B,CAAC;;AAvDH,sDAwDC"}

70
node_modules/mongodb/lib/cmap/stream_description.js generated vendored Normal file
View file

@ -0,0 +1,70 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.StreamDescription = void 0;
const bson_1 = require("../bson");
const common_1 = require("../sdam/common");
const server_description_1 = require("../sdam/server_description");
const RESPONSE_FIELDS = [
'minWireVersion',
'maxWireVersion',
'maxBsonObjectSize',
'maxMessageSizeBytes',
'maxWriteBatchSize',
'logicalSessionTimeoutMinutes'
];
/** @public */
class StreamDescription {
constructor(address, options) {
this.hello = null;
this.address = address;
this.type = common_1.ServerType.Unknown;
this.minWireVersion = undefined;
this.maxWireVersion = undefined;
this.maxBsonObjectSize = 16777216;
this.maxMessageSizeBytes = 48000000;
this.maxWriteBatchSize = 100000;
this.logicalSessionTimeoutMinutes = options?.logicalSessionTimeoutMinutes;
this.loadBalanced = !!options?.loadBalanced;
this.compressors =
options && options.compressors && Array.isArray(options.compressors)
? options.compressors
: [];
this.serverConnectionId = null;
}
receiveResponse(response) {
if (response == null) {
return;
}
this.hello = response;
this.type = (0, server_description_1.parseServerType)(response);
if ('connectionId' in response) {
this.serverConnectionId = this.parseServerConnectionID(response.connectionId);
}
else {
this.serverConnectionId = null;
}
for (const field of RESPONSE_FIELDS) {
if (response[field] != null) {
this[field] = response[field];
}
// testing case
if ('__nodejs_mock_server__' in response) {
this.__nodejs_mock_server__ = response['__nodejs_mock_server__'];
}
}
if (response.compression) {
this.compressor = this.compressors.filter(c => response.compression?.includes(c))[0];
}
}
/* @internal */
parseServerConnectionID(serverConnectionId) {
// Connection ids are always integral, so it's safe to coerce doubles as well as
// any integral type.
return bson_1.Long.isLong(serverConnectionId)
? serverConnectionId.toBigInt()
: // @ts-expect-error: Doubles are coercible to number
BigInt(serverConnectionId);
}
}
exports.StreamDescription = StreamDescription;
//# sourceMappingURL=stream_description.js.map
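
receiveResponse copies the negotiated limits out of a hello-style response and picks the first client-side compressor the server also advertises. A minimal sketch with a hand-built response document (internal module path; field values illustrative):

const { StreamDescription } = require('mongodb/lib/cmap/stream_description');

const description = new StreamDescription('localhost:27017', { compressors: ['zstd', 'zlib'] });
description.receiveResponse({
  ok: 1,
  minWireVersion: 0,
  maxWireVersion: 21,
  compression: ['zlib'], // the server's side of the negotiation
  connectionId: 42
});
console.log(description.maxWireVersion);     // 21
console.log(description.compressor);         // 'zlib' (first client compressor the server supports)
console.log(description.serverConnectionId); // 42n (coerced to BigInt)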

1
node_modules/mongodb/lib/cmap/stream_description.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"stream_description.js","sourceRoot":"","sources":["../../src/cmap/stream_description.ts"],"names":[],"mappings":";;;AAAA,kCAA2D;AAC3D,2CAA4C;AAC5C,mEAA6D;AAG7D,MAAM,eAAe,GAAG;IACtB,gBAAgB;IAChB,gBAAgB;IAChB,mBAAmB;IACnB,qBAAqB;IACrB,mBAAmB;IACnB,8BAA8B;CACtB,CAAC;AASX,cAAc;AACd,MAAa,iBAAiB;IAoB5B,YAAY,OAAe,EAAE,OAAkC;QAFxD,UAAK,GAAoB,IAAI,CAAC;QAGnC,IAAI,CAAC,OAAO,GAAG,OAAO,CAAC;QACvB,IAAI,CAAC,IAAI,GAAG,mBAAU,CAAC,OAAO,CAAC;QAC/B,IAAI,CAAC,cAAc,GAAG,SAAS,CAAC;QAChC,IAAI,CAAC,cAAc,GAAG,SAAS,CAAC;QAChC,IAAI,CAAC,iBAAiB,GAAG,QAAQ,CAAC;QAClC,IAAI,CAAC,mBAAmB,GAAG,QAAQ,CAAC;QACpC,IAAI,CAAC,iBAAiB,GAAG,MAAM,CAAC;QAChC,IAAI,CAAC,4BAA4B,GAAG,OAAO,EAAE,4BAA4B,CAAC;QAC1E,IAAI,CAAC,YAAY,GAAG,CAAC,CAAC,OAAO,EAAE,YAAY,CAAC;QAC5C,IAAI,CAAC,WAAW;YACd,OAAO,IAAI,OAAO,CAAC,WAAW,IAAI,KAAK,CAAC,OAAO,CAAC,OAAO,CAAC,WAAW,CAAC;gBAClE,CAAC,CAAC,OAAO,CAAC,WAAW;gBACrB,CAAC,CAAC,EAAE,CAAC;QACT,IAAI,CAAC,kBAAkB,GAAG,IAAI,CAAC;IACjC,CAAC;IAED,eAAe,CAAC,QAAyB;QACvC,IAAI,QAAQ,IAAI,IAAI,EAAE,CAAC;YACrB,OAAO;QACT,CAAC;QACD,IAAI,CAAC,KAAK,GAAG,QAAQ,CAAC;QACtB,IAAI,CAAC,IAAI,GAAG,IAAA,oCAAe,EAAC,QAAQ,CAAC,CAAC;QACtC,IAAI,cAAc,IAAI,QAAQ,EAAE,CAAC;YAC/B,IAAI,CAAC,kBAAkB,GAAG,IAAI,CAAC,uBAAuB,CAAC,QAAQ,CAAC,YAAY,CAAC,CAAC;QAChF,CAAC;aAAM,CAAC;YACN,IAAI,CAAC,kBAAkB,GAAG,IAAI,CAAC;QACjC,CAAC;QACD,KAAK,MAAM,KAAK,IAAI,eAAe,EAAE,CAAC;YACpC,IAAI,QAAQ,CAAC,KAAK,CAAC,IAAI,IAAI,EAAE,CAAC;gBAC5B,IAAI,CAAC,KAAK,CAAC,GAAG,QAAQ,CAAC,KAAK,CAAC,CAAC;YAChC,CAAC;YAED,eAAe;YACf,IAAI,wBAAwB,IAAI,QAAQ,EAAE,CAAC;gBACzC,IAAI,CAAC,sBAAsB,GAAG,QAAQ,CAAC,wBAAwB,CAAC,CAAC;YACnE,CAAC;QACH,CAAC;QAED,IAAI,QAAQ,CAAC,WAAW,EAAE,CAAC;YACzB,IAAI,CAAC,UAAU,GAAG,IAAI,CAAC,WAAW,CAAC,MAAM,CAAC,CAAC,CAAC,EAAE,CAAC,QAAQ,CAAC,WAAW,EAAE,QAAQ,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC,CAAC;QACvF,CAAC;IACH,CAAC;IAED,eAAe;IACf,uBAAuB,CAAC,kBAAmD;QACzE,gFAAgF;QAChF,qBAAqB;QACrB,OAAO,WAAI,CAAC,MAAM,CAAC,kBAAkB,CAAC;YACpC,CAAC,CAAC,kBAAkB,CAAC,QAAQ,EAAE;YAC/B,CAAC,CAAC,oDAAoD;gBACpD,MAAM,CAAC,kBAAkB,CAAC,CAAC;IACjC,CAAC;CACF;AAzED,8CAyEC"}

163
node_modules/mongodb/lib/cmap/wire_protocol/compression.js generated vendored Normal file
View file

@ -0,0 +1,163 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.uncompressibleCommands = exports.Compressor = void 0;
exports.compress = compress;
exports.decompress = decompress;
exports.compressCommand = compressCommand;
exports.decompressResponse = decompressResponse;
const util_1 = require("util");
const zlib = require("zlib");
const constants_1 = require("../../constants");
const deps_1 = require("../../deps");
const error_1 = require("../../error");
const commands_1 = require("../commands");
const constants_2 = require("./constants");
/** @public */
exports.Compressor = Object.freeze({
none: 0,
snappy: 1,
zlib: 2,
zstd: 3
});
exports.uncompressibleCommands = new Set([
constants_1.LEGACY_HELLO_COMMAND,
'saslStart',
'saslContinue',
'getnonce',
'authenticate',
'createUser',
'updateUser',
'copydbSaslStart',
'copydbgetnonce',
'copydb'
]);
const ZSTD_COMPRESSION_LEVEL = 3;
const zlibInflate = (0, util_1.promisify)(zlib.inflate.bind(zlib));
const zlibDeflate = (0, util_1.promisify)(zlib.deflate.bind(zlib));
let zstd;
let Snappy = null;
function loadSnappy() {
if (Snappy == null) {
const snappyImport = (0, deps_1.getSnappy)();
if ('kModuleError' in snappyImport) {
throw snappyImport.kModuleError;
}
Snappy = snappyImport;
}
return Snappy;
}
// Facilitate compressing a message using an agreed compressor
async function compress(options, dataToBeCompressed) {
const zlibOptions = {};
switch (options.agreedCompressor) {
case 'snappy': {
Snappy ??= loadSnappy();
return await Snappy.compress(dataToBeCompressed);
}
case 'zstd': {
loadZstd();
if ('kModuleError' in zstd) {
throw zstd['kModuleError'];
}
return await zstd.compress(dataToBeCompressed, ZSTD_COMPRESSION_LEVEL);
}
case 'zlib': {
if (options.zlibCompressionLevel) {
zlibOptions.level = options.zlibCompressionLevel;
}
return await zlibDeflate(dataToBeCompressed, zlibOptions);
}
default: {
throw new error_1.MongoInvalidArgumentError(`Unknown compressor ${options.agreedCompressor} failed to compress`);
}
}
}
// Decompress a message using the given compressor
async function decompress(compressorID, compressedData) {
if (compressorID !== exports.Compressor.snappy &&
compressorID !== exports.Compressor.zstd &&
compressorID !== exports.Compressor.zlib &&
compressorID !== exports.Compressor.none) {
throw new error_1.MongoDecompressionError(`Server sent message compressed using an unsupported compressor. (Received compressor ID ${compressorID})`);
}
switch (compressorID) {
case exports.Compressor.snappy: {
Snappy ??= loadSnappy();
return await Snappy.uncompress(compressedData, { asBuffer: true });
}
case exports.Compressor.zstd: {
loadZstd();
if ('kModuleError' in zstd) {
throw zstd['kModuleError'];
}
return await zstd.decompress(compressedData);
}
case exports.Compressor.zlib: {
return await zlibInflate(compressedData);
}
default: {
return compressedData;
}
}
}
/**
* Load ZStandard if it is not already set.
*/
function loadZstd() {
if (!zstd) {
zstd = (0, deps_1.getZstdLibrary)();
}
}
const MESSAGE_HEADER_SIZE = 16;
/**
* @internal
*
* Compresses an OP_MSG or OP_QUERY message, if compression is configured. This method
* also serializes the command to BSON.
*/
async function compressCommand(command, description) {
const finalCommand = description.agreedCompressor === 'none' || !commands_1.OpCompressedRequest.canCompress(command)
? command
: new commands_1.OpCompressedRequest(command, {
agreedCompressor: description.agreedCompressor ?? 'none',
zlibCompressionLevel: description.zlibCompressionLevel ?? 0
});
const data = await finalCommand.toBin();
return Buffer.concat(data);
}
/**
* @internal
*
* Decompresses an OP_MSG or OP_QUERY response from the server, if compression is configured.
*
* This method does not parse the response's BSON.
*/
async function decompressResponse(message) {
const messageHeader = {
length: message.readInt32LE(0),
requestId: message.readInt32LE(4),
responseTo: message.readInt32LE(8),
opCode: message.readInt32LE(12)
};
if (messageHeader.opCode !== constants_2.OP_COMPRESSED) {
const ResponseType = messageHeader.opCode === constants_2.OP_MSG ? commands_1.OpMsgResponse : commands_1.OpReply;
const messageBody = message.subarray(MESSAGE_HEADER_SIZE);
return new ResponseType(message, messageHeader, messageBody);
}
const header = {
...messageHeader,
fromCompressed: true,
opCode: message.readInt32LE(MESSAGE_HEADER_SIZE),
length: message.readInt32LE(MESSAGE_HEADER_SIZE + 4)
};
const compressorID = message[MESSAGE_HEADER_SIZE + 8];
const compressedBuffer = message.slice(MESSAGE_HEADER_SIZE + 9);
// recalculate based on wrapped opcode
const ResponseType = header.opCode === constants_2.OP_MSG ? commands_1.OpMsgResponse : commands_1.OpReply;
const messageBody = await decompress(compressorID, compressedBuffer);
if (messageBody.length !== header.length) {
throw new error_1.MongoDecompressionError('Message body and message header must be the same length');
}
return new ResponseType(message, header, messageBody);
}
//# sourceMappingURL=compression.js.map
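
Of the three compressors, only zlib is built into Node.js; snappy and zstd require optional packages. A minimal round-trip sketch over the zlib path (internal module path):

const { compress, decompress, Compressor } = require('mongodb/lib/cmap/wire_protocol/compression');

async function demo() {
  const payload = Buffer.from('hello wire protocol');
  const squeezed = await compress(
    { agreedCompressor: 'zlib', zlibCompressionLevel: 6 },
    payload
  );
  const restored = await decompress(Compressor.zlib, squeezed);
  console.log(restored.equals(payload)); // true
}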

1
node_modules/mongodb/lib/cmap/wire_protocol/compression.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"compression.js","sourceRoot":"","sources":["../../../src/cmap/wire_protocol/compression.ts"],"names":[],"mappings":";;;AA8DA,4BA6BC;AAGD,gCA+BC;AAmBD,0CAaC;AASD,gDA8BC;AApMD,+BAAiC;AACjC,6BAA6B;AAE7B,+CAAuD;AACvD,qCAAuF;AACvF,uCAAiF;AACjF,0CAOqB;AACrB,2CAAoD;AAEpD,cAAc;AACD,QAAA,UAAU,GAAG,MAAM,CAAC,MAAM,CAAC;IACtC,IAAI,EAAE,CAAC;IACP,MAAM,EAAE,CAAC;IACT,IAAI,EAAE,CAAC;IACP,IAAI,EAAE,CAAC;CACC,CAAC,CAAC;AAQC,QAAA,sBAAsB,GAAG,IAAI,GAAG,CAAC;IAC5C,gCAAoB;IACpB,WAAW;IACX,cAAc;IACd,UAAU;IACV,cAAc;IACd,YAAY;IACZ,YAAY;IACZ,iBAAiB;IACjB,gBAAgB;IAChB,QAAQ;CACT,CAAC,CAAC;AAEH,MAAM,sBAAsB,GAAG,CAAC,CAAC;AAEjC,MAAM,WAAW,GAAG,IAAA,gBAAS,EAAC,IAAI,CAAC,OAAO,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC;AACvD,MAAM,WAAW,GAAG,IAAA,gBAAS,EAAC,IAAI,CAAC,OAAO,CAAC,IAAI,CAAC,IAAI,CAAC,CAAC,CAAC;AAEvD,IAAI,IAAe,CAAC;AACpB,IAAI,MAAM,GAAqB,IAAI,CAAC;AACpC,SAAS,UAAU;IACjB,IAAI,MAAM,IAAI,IAAI,EAAE,CAAC;QACnB,MAAM,YAAY,GAAG,IAAA,gBAAS,GAAE,CAAC;QACjC,IAAI,cAAc,IAAI,YAAY,EAAE,CAAC;YACnC,MAAM,YAAY,CAAC,YAAY,CAAC;QAClC,CAAC;QACD,MAAM,GAAG,YAAY,CAAC;IACxB,CAAC;IACD,OAAO,MAAM,CAAC;AAChB,CAAC;AAED,8DAA8D;AACvD,KAAK,UAAU,QAAQ,CAC5B,OAAmC,EACnC,kBAA0B;IAE1B,MAAM,WAAW,GAAG,EAAsB,CAAC;IAC3C,QAAQ,OAAO,CAAC,gBAAgB,EAAE,CAAC;QACjC,KAAK,QAAQ,CAAC,CAAC,CAAC;YACd,MAAM,KAAK,UAAU,EAAE,CAAC;YACxB,OAAO,MAAM,MAAM,CAAC,QAAQ,CAAC,kBAAkB,CAAC,CAAC;QACnD,CAAC;QACD,KAAK,MAAM,CAAC,CAAC,CAAC;YACZ,QAAQ,EAAE,CAAC;YACX,IAAI,cAAc,IAAI,IAAI,EAAE,CAAC;gBAC3B,MAAM,IAAI,CAAC,cAAc,CAAC,CAAC;YAC7B,CAAC;YACD,OAAO,MAAM,IAAI,CAAC,QAAQ,CAAC,kBAAkB,EAAE,sBAAsB,CAAC,CAAC;QACzE,CAAC;QACD,KAAK,MAAM,CAAC,CAAC,CAAC;YACZ,IAAI,OAAO,CAAC,oBAAoB,EAAE,CAAC;gBACjC,WAAW,CAAC,KAAK,GAAG,OAAO,CAAC,oBAAoB,CAAC;YACnD,CAAC;YACD,OAAO,MAAM,WAAW,CAAC,kBAAkB,EAAE,WAAW,CAAC,CAAC;QAC5D,CAAC;QACD,OAAO,CAAC,CAAC,CAAC;YACR,MAAM,IAAI,iCAAyB,CACjC,sBAAsB,OAAO,CAAC,gBAAgB,qBAAqB,CACpE,CAAC;QACJ,CAAC;IACH,CAAC;AACH,CAAC;AAED,kDAAkD;AAC3C,KAAK,UAAU,UAAU,CAAC,YAAoB,EAAE,cAAsB;IAC3E,IACE,YAAY,KAAK,kBAAU,CAAC,MAAM;QAClC,YAAY,KAAK,kBAAU,CAAC,IAAI;QAChC,YAAY,KAAK,kBAAU,CAAC,IAAI;QAChC,YAAY,KAAK,kBAAU,CAAC,IAAI,EAChC,CAAC;QACD,MAAM,IAAI,+BAAuB,CAC/B,2FAA2F,YAAY,GAAG,CAC3G,CAAC;IACJ,CAAC;IAED,QAAQ,YAAY,EAAE,CAAC;QACrB,KAAK,kBAAU,CAAC,MAAM,CAAC,CAAC,CAAC;YACvB,MAAM,KAAK,UAAU,EAAE,CAAC;YACxB,OAAO,MAAM,MAAM,CAAC,UAAU,CAAC,cAAc,EAAE,EAAE,QAAQ,EAAE,IAAI,EAAE,CAAC,CAAC;QACrE,CAAC;QACD,KAAK,kBAAU,CAAC,IAAI,CAAC,CAAC,CAAC;YACrB,QAAQ,EAAE,CAAC;YACX,IAAI,cAAc,IAAI,IAAI,EAAE,CAAC;gBAC3B,MAAM,IAAI,CAAC,cAAc,CAAC,CAAC;YAC7B,CAAC;YACD,OAAO,MAAM,IAAI,CAAC,UAAU,CAAC,cAAc,CAAC,CAAC;QAC/C,CAAC;QACD,KAAK,kBAAU,CAAC,IAAI,CAAC,CAAC,CAAC;YACrB,OAAO,MAAM,WAAW,CAAC,cAAc,CAAC,CAAC;QAC3C,CAAC;QACD,OAAO,CAAC,CAAC,CAAC;YACR,OAAO,cAAc,CAAC;QACxB,CAAC;IACH,CAAC;AACH,CAAC;AAED;;GAEG;AACH,SAAS,QAAQ;IACf,IAAI,CAAC,IAAI,EAAE,CAAC;QACV,IAAI,GAAG,IAAA,qBAAc,GAAE,CAAC;IAC1B,CAAC;AACH,CAAC;AAED,MAAM,mBAAmB,GAAG,EAAE,CAAC;AAE/B;;;;;GAKG;AACI,KAAK,UAAU,eAAe,CACnC,OAAiC,EACjC,WAAiF;IAEjF,MAAM,YAAY,GAChB,WAAW,CAAC,gBAAgB,KAAK,MAAM,IAAI,CAAC,8BAAmB,CAAC,WAAW,CAAC,OAAO,CAAC;QAClF,CAAC,CAAC,OAAO;QACT,CAAC,CAAC,IAAI,8BAAmB,CAAC,OAAO,EAAE;YAC/B,gBAAgB,EAAE,WAAW,CAAC,gBAAgB,IAAI,MAAM;YACxD,oBAAoB,EAAE,WAAW,CAAC,oBAAoB,IAAI,CAAC;SAC5D,CAAC,CAAC;IACT,MAAM,IAAI,GAAG,MAAM,YAAY,CAAC,KAAK,EAAE,CAAC;IACxC,OAAO,MAAM,CAAC,MAAM,CAAC,IAAI,CAAC,CAAC;AAC7B,CAAC;AAED;;;;;;GAMG;AACI,KAAK,UAAU,kBAAkB,CAAC,OAAe;IACtD,MAAM,aAAa,GAAkB;QACnC,MAAM,EAAE,OAAO,CAAC,WAAW,CAAC,CAAC,CAAC;QAC9B,SAAS,EAAE,OAAO,CAAC,WAAW,CAAC,CAAC,CAAC;QACjC,UAAU,EAAE,OAAO,CAAC,WAAW,CAAC,CAAC,CAAC;QAClC,MAAM,EAAE,OAAO,CAAC,WAAW,CAAC,EAAE,CAAC;KAChC,CAAC;IAEF,IAAI,aAAa,CAAC,MAAM,KA
AK,yBAAa,EAAE,CAAC;QAC3C,MAAM,YAAY,GAAG,aAAa,CAAC,MAAM,KAAK,kBAAM,CAAC,CAAC,CAAC,wBAAa,CAAC,CAAC,CAAC,kBAAO,CAAC;QAC/E,MAAM,WAAW,GAAG,OAAO,CAAC,QAAQ,CAAC,mBAAmB,CAAC,CAAC;QAC1D,OAAO,IAAI,YAAY,CAAC,OAAO,EAAE,aAAa,EAAE,WAAW,CAAC,CAAC;IAC/D,CAAC;IAED,MAAM,MAAM,GAAkB;QAC5B,GAAG,aAAa;QAChB,cAAc,EAAE,IAAI;QACpB,MAAM,EAAE,OAAO,CAAC,WAAW,CAAC,mBAAmB,CAAC;QAChD,MAAM,EAAE,OAAO,CAAC,WAAW,CAAC,mBAAmB,GAAG,CAAC,CAAC;KACrD,CAAC;IACF,MAAM,YAAY,GAAG,OAAO,CAAC,mBAAmB,GAAG,CAAC,CAAC,CAAC;IACtD,MAAM,gBAAgB,GAAG,OAAO,CAAC,KAAK,CAAC,mBAAmB,GAAG,CAAC,CAAC,CAAC;IAEhE,sCAAsC;IACtC,MAAM,YAAY,GAAG,MAAM,CAAC,MAAM,KAAK,kBAAM,CAAC,CAAC,CAAC,wBAAa,CAAC,CAAC,CAAC,kBAAO,CAAC;IACxE,MAAM,WAAW,GAAG,MAAM,UAAU,CAAC,YAAY,EAAE,gBAAgB,CAAC,CAAC;IACrE,IAAI,WAAW,CAAC,MAAM,KAAK,MAAM,CAAC,MAAM,EAAE,CAAC;QACzC,MAAM,IAAI,+BAAuB,CAAC,yDAAyD,CAAC,CAAC;IAC/F,CAAC;IACD,OAAO,IAAI,YAAY,CAAC,OAAO,EAAE,MAAM,EAAE,WAAW,CAAC,CAAC;AACxD,CAAC"}

21
node_modules/mongodb/lib/cmap/wire_protocol/constants.js generated vendored Normal file
View file

@ -0,0 +1,21 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.OP_MSG = exports.OP_COMPRESSED = exports.OP_DELETE = exports.OP_QUERY = exports.OP_INSERT = exports.OP_UPDATE = exports.OP_REPLY = exports.MIN_SUPPORTED_RAW_DATA_SERVER_VERSION = exports.MIN_SUPPORTED_RAW_DATA_WIRE_VERSION = exports.MIN_SUPPORTED_QE_SERVER_VERSION = exports.MIN_SUPPORTED_QE_WIRE_VERSION = exports.MAX_SUPPORTED_WIRE_VERSION = exports.MIN_SUPPORTED_WIRE_VERSION = exports.MIN_SUPPORTED_SNAPSHOT_READS_SERVER_VERSION = exports.MIN_SUPPORTED_SNAPSHOT_READS_WIRE_VERSION = exports.MAX_SUPPORTED_SERVER_VERSION = exports.MIN_SUPPORTED_SERVER_VERSION = void 0;
exports.MIN_SUPPORTED_SERVER_VERSION = '4.2';
exports.MAX_SUPPORTED_SERVER_VERSION = '8.2';
exports.MIN_SUPPORTED_SNAPSHOT_READS_WIRE_VERSION = 13;
exports.MIN_SUPPORTED_SNAPSHOT_READS_SERVER_VERSION = '5.0';
exports.MIN_SUPPORTED_WIRE_VERSION = 8;
exports.MAX_SUPPORTED_WIRE_VERSION = 27;
exports.MIN_SUPPORTED_QE_WIRE_VERSION = 21;
exports.MIN_SUPPORTED_QE_SERVER_VERSION = '7.0';
exports.MIN_SUPPORTED_RAW_DATA_WIRE_VERSION = 27;
exports.MIN_SUPPORTED_RAW_DATA_SERVER_VERSION = '8.2';
exports.OP_REPLY = 1;
exports.OP_UPDATE = 2001;
exports.OP_INSERT = 2002;
exports.OP_QUERY = 2004;
exports.OP_DELETE = 2006;
exports.OP_COMPRESSED = 2012;
exports.OP_MSG = 2013;
//# sourceMappingURL=constants.js.map
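
These bounds are what the driver compares a server's reported wire version against during the handshake. An illustrative gate, not the driver's actual check; `serverMaxWireVersion` would come from a hello response:

const {
  MIN_SUPPORTED_WIRE_VERSION,
  MIN_SUPPORTED_SERVER_VERSION
} = require('mongodb/lib/cmap/wire_protocol/constants');

// Hypothetical helper: reject servers below the supported wire version floor.
function assertSupported(serverMaxWireVersion) {
  if (serverMaxWireVersion < MIN_SUPPORTED_WIRE_VERSION) {
    throw new Error(
      `Server wire version ${serverMaxWireVersion} is below the minimum ` +
        `${MIN_SUPPORTED_WIRE_VERSION} (MongoDB ${MIN_SUPPORTED_SERVER_VERSION})`
    );
  }
}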

1
node_modules/mongodb/lib/cmap/wire_protocol/constants.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"constants.js","sourceRoot":"","sources":["../../../src/cmap/wire_protocol/constants.ts"],"names":[],"mappings":";;;AAAa,QAAA,4BAA4B,GAAG,KAAK,CAAC;AACrC,QAAA,4BAA4B,GAAG,KAAK,CAAC;AACrC,QAAA,yCAAyC,GAAG,EAAE,CAAC;AAC/C,QAAA,2CAA2C,GAAG,KAAK,CAAC;AACpD,QAAA,0BAA0B,GAAG,CAAC,CAAC;AAC/B,QAAA,0BAA0B,GAAG,EAAE,CAAC;AAChC,QAAA,6BAA6B,GAAG,EAAE,CAAC;AACnC,QAAA,+BAA+B,GAAG,KAAK,CAAC;AACxC,QAAA,mCAAmC,GAAG,EAAE,CAAC;AACzC,QAAA,qCAAqC,GAAG,KAAK,CAAC;AAC9C,QAAA,QAAQ,GAAG,CAAC,CAAC;AACb,QAAA,SAAS,GAAG,IAAI,CAAC;AACjB,QAAA,SAAS,GAAG,IAAI,CAAC;AACjB,QAAA,QAAQ,GAAG,IAAI,CAAC;AAChB,QAAA,SAAS,GAAG,IAAI,CAAC;AACjB,QAAA,aAAa,GAAG,IAAI,CAAC;AACrB,QAAA,MAAM,GAAG,IAAI,CAAC"}

111
node_modules/mongodb/lib/cmap/wire_protocol/on_data.js generated vendored Normal file
View file

@ -0,0 +1,111 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.onData = onData;
const utils_1 = require("../../utils");
/**
* onData is adapted from Node.js' events.on helper
* https://nodejs.org/api/events.html#eventsonemitter-eventname-options
*
* Returns an AsyncIterator that iterates each 'data' event emitted from emitter.
* It will reject upon an error event.
*/
function onData(emitter, { timeoutContext, signal }) {
signal?.throwIfAborted();
// Setup pending events and pending promise lists
/**
* When the caller has not yet called .next(), we store the
* value from the event in this list. Next time they call .next()
* we pull the first value out of this list and resolve a promise with it.
*/
const unconsumedEvents = new utils_1.List();
/**
* When there has not yet been an event, a new promise will be created
* and implicitly stored in this list. When an event occurs we take the first
* promise in this list and resolve it.
*/
const unconsumedPromises = new utils_1.List();
/**
* Stored an error created by an error event.
* This error will turn into a rejection for the subsequent .next() call
*/
let error = null;
/** Set to true only after event listeners have been removed. */
let finished = false;
const iterator = {
next() {
// First, we consume all unread events
const value = unconsumedEvents.shift();
if (value != null) {
return Promise.resolve({ value, done: false });
}
// Then we error, if an error happened
// This happens one time if at all, because after 'error'
// we stop listening
if (error != null) {
const p = Promise.reject(error);
// Only the first element errors
error = null;
return p;
}
// If the iterator is finished, resolve to done
if (finished)
return closeHandler();
// Wait until an event happens
const { promise, resolve, reject } = (0, utils_1.promiseWithResolvers)();
unconsumedPromises.push({ resolve, reject });
return promise;
},
return() {
return closeHandler();
},
throw(err) {
errorHandler(err);
return Promise.resolve({ value: undefined, done: true });
},
[Symbol.asyncIterator]() {
return this;
},
async [Symbol.asyncDispose]() {
await closeHandler();
}
};
// Adding event handlers
emitter.on('data', eventHandler);
emitter.on('error', errorHandler);
const abortListener = (0, utils_1.addAbortListener)(signal, function () {
errorHandler(this.reason);
});
const timeoutForSocketRead = timeoutContext?.timeoutForSocketRead;
timeoutForSocketRead?.throwIfExpired();
timeoutForSocketRead?.then(undefined, errorHandler);
return iterator;
function eventHandler(value) {
const promise = unconsumedPromises.shift();
if (promise != null)
promise.resolve({ value, done: false });
else
unconsumedEvents.push(value);
}
function errorHandler(err) {
const promise = unconsumedPromises.shift();
if (promise != null)
promise.reject(err);
else
error = err;
void closeHandler();
}
function closeHandler() {
// Adding event handlers
emitter.off('data', eventHandler);
emitter.off('error', errorHandler);
abortListener?.[utils_1.kDispose]();
finished = true;
timeoutForSocketRead?.clear();
const doneResult = { value: undefined, done: finished };
for (const promise of unconsumedPromises) {
promise.resolve(doneResult);
}
return Promise.resolve(doneResult);
}
}
//# sourceMappingURL=on_data.js.map
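
onData converts 'data'/'error' events into pull-based async iteration, which is how the connection layer reads socket chunks. A minimal sketch where a plain EventEmitter stands in for a socket (internal module path):

const { EventEmitter } = require('events');
const { onData } = require('mongodb/lib/cmap/wire_protocol/on_data');

async function demo() {
  const emitter = new EventEmitter();
  const controller = new AbortController();
  const chunks = onData(emitter, { signal: controller.signal });

  setImmediate(() => {
    emitter.emit('data', Buffer.from('one'));
    emitter.emit('data', Buffer.from('two'));
    controller.abort(); // rejects the next pending .next() call
  });

  try {
    for await (const chunk of chunks) console.log(chunk.toString()); // one, two
  } catch (error) {
    console.log('aborted:', error.name); // AbortError
  }
}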

1
node_modules/mongodb/lib/cmap/wire_protocol/on_data.js.map generated vendored Normal file
View file

@ -0,0 +1 @@
{"version":3,"file":"on_data.js","sourceRoot":"","sources":["../../../src/cmap/wire_protocol/on_data.ts"],"names":[],"mappings":";;AAsBA,wBAoHC;AAtID,uCAAqF;AAWrF;;;;;;GAMG;AACH,SAAgB,MAAM,CACpB,OAAqB,EACrB,EAAE,cAAc,EAAE,MAAM,EAAmD;IAE3E,MAAM,EAAE,cAAc,EAAE,CAAC;IAEzB,iDAAiD;IACjD;;;;OAIG;IACH,MAAM,gBAAgB,GAAG,IAAI,YAAI,EAAU,CAAC;IAC5C;;;;OAIG;IACH,MAAM,kBAAkB,GAAG,IAAI,YAAI,EAAmB,CAAC;IAEvD;;;OAGG;IACH,IAAI,KAAK,GAAiB,IAAI,CAAC;IAE/B,gEAAgE;IAChE,IAAI,QAAQ,GAAG,KAAK,CAAC;IAErB,MAAM,QAAQ,GAA6C;QACzD,IAAI;YACF,sCAAsC;YACtC,MAAM,KAAK,GAAG,gBAAgB,CAAC,KAAK,EAAE,CAAC;YACvC,IAAI,KAAK,IAAI,IAAI,EAAE,CAAC;gBAClB,OAAO,OAAO,CAAC,OAAO,CAAC,EAAE,KAAK,EAAE,IAAI,EAAE,KAAK,EAAE,CAAC,CAAC;YACjD,CAAC;YAED,sCAAsC;YACtC,yDAAyD;YACzD,oBAAoB;YACpB,IAAI,KAAK,IAAI,IAAI,EAAE,CAAC;gBAClB,MAAM,CAAC,GAAG,OAAO,CAAC,MAAM,CAAC,KAAK,CAAC,CAAC;gBAChC,gCAAgC;gBAChC,KAAK,GAAG,IAAI,CAAC;gBACb,OAAO,CAAC,CAAC;YACX,CAAC;YAED,+CAA+C;YAC/C,IAAI,QAAQ;gBAAE,OAAO,YAAY,EAAE,CAAC;YAEpC,8BAA8B;YAC9B,MAAM,EAAE,OAAO,EAAE,OAAO,EAAE,MAAM,EAAE,GAAG,IAAA,4BAAoB,GAA0B,CAAC;YACpF,kBAAkB,CAAC,IAAI,CAAC,EAAE,OAAO,EAAE,MAAM,EAAE,CAAC,CAAC;YAC7C,OAAO,OAAO,CAAC;QACjB,CAAC;QAED,MAAM;YACJ,OAAO,YAAY,EAAE,CAAC;QACxB,CAAC;QAED,KAAK,CAAC,GAAU;YACd,YAAY,CAAC,GAAG,CAAC,CAAC;YAClB,OAAO,OAAO,CAAC,OAAO,CAAC,EAAE,KAAK,EAAE,SAAS,EAAE,IAAI,EAAE,IAAI,EAAE,CAAC,CAAC;QAC3D,CAAC;QAED,CAAC,MAAM,CAAC,aAAa,CAAC;YACpB,OAAO,IAAI,CAAC;QACd,CAAC;QAED,KAAK,CAAC,CAAC,MAAM,CAAC,YAAY,CAAC;YACzB,MAAM,YAAY,EAAE,CAAC;QACvB,CAAC;KACF,CAAC;IAEF,wBAAwB;IACxB,OAAO,CAAC,EAAE,CAAC,MAAM,EAAE,YAAY,CAAC,CAAC;IACjC,OAAO,CAAC,EAAE,CAAC,OAAO,EAAE,YAAY,CAAC,CAAC;IAClC,MAAM,aAAa,GAAG,IAAA,wBAAgB,EAAC,MAAM,EAAE;QAC7C,YAAY,CAAC,IAAI,CAAC,MAAM,CAAC,CAAC;IAC5B,CAAC,CAAC,CAAC;IAEH,MAAM,oBAAoB,GAAG,cAAc,EAAE,oBAAoB,CAAC;IAClE,oBAAoB,EAAE,cAAc,EAAE,CAAC;IACvC,oBAAoB,EAAE,IAAI,CAAC,SAAS,EAAE,YAAY,CAAC,CAAC;IAEpD,OAAO,QAAQ,CAAC;IAEhB,SAAS,YAAY,CAAC,KAAa;QACjC,MAAM,OAAO,GAAG,kBAAkB,CAAC,KAAK,EAAE,CAAC;QAC3C,IAAI,OAAO,IAAI,IAAI;YAAE,OAAO,CAAC,OAAO,CAAC,EAAE,KAAK,EAAE,IAAI,EAAE,KAAK,EAAE,CAAC,CAAC;;YACxD,gBAAgB,CAAC,IAAI,CAAC,KAAK,CAAC,CAAC;IACpC,CAAC;IAED,SAAS,YAAY,CAAC,GAAU;QAC9B,MAAM,OAAO,GAAG,kBAAkB,CAAC,KAAK,EAAE,CAAC;QAE3C,IAAI,OAAO,IAAI,IAAI;YAAE,OAAO,CAAC,MAAM,CAAC,GAAG,CAAC,CAAC;;YACpC,KAAK,GAAG,GAAG,CAAC;QACjB,KAAK,YAAY,EAAE,CAAC;IACtB,CAAC;IAED,SAAS,YAAY;QACnB,wBAAwB;QACxB,OAAO,CAAC,GAAG,CAAC,MAAM,EAAE,YAAY,CAAC,CAAC;QAClC,OAAO,CAAC,GAAG,CAAC,OAAO,EAAE,YAAY,CAAC,CAAC;QACnC,aAAa,EAAE,CAAC,gBAAQ,CAAC,EAAE,CAAC;QAC5B,QAAQ,GAAG,IAAI,CAAC;QAChB,oBAAoB,EAAE,KAAK,EAAE,CAAC;QAC9B,MAAM,UAAU,GAAG,EAAE,KAAK,EAAE,SAAS,EAAE,IAAI,EAAE,QAAQ,EAAW,CAAC;QAEjE,KAAK,MAAM,OAAO,IAAI,kBAAkB,EAAE,CAAC;YACzC,OAAO,CAAC,OAAO,CAAC,UAAU,CAAC,CAAC;QAC9B,CAAC;QAED,OAAO,OAAO,CAAC,OAAO,CAAC,UAAU,CAAC,CAAC;IACrC,CAAC;AACH,CAAC"}

222
node_modules/mongodb/lib/cmap/wire_protocol/on_demand/document.js generated vendored Normal file
View file

@ -0,0 +1,222 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.OnDemandDocument = void 0;
const bson_1 = require("../../../bson");
const BSONElementOffset = {
type: 0,
nameOffset: 1,
nameLength: 2,
offset: 3,
length: 4
};
/** @internal */
class OnDemandDocument {
constructor(bson, offset = 0, isArray = false,
/** If elements was already calculated */
elements) {
/**
* Maps JS strings to elements and jsValues for speeding up subsequent lookups.
* - If `false` then name does not exist in the BSON document
* - If `CachedBSONElement` instance name exists
* - If `cache[name].value == null` jsValue has not yet been parsed
* - Null/Undefined values do not get cached because they are zero-length values.
*/
this.cache = Object.create(null);
/** Caches the index of elements that have been named */
this.indexFound = Object.create(null);
this.bson = bson;
this.offset = offset;
this.isArray = isArray;
this.elements = elements ?? (0, bson_1.parseToElementsToArray)(this.bson, offset);
}
/** Only supports basic latin strings */
isElementName(name, element) {
const nameLength = element[BSONElementOffset.nameLength];
const nameOffset = element[BSONElementOffset.nameOffset];
if (name.length !== nameLength)
return false;
const nameEnd = nameOffset + nameLength;
for (let byteIndex = nameOffset, charIndex = 0; charIndex < name.length && byteIndex < nameEnd; charIndex++, byteIndex++) {
if (this.bson[byteIndex] !== name.charCodeAt(charIndex))
return false;
}
return true;
}
/**
* Seeks into the elements array for an element matching the given name.
*
* @remarks
* Caching:
* - Caches the existence of a property making subsequent look ups for non-existent properties return immediately
* - Caches names mapped to elements to avoid reiterating the array and comparing the name again
* - Caches the index at which an element has been found to prevent rechecking against elements already determined to belong to another name
*
* @param name - a basic latin string name of a BSON element
* @returns
*/
getElement(name) {
const cachedElement = this.cache[name];
if (cachedElement === false)
return null;
if (cachedElement != null) {
return cachedElement;
}
if (typeof name === 'number') {
if (this.isArray) {
if (name < this.elements.length) {
const element = this.elements[name];
const cachedElement = { element, value: undefined };
this.cache[name] = cachedElement;
this.indexFound[name] = true;
return cachedElement;
}
else {
return null;
}
}
else {
return null;
}
}
for (let index = 0; index < this.elements.length; index++) {
const element = this.elements[index];
// skip this element if it has already been associated with a name
if (!(index in this.indexFound) && this.isElementName(name, element)) {
const cachedElement = { element, value: undefined };
this.cache[name] = cachedElement;
this.indexFound[index] = true;
return cachedElement;
}
}
this.cache[name] = false;
return null;
}
toJSValue(element, as) {
const type = element[BSONElementOffset.type];
const offset = element[BSONElementOffset.offset];
const length = element[BSONElementOffset.length];
if (as !== type) {
return null;
}
switch (as) {
case bson_1.BSONType.null:
case bson_1.BSONType.undefined:
return null;
case bson_1.BSONType.double:
return (0, bson_1.getFloat64LE)(this.bson, offset);
case bson_1.BSONType.int:
return (0, bson_1.getInt32LE)(this.bson, offset);
case bson_1.BSONType.long:
return (0, bson_1.getBigInt64LE)(this.bson, offset);
case bson_1.BSONType.bool:
return Boolean(this.bson[offset]);
case bson_1.BSONType.objectId:
return new bson_1.ObjectId(this.bson.subarray(offset, offset + 12));
case bson_1.BSONType.timestamp:
return new bson_1.Timestamp((0, bson_1.getBigInt64LE)(this.bson, offset));
case bson_1.BSONType.string:
return (0, bson_1.toUTF8)(this.bson, offset + 4, offset + length - 1, false);
case bson_1.BSONType.binData: {
const totalBinarySize = (0, bson_1.getInt32LE)(this.bson, offset);
const subType = this.bson[offset + 4];
if (subType === 2) {
const subType2BinarySize = (0, bson_1.getInt32LE)(this.bson, offset + 1 + 4);
if (subType2BinarySize < 0)
throw new bson_1.BSONError('Negative binary type element size found for subtype 0x02');
if (subType2BinarySize > totalBinarySize - 4)
throw new bson_1.BSONError('Binary type with subtype 0x02 contains too long binary size');
if (subType2BinarySize < totalBinarySize - 4)
throw new bson_1.BSONError('Binary type with subtype 0x02 contains too short binary size');
return new bson_1.Binary(this.bson.subarray(offset + 1 + 4 + 4, offset + 1 + 4 + 4 + subType2BinarySize), 2);
}
return new bson_1.Binary(this.bson.subarray(offset + 1 + 4, offset + 1 + 4 + totalBinarySize), subType);
}
case bson_1.BSONType.date:
// Pretend this is correct.
return new Date(Number((0, bson_1.getBigInt64LE)(this.bson, offset)));
case bson_1.BSONType.object:
return new OnDemandDocument(this.bson, offset);
case bson_1.BSONType.array:
return new OnDemandDocument(this.bson, offset, true);
default:
throw new bson_1.BSONError(`Unsupported BSON type: ${as}`);
}
}
/**
* Returns the number of elements in this BSON document
*/
size() {
return this.elements.length;
}
/**
* Checks for the existence of an element by name.
*
* @remarks
* Uses `getElement` with the expectation that will populate caches such that a `has` call
* followed by a `getElement` call will not repeat the cost paid by the first look up.
*
* @param name - element name
*/
has(name) {
const cachedElement = this.cache[name];
if (cachedElement === false)
return false;
if (cachedElement != null)
return true;
return this.getElement(name) != null;
}
get(name, as, required) {
const element = this.getElement(name);
if (element == null) {
if (required === true) {
throw new bson_1.BSONError(`BSON element "${name}" is missing`);
}
else {
return null;
}
}
if (element.value == null) {
const value = this.toJSValue(element.element, as);
if (value == null) {
if (required === true) {
throw new bson_1.BSONError(`BSON element "${name}" is missing`);
}
else {
return null;
}
}
// It is important to never store null
element.value = value;
}
return element.value;
}
getNumber(name, required) {
const maybeBool = this.get(name, bson_1.BSONType.bool);
const bool = maybeBool == null ? null : maybeBool ? 1 : 0;
const maybeLong = this.get(name, bson_1.BSONType.long);
const long = maybeLong == null ? null : Number(maybeLong);
const result = bool ?? long ?? this.get(name, bson_1.BSONType.int) ?? this.get(name, bson_1.BSONType.double);
if (required === true && result == null) {
throw new bson_1.BSONError(`BSON element "${name}" is missing`);
}
return result;
}
/**
* Deserialize this object, DOES NOT cache result so avoid multiple invocations
* @param options - BSON deserialization options
*/
toObject(options) {
return (0, bson_1.deserialize)(this.bson, {
...options,
index: this.offset,
allowObjectSmallerThanBufferSize: true
});
}
/** Returns this document's bytes only */
toBytes() {
const size = (0, bson_1.getInt32LE)(this.bson, this.offset);
return this.bson.subarray(this.offset, this.offset + size);
}
}
exports.OnDemandDocument = OnDemandDocument;
//# sourceMappingURL=document.js.map
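
The point of OnDemandDocument is to read single fields out of raw BSON bytes without deserializing the whole document, caching each lookup. A minimal sketch using the bson package that ships with the driver (internal module path):

const { serialize, BSONType } = require('bson');
const { OnDemandDocument } = require('mongodb/lib/cmap/wire_protocol/on_demand/document');

const bytes = serialize({ ok: 1, n: 3, msg: 'hello' });
const doc = new OnDemandDocument(bytes);

console.log(doc.size());                          // 3 elements
console.log(doc.getNumber('ok'));                 // 1
console.log(doc.get('msg', BSONType.string));     // 'hello'
console.log(doc.get('missing', BSONType.string)); // null, and cached as absent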

node_modules/mongodb/lib/cmap/wire_protocol/on_demand/document.js.map generated vendored Normal file
File diff suppressed because one or more lines are too long

315
node_modules/mongodb/lib/cmap/wire_protocol/responses.js generated vendored Normal file
View file

@ -0,0 +1,315 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.ClientBulkWriteCursorResponse = exports.ExplainedCursorResponse = exports.CursorResponse = exports.MongoDBResponse = void 0;
exports.isErrorResponse = isErrorResponse;
const bson_1 = require("../../bson");
const error_1 = require("../../error");
const utils_1 = require("../../utils");
const document_1 = require("./on_demand/document");
const BSONElementOffset = {
type: 0,
nameOffset: 1,
nameLength: 2,
offset: 3,
length: 4
};
/**
 * Accepts a BSON payload and checks for an "ok: 0" element.
* This utility is intended to prevent calling response class constructors
* that expect the result to be a success and demand certain properties to exist.
*
* For example, a cursor response always expects a cursor embedded document.
* In order to write the class such that the properties reflect that assertion (non-null)
* we cannot invoke the subclass constructor if the BSON represents an error.
*
* @param bytes - BSON document returned from the server
*/
function isErrorResponse(bson, elements) {
for (let eIdx = 0; eIdx < elements.length; eIdx++) {
const element = elements[eIdx];
if (element[BSONElementOffset.nameLength] === 2) {
const nameOffset = element[BSONElementOffset.nameOffset];
// 111 == "o", 107 == "k"
if (bson[nameOffset] === 111 && bson[nameOffset + 1] === 107) {
const valueOffset = element[BSONElementOffset.offset];
const valueLength = element[BSONElementOffset.length];
                // If any byte in the value of the ok element (works for any numeric type)
                // is nonzero, then it is considered "ok: 1"
for (let i = valueOffset; i < valueOffset + valueLength; i++) {
if (bson[i] !== 0x00)
return false;
}
return true;
}
}
}
return true;
}
/** @internal */
class MongoDBResponse extends document_1.OnDemandDocument {
get(name, as, required) {
try {
return super.get(name, as, required);
}
catch (cause) {
throw new error_1.MongoUnexpectedServerResponseError(cause.message, { cause });
}
}
static is(value) {
return value instanceof MongoDBResponse;
}
static make(bson) {
const elements = (0, bson_1.parseToElementsToArray)(bson, 0);
const isError = isErrorResponse(bson, elements);
return isError
? new MongoDBResponse(bson, 0, false, elements)
: new this(bson, 0, false, elements);
}
// {ok:1}
static { this.empty = new MongoDBResponse(new Uint8Array([13, 0, 0, 0, 16, 111, 107, 0, 1, 0, 0, 0, 0])); }
/**
* Returns true iff:
* - ok is 0 and the top-level code === 50
* - ok is 1 and the writeErrors array contains a code === 50
* - ok is 1 and the writeConcern object contains a code === 50
*/
get isMaxTimeExpiredError() {
// {ok: 0, code: 50 ... }
const isTopLevel = this.ok === 0 && this.code === error_1.MONGODB_ERROR_CODES.MaxTimeMSExpired;
if (isTopLevel)
return true;
if (this.ok === 0)
return false;
// {ok: 1, writeConcernError: {code: 50 ... }}
const isWriteConcern = this.get('writeConcernError', bson_1.BSONType.object)?.getNumber('code') ===
error_1.MONGODB_ERROR_CODES.MaxTimeMSExpired;
if (isWriteConcern)
return true;
const writeErrors = this.get('writeErrors', bson_1.BSONType.array);
if (writeErrors?.size()) {
for (let i = 0; i < writeErrors.size(); i++) {
const isWriteError = writeErrors.get(i, bson_1.BSONType.object)?.getNumber('code') ===
error_1.MONGODB_ERROR_CODES.MaxTimeMSExpired;
// {ok: 1, writeErrors: [{code: 50 ... }]}
if (isWriteError)
return true;
}
}
return false;
}
/**
* Drivers can safely assume that the `recoveryToken` field is always a BSON document but drivers MUST NOT modify the
* contents of the document.
*/
get recoveryToken() {
return (this.get('recoveryToken', bson_1.BSONType.object)?.toObject({
promoteValues: false,
promoteLongs: false,
promoteBuffers: false,
validation: { utf8: true }
}) ?? null);
}
/**
* The server creates a cursor in response to a snapshot find/aggregate command and reports atClusterTime within the cursor field in the response.
* For the distinct command the server adds a top-level atClusterTime field to the response.
* The atClusterTime field represents the timestamp of the read and is guaranteed to be majority committed.
*/
get atClusterTime() {
return (this.get('cursor', bson_1.BSONType.object)?.get('atClusterTime', bson_1.BSONType.timestamp) ??
this.get('atClusterTime', bson_1.BSONType.timestamp));
}
get operationTime() {
return this.get('operationTime', bson_1.BSONType.timestamp);
}
/** Normalizes whatever BSON value is "ok" to a JS number 1 or 0. */
get ok() {
return this.getNumber('ok') ? 1 : 0;
}
get $err() {
return this.get('$err', bson_1.BSONType.string);
}
get errmsg() {
return this.get('errmsg', bson_1.BSONType.string);
}
get code() {
return this.getNumber('code');
}
get $clusterTime() {
if (!('clusterTime' in this)) {
const clusterTimeDoc = this.get('$clusterTime', bson_1.BSONType.object);
if (clusterTimeDoc == null) {
this.clusterTime = null;
return null;
}
const clusterTime = clusterTimeDoc.get('clusterTime', bson_1.BSONType.timestamp, true);
const signature = clusterTimeDoc.get('signature', bson_1.BSONType.object)?.toObject();
// @ts-expect-error: `signature` is incorrectly typed. It is public API.
this.clusterTime = { clusterTime, signature };
}
return this.clusterTime ?? null;
}
toObject(options) {
const exactBSONOptions = {
...(0, bson_1.pluckBSONSerializeOptions)(options ?? {}),
validation: (0, bson_1.parseUtf8ValidationOption)(options)
};
return super.toObject(exactBSONOptions);
}
}
exports.MongoDBResponse = MongoDBResponse;
/** @internal */
class CursorResponse extends MongoDBResponse {
constructor() {
super(...arguments);
this._batch = null;
this.iterated = 0;
this._encryptedBatch = null;
}
/**
* This supports a feature of the FindCursor.
* It is an optimization to avoid an extra getMore when the limit has been reached
*/
static get emptyGetMore() {
return new CursorResponse((0, bson_1.serialize)({ ok: 1, cursor: { id: 0n, nextBatch: [] } }));
}
static is(value) {
return value instanceof CursorResponse || value === CursorResponse.emptyGetMore;
}
get cursor() {
return this.get('cursor', bson_1.BSONType.object, true);
}
get id() {
try {
return bson_1.Long.fromBigInt(this.cursor.get('id', bson_1.BSONType.long, true));
}
catch (cause) {
throw new error_1.MongoUnexpectedServerResponseError(cause.message, { cause });
}
}
get ns() {
const namespace = this.cursor.get('ns', bson_1.BSONType.string);
if (namespace != null)
return (0, utils_1.ns)(namespace);
return null;
}
get length() {
return Math.max(this.batchSize - this.iterated, 0);
}
get encryptedBatch() {
if (this.encryptedResponse == null)
return null;
if (this._encryptedBatch != null)
return this._encryptedBatch;
const cursor = this.encryptedResponse?.get('cursor', bson_1.BSONType.object);
if (cursor?.has('firstBatch'))
this._encryptedBatch = cursor.get('firstBatch', bson_1.BSONType.array, true);
else if (cursor?.has('nextBatch'))
this._encryptedBatch = cursor.get('nextBatch', bson_1.BSONType.array, true);
else
throw new error_1.MongoUnexpectedServerResponseError('Cursor document did not contain a batch');
return this._encryptedBatch;
}
get batch() {
if (this._batch != null)
return this._batch;
const cursor = this.cursor;
if (cursor.has('firstBatch'))
this._batch = cursor.get('firstBatch', bson_1.BSONType.array, true);
else if (cursor.has('nextBatch'))
this._batch = cursor.get('nextBatch', bson_1.BSONType.array, true);
else
throw new error_1.MongoUnexpectedServerResponseError('Cursor document did not contain a batch');
return this._batch;
}
get batchSize() {
return this.batch?.size();
}
get postBatchResumeToken() {
return (this.cursor.get('postBatchResumeToken', bson_1.BSONType.object)?.toObject({
promoteValues: false,
promoteLongs: false,
promoteBuffers: false,
validation: { utf8: true }
}) ?? null);
}
shift(options) {
if (this.iterated >= this.batchSize) {
return null;
}
const result = this.batch.get(this.iterated, bson_1.BSONType.object, true) ?? null;
const encryptedResult = this.encryptedBatch?.get(this.iterated, bson_1.BSONType.object, true) ?? null;
this.iterated += 1;
if (options?.raw) {
return result.toBytes();
}
else {
const object = result.toObject(options);
if (encryptedResult) {
(0, utils_1.decorateDecryptionResult)(object, encryptedResult.toObject(options), true);
}
return object;
}
}
clear() {
this.iterated = this.batchSize;
}
}
exports.CursorResponse = CursorResponse;
/**
* Explain responses have nothing to do with cursor responses
* This class serves to temporarily avoid refactoring how cursors handle
* explain responses which is to detect that the response is not cursor-like and return the explain
* result as the "first and only" document in the "batch" and end the "cursor"
*/
class ExplainedCursorResponse extends CursorResponse {
constructor() {
super(...arguments);
this.isExplain = true;
this._length = 1;
}
get id() {
return bson_1.Long.fromBigInt(0n);
}
get batchSize() {
return 0;
}
get ns() {
return null;
}
get length() {
return this._length;
}
shift(options) {
if (this._length === 0)
return null;
this._length -= 1;
return this.toObject(options);
}
}
exports.ExplainedCursorResponse = ExplainedCursorResponse;
/**
* Client bulk writes have some extra metadata at the top level that needs to be
* included in the result returned to the user.
*/
class ClientBulkWriteCursorResponse extends CursorResponse {
get insertedCount() {
return this.get('nInserted', bson_1.BSONType.int, true);
}
get upsertedCount() {
return this.get('nUpserted', bson_1.BSONType.int, true);
}
get matchedCount() {
return this.get('nMatched', bson_1.BSONType.int, true);
}
get modifiedCount() {
return this.get('nModified', bson_1.BSONType.int, true);
}
get deletedCount() {
return this.get('nDeleted', bson_1.BSONType.int, true);
}
get writeConcernError() {
return this.get('writeConcernError', bson_1.BSONType.object, false);
}
}
exports.ClientBulkWriteCursorResponse = ClientBulkWriteCursorResponse;
//# sourceMappingURL=responses.js.map
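
make() routes error payloads to the plain MongoDBResponse base class, so subclass invariants like "a cursor document must exist" only run on success responses. A minimal sketch with hand-built documents (internal module path):

const { serialize } = require('bson');
const { CursorResponse } = require('mongodb/lib/cmap/wire_protocol/responses');

const failure = serialize({ ok: 0, errmsg: 'boom', code: 8000 });
const success = serialize({ ok: 1, cursor: { id: 0n, ns: 'db.coll', firstBatch: [{ _id: 1 }] } });

console.log(CursorResponse.make(failure) instanceof CursorResponse); // false: stays MongoDBResponse

const response = CursorResponse.make(success);
console.log(response.id.toString(), response.batchSize); // '0' 1
console.log(response.shift());                           // { _id: 1 }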

Some files were not shown because too many files have changed in this diff.