A Vert.x client allowing applications to interact with a MongoDB instance, whether that’s saving, retrieving, searching, or deleting documents.
Mongo is a great match for persisting data in a Vert.x application as it natively handles JSON (BSON) documents.
Features
Completely non-blocking
Custom codec to support fast serialization to/from Vert.x JSON
Supports a majority of the configuration options from the MongoDB Java Driver
This client is based on the MongoDB Async Driver.
To use this project, add the following dependency to the dependencies section of your build descriptor:
Maven (in your pom.xml):
<dependency>
<groupId>io.vertx</groupId>
<artifactId>vertx-mongo-client</artifactId>
<version>3.5.1</version>
</dependency>
Gradle (in your build.gradle file):
compile 'io.vertx:vertx-mongo-client:3.5.1'
You can create a client in several ways:
In most cases you will want to share a pool between different client instances.
E.g. you scale your application by deploying multiple instances of your verticle and you want each verticle instance to share the same pool so you don’t end up with multiple pools.
The simplest way to do this is as follows:
require 'vertx-mongo/mongo_client'
client = VertxMongo::MongoClient.create_shared(vertx, config)
The first call to MongoClient.createShared will actually create the pool, and the specified config will be used.
Subsequent calls will return a new client instance that uses the same pool, so the configuration won’t be used.
You can create a client specifying a pool source name as follows:
require 'vertx-mongo/mongo_client'
client = VertxMongo::MongoClient.create_shared(vertx, config, "MyPoolName")
If different clients are created using the same Vert.x instance and specifying the same pool name, they will share the same pool.
The first call to MongoClient.createShared will actually create the pool, and the specified config will be used.
Subsequent calls will return a new client instance that uses the same pool, so the configuration won’t be used.
Use this way of creating if you wish different groups of clients to have different pools, e.g. they’re interacting with different databases.
In most cases you will want to share a pool between different client instances. However, it’s possible you want to create a client instance that doesn’t share its pool with any other client.
In that case you can use MongoClient.createNonShared.
require 'vertx-mongo/mongo_client'
client = VertxMongo::MongoClient.create_non_shared(vertx, config)
This is equivalent to calling MongoClient.createShared with a unique pool name each time.
The client API is represented by MongoClient.
To save a document you use save.
If the document has no _id field, it is inserted; otherwise, it is upserted.
Upserted means it is inserted if it doesn’t already exist, otherwise it is updated.
If the document is inserted and has no id, then the id field generated will be returned to the result handler.
Here’s an example of saving a document and getting the id back:
# Document has no id
document = {
'title' => "The Hobbit"
}
mongoClient.save("books", document) { |res_err,res|
if (res_err == nil)
id = res
puts "Saved book with id #{id}"
else
res_err.print_stack_trace()
end
}
And here’s an example of saving a document which already has an id.
# Document has an id already
document = {
'title' => "The Hobbit",
'_id' => "123244"
}
mongoClient.save("books", document) { |res_err,res|
if (res_err == nil)
# ...
else
res_err.print_stack_trace()
end
}
To insert a document you use insert.
If the document is inserted and has no id, then the id field generated will be returned to the result handler.
# Document has no id
document = {
'title' => "The Hobbit"
}
mongoClient.insert("books", document) { |res_err,res|
if (res_err == nil)
id = res
puts "Inserted book with id #{id}"
else
res_err.print_stack_trace()
end
}
If a document is inserted with an id, and a document with that id already exists, the insert will fail:
# Document has an id already
document = {
'title' => "The Hobbit",
'_id' => "123244"
}
mongoClient.insert("books", document) { |res_err,res|
if (res_err == nil)
#...
else
# Will fail if the book with that id already exists.
end
}
To update documents you use updateCollection.
This updates one or multiple documents in a collection.
The json object that is passed in the update parameter must contain Update Operators and determines how the object is updated.
The json object specified in the query parameter determines which documents in the collection will be updated.
Here’s an example of updating a document in the books collection:
# Match any documents with title=The Hobbit
query = {
'title' => "The Hobbit"
}
# Set the author field
update = {
'$set' => {
'author' => "J. R. R. Tolkien"
}
}
mongoClient.update_collection("books", query, update) { |res_err,res|
if (res_err == nil)
puts "Book updated!"
else
res_err.print_stack_trace()
end
}
To specify if the update should upsert or update multiple documents, use updateCollectionWithOptions and pass in an instance of UpdateOptions.
This has the following fields:
multi
set to true to update multiple documents
upsert
set to true to insert the document if the query doesn’t match
writeConcern
the write concern for this operation
# Match any documents with title=The Hobbit
query = {
'title' => "The Hobbit"
}
# Set the author field
update = {
'$set' => {
'author' => "J. R. R. Tolkien"
}
}
options = {
'multi' => true
}
mongoClient.update_collection_with_options("books", query, update, options) { |res_err,res|
if (res_err == nil)
puts "Book updated!"
else
res_err.print_stack_trace()
end
}
To replace documents you use replaceDocuments.
This is similar to the update operation, however it does not take any update operators. Instead it replaces the entire document with the one provided.
Here’s an example of replacing a document in the books collection:
query = {
'title' => "The Hobbit"
}
replace = {
'title' => "The Lord of the Rings",
'author' => "J. R. R. Tolkien"
}
mongoClient.replace_documents("books", query, replace) { |res_err,res|
if (res_err == nil)
puts "Book replaced!"
else
res_err.print_stack_trace()
end
}
To execute multiple insert, update, replace, or delete operations at once, use bulkWrite.
You can pass a list of BulkOperations, with each working similarly to the matching single operation.
You can pass as many operations, even of the same type, as you wish.
To specify if the bulk operation should be executed in order, and with what write option, use bulkWriteWithOptions and pass an instance of BulkWriteOptions.
For more explanation of what ordered means, see Execution of Operations.
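Since this section has no snippet, here is a hedged sketch of a bulk write. The operation hashes are assumed to mirror the fields of the BulkOperation data object; the exact keys ('type', 'document', 'filter') and the values of 'type' are assumptions, so check the BulkOperation documentation before relying on them.

```ruby
# Sketch only: the 'type', 'document' and 'filter' keys are assumed
# to mirror the BulkOperation data object fields.
operations = [
  {
    'type' => "INSERT",
    'document' => {
      'title' => "The Hobbit"
    }
  },
  {
    'type' => "DELETE",
    'filter' => {
      'title' => "The Silmarillion"
    }
  }
]
mongoClient.bulk_write("books", operations) { |res_err,res|
  if (res_err == nil)
    puts "Bulk write completed"
  else
    res_err.print_stack_trace()
  end
}
```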
To find documents you use find.
The query parameter is used to match the documents in the collection.
Here’s a simple example with an empty query that will match all books:
require 'json'
# empty query = match any
query = {
}
mongoClient.find("books", query) { |res_err,res|
if (res_err == nil)
res.each do |json|
puts JSON.generate(json)
end
else
res_err.print_stack_trace()
end
}
Here’s another example that will match all books by Tolkien:
require 'json'
# will match all Tolkien books
query = {
'author' => "J. R. R. Tolkien"
}
mongoClient.find("books", query) { |res_err,res|
if (res_err == nil)
res.each do |json|
puts JSON.generate(json)
end
else
res_err.print_stack_trace()
end
}
The matching documents are returned as a list of json objects in the result handler.
To specify things like what fields to return, how many results to return, etc. use findWithOptions and pass in an instance of FindOptions.
This has the following fields:
fields
The fields to return in the results. Defaults to null, meaning all fields will be returned.
sort
The fields to sort by. Defaults to null.
limit
The limit of the number of results to return. Defaults to -1, meaning all results will be returned.
skip
The number of documents to skip before returning the results. Defaults to 0.
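As a sketch of how these fields fit together (using the field names listed above), a query that returns only titles, sorted, with at most ten results might look like this; find_with_options follows the same naming convention as the other methods in this document:

```ruby
require 'json'

query = {
  'author' => "J. R. R. Tolkien"
}
# Return only the title field, sort by title ascending,
# and return at most 10 matching documents
options = {
  'fields' => {
    'title' => 1
  },
  'sort' => {
    'title' => 1
  },
  'limit' => 10
}
mongoClient.find_with_options("books", query, options) { |res_err,res|
  if (res_err == nil)
    res.each do |json|
      puts JSON.generate(json)
    end
  else
    res_err.print_stack_trace()
  end
}
```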
When dealing with large data sets, it is not advised to use the find and findWithOptions methods.
In order to avoid inflating the whole response into memory, use findBatch:
require 'json'
# will match all Tolkien books
query = {
'author' => "J. R. R. Tolkien"
}
mongoClient.find_batch("books", query).exception_handler() { |throwable|
throwable.print_stack_trace()
}.end_handler() { |v|
puts "End of the stream"
}.handler() { |doc|
puts "Found doc: #{JSON.generate(doc)}"
}
The matching documents are emitted one by one by the ReadStream handler.
FindOptions has an extra parameter batchSize which you can use to set the number of documents to load at once:
require 'json'
# will match all Tolkien books
query = {
'author' => "J. R. R. Tolkien"
}
options = {
'batchSize' => 100
}
mongoClient.find_batch_with_options("books", query, options).exception_handler() { |throwable|
throwable.print_stack_trace()
}.end_handler() { |v|
puts "End of the stream"
}.handler() { |doc|
puts "Found doc: #{JSON.generate(doc)}"
}
By default, batchSize is set to 20.
To find a single document you use findOne.
This works just like find but it returns just the first matching document.
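For illustration, here is a sketch of finding a single book; the third argument is the fields to return, where nil means all fields (matching how find_one is called elsewhere in this document):

```ruby
query = {
  'title' => "The Hobbit"
}
# nil means return all fields of the matching document
mongoClient.find_one("books", query, nil) { |res_err,res|
  if (res_err == nil)
    puts "Found book: #{res['title']}"
  else
    res_err.print_stack_trace()
  end
}
```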
To remove documents use removeDocuments.
The query parameter is used to match the documents in the collection to determine which ones to remove.
Here’s an example of removing all Tolkien books:
query = {
'author' => "J. R. R. Tolkien"
}
mongoClient.remove_documents("books", query) { |res_err,res|
if (res_err == nil)
puts "Never much liked Tolkien stuff!"
else
res_err.print_stack_trace()
end
}
To remove a single document you use removeDocument.
This works just like removeDocuments but it removes just the first matching document.
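A sketch of removing a single document; remove_document is assumed to be the Ruby mapping of removeDocument, following the naming convention of the other methods in this document:

```ruby
query = {
  'title' => "The Hobbit"
}
# Removes only the first document matching the query
mongoClient.remove_document("books", query) { |res_err,res|
  if (res_err == nil)
    puts "One book removed"
  else
    res_err.print_stack_trace()
  end
}
```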
To count documents use count.
Here’s an example that counts the number of Tolkien books. The number is passed to the result handler.
query = {
'author' => "J. R. R. Tolkien"
}
mongoClient.count("books", query) { |res_err,res|
if (res_err == nil)
num = res
else
res_err.print_stack_trace()
end
}
All MongoDB documents are stored in collections.
To get a list of all collections you can use getCollections:
mongoClient.get_collections() { |res_err,res|
if (res_err == nil)
collections = res
else
res_err.print_stack_trace()
end
}
To create a new collection you can use createCollection:
mongoClient.create_collection("mynewcollection") { |res_err,res|
if (res_err == nil)
# Created ok!
else
res_err.print_stack_trace()
end
}
To drop a collection you can use dropCollection:
Note: Dropping a collection will delete all documents within it!
mongoClient.drop_collection("mynewcollection") { |res_err,res|
if (res_err == nil)
# Dropped ok!
else
res_err.print_stack_trace()
end
}
You can run arbitrary MongoDB commands with runCommand.
Commands can be used to run more advanced MongoDB features, such as using MapReduce. For more information see the mongo docs for supported Commands.
Here’s an example of running an aggregate command. Note that the command name must be specified as a parameter and also be contained in the JSON that represents the command. This is because JSON is not ordered but BSON is ordered and MongoDB expects the first BSON entry to be the name of the command. In order for us to know which of the entries in the JSON is the command name it must be specified as a parameter.
command = {
'aggregate' => "collection_name",
'pipeline' => [
]
}
mongoClient.run_command("aggregate", command) { |res_err,res|
if (res_err == nil)
resArr = res['result']
# etc
else
res_err.print_stack_trace()
end
}
For now, only date, oid and binary types are supported (see MongoDB Extended JSON).
Here’s an example of inserting a document with a date field:
document = {
'title' => "The Hobbit",
'publicationDate' => {
'$date' => "1937-09-21T00:00:00+00:00"
}
}
mongoService.save("publishedBooks", document) { |res_err,res|
if (res_err == nil)
id = res
mongoService.find_one("publishedBooks", {
'_id' => id
}, nil) { |res2_err,res2|
if (res2_err == nil)
puts "To retrieve ISO-8601 date : #{res2['publicationDate']['$date']}"
else
res2_err.print_stack_trace()
end
}
else
res_err.print_stack_trace()
end
}
Here’s an example (in Java) of inserting a document with a binary field and reading it back
byte[] binaryObject = new byte[40];
JsonObject document = new JsonObject()
.put("name", "Alan Turing")
.put("binaryStuff", new JsonObject().put("$binary", binaryObject));
mongoService.save("smartPeople", document, res -> {
if (res.succeeded()) {
String id = res.result();
mongoService.findOne("smartPeople", new JsonObject().put("_id", id), null, res2 -> {
if (res2.succeeded()) {
byte[] reconstitutedBinaryObject = res2.result().getJsonObject("binaryStuff").getBinary("$binary");
//This could now be de-serialized into an object in real life
} else {
res2.cause().printStackTrace();
}
});
} else {
res.cause().printStackTrace();
}
});
Here’s an example of inserting a base 64 encoded string, typing it as a binary field, and reading it back:
# This could be the byte contents of a pdf file, etc. converted to base 64
base64EncodedString = "a2FpbHVhIGlzIHRoZSAjMSBiZWFjaCBpbiB0aGUgd29ybGQ="
document = {
'name' => "Alan Turing",
'binaryStuff' => {
'$binary' => base64EncodedString
}
}
mongoService.save("smartPeople", document) { |res_err,res|
if (res_err == nil)
id = res
mongoService.find_one("smartPeople", {
'_id' => id
}, nil) { |res2_err,res2|
if (res2_err == nil)
reconstitutedBase64EncodedString = res2['binaryStuff']['$binary']
# This could now be converted back to bytes from the base 64 string
else
res2_err.print_stack_trace()
end
}
else
res_err.print_stack_trace()
end
}
Here’s an example of inserting an object ID and reading it back:
individualId = Java::OrgBsonTypes::ObjectId.new().to_hex_string()
document = {
'name' => "Stephen Hawking",
'individualId' => {
'$oid' => individualId
}
}
mongoService.save("smartPeople", document) { |res_err,res|
if (res_err == nil)
id = res
query = {
'_id' => id
}
mongoService.find_one("smartPeople", query, nil) { |res2_err,res2|
if (res2_err == nil)
reconstitutedIndividualId = res2['individualId']['$oid']
else
res2_err.print_stack_trace()
end
}
else
res_err.print_stack_trace()
end
}
Here’s an example of getting a distinct value:
document = {
'title' => "The Hobbit"
}
mongoClient.save("books", document) { |res_err,res|
if (res_err == nil)
mongoClient.distinct("books", "title", Java::JavaLang::String::class.get_name()) { |res2_err,res2|
puts "Title is : #{res2[0]}"
}
else
res_err.print_stack_trace()
end
}
Here’s an example of getting a distinct value in batch mode:
document = {
'title' => "The Hobbit"
}
mongoClient.save("books", document) { |res_err,res|
if (res_err == nil)
mongoClient.distinct_batch("books", "title", Java::JavaLang::String::class.get_name()).handler() { |book|
puts "Title is : #{book['title']}"
}
else
res_err.print_stack_trace()
end
}
Here’s an example of getting a distinct value with a query:
document = {
'title' => "The Hobbit",
'publicationDate' => {
'$date' => "1937-09-21T00:00:00+00:00"
}
}
query = {
'publicationDate' => {
'$gte' => {
'$date' => "1937-09-21T00:00:00+00:00"
}
}
}
mongoClient.save("books", document) { |res_err,res|
if (res_err == nil)
mongoClient.distinct_with_query("books", "title", Java::JavaLang::String::class.get_name(), query) { |res2_err,res2|
puts "Title is : #{res2[0]}"
}
end
}
Here’s an example of getting a distinct value in batch mode with a query:
document = {
'title' => "The Hobbit",
'publicationDate' => {
'$date' => "1937-09-21T00:00:00+00:00"
}
}
query = {
'publicationDate' => {
'$gte' => {
'$date' => "1937-09-21T00:00:00+00:00"
}
}
}
mongoClient.save("books", document) { |res_err,res|
if (res_err == nil)
mongoClient.distinct_batch_with_query("books", "title", Java::JavaLang::String::class.get_name(), query).handler() { |book|
puts "Title is : #{book['title']}"
}
end
}
The client is configured with a json object.
The following configuration is supported by the mongo client:
db_name
Name of the database in the MongoDB instance to use. Defaults to default_db.
useObjectId
Toggle this option to support persisting and retrieving ObjectId’s as strings. If true, hex-strings will be saved as native MongoDB ObjectId types in the document collection. This will allow the sorting of documents based on creation time. You can also derive the creation time from the hex-string using ObjectId::getDate(). Set to false for other types of your choosing. If set to false, or left to default, hex strings will be generated as the document _id if the _id is omitted from the document. Defaults to false.
The mongo client tries to support most options that are allowed by the driver. There are two ways to configure mongo for use by the driver, either by a connection string or by separate configuration options.
Note: If the connection string is used the mongo client will ignore any driver configuration options.
connection_string
The connection string the driver uses to create the client. E.g. mongodb://localhost:27017.
For more information on the format of the connection string please consult the driver documentation.
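For illustration, a client configured with a connection string might look like the sketch below; the database name "mydb" is a placeholder:

```ruby
require 'vertx-mongo/mongo_client'

# When connection_string is set, any separate driver options
# in this config would be ignored by the client.
config = {
  'connection_string' => "mongodb://localhost:27017",
  'db_name' => "mydb"
}
client = VertxMongo::MongoClient.create_shared(vertx, config)
```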
Specific driver configuration options
{
// Single Cluster Settings
"host" : "127.0.0.1", // string
"port" : 27017, // int
// Multiple Cluster Settings
"hosts" : [
{
"host" : "cluster1", // string
"port" : 27000 // int
},
{
"host" : "cluster2", // string
"port" : 28000 // int
},
...
],
"replicaSet" : "foo", // string
"serverSelectionTimeoutMS" : 30000, // long
// Connection Pool Settings
"maxPoolSize" : 50, // int
"minPoolSize" : 25, // int
"maxIdleTimeMS" : 300000, // long
"maxLifeTimeMS" : 3600000, // long
"waitQueueMultiple" : 10, // int
"waitQueueTimeoutMS" : 10000, // long
"maintenanceFrequencyMS" : 2000, // long
"maintenanceInitialDelayMS" : 500, // long
// Credentials / Auth
"username" : "john", // string
"password" : "passw0rd", // string
"authSource" : "some.db", // string
// Auth mechanism
"authMechanism" : "GSSAPI", // string
"gssapiServiceName" : "myservicename", // string
// Socket Settings
"connectTimeoutMS" : 300000, // int
"socketTimeoutMS" : 100000, // int
"sendBufferSize" : 8192, // int
"receiveBufferSize" : 8192, // int
"keepAlive" : true, // boolean
// Heartbeat socket settings
"heartbeat.socket" : {
"connectTimeoutMS" : 300000, // int
"socketTimeoutMS" : 100000, // int
"sendBufferSize" : 8192, // int
"receiveBufferSize" : 8192, // int
"keepAlive" : true // boolean
},
// Server Settings
"heartbeatFrequencyMS" : 1000, // long
"minHeartbeatFrequencyMS" : 500 // long
}
Driver option descriptions
host
The host the MongoDB instance is running on. Defaults to 127.0.0.1. This is ignored if hosts is specified.
port
The port the MongoDB instance is listening on. Defaults to 27017. This is ignored if hosts is specified.
hosts
An array representing the hosts and ports to support a MongoDB cluster (sharding / replication)
host
A host in the cluster
port
The port a host in the cluster is listening on
replicaSet
The name of the replica set, if the MongoDB instance is a member of a replica set
serverSelectionTimeoutMS
The time in milliseconds that the mongo driver will wait to select a server for an operation before raising an error.
maxPoolSize
The maximum number of connections in the connection pool. The default value is 100.
minPoolSize
The minimum number of connections in the connection pool. The default value is 0.
maxIdleTimeMS
The maximum idle time of a pooled connection. The default value is 0, which means there is no limit.
maxLifeTimeMS
The maximum time a pooled connection can live for. The default value is 0, which means there is no limit.
waitQueueMultiple
The maximum number of waiters for a connection to become available from the pool. Default value is 500.
waitQueueTimeoutMS
The maximum time that a thread may wait for a connection to become available. Default value is 120000 (2 minutes).
maintenanceFrequencyMS
The time period between runs of the maintenance job. Default is 0.
maintenanceInitialDelayMS
The period of time to wait before running the first maintenance job on the connection pool. Default is 0.
username
The username to authenticate. Default is null (meaning no authentication required).
password
The password to use to authenticate.
authSource
The database name associated with the user’s credentials. Default value is the db_name value.
authMechanism
The authentication mechanism to use. See Authentication (http://docs.mongodb.org/manual/core/authentication/) for more details.
gssapiServiceName
The Kerberos service name if GSSAPI is specified as the authMechanism.
connectTimeoutMS
The time in milliseconds to attempt a connection before timing out. Default is 10000 (10 seconds).
socketTimeoutMS
The time in milliseconds to attempt a send or receive on a socket before the attempt times out. Default is 0, meaning there is no timeout.
sendBufferSize
Sets the send buffer size (SO_SNDBUF) for the socket. Default is 0, meaning it will use the OS default for this option.
receiveBufferSize
Sets the receive buffer size (SO_RCVBUF) for the socket. Default is 0, meaning it will use the OS default for this option.
keepAlive
Sets the keep alive (SO_KEEPALIVE) for the socket. Default is false.
heartbeat.socket
Configures the socket settings for the cluster monitor of the MongoDB java driver.
heartbeatFrequencyMS
The frequency that the cluster monitor attempts to reach each server. Default is 5000 (5 seconds).
minHeartbeatFrequencyMS
The minimum heartbeat frequency. The default value is 1000 (1 second).
Note: Most of the default values listed above use the default values of the MongoDB Java Driver. Please consult the driver documentation for up to date information.