Introduction
MongoDB might be new to you. It certainly was new to us, so when we took some time to experiment with monitoring MongoDB using IBM SmartCloud Monitoring - Application Insight, our first hurdle was setting up MongoDB itself! Note that stand-alone database instances are not currently supported by the monitoring agent, so we needed to build a clustered, replicated MongoDB configuration. This is common in a production environment, but a bit more than we expected to need in the lab.
MongoDB Setup Overview
Our MongoDB setup was accomplished using a single virtual machine. While this isn't a recommended setup for a production environment, it is perfectly fine for this testing. Our setup included the following overall steps:
- Create three databases, each configured with a unique path and port (ports: 37017, 37018, 37019).
- Connect to the soon-to-be primary database (port 37017) and configure the replica set.
- Add the second (port 37018) and third (port 37019) databases to the replica set.
- Install a demo database using a JSON import.
- Start a configuration server (port 37020).
- Start mongos, pointing at the configuration server and listening on port 37021.
- Connect to mongos (port 37021).
- Add the replica set as a shard.
- Enable sharding on our demodb database.
- Enable sharding on the collection (aka table).
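The steps above involve five separate processes on five ports. As a quick reference, here is the topology of this walkthrough summarized in a small Python sketch (the role labels are ours, not MongoDB terminology):

```python
# The processes and ports used in this walkthrough, in one small table.
topology = {
    "rs1-0 (mongod, replica member)": 37017,
    "rs1-1 (mongod, replica member)": 37018,
    "rs1-2 (mongod, replica member)": 37019,
    "config server (mongod --configsvr)": 37020,
    "mongos (query router)": 37021,
}

for role, port in topology.items():
    print(f"{port}  {role}")
```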
A special thanks to the website that provided the demo database data and a nice presentation on MongoDB and Ruby apps, and to the MongoDB docs on creating a cluster.
You may ask: why three databases? Aren't two enough? The recommended minimum number of members for a replica set is three. If your setup has only two databases, Application Insight will flag the Global MongoDB Status as a Warning.
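The three-member minimum comes down to election arithmetic: a replica set can only elect a primary if a strict majority of voting members is available. A minimal sketch of that quorum math:

```python
# Quorum arithmetic behind the three-member minimum: electing a primary
# requires a strict majority of voting members.

def majority(members: int) -> int:
    """Votes required to elect a primary."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """Members that can fail while a majority still remains."""
    return members - majority(members)

for n in (2, 3):
    print(f"{n} members: majority={majority(n)}, "
          f"can lose {tolerated_failures(n)} and still elect a primary")
```

With two members the majority is also two, so losing either member leaves the set unable to elect a primary; with three members, the set survives one failure.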
MongoDB Setup
VM hostname is "openpulse".
Replica Set is named "rs1".
Download MongoDB (we used the 64-bit Linux build) from the MongoDB downloads page.
Untar the file. The location of the extracted files will be referred to as $MEDIA.
The remaining commands assume the $MEDIA/bin directory is in your PATH.
mkdir -p /srv/mongodb/rs1-0 /srv/mongodb/rs1-1 /srv/mongodb/rs1-2
# Start up 3 databases
mongod --port 37017 --dbpath /srv/mongodb/rs1-0 --replSet rs1 --smallfiles --oplogSize 128
mongod --port 37018 --dbpath /srv/mongodb/rs1-1 --replSet rs1 --smallfiles --oplogSize 128
mongod --port 37019 --dbpath /srv/mongodb/rs1-2 --replSet rs1 --smallfiles --oplogSize 128
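Before configuring the replica set, it is worth confirming that all three mongod processes are actually listening. A small, hypothetical Python helper (not part of MongoDB) that checks each port; we run it on the VM itself, so "localhost" stands in for "openpulse":

```python
import socket

# Hypothetical helper: confirm each mongod is accepting TCP connections
# before moving on to replica-set configuration.

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (37017, 37018, 37019):
    state = "up" if port_open("localhost", port) else "down"
    print(f"mongod on {port}: {state}")
```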
# Connect to the soon-to-be primary
mongo --port 37017
# Give the primary a configuration for the replica set
rsconf = {
_id: "rs1",
members: [
{
_id: 0,
host: "openpulse:37017"
}
]
}
# Initiate the replica set
rs.initiate( rsconf )
# Display the replica set configuration
rs.conf()
# Add the 2nd db - NOTE: you might need to wait for the prompt to change to PRIMARY before continuing.
rs.add("openpulse:37018")
# Add the 3rd db
rs.add("openpulse:37019")
# Display the replica set configuration and confirm all three members are listed
rs.conf()
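The rsconf document above follows a simple shape: a set name plus a members array of numbered "host:port" entries. A small helper (ours, for illustration only, not part of MongoDB) that builds the same document for any list of members:

```python
# Illustration only: build the same replica-set configuration document that
# rs.initiate() plus rs.add() assemble by hand in the mongo shell.

def make_rs_config(set_name, hosts):
    """Build a replSet config document for a list of 'host:port' strings."""
    return {
        "_id": set_name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

cfg = make_rs_config(
    "rs1", ["openpulse:37017", "openpulse:37018", "openpulse:37019"]
)
print(cfg["members"][2])  # {'_id': 2, 'host': 'openpulse:37019'}
```

With the pymongo driver, a document like this could be passed to the replSetInitiate admin command instead of typing the configuration into the shell.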
# Install the demo database
curl -L http://c.spf13.com/OSCON/venuesImport.json | mongoimport --port 37017 -d demodb -c venues
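mongoimport in its default mode (without --jsonArray) reads one JSON document per line, which is why it can be fed directly from curl. A sketch of the same parsing in plain Python, using made-up sample venue records rather than the real import file:

```python
import json

# mongoimport (without --jsonArray) expects one JSON document per line.
# Sample records here are invented for illustration.
sample = """\
{"name": "Venue A", "capacity": 100}
{"name": "Venue B", "capacity": 250}
"""

docs = [json.loads(line) for line in sample.splitlines() if line.strip()]
print(f"parsed {len(docs)} documents")  # parsed 2 documents
```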
#Start a config server
mkdir /data/configdb2
mongod --configsvr --dbpath /data/configdb2 --port 37020
#Start mongos pointing at config server
mongos --port 37021 --configdb openpulse:37020
#Connect to mongos
mongo --host openpulse --port 37021
#Add replica set to shard
sh.addShard( "rs1/openpulse:37017" )
# Show the replica set and other connection stats
db._adminCommand("connPoolStats");
#Connect to mongos
mongo --host openpulse --port 37021
#Enable sharding for the DB
sh.enableSharding("demodb")
# Switch to the demodb database
use demodb
# Create a hashed index for sharding
db.venues.ensureIndex( { _id : "hashed" } )
db.venues.getIndexes()
# Set up sharding of the collection (table)
sh.shardCollection("demodb.venues", { "_id": "hashed" })
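The reason for hashing _id rather than sharding on it directly: sequential keys would all land at the end of the key range and pile onto one chunk, while a hash spreads them evenly. A toy Python demonstration of the idea (MongoDB uses its own 64-bit hash internally; md5 and the two-shard count here are just for illustration, since our lab cluster has only one shard):

```python
import hashlib

# Toy model of hashed sharding: route each document by a hash of its shard
# key, so monotonically increasing ids still spread evenly across shards.
# md5 stands in for MongoDB's internal hash; two shards make the spread visible.

NUM_SHARDS = 2

def pick_shard(shard_key, num_shards=NUM_SHARDS):
    digest = hashlib.md5(str(shard_key).encode()).hexdigest()
    return int(digest, 16) % num_shards

counts = {s: 0 for s in range(NUM_SHARDS)}
for _id in range(1000):          # monotonically increasing ids
    counts[pick_shard(_id)] += 1
print(counts)  # a roughly even split despite sequential keys
```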
MongoDB Agent Setup
Reference IBM documentation here.
Our monitoring instance is named "pulse21".
mongodb-agent.sh install
mongodb-agent.sh config pulse21
Agent configuration started...
Edit "Monitoring Agent for MongoDB" settings? [ 1=Yes, 2=No ] (default is: 1):
Agent Configuration :
Configuration variables for agent start up
The directory where the Java runtime is located. This value is optional. The agent will use the built in Java by default. Use this to over ride the Java to be used.
Java (default is: ):
Allows the user to say whether these system is a cluster or single replication set. This value is optional. By default the agent will monitor a cluster.
Type [ 1=Cluster, 2=Single Set ] (default is: 1):
Port Number for the router of a MongoDB cluster or a mongod instance of a MongoDB replication set being monitored. This value is optional. The agent will automatically discover the cluster location if only one is present on the system. This is used to over ride discovery of a cluster or explicitly monitor a replication set.
Port (default is: ): 37021 (Note: this is the port of the mongos process, not the individual databases)
The ip address for the host system of the router or the mongod instance. This value is optional. The agent will automatically discover the default ip address. This is used to select a particular interface on a system where MongoDB is bound to only one of several addresses.
Host (default is: ):
Agent configuration completed...
As a reminder, you should restart appropriate instance(s) for new configuration settings to take effect.
./mongodb-agent.sh start pulse21
Processing. Please wait...
Starting Monitoring Agent for MongoDB ...
Monitoring Agent for MongoDB started
Application Insight Screenshots
Thanks,
Anthony Segelhorst and Jamie Carl