Sunday 14 April 2013

A quick peek at nodeunit

I've been hacking around with node for a while now (as many people have), but because much of it has been just that, hacking, and very little of the code grows beyond a couple of hundred lines, I've not been writing unit tests. Tsk tsk.

I'm not really planning anything big in the near future, but if I embark on building something that is more than a prototype or a hack, I'm going to have to unit test. So tonight I'm going to have a quick poke around to see what folks are using for unit testing node applications.

I'm not sure what the de facto framework is, but I'm going to follow the pattern set over the years of _something_Unit. So, I'm going to punt at there being a NodeUnit. Google "nodeunit" -> Bang!

There is already a good article on getting started, so I'm not going to add any value here tbh. But I'll carry on and record my concise notes, just in case I need to refer to them myself in the future.

I'm on Ubuntu, though it shouldn't matter which platform you're on. (if truth be known, I've had some issues with code being cross-platform compatible with node)

$ sudo npm install nodeunit -g

I thought I'd knock up something that is at least vaguely security related. I'm not getting into arguments around random passwords vs memorable passwords, nor am I starting on hashing, salting or key stretching; this is simply something that allows me to test drive nodeunit quickly and simply.

The example is a simple, old-skool password policy. Stuff like:
  • Minimum length
  • Upper case
  • Lower case
  • Numeric
  • Special characters
I'm keeping it simple. Ensure the above criteria are validated. So, here's the code. (there's nothing clever here)
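
Something along these lines does the job; this is just a minimal sketch, with validator.js, the function name and the minimum length all being illustrative rather than the exact code:

// validator.js - a simple, old-skool password policy (illustrative sketch)
var MIN_LENGTH = 8; // illustrative minimum

exports.validate = function (password) {
    if (typeof password !== 'string') {
        throw new Error('password must be a string');
    }
    if (password.length < MIN_LENGTH) {
        throw new Error('password must be at least ' + MIN_LENGTH + ' characters');
    }
    // each criterion is a simple regex test; all must pass
    return /[A-Z]/.test(password) &&       // upper case
           /[a-z]/.test(password) &&       // lower case
           /[0-9]/.test(password) &&       // numeric
           /[^A-Za-z0-9]/.test(password);  // special characters
};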

The tests are simple too. I want to ensure the policy is enforced and that nodeunit can handle the tests I want to exercise.
  • If minimum length is not met, throw an error
  • If it is not a string, throw an error
  • If the criteria are not met, fail the test
I added the random generation of test passwords - I thought about using Markov Chains to generate the passwords, but then thought, that's a good subject for a different post.
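
The tests themselves end up looking something like this - again a sketch against the validator above, leaving out the random password generation for brevity; nodeunit simply runs each exported function:

// test_validator.js - nodeunit tests for the password policy (sketch)
var validator = require('./validator');

exports.tooShortThrows = function (test) {
    test.throws(function () { validator.validate('Ab1!'); });
    test.done();
};

exports.nonStringThrows = function (test) {
    test.throws(function () { validator.validate(12345678); });
    test.done();
};

exports.criteriaEnforced = function (test) {
    test.ok(validator.validate('Sup3rS3cr3t!'));    // meets all criteria
    test.ok(!validator.validate('alllowercase1!')); // no upper case, so fails
    test.done();
};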

As an aside, I kinda like the fact that the validator is validating the test utility that generates passwords to be used in the tests, code validating the validation tests...

Anyhoo; running the tests...

$ nodeunit test_validator.js

The code ran first time, which was surprising since I wasn't using an IDE, just the old fave TextPad. More surprising though was how quickly it ran; I upped the number of randomly generated passwords to 5,000,000 and it took only 15 seconds to execute. Cool.

So, to summarise my fleeting introduction to nodeunit.
  • It's fast
  • Easy to install
  • Quick to install
  • Easy to understand
  • Fast
I'll kick the tyres of nodeunit a little more in the future should I get more serious with node, and if there are any notable observations I'll add them as a comment to this post.

There are more frameworks out there too, but from experience, you tend to stick with what works until it doesn't work. So, for me, I think nodeunit will be my de facto node...unit.

Tuesday 19 March 2013

CTF/DTF idea

I had a brief chat earlier today about your standard CTF experience at xyzSecCon. It's always been a bit biased towards breaking in rather than defending, and towards networks/services/apps/etc rather than just the app layer.

Some initial thoughts for a potentially interesting and new(er) format are thus:

There are teams of attackers
There are teams of defenders

Each defending team is given the same application to defend
* The application is riddled with issues, you name it, it's there

Each defending team has the same codebase to start from

Each defending team is given a branch within the SCM repo
* the build pipeline is in place
* dependencies, project files and imports to different IDEs (eclipse, intellij) are all there
* VMs are auto provisioned and releases are deployed to those VMs
* essentially all the defenders have to think about is the code, everything else is in place

Each branch and build track has the same tool set,
* functional and non-functional unit tests including some security centric unit tests (xUnit, etc),
* functional and non-functional user tests including some security centric user tests (Selenium or casper js, etc),
* static code analysis (Sonar, Findbugs, Coverity, Fortify, etc),
* automated web scanning (acunetix, burp, skipfish, etc)

Each attacking team is given prior knowledge of all the flaws within said application
* well in advance if required/to make it more fun

Each attacking team can use any tool they see fit - this is about attacking the app, not the network/service/OS
* In all cases the attacking teams can add their own tools to the tool set

Each defending team must deploy at least once an hour

Each deployment goes to a new environment for that team - so if the CTF/DTF lasts over 2 (10 hour) days, then each team needs 20 different virtual envs.
* This is so that any long running attacks have time to complete against any given 'release'

The application contains your standard web functionality
* Anonymous/Authenticated browsing
* Registration
* Login/Logout
* Forgotten password
* Common issues such as Captcha/Username already registered/etc
* Register CC
* Calls out to payment providers/etc
* Shopping cart
* Checkout
* RSS feeds
* Social network integration
* etc etc, this is just a quick list

One (of many) responses to consider
* If there is a leakage of passwords - for instance - the defending team needs to think about how they flick a switch so that all new logins are forced to change their password

Obviously the attacking teams will find most vulns early on, so the idea is that the defending teams reduce the number of exploitable vulns over time. Attacking teams can attack all the deployments for all the defending teams, so the scoring is recorded in a matrix

There can even be a set of dumb users; they can be socially engineered on an automagic basis - ie links in emails coming into email inboxes are automagically clicked

Anyway, I've thought about this for a total of an hour or so, but I think there is something different in this, something different to your normal run-of-the-mill CTF

It's usually easier to break stuff than it is to fix it. The initial challenge is in fixing rather than breaking.

Once the defending teams have gone through several release iterations, it will be harder to break stuff than it is to fix it. The longer challenge is in breaking rather than fixing.

Points will be awarded for the number of vulnerabilities detected; there will be a sliding scale for the level to which the vulnerability is exploited. For instance, displaying alert(1) in an attacker's browser is 5 points, but harvesting the application user base's session IDs would be worth 50 points.

Different attacking teams will adopt different strategies. Do they go after the low hanging fruit that the defending teams will likely implement defences for in the first few releases; thus picking up some early points to add to their total but potentially wasting time developing exploits for the trickier issues that the defending team will not implement fixes for until release 10?

Tuesday 12 February 2013

JMeter multiline tooltip hack

I found something a bit annoying with mongometer; it doesn't handle tooltips very well.

Let me explain.

I wanted to show the full text for each of the settings for MongoOptions.

Take the autoConnectRetry.shortDescription property for example.

autoConnectRetry.shortDescription=If true, the driver will keep trying to connect to the same server in case that the socket cannot be established. There is maximum amount of time to keep retrying, which is 15s by default. This can be useful to avoid some exceptions being thrown when a server is down temporarily by blocking the operations. It also can be useful to smooth the transition to a new master (so that a new master is elected within the retry time). Note that when using this flag: - for a replica set, the driver will trying to connect to the old master for that time, instead of failing over to the new one right away - this does not prevent exception from being thrown in read/write operations on the socket, which must be handled by application Even if this flag is false, the driver already has mechanisms to automatically recreate broken connections and retry the read operations. Default is false.

Now this may be seen as being overly verbose. Fair enough, but I wanted to show all of this text so that users don't keep having to refer to the API docs.

The problem is, because there is so much text on a single line, it extends off both ends of the screen and causes unwanted side effects when multiple virtual desktops are in use.



No problem I thought, as I cast my mind back to my Swing development days. I'll just wrap the text in html tags and create a multiline tooltip.

autoConnectRetry.shortDescription=<html><b>autoConnectRetry</b><br><br>If true, the driver will keep trying to connect to the same server in case that the socket cannot be established.<br>There is maximum amount of time to keep retrying, which is 15s by default.<br>This can be useful to avoid some exceptions being thrown when a server is down temporarily by blocking the operations.<br>It also can be useful to smooth the transition to a new master (so that a new master is elected within the retry <br>Note that when using this flag: - for a replica set, the driver will trying to connect to the old master for that time, instead of failing over to the new one right away - this does not prevent exception from being thrown in read/write operations on the socket, which must be handled by application Even if this flag is false, the driver already has mechanisms to automatically recreate broken connections and retry the read <operations. <br><br>Default is false.</html>



Oh dear. It seems it isn't going to be that easy. The tooltip now simply displays the html tags.

Time to don my deerstalker and get out the magnifying glass. Let's work backwards.

I'm extending BeanInfoSupport and as such I'm depending on GenericTestBeanCustomizer to render the GUI. Let's look to see what is happening during the rendering process.

core\org\apache\jmeter\testbeans\gui\GenericTestBeanCustomizer.java(597)

text = propertyToolTipMessage.format(new Object[] { desc.getName(), desc.getShortDescription() });


OK. So we're passing the name and short description to the MessageFormat instance.

If we scroll up a bit we'll see the pattern that is being applied to this MessageFormat.

core\org\apache\jmeter\testbeans\gui\GenericTestBeanCustomizer.java(283)

propertyToolTipMessage = new MessageFormat(JMeterUtils.getResString("property_tool_tip")); //$NON-NLS-1$


Not quite. We have a little more work to do here. JMeterUtils is being used to fetch resources. Let's have a quick peek in there.

core\org\apache\jmeter\util\JMeterUtils.java(371)

ResourceBundle resBund = ResourceBundle.getBundle("org.apache.jmeter.resources.messages", loc); // $NON-NLS-1$


OK. We're finally there.

core\org\apache\jmeter\resources\messages.properties(696)

property_tool_tip={0}\: {1}


Let's update this to handle html. I don't want it to break anything else, so I'm going to apply the minimum I can get away with.

core\org\apache\jmeter\resources\messages.properties(696)

property_tool_tip=<html><b>{0}</b><br><br>{1}</html>


OK. I told a teenie weenie lie; I didn't need to add the bold tag and the two new lines, but I couldn't help it. Now the properties file looks like this:

autoConnectRetry.shortDescription=If true, the driver will keep trying to connect to the same server in case that the socket cannot be established.<br>There is maximum amount of time to keep retrying, which is 15s by default.<br>This can be useful to avoid some exceptions being thrown when a server is down temporarily by blocking the operations.<br>It also can be useful to smooth the transition to a new master (so that a new master is elected within the retry <br>Note that when using this flag: - for a replica set, the driver will trying to connect to the old master for that time, instead of failing over to the new one right away - this does not prevent exception from being thrown in read/write operations on the socket, which must be handled by application Even if this flag is false, the driver already has mechanisms to automatically recreate broken connections and retry the read <operations. <br><br>Default is false.

I've removed the opening and closing html tags, along with the bolding of the property name and the two new lines, as they are all now present in the format pattern which will be applied to all tooltips across the application. All we need to do when we want any markup within tooltips is simply add it to the property value and it will automagically be rendered. If you don't want markup, then don't add any.



This looks so much better and has the added benefit of addressing the unwanted side effects when using multiple virtual desktops.

I'd imagine this isn't the best way to implement this, but maybe it is; I didn't spend much time on this and I wouldn't be surprised if there is a well documented/clever-er way of achieving the same result without having to hack a global message format pattern.

If you don't like the bold and new lines, then the change to the pattern should be as follows:

core\org\apache\jmeter\resources\messages.properties(696)

property_tool_tip=<html>{0}\: {1}</html>


Saturday 2 February 2013

MongoDB Authentication

I recently updated mongometer to make it a bit more flexible. Shortly after releasing the new version, one of the users fed back an issue via a comment on the post. I booted up my machine, opened up my IDE, found the issue and had pushed the fix out to github within half-an-hour.

This isn't a quick turn-around, success story post. It quickly dawned on me that if I was going to do anything in the future with mongometer, I should really know a little more about how a user authenticates against a database within MongoDB. (I don't want to spend more than an hour or so on this as I've just cracked open a bottle of Nyetimber Classic Cuvee - I'm also cooking a chicken pie (ping me if you want the recipe) - and I'd rather finish this post before I finish the bottle.) Before diving into any documentation that may exist around MongoDB Security, I'll start with a few observations. So in typical man style, let's kick the tyres and then, if required, RTFM.

Start up a mongod instance.

$ /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --fork --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.log
$ ./mongo --port 27001


Create an admin user

> use admin
> db.addUser("mongouser","mongopass")
1


Restart mongod

$ sudo kill -15 $(ps -ef | grep mongo | grep -v grep | cut -f8 -d" ")
$ /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --fork --auth --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.log
$ ./mongo --port 27001


Authenticate to admin

> use admin
switched to db admin
> db.aut("mongouser","mongopass")
Thu Jan 31 13:53:31.271 javascript execution failed (shell):1 TypeError: Property 'aut' of object admin is not a function
db.aut("mongouser","mongopass")
^

> db.aut("mongouser","mongopass")


Ooops. Fat-fingered it. Hang on, I think I've found Issue #1

Issue #1
If an admin user mistypes the auth command and not the credentials, then the actual credentials stay in the shell history, which persists across sessions. Any other user could potentially come along and view the shell history and pick the credentials up.

On the other hand, if the command is correct and either the username or password or both are incorrect, or indeed if the authentication attempt succeeds, then the command is not kept in the history. (The command history for the mongo shell is available in the same way as on a linux box - using the up arrow.)

> db.auth("mongouser","mongopass0")
{ ok: 0.0, errmsg: "auth fails" }
0
> db.auth("mongouser0","mongopass0")
{ ok: 0.0, errmsg: "auth fails" }
0
> db.auth("mongouser0","mongopass")
{ ok: 0.0, errmsg: "auth fails" }
0


Ok. Let's authenticate against admin and continue.

> use admin
switched to db admin
> db.auth("mongouser","mongopass")
1


Oooops. I almost missed one there.

Issue #2
Until the mongod instance is restarted, any user can...

> use admin
switched to db admin
> db.system.users.find()
{ "_id" : ObjectId("510a58c6de50e136190f9ed7"), "user" : "mongouser", "readOnly" : false, "pwd" : "c49caa1cb6b287ff6b1deaeeb8f4d149" }


...grab the usernames and hashes.

So, now that I've restarted the mongod instance, any user is going to have to authenticate against admin to be able to view the contents of system.users.

Now, continuing on from entering incorrect credentials, I'm going to launch a dictionary attack and see what happens. Oh dear. Found another issue.

Issue #3
There is no lock-out. I wrote a quick hack to connect to the mongod instance, to switch over to admin and attempt to log in. Using a rather large dictionary (with "mongopass" tacked on at the end) I attempted to log in over a million times. This was only a crude single-threaded attempt that took around 17 seconds to complete, but it shows that there is no account lock out. I'm confident I could put together a multi-threaded brute-forcer if required. I'll need to look into this further to see if there is any brute forcing/dictionary attack alerting that can be configured or whether there is a lock-out policy that can be applied. I'm not ready to RTFM just yet.
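
The hack was nothing clever; something along these lines, run in the mongo shell, does the same job (a sketch - the word list location is illustrative):

// crude, single-threaded dictionary attack against db.auth() (sketch)
var words = cat("/usr/share/dict/words").split("\n"); // illustrative word list
words.push("mongopass");                              // tack the real password on the end
var admin = db.getSiblingDB("admin");
var start = new Date();
for (var i = 0; i < words.length; i++) {
    if (admin.auth("mongouser", words[i])) {
        print("hit after " + (i + 1) + " attempts: " + words[i]);
        break;
    }
}
print("took " + (new Date() - start) + "ms");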

Let's take a closer look at the format of the password in system.users.

c49caa1cb6b287ff6b1deaeeb8f4d149

That looks like an MD5 to me. Let's take a look in the code, which is available to cruise on github.

Wow! I got lucky straight off the bat. db.js has the following method:

function _hashPassword(username, password) {
    return hex_md5(username + ":mongo:" + password);
}


With hex_md5 then referencing native_hex_md5 within utils.cpp:

void installGlobalUtils( Scope& scope ) {
    scope.injectNative( "hex_md5" , native_hex_md5 );
    scope.injectNative( "version" , native_version );
    scope.injectNative( "sleep" , native_sleep );
    installBenchmarkSystem( scope );
}

static BSONObj native_hex_md5( const BSONObj& args, void* data ) {
    uassert( 10261, "hex_md5 takes a single string argument -- hex_md5(string)",
    args.nFields() == 1 && args.firstElement().type() == String );
    const char * s = args.firstElement().valuestrsafe();

    md5digest d;
    md5_state_t st;
    md5_init(&st);
    md5_append( &st , (const md5_byte_t*)s , strlen( s ) );
    md5_finish(&st, d);

    return BSON( "" << digestToString( d ) );
}
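
So the stored pwd is just an MD5 of username:mongo:password. The shell exposes hex_md5 directly (as injected above), so it's trivial to check against the value we pulled out of system.users earlier:

> hex_md5("mongouser" + ":mongo:" + "mongopass")
c49caa1cb6b287ff6b1deaeeb8f4d149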


Time for a quick recap. Just in case you missed anything:
  1. the hashing algorithm is MD5; my least favourite hashing algorithm.
  2. the string to be hashed is in the form username + ":mongo:" + password; using the same "salt" is non-optimal...
  3. the string :mongo: is global; I'm not really sure why it's there at all tbh.
I think this is probably enough to go with for now, else this will turn into a tl;dr and I may exceed my self imposed time constraints.

Thinking back to any discussions I had with regards to MongoDB, the same statements always arose within the context of Security.
  1. Authentication is off by default.
  2. MongoDB was always meant to be deployed in a trusted environment
I have to say that even with authentication on, we still have some gnarly issues. Further, I don't think a trusted environment exists.

Right then, time to RTFM with regards to Security. I'm hoping to find a roadmap that deals with the issues stated above, or to find that there are already some mitigating steps that can be taken.

So, there are some Authentication features coming out in the near future. It looks like the new authentication features are only available under the MongoDB Subscriber Edition; I'm not sure what that means tbh... I also came across this known issue, which forms the basis for...

Issue #4
"if a user has the same password in multiple databases, the hash will be the same on all database. A malicious user could exploit this to gain access on a second database use a different users’ credentials." [sic]

Let's break that down.

"if a user has the same password in multiple databases, the hash will be the same on all database."

Yes. Correct. Same username, same password and same "salt" (ie the ":mongo:" string) equals same hash. OK, cool, let's move on.

"A malicious user could exploit this to gain access on a second database use a different users’ credentials." [sic]

A malicious user could exploit this if, and only if, they have a non-readonly user on both databases involved.

If they only have readonly access, then they cannot list the system.users collection. In which case they will never see that the hashes are the same across different databases in the first place.

If they are not readonly, then they could list the system.users collection and take the hashed passwords offline to crack.

If the hashes don't match across databases, you're going to have to move into cracking territory. In summary:
  1. the user attribute would have to be the same. The odds of different users on different databases having the same username could be high.
  2. the pwd attribute would have to be the same. The odds of different users creating the same pwd is probably quite high.
  3. the "salt" is the same, so it has no real relevance here.
So the problem here is that a user (that is not readonly) can pull all the password hashes for a given database and take them offline to crack. The malicious user already has the username and the "salt"; all they have to find is the password.

Conclusions

Issue #1
This one is a bit of a pain tbh. When the command is entered correctly (ignoring whether the credentials are correct or not) the command is not shown in the history. When the command is not entered correctly, then it is difficult to know what to exclude from the command history. I guess you could retrospectively remove commands that resulted in errors (ie invalid commands) that preceded the authentication. That is not a solution...

Issue #2
There may be an argument that once the admin user is created in system.users in the admin database, a restart should be forced.

Issue #3
A no-brainer. I've written password policies on multiple occasions (what a fun life I live, eh?); account lock-out is password policy 101.

Issue #4
It seems that creating a "salt" (":mongo:") per database would resolve the issue. Looking at the code, it looks like the implementation is a doddle, a quick and easy win. Adding the option to manually set it would be grand. Implementing a unique "salt" under the covers such that users didn't have to think about it would be equally grand.

So, Nyetimber finished, post finished.

I'm not saying that there is anything in this post that is new or clever; it's a cursory glance. I'm not having a go; everything I've mentioned is merely observation. I install mongo on almost a daily basis because it's a great product; I do, however, like having a balanced view and identifying any elephants in the room. I'd be interested in any feedback.

Monday 21 January 2013

SpiderMonkey to V8, mongometer and the Aggregation Framework

I previously posted a comparison that covered running some simple queries against versions 2.2.2 and 2.3.2 of MongoDB. They were pretty basic examples, I just wanted to demonstrate one of the uses of mongometer; comparing the relative performance of MongoDB releases and the MongoDB scripts you write to run on them.

Now I'm going to knock the complexity up a notch and perform a comparison between the different releases of MongoDB and their underlying JavaScript engines, using another relatively new feature (added in version 2.1): the Aggregation Framework. I'm going to use the example they have in the documentation, mainly so I don't have to make something up.

{
    title : "this is my title" ,
    author : "bob" ,
    posted : new Date () ,
    pageViews : 5 ,
    tags : [ "fun" , "good" , "fun" ] ,
    comments : [
        { author : "joe" , text : "this is cool" } ,
        { author : "sam" , text : "this is bad" }
    ],
    other : { foo : 5 }
}


db.articles.aggregate(
    { $project : {
        author : 1,
        tags : 1,
    } },
    { $unwind : "$tags" },
    { $group : {
        _id : { tags : "$tags" },
        authors : { $addToSet : "$author" }
    } }
);


I'm populating the collection with data in the form as described above. The only thing I'm adding is that I'm using the JMeter CSV Data Set to populate the author attribute.
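
The script in the sampler is essentially just an insert of the document above with the author swapped out for a JMeter variable - a sketch, where ${author} is whatever variable name the CSV Data Set is configured with:

// rough shape of the mongometer script; ${author} comes from the CSV Data Set
db.articles.insert({
    title: "this is my title",
    author: "${author}",
    posted: new Date(),
    pageViews: 5,
    tags: ["fun", "good", "fun"],
    comments: [
        { author: "joe", text: "this is cool" },
        { author: "sam", text: "this is bad" }
    ],
    other: { foo: 5 }
});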

Version 2.2.2
So let's make sure we're starting from a clean slate.

$ /usr/lib/mongodb/2.2.2/bin/mongod --port 27000 --dbpath /data/db/2.2.2 --logpath /data/db/2.2.2/mongod.log

$ ps -ef | grep mongo
4974 /usr/lib/mongodb/2.2.2/bin/mongod --port 27000 --dbpath /data/db/2.2.2 --logpath /data/db/2.2.2/mongod.log

$ ./mongo --port 27000

> show dbs
local 0.078125GB
test (empty)




> show dbs
aggregation 0.203125GB
local 0.078125GB
test (empty)

> use aggregation
switched to db aggregation
> db.dropDatabase()
{ "dropped" : "aggregation", "ok" : 1 }

$ sudo kill -15 4974


Version 2.3.2
Let's ensure we have that same clean slate as with Version 2.2.2.

$ /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.log

$ ps -ef | grep mongo
1463 /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.log

$ ./mongo --port 27001

> show dbs
local 0.078125GB
test (empty)




> show dbs
aggregation 0.203125GB
local 0.078125GB
test (empty)

> use aggregation
switched to db aggregation
> db.dropDatabase()
{ "dropped" : "aggregation", "ok" : 1 }

$ sudo kill -15 1463


Conclusions
I ran this a few times and the results were consistent. I'll knock it up another notch over time, and hopefully draw out some useful conclusions; until then, you can draw your own.

Suggestions and comments welcome.

Sunday 20 January 2013

SpiderMonkey to V8 and mongometer

With 10gen switching the default JavaScript engine for MongoDB 2.3/2.4 from SpiderMonkey to V8, I thought I'd take the opportunity to compare the relative performances of the releases using mongometer. Being a Security bod, I really should have looked at the Additional Authentication Features first... Hey ho.

I'll document the steps taken during the comparison, including the set up, so this can be repeated and validated - just in case anyone is interested - but mainly so I can remind myself of what I did; memory, sieve.

The set up
I'm going to install 2.2.2 and 2.3.2 side-by-side on a dedicated machine. I'll then use the latest version of the Java driver with mongometer.

$ wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.3.2.tgz
$ wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.3.2.tgz.md5

I got a 403 response for this request...

$ wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.2.2.tgz
$ wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.2.2.tgz.md5

$ md5sum -c mongodb-linux-x86_64-2.2.2.tgz.md5
md5sum: mongodb-linux-x86_64-2.2.2.tgz.md5: no properly formatted MD5 checksum lines found


Grrr. An md5 file is supposed to be the checksum, then two spaces, then the filename of the file being checksummed. I'll have to eyeball them instead - well, eyeball the one that I could actually download...

$ md5sum mongodb-linux-x86_64-2.2.2.tgz
be0f5969b0ca23a0a383e4ca2ce50a39 mongodb-linux-x86_64-2.2.2.tgz

$ cat mongodb-linux-x86_64-2.2.2.tgz.md5
be0f5969b0ca23a0a383e4ca2ce50a39
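
They match. For md5sum -c to have been happy, the .md5 file would need to look like this instead (checksum, two spaces, filename):

be0f5969b0ca23a0a383e4ca2ce50a39  mongodb-linux-x86_64-2.2.2.tgz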


Configure
$ tar -zxvf ~/mongodb-linux-x86_64-2.2.2.tgz
$ sudo mkdir -p /usr/lib/mongodb/2.2.2
$ sudo mv mongodb-linux-x86_64-2.2.2/* /usr/lib/mongodb/2.2.2/
$ rm -r mongodb-linux-x86_64-2.2.2
$ sudo mkdir -p /data/db/2.2.2
$ sudo chown `id -un` /data/db/2.2.2
$ /usr/lib/mongodb/2.2.2/bin/mongod --port 27000 --dbpath /data/db/2.2.2 --logpath /data/db/2.2.2/mongod.log

$ tar -zxvf ~/mongodb-linux-x86_64-2.3.2.tgz
$ sudo mkdir -p /usr/lib/mongodb/2.3.2
$ sudo mv mongodb-linux-x86_64-2.3.2/* /usr/lib/mongodb/2.3.2/
$ rm -r mongodb-linux-x86_64-2.3.2
$ sudo mkdir -p /data/db/2.3.2
$ sudo chown `id -un` /data/db/2.3.2
$ /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.log

Let's check they are running.

$ ps -ef | grep mongod
1795 /usr/lib/mongodb/2.2.2/bin/mongod --port 27000 --dbpath /data/db/2.2.2 --logpath /data/db/2.2.2/mongod.log
2059 /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.log


Now, let's kill one (gracefully) and move on to the interesting stuff.

$ sudo kill -15 2059
$ ps -ef | grep mongod
1795 /usr/lib/mongodb/2.2.2/bin/mongod --port 27000 --dbpath /data/db/2.2.2 --logpath /data/db/2.2.2/mongod.log


Now I'm jumping on to another box.

$ wget https://github.com/downloads/mongodb/mongo-java-driver/mongo-2.10.1.jar
$ cp mongo-2.10.1.jar /usr/lib/jmeter/2.8/lib/ext
$ cp ~/IdeaProjects/mongometer/out/artifacts/mongometer_jar/mongometer.jar /usr/lib/jmeter/2.8/lib/ext
$ /usr/lib/jmeter/2.8/bin/jmeter.sh


The tests
The tests are really rather basic; I'll perform an insert into two different databases, and perform finds against those databases.
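
The scripts themselves are nothing more elaborate than something like this - a sketch, with jmeter and jmeter2 selected as the target databases on the samplers and the thread group driving 1000 iterations:

// sampler 1 - insert a small document into the jmeter collection
db.jmeter.insert({ name: "mongometer", run: new Date() });

// sampler 2 - find it again
db.jmeter.find({ name: "mongometer" });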

Version 2.2.2

> show dbs
local 0.078125GB

> show dbs
jmeter 0.203125GB
jmeter2 0.203125GB
local 0.078125GB

> use jmeter
> db.jmeter.find().count()
1000
> db.dropDatabase()

> use jmeter2
> db.jmeter.find().count()
1000
> db.dropDatabase()

$ ps -ef | grep mongo
2690 /usr/lib/mongodb/2.2.2/bin/mongod --port 27000 --dbpath /data/db/2.2.2 --logpath /data/db/2.2.2/mongod.log

$ sudo kill -15 2690
$ ps -ef | grep mongo

Nothing. Let's get the 2.3.2 instance up and running.

$ /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.log

$ ps -ef | grep mongo
2947 /usr/lib/mongodb/2.3.2/bin/mongod --port 27001 --dbpath /data/db/2.3.2 --logpath /data/db/2.3.2/mongod.log


Version 2.3.2

> show dbs
local 0.078125GB

> show dbs
jmeter 0.203125GB
jmeter2 0.203125GB
local 0.078125GB

> use jmeter
> db.jmeter.find().count()
1000
> db.dropDatabase()

> use jmeter2
> db.jmeter.find().count()
1000
> db.dropDatabase()


Conclusions
I guess you should draw your own. I ran this a couple of times and am considering scripting it so the environments are cleaned down prior to each run; I could probably add more complex queries too. Perhaps if I find some time next weekend then I will.

If you have any suggestions, please leave a comment.

Saturday 19 January 2013

mongometer v2.0

A while back I knocked up mongometer to compare the relative performance of MongoDB scripts. I then made some minor changes, and since then - and only recently - made additional changes based on feedback.

I've now made slightly more significant changes, which I'll cover briefly.

The minimum JMeter version is now 2.8; this is because mongometer depends on a change introduced in JMeter 2.8.

I've created a MongoSourceElement which is accessible from the context menu: Add -> Config Element -> MongoDB Source Config

This pulls all the MongoDB connection details up a level and allows you to share them between multiple MongoScriptSamplers. This means you only need to define the connection once and associate a source name with the instance. When you create a MongoScriptSampler you can then reference the source name as defined in MongoSourceElement.

MongoScriptSampler now only includes the fields that you need. It allows you to specify the mongo source, the database, the database credentials and of course the script to run.

The way I'm currently using it is to create a MongoSourceElement.
Under that I create a ThreadGroup.
And under that I add the MongoDB Script v2.0, View Results Tree, and Graph Results.

If I have multiple scripts to run, I use the same grouping of MongoDB Script v2.0, View Results Tree, and Graph Results. See the image below for an example.



There is the added MongoUtils class, which has a static method that allows you to specify multiple hosts (if you want), breaks them out into host and port, and returns a ServerAddress - which, to be honest, I'm surprised isn't available under the Java MongoDB driver.

The other semi-useful addition is the QuickEnvironment. It basically allows you to start up mongod, mongo, jmeter and a tail of the jmeter log. You could do this in a script, but hey ho, I did it in a helper class.

The latest version is available on github.

Please have a play, try and break it, make suggestions, give me feedback.