Tuesday, 13 December 2011

Unsupported major.minor version 51.0

I've been away from the coalface, as it were, for a couple of years, only returning to it in the past few months. And I have to say, not much has changed and some things never change. One of those things is the denvercoder9 moment, and that is the reason I started to blog; I feel the pain so you don't have to.

Let me set the scene briefly. I have full visibility of, and total control over my development environment; happy days. However, when it comes to UAT and Production environments I'm flying completely blind and at times I feel like I'm performing keyhole surgery with only a sledge hammer and a forklift, oh, and no keyhole.

I spent some time putting together a stack that I could use as a template, a reusable pattern where, from application to application, I could recycle the initial pipework and pretty much only change the data model. What I needed was something completely stateless; all I really needed was CRUD, so I opted for a REST-based approach, and for me that meant the reference implementation, Jersey. The other thing was that I strove to make it as simple to deploy as possible; essentially a WAR to be dropped into the application server that could then update the persistence layer when the application server started up.
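To make that concrete, a skeleton resource class along those lines might look something like the following sketch; the class name, paths and bodies are hypothetical, but the annotations are standard JAX-RS/Jersey.

    import javax.ws.rs.*;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    // Hypothetical CRUD resource; from application to application
    // only the data model behind it changes.
    @Path("/widgets")
    @Produces(MediaType.APPLICATION_JSON)
    @Consumes(MediaType.APPLICATION_JSON)
    public class WidgetResource {

        @GET
        @Path("/{id}")
        public Response read(@PathParam("id") String id) {
            // look the document up in the persistence layer (omitted)
            return Response.ok("{}").build();
        }

        @POST
        public Response create(String json) {
            // create the document, then report 201 Created
            return Response.status(Response.Status.CREATED).build();
        }

        @PUT
        @Path("/{id}")
        public Response update(@PathParam("id") String id, String json) {
            return Response.noContent().build();
        }

        @DELETE
        @Path("/{id}")
        public Response delete(@PathParam("id") String id) {
            return Response.noContent().build();
        }
    }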

Everything went swimmingly. Well, that's a bit of a lie: I found that some browsers still do not support PUT and DELETE.

RESTful Browser Verb Check

Browser  | GET | POST | PUT             | DELETE
FF7      | Yes | Yes  | Yes             | Yes
Chrome14 | Yes | Yes  | Yes             | Yes
IE8      | Yes | Yes  | Converts to GET | Converts to GET


Anyhoo, I readied myself to deploy to the UAT environment. This consisted of listing every component and detailing every step of the configuration. As this was the bedding-in phase I needed the entire environment configured, which included the installation of the JRE, application server, database and the application. This could have been smoother if I had access to the boxes, but nevertheless we got there in the end.

So now I have my environment set up and my application deployed, let's kick the tyres. To my horror I got a 500 back from the service. Dang. I didn't have access to the logs and the only person with access was on leave that day. So I waited.

After a bit more waiting, I finally got hold of the logs and found the culprit; the little blighter was sticking out like a strawberry in a bowl of peas.


I've seen you before, I thought, and fired off a mail to the box admin (who was WFH; I'd ideally walk round and discuss face-to-face) to verify the JRE running on the UAT box. Some time later it was confirmed UAT was running JDK-Y. I was using JDK-X, so the UAT environment was running an earlier version than the one I built the WAR with - makes perfect sense and matches the exception exactly. Fine:

  • Download/Install JDK-Y
  • Change JDK-X to JDK-Y (Preferences -> Java -> Compiler)
  • Clean project
  • Rebuild
  • Redeploy locally
  • Redeploy UAT
Same 500 came back. Pants. Scratch of the head.

  • Change the JRE System Library (Right-click project -> Properties -> Java Build Path -> JRE System Library -- double-click and change to JDK-Y)
  • Clean project
  • Rebuild
  • Redeploy locally
  • Redeploy UAT
Same 500 came back. Starting to get a little annoyed now.

I Googled it. Every post said the same thing about the runtime and compile time JDKs being different, but I know that and I've told Eclipse. I checked that the Jersey binaries were built with Java SE Y. I've told Eclipse I want to use JDK-Y. How many times do I have to tell it? OK, I thought, everywhere I see JDK-X, I'll change it to JDK-Y, even if it seems irrelevant.
  • Change Target Platform (Window -> Preferences -> Plug-in Development -> Target Platform -- double-click and change JRE name under the Environment tab)
  • Change Project Facet (Right-click project -> Properties -> Project Facets -> Java -> Select JDK-Y from drop down)
  • Clean project
  • Rebuild
  • Redeploy locally
  • Redeploy UAT

SUCCESS

The moral of the story is, well, I don't really know. What I do know is that it is annoying to have to tell Eclipse four times, in four totally different locations, that you want to use a different JRE, and that it's difficult to debug an application in an environment you don't have access to.

Monday, 12 December 2011

Testing your document structure for inconsistencies within MongoDB - Part II

In a previous post, Testing your document structure for inconsistencies within MongoDB, we looked at how to validate your collection for structural consistency. We did this by creating a DBObject with the 'template' document structure and comparing it against the relevant collection.
It's not exactly a leap of faith to extend that: rather than code the template, we can define it in a json document. Now all we need to do is pass the json document and the collection name as parameters, saving a recompile for each and every test. Happy days.
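A rough sketch of how that might look with the Java driver of the time (2.x); the file reading, DB name and argument handling here are illustrative assumptions.

    import java.io.File;
    import java.util.Scanner;

    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;
    import com.mongodb.Mongo;
    import com.mongodb.util.JSON;

    public class TemplateConsistencyTest {

        // The template file and collection name arrive as parameters,
        // so there is no recompile per test.
        public static void main(String[] args) throws Exception {
            String templateJson = new Scanner(new File(args[0])).useDelimiter("\\Z").next();
            DBObject template = (DBObject) JSON.parse(templateJson);

            DBCollection collection = new Mongo().getDB("mydb").getCollection(args[1]);

            // ...then diff each document's keys against template.keySet(),
            // exactly as in the original post.
        }
    }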




RFC 6455

There is a fair bit of chat going around regarding RFC 6455, better known as WebSockets, http://tools.ietf.org/html/rfc6455. Wikipedia has already updated its page to reflect the publication, http://en.wikipedia.org/wiki/WebSocket, so there is little point going into it too much; we'll just focus on the key connection upgrade differences.

Connection Upgrade
Sec-WebSocket-Key1 and Sec-WebSocket-Key2, two 8-byte random tokens used by the server to prove to the client that it has read the client request, have been replaced with Sec-WebSocket-Key, a single 16-byte random token which is now base64 encoded prior to sending.
The corresponding change on the server side is that it now takes that base64 string and concatenates the WebSocket GUID, 258EAFA5-E914-47DA-95CA-C5AB0DC85B11, which is defined in the RFC itself. Wikipedia calls this a "magic string"; there is nothing magic about it - it's essentially just a salt. Now that you have the concatenated string, you hash it (the current choice is SHA-1, a change from MD5), base64 encode the result and reply to the request with a status code of 101 - Switching Protocols.
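As a quick illustration, computing the Sec-WebSocket-Accept value in Java looks roughly like this; the sample key and expected output are the ones given in the RFC, and java.util.Base64 assumes Java 8 or later.

    import java.security.MessageDigest;
    import java.util.Base64; // Java 8+; older JDKs can use e.g. commons-codec instead

    public class WebSocketAccept {

        private static final String GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";

        // Concatenate the client's base64 key with the GUID, SHA-1 it, base64 encode the digest
        public static String accept(String secWebSocketKey) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-1")
                    .digest((secWebSocketKey + GUID).getBytes("UTF-8"));
            return Base64.getEncoder().encodeToString(digest);
        }

        public static void main(String[] args) throws Exception {
            // Sample key from the RFC; prints s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
            System.out.println(accept("dGhlIHNhbXBsZSBub25jZQ=="));
        }
    }

That value goes back to the client in the Sec-WebSocket-Accept header of the 101 response.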

Friday, 9 December 2011

jQuery and dot notation

I'm still a relative noob when it comes to jQuery, so I'm picking bits up all the time. The latest nugget is related to a previous post on converting an HTML form to JSON, which included the parsing of nested structures.
When it comes to populating a form with a nested structure there is potential to fall into a rabbit hole. Thankfully in this case the foxes had scared off the rabbits before they had the chance to dig too deeply.
The original issue manifested itself upon requesting a JSON resource with a nested structure from a service and populating a form. Almost the opposite of the post linked to above. I went through the process of ensuring the JSON was valid, that the path in the document matched the dot notation of the form and that the field had a value associated with it. The source presented below is a simplified test case designed to isolate the issue. What you see here is a form that is populated when the button is clicked. It looks rather innocent; however, the "meh.feh" field doesn't get populated - exactly what I had seen in the original issue.

Simplified example:

jQuery is treating the "." as CSS notation (i.e. a class selector) rather than as a literal character in the field name. The solution? Escape it with a double backslash, for example $("#meh\\.feh") instead of $("#meh.feh").

Simple fix:

For more information check the jQuery FAQ

Thursday, 8 December 2011

Updating a document using the default ObjectId

Here are a few facts about keys and the way MongoDB deals with them.
  • Documents in MongoDB require a key, _id, which uniquely identifies them.  
  • This is a 12-byte binary value; read more from the source of truth, ObjectId.
  • When inserting a new document, MongoDB will generate an ObjectId where one is not specified.
  • There is a 'reasonable' chance that this will be unique at the time of creation.  
  • All of the officially-supported MongoDB drivers use this type by default for _id values.

An annoyance I had with MongoDB, when I first started to use it, was the explicit update required on the default _id key/value pair. To illustrate, here is a concrete example of what I mean.

I wanted to do this:
But I had to do this:
In this case, json is a JSON document passed from an HTML form via HTTP PUT, eventually ending up here at the DAO. I was using the default _id implementation and thus would have expected save to figure that out and do the necessary work for me.
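Roughly, the difference looks like this with the 2.x Java driver; the DAO class and parameter names are my own invention.

    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;
    import com.mongodb.util.JSON;
    import org.bson.types.ObjectId;

    public class WidgetDAO {

        private DBCollection collection; // initialised elsewhere

        public void update(String id, String json) {
            // What I wanted to do:
            // collection.save((DBObject) JSON.parse(json));

            // What I had to do - explicitly put the _id back onto the parsed document:
            DBObject document = (DBObject) JSON.parse(json);
            document.put("_id", new ObjectId(id));
            collection.save(document);
        }
    }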

I just can't live with those extraneous lines of code; it's too messy and too much to type every time. I'll pull it out into a fromJson call, as I can see the pattern emerging and the odds of this recurring are high.

I must have missed something in the API docs as this cannot be an unusual requirement. I would at least have expected to see it in a Util class...

Factory snippet:
Usage snippet:
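Pulled out into a factory, it looks something like this sketch (the class name is invented):

    import com.mongodb.DBObject;
    import com.mongodb.util.JSON;
    import org.bson.types.ObjectId;

    public final class DBObjects {

        private DBObjects() {
        }

        // Parse the incoming json and re-attach the ObjectId so that save()
        // updates the existing document rather than inserting a new one.
        public static DBObject fromJson(String json, String id) {
            DBObject document = (DBObject) JSON.parse(json);
            document.put("_id", new ObjectId(id));
            return document;
        }
    }

Usage from the DAO then collapses to a one-liner along the lines of collection.save(DBObjects.fromJson(json, id)).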

I get to do what I wanted to now, but it still feels wrong.  It feels a little bit dirty.  It feels like a hack-o-la.  How can I do it better?

Wednesday, 7 December 2011

Tomcat and MongoDB

When I started with MongoDB I wanted to use it with everybody's old favourite application server, Tomcat.
There is no point reinventing the wheel so I hunted for usage examples.  I struggled to find any best practice implementation, so I decided to hack my own together.

A requirement I had was that I wanted to be able to deploy the web app to a range of different environments, so I needed the service to be flexible. The simplest way I could think of was pulling the config from the application's web.xml file.


The Mongo object instance represents a pool of connections to the database so you will only need one object of class Mongo even with multiple threads.


web.xml snippet:


MongoService snippet:
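The service ends up looking roughly like this; a sketch against the 2.x driver, and the context-param names (mongo.host, mongo.port) are my own.

    import javax.servlet.ServletContext;

    import com.mongodb.DB;
    import com.mongodb.Mongo;

    // Lazily creates a single Mongo instance (itself a pool of connections)
    // using host and port pulled from web.xml context-params.
    public class MongoService {

        private static Mongo mongo;

        public static synchronized DB getDB(ServletContext context, String dbName) throws Exception {
            if (mongo == null) {
                String host = context.getInitParameter("mongo.host");
                int port = Integer.parseInt(context.getInitParameter("mongo.port"));
                mongo = new Mongo(host, port);
            }
            return mongo.getDB(dbName);
        }
    }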


Then from the data access object you can specify and authenticate to the relevant DB.

XxxDAO snippet:
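And from the DAO side, reusing the service sketched above (the credentials are hard-coded purely for illustration; in practice they would come from config):

    import javax.servlet.ServletContext;

    import com.mongodb.DB;
    import com.mongodb.DBCollection;

    public class XxxDAO {

        private final DBCollection collection;

        public XxxDAO(ServletContext context) throws Exception {
            DB db = MongoService.getDB(context, "mydb");
            // Authentication is per-DB in the 2.x driver
            db.authenticate("appuser", "secret".toCharArray());
            collection = db.getCollection("widgets");
        }
    }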


This has been working like a dream for me, proving to be an extremely convenient way to connect to Mongo.

Tuesday, 6 December 2011

Convert HTML form to JSON and POST using jQuery

Reinventing the wheel is a pet hate of mine; I could list quite a few actually, but that's not the point of this post.  Sometimes it takes me so long to look for a wheel that I think has or should have already been invented that it would have been easier just to invent the wheel myself.

Recently I was playing around with jQuery, pulling fields from a form and posting them as a JSON object to a set of services. Nothing fancy, nothing difficult there. However, when the data model started to mature beyond the most basic structure I soon realised I needed an easy, repeatable way to pull structured/nested data from a form and convert it into a JSON object prior to posting.

To create the following structure:


You will need to create the following references:


This is a complete working example that will parse the form, create the JSON object, POST it to a service and display the returned object in a table.



The form above will produce the following JSON.


Jersey Annotation

Is your application server throwing up the following warning at startup?
WARNING: A sub-resource method, public java.lang.String PATH_TO_METHOD, with URI template, "/", is treated as a resource method
Then the most likely cause is that you have already defined a @Path("/blah") for the class. There is no need to specify @Path("/"); using @GET at the method level is enough to tell Jersey that it is the default method for the entire class.
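In other words, something like this minimal sketch is all that's needed:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    @Path("/blah")
    public class BlahResource {

        // No @Path("/") here; @GET alone marks this as the default
        // resource method for /blah.
        @GET
        public String get() {
            return "blah";
        }
    }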

Testing your document structure for inconsistencies within MongoDB

One of the advantages of schema-less design is that it works well for prototyping; you can have a collection of documents, each of variable structure. You can modify the document structure for one, some or all documents within the collection, all without requiring a schema for the collection or for each and every document.
However, this is also a disadvantage during prototyping; there are no constraints to stop documents within the same collection having variable structure. Deliberate updates to a document succeed silently, as do accidental ones, i.e. when you update a document with a subdocument hanging off the wrong node. So when you assume you have consistency across all the documents within a collection, but don't, you will run into some issues. You could also argue here that you're not coding defensively enough if you're not checking consistency at the time of execution; I'm not going to go into that right now though.

That exact structure inconsistency happened to me, and I ended up going down a rabbit hole. The smart thing to do was to blat the DB and recreate it during each test, but there were reasons that I didn't do that, which again I'm not going to go into here. Additionally, the error that was coming back from performing an operation on the inconsistent structure wasn't obvious and didn't indicate to me that there were document structure inconsistencies, but that's another story.
Anyhoo, I didn't want this to happen again, so to verify the structural consistency of a collection I now pull in a json template - an example of the structure of the document I'm expecting to find within the collection I'm working with - and compare it to the collection in the DB. You can define your template in a json/txt file or you can manually create the DBObject. Simply put, I perform a symmetrical diff on the documents contained in the collection(s) I'm working with, reporting on any additional field not defined in the template and on any field that is defined in the template but not in the document.

This example creates a DBObject with a few NVPs at the root, a couple as subdocument NVPs and finally an array.



We use this 'template' to compare against the documents within the tests collection.
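A sketch of the whole thing with the 2.x Java driver; the field names are illustrative, and only root-level keys are diffed here (nested fields would need a recursive walk).

    import java.util.HashSet;
    import java.util.Set;

    import com.mongodb.BasicDBList;
    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.DBCursor;
    import com.mongodb.DBObject;
    import com.mongodb.Mongo;

    public class ConsistencyCheck {

        public static void main(String[] args) throws Exception {
            // Template: a few NVPs at the root, a subdocument and an array (names made up)
            BasicDBList tags = new BasicDBList();
            tags.add("one");
            tags.add("two");
            DBObject template = new BasicDBObject("name", "example")
                    .append("count", 0)
                    .append("address", new BasicDBObject("street", "x").append("town", "y"))
                    .append("tags", tags);

            DBCollection tests = new Mongo().getDB("mydb").getCollection("tests");
            DBCursor cursor = tests.find();
            while (cursor.hasNext()) {
                DBObject document = cursor.next();
                // Symmetrical diff on the root-level keys
                Set<String> extra = new HashSet<String>(document.keySet());
                extra.removeAll(template.keySet());
                Set<String> missing = new HashSet<String>(template.keySet());
                missing.removeAll(document.keySet());
                if (!extra.isEmpty() || !missing.isEmpty()) {
                    System.out.println(document.get("_id") + ": extra=" + extra + ", missing=" + missing);
                }
            }
        }
    }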

This is all very lightweight but as a method to verify crude consistency it is very handy, for me anyway.


Monday, 5 December 2011

Security and MongoDB

The MongoDB Security Model has some scope for improvement.
  1. By default, authentication is off. MongoDB is designed to run in a 'trusted' environment, depending on the network configuration to ensure the security of the environment.
  2. Pre-v2, authentication is not supported within a sharded deployment.
  3. Once authenticated, a user has full read-write access to the entire DB. There is no concept of roles or groups or suchlike.
Now, I wanted to deploy with Security turned up to the max, so here I will present a practical example.
This is a typical set-up I'd use for my development environment; separate machines each running mongod.

Firstly, let's set up the replicaset.
On the primary we can define and run the replicaset config:

Nothing fancy here, all standard stuff. After a few minutes the dust settles and the primary and secondary identify themselves. You can check the status of the replicaset with:
Now, on each box in the replicaset you'll need to create a key (must be valid Base64 and 1K or less):
You can now bring down both instances. Either hit ctrl+c or do it the right way:

Now we can start each of the mongod instances with the keyFile:
Word of warning here. I spent a significant amount of time trying to set this up on v2.0.0 - yes, I know it is a bit dumb to go with an x.0.0 version, but you'd think that something as basic/fundamental as this would be thoroughly tested. Well, suffice to say I ended up moving to the latest binary to get this basic functionality to work. It was annoying at the time, but it certainly made me read the available documentation multiple times.
So now on the primary we can create the admin user aka root, super...
As we have a replicaset, these users will exist in admin and mydb on both instances.
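From the application side, connecting to the now-secured replicaset via the 2.x Java driver looks roughly like this; the host names, credentials and DB are placeholders.

    import java.util.Arrays;

    import com.mongodb.DB;
    import com.mongodb.Mongo;
    import com.mongodb.ServerAddress;

    public class SecureConnect {

        public static void main(String[] args) throws Exception {
            // Seed list for the replicaset; the driver discovers the other members
            Mongo mongo = new Mongo(Arrays.asList(
                    new ServerAddress("mongo-primary", 27017),
                    new ServerAddress("mongo-secondary", 27017)));

            // With auth enabled, each DB must be authenticated against explicitly
            DB mydb = mongo.getDB("mydb");
            boolean ok = mydb.authenticate("appuser", "secret".toCharArray());
            System.out.println("authenticated: " + ok);
        }
    }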

This whole process should take about 5 minutes to configure provided you're using a stable, well tested version.
I got unlucky and wasted hours on a buggy version grappling with errors that I couldn't decipher, feeling a bit denvercoder9 (http://xkcd.com/979/).
I went through the pain so you don't have to...

Sunday, 4 December 2011

A simple guide to finding distinct array values in a MongoDB collection

So you want to find unique values within an array within a document in a collection? A reasonable request.
In ANSI SQL you'll be using DISTINCT, JOINS and GROUP BYs, stuff you're used to, but in the NoSQL realm your best bet is mapreduce.
It might seem a little bit like hard work, and probably a little intimidating at first, but it is certainly worth it; mapreduce is an extremely powerful tool.

Set up the collection, the map and reduce functions, and execute the mapreduce command:



But now you want to find unique values within an array within each document in an entire collection.


The map function in this example iterates over each of the items and emits the key/value of each array element.
The reduce function aggregates the key/value from each of the emits from the map function. In this example we're looking at unique keys and maintaining a count of the unique keys.
If you're looking to find the distinct array elements for a single document, simply restrict the query to that document. For the entire collection, just leave the query out. *simples*
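For completeness, the same idea can be driven from the 2.x Java driver by passing the map and reduce functions in as strings; the array field name 'items' and the output collection name here are assumptions.

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;
    import com.mongodb.MapReduceOutput;
    import com.mongodb.Mongo;

    public class DistinctArrayValues {

        public static void main(String[] args) throws Exception {
            DBCollection collection = new Mongo().getDB("mydb").getCollection("mycollection");

            // Emit each array element, then count the emits per distinct value
            String map = "function() { this.items.forEach(function(item) { emit(item, 1); }); }";
            String reduce = "function(key, values) { var count = 0; "
                    + "values.forEach(function(v) { count += v; }); return count; }";

            // An empty query covers the whole collection; pass e.g. {_id: ...} instead
            // to restrict the job to a single document
            MapReduceOutput out = collection.mapReduce(map, reduce, "items_distinct", new BasicDBObject());
            for (DBObject result : out.results()) {
                System.out.println(result);
            }
        }
    }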

Saturday, 3 December 2011

Simple mongodb equivalents

After making a noob error today, I realised that some of the documentation could use a boost.

By default mongo returns the entire document.




All fields for the matched document will be returned

And in Java
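Along these lines (the collection name and query are placeholders):

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;
    import com.mongodb.Mongo;

    public class FindAllFields {

        public static void main(String[] args) throws Exception {
            DBCollection collection = new Mongo().getDB("mydb").getCollection("mycollection");

            // No field specifier, so the whole matched document comes back
            DBObject document = collection.findOne(new BasicDBObject("name", "example"));
            System.out.println(document);
        }
    }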




To return only certain fields, you need to let mongo know which ones you want



This will only return the fields meh and feh from the matched document

And in Java
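Much the same, but this time passing a field specifier as the second argument (names are placeholders again):

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;
    import com.mongodb.Mongo;

    public class FindSomeFields {

        public static void main(String[] args) throws Exception {
            DBCollection collection = new Mongo().getDB("mydb").getCollection("mycollection");

            // 1 includes a field; _id comes back as well unless explicitly excluded
            DBObject fields = new BasicDBObject("meh", 1).append("feh", 1);
            DBObject document = collection.findOne(new BasicDBObject("name", "example"), fields);
            System.out.println(document);
        }
    }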



You can also say 'all fields except for...'




This will return all the fields in the matched document except for meh

And in Java
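And the exclusion case, again as a placeholder-named sketch:

    import com.mongodb.BasicDBObject;
    import com.mongodb.DBCollection;
    import com.mongodb.DBObject;
    import com.mongodb.Mongo;

    public class FindAllBarOne {

        public static void main(String[] args) throws Exception {
            DBCollection collection = new Mongo().getDB("mydb").getCollection("mycollection");

            // 0 excludes a field; everything else in the matched document is returned
            DBObject fields = new BasicDBObject("meh", 0);
            DBObject document = collection.findOne(new BasicDBObject("name", "example"), fields);
            System.out.println(document);
        }
    }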