Cloud databases: why a new approach will improve performance

Advice Nick Booth Jan 21, 2013

Relational databases don't do the cloud. But a new model of cloud data management might change that

The virtualisation of processing power and memory may be a long-established concept dating back to the mainframe, but the cloud has a problem with data.

The relational database was defined in 1970 for a different world, when data was comparatively standardised, transactions were all pretty straightforward and databases were small enough that everyone could afford to wait in line while changes to each record were processed in turn.

In the 1980s a gigabyte-sized database was the stuff of science fiction. Just as an IBM executive once predicted that the world would only need five computers, Tandem's Jim Gray wrote a white paper in 1985 predicting that no computer would ever need to do more than 1,000 transactions per second. Facebook already needs to crunch out a million a second. How do you scale up when your database foundation is so rigid?

Needless to say, the relational databases built on a basic design that stems from that era are never going to be particularly cloud friendly. Today's vendors seem more intent on working around the limitations of the relational database (RDB) than on biting the bullet and creating a new foundation, according to analyst Clive Longbottom.

The databases in most data centres are big but they're not clever, says Longbottom. “They are not particularly cloud friendly or designed for modern data needs. This is why we have Oracle doing stuff with Exadata, IBM with PureData and EMC with Greenplum,” he says. They are all missing the point, he argues. The clue is in the name: database. This is something that the NoSQL brigade (such as MongoDB, Cassandra and CouchDB) and the Hadoop/Hana crowd have missed, to a certain extent.

Cloud providers need to cater for the five Vs of data: volume, variety, veracity, value and velocity, and that calls for a fundamentally different approach to how records are updated and processed. At the moment, that means creating a mixed SQL/NoSQL environment to meet these needs. The Hadoop/Hana approach is to create massively powerful machines that use expensive resources, such as in-memory processing, to plough their way through huge amounts of data.
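To make that split concrete, here is a minimal sketch of what a mixed SQL/NoSQL set-up can look like in practice: structured, transactional records kept in a relational store, while variable, fast-arriving event documents go to a document store. It is a sketch under assumptions, not a prescription: it uses Python, with the standard-library sqlite3 module standing in for the SQL side and the pymongo driver talking to a local MongoDB instance on its default port; the table, collection and field names are purely illustrative.

```python
# Illustrative sketch of a mixed SQL/NoSQL environment (assumed names throughout):
# structured, transactional records go to a relational store (SQLite here, standing
# in for any SQL database), while schema-less, high-velocity event documents go to
# a document store (MongoDB via pymongo -- assumes a local mongod on the default port).
import sqlite3
from datetime import datetime, timezone

from pymongo import MongoClient

# Relational side: fixed schema, ACID transaction around the order record.
sql = sqlite3.connect("orders.db")
sql.execute(
    "CREATE TABLE IF NOT EXISTS orders ("
    "  id INTEGER PRIMARY KEY, customer TEXT NOT NULL, total_pence INTEGER NOT NULL)"
)
with sql:  # commits on success, rolls back on error
    sql.execute(
        "INSERT INTO orders (customer, total_pence) VALUES (?, ?)",
        ("Acme Ltd", 4999),
    )

# Document side: events of varying shape, written at high velocity, no schema change needed.
events = MongoClient()["shop"]["clickstream"]
events.insert_one({
    "type": "page_view",
    "path": "/products/42",
    "ts": datetime.now(timezone.utc),
    "device": {"os": "Android", "screen": "1080x1920"},  # nested, optional fields
})
```

The point of the split is that the relational side keeps its strict schema and transactional guarantees, while the document side absorbs the volume, variety and velocity without constant schema changes.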

Now, one of the original architects of the RDB claims to have created a new database technology (NuoDB) that can handle large volumes of data at high velocity far more efficiently, at a fraction of the cost of previous attempts to launch data into the cloud. Jim Starkey has been involved in the Internet since working on the Datacomputer project on the fledgling ARPAnet (what some might describe as the daddy of the Internet).

Having created the DEC Standard Relational Interface architecture and its first RDB products (Rdb/ELN), and served as software architect for DEC's database machine group, he knows the limitations of the RDB from the ground up. He later tried to fix them with the launch of Interbase Software, which produced the first commercial implementations of heterogeneous networking, blobs, triggers, two-phase commit and database events.

But Starkey concluded that the only answer was to start again, which is why he created the NuoDB Emergent Architecture. Co-founder Barry Morris says it will overcome the limitations of traditional databases by creating a new foundation for cloud data management. It has been beta-tested by 3,500 companies over the last year and was officially launched on 15 January.