
Riak vs MongoDB vs MySQL Performance Tests 1



Environment 1
CPU : Intel(R) Xeon(R) X3430 @ 2.40GHz
Cores : 4
Cache : 8 MB
Memory : 6GB
ulimit : 1024
OS : CentOS 5.4

Riak : 1.1
MySQL : 5.1 (highly tuned system)

Test data :
Object : username, email, id

Both tests were run from Java clients.

To make this test a little more meaningful, I've sent each read/write command to MySQL in its own request.
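The per-request methodology above can be sketched as a small timing harness. This is a minimal, self-contained illustration, not the actual test code (which is linked at the bottom of the post): the in-memory map is a stand-in, and a real run would issue one JDBC, Riak, or MongoDB request per operation inside the loop.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.IntConsumer;

public class PerfHarness {
    // Times n sequential operations, issuing them one at a time,
    // mirroring the one-request-per-command methodology described above.
    static long timeMillis(int n, IntConsumer op) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < n; i++) {
            op.accept(i);
        }
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        // Stand-in store; a real run would replace this with a
        // per-operation database request (JDBC / Riak / Mongo client).
        Map<Integer, String> store = new HashMap<>();
        for (int n : new int[]{50, 100, 200, 500, 1000}) {
            store.clear();
            long t = timeMillis(n,
                    i -> store.put(i, "user" + i + ",user" + i + "@example.com"));
            System.out.println("inserts : " + n + ", time : " + t);
        }
    }
}
```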

MySQL Single Node (time in ms)

 inserts : 50, time : 871
 inserts : 100, time : 324
 inserts : 200, time : 835
 inserts : 500, time : 1936
 inserts : 1000, time : 3275


 gets : 50, time : 55
 gets : 100, time : 60
 gets : 200, time : 119
 gets : 500, time : 304
 gets : 1000, time : 582


Riak Single Node (the recommended minimum is 3 nodes)

 inserts : 50, time : 461
 inserts : 100, time : 486
 inserts : 200, time : 1473
 inserts : 500, time : 3609
 inserts : 1000, time : 6442


 gets : 50, time : 2296
 gets : 100, time : 4501
 gets : 200, time : 9028
 gets : 500, time : 22496
 gets : 1000, time : 44981



Environment 2
CPU : Intel(R) Core(TM)2 Duo P8600 @ 2.40GHz
Cores : 2
Memory : 4GB
ulimit : 1024
OS : MacOS 10.6


Riak : 1.1
MySQL : 5.1 (highly tuned system)


Test data :
Object : username, email, id (No constraints, No indexes)

To make this test a little more meaningful, I've sent each read/write command to MySQL in its own request.

MySQL Single Node (time in ms)
 inserts : 50, time : 1606
 inserts : 100, time : 288
 inserts : 200, time : 515
 inserts : 500, time : 1277
 inserts : 1000, time : 2399


 gets : 50, time : 46
 gets : 100, time : 38
 gets : 200, time : 69
 gets : 500, time : 245
 gets : 1000, time : 753



Riak Single Node

 inserts : 50, time : 1032
 inserts : 100, time : 1389
 inserts : 200, time : 2555
 inserts : 500, time : 6408
 inserts : 1000, time : 10777


 gets : 50, time : 654
 gets : 100, time : 1308
 gets : 200, time : 2157
 gets : 500, time : 5062
 gets : 1000, time : 8722


MongoDB
 inserts : 150, time : 165
 inserts : 10000, time : 2623
 inserts : 20000, time : 2128
 inserts : 50000, time : 2926
 inserts : 100000, time : 5833
 inserts : 200000, time : 16862
 inserts : 500000, time : 29535
 inserts : 1000000, time : 69929


Database size and count
Database size : 0.953125 GB
Object count : 2,083,700
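From the size and count above, each stored object averages a little under 500 bytes on disk, which includes MongoDB's per-document overhead on top of the three fields. A quick sketch of that arithmetic:

```java
public class AvgObjectSize {
    // Average on-disk bytes per stored object.
    static long avgBytesPerObject(double dbSizeGb, long objectCount) {
        long dbBytes = (long) (dbSizeGb * 1024 * 1024 * 1024);
        return dbBytes / objectCount;
    }

    public static void main(String[] args) {
        // 0.953125 GB over 2,083,700 objects, from the figures above.
        System.out.println("avg bytes/object: "
                + avgBytesPerObject(0.953125, 2_083_700L)); // roughly 491
    }
}
```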


 gets : 150, time : 58
 gets : 1100, time : 109
 gets : 2200, time : 193
 gets : 3500, time : 768
 gets : 5000, time : 1977




Test code
https://github.com/intesar/Riak-Perf-1
https://github.com/intesar/MySQL-Perf1 
https://github.com/intesar/MongoDB-Perf1 



Comments

Findings said…
Hi Intesar,

This kind of technology performance comparison are really valuable...

If you allow me just a cosmetic suggestion: Presenting the results in table format would allow visualizing them better.

It might be time-consuming. I know
Thank you
jD @ http://pragmatikroo.blogspot.com
Anonymous said…
By using a single Riak node, you're still sharding x3 ... just on a single piece of hardware.

Thus, your numbers are 3x higher using Riak than they should be.

When doing tests, make sure you're taking a 'tuned' vs 'tuned' and not making huge architectural errors or it skews your results, such as this blog entry.
Unknown said…
I wonder in what mode mongo was running - by default it saved data to disk once per minute, while mysql and riak save immediately. I suspect that it was the default setting, so the data you "saved" there was not really persisted in any way...
Anonymous said…
Unreadable. It should be visual information.
Anonymous said…
I'd like to see how Riak performs with 3 nodes.
