Faster alternative to MySQL Delete Table Contents / Truncate

Published on March 30, 2015 by in General

From time to time you might have some rather big tables whose data you want to delete quickly so you can start afresh.

You have a few options at this point. The first you’ll probably look at is:

DELETE FROM `table_name`

The benefit of doing this is that DELETE gives you the ability to roll back if all hell breaks loose, but it also means the operation takes longer, because extra data has to be kept around to make that rollback possible.
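A quick sketch of that safety net (the table name is just a placeholder): wrap the delete in a transaction and you can back out before committing.

```sql
START TRANSACTION;
DELETE FROM `table_name`;
-- Changed your mind? Undo everything:
ROLLBACK;
-- Or, if all went well, make it permanent instead:
-- COMMIT;
```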

Option 2 is to just truncate the table:

TRUNCATE `table_name`

This is the quicker of the two options so far and will simply remove all rows in one go (note that, unlike DELETE, it can't be rolled back).

Option 3 is to clone the table structure and then rename tables:

CREATE TABLE table_name2 LIKE table_name;
RENAME TABLE table_name TO table_name3;
RENAME TABLE table_name2 TO table_name;

This last option is the quickest and can be performed on very large tables with lightning-speed results.
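One thing worth noting about this trick: after the swap, the old data is still sitting in table_name3, so once you're happy with the fresh table you can reclaim the space (using the names from the example above).

```sql
-- The old data survives the rename; drop it when you no longer need it:
DROP TABLE table_name3;
```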


Track Javascript Errors on your site Automatically

Published on March 20, 2015 by in General

We have recently added a new section to Statvoo. It can be found under any website you have added to your account, in the left-hand navigation. Statvoo now logs all Javascript errors that occur on your website, so that you can find the problem spots and fix them as quickly as possible in order to maintain a great user experience for all of your website visitors.

This section will see some real growth in the next few months, so it’s a good one to keep a watch on daily.

Remember that if you have any feature requests or feature enhancement requests, they are always welcome.


ORDER BY RAND() – Faster Alternative

Published on March 18, 2015 by in General

MySQL’s ORDER BY RAND() function can be so useful for returning random items from a table; in fact, we have used it a million times over the years.

The problem comes when your database tables start getting really big.

We found a very nice alternative to using it and thought it useful to post here for everyone else to use and/or provide feedback on.

Say you have a SQL query as follows: (slow on big tables)

SELECT id, title, `desc` FROM your_table ORDER BY RAND() LIMIT 38

Try this alternative instead: (much faster!)

SELECT id, title, `desc` FROM your_table ORDER BY 38*(UNIX_TIMESTAMP() ^ id) & 0xffff LIMIT 38

Mix the bits from the id and then take the lowest 16 only.

The 38 in this case is the same number that we are using to LIMIT the result set.


Moving a MySQL Database without downtime

Published on March 16, 2015 by in General

At Statvoo we found ourselves in the position where we needed to move our master MySQL database without ANY downtime and for anyone who’s tried to do this, you will know how hard this can be if it is not done exactly right.

Below I will run through the steps to get the job done efficiently and with no downtime (unless you mess it up that is).

First you need to configure the master’s /etc/mysql/my.cnf and add the following lines in the [mysqld] section:

server-id = 1
binlog-format = mixed
log-bin = mysql-bin
datadir = /var/lib/mysql
innodb_flush_log_at_trx_commit = 1
sync_binlog = 1

Now you will need to restart the master MySQL server and create a replication user that the slave server will use to connect with. (Make sure to choose a strong password (max of 32 chars).)
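A typical way to create that user (the name repl and the password are placeholders; substitute your own):

```sql
-- Run on the master, from the MySQL CLI:
CREATE USER 'repl'@'%' IDENTIFIED BY 'your_strong_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
FLUSH PRIVILEGES;
```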


Now you will want to create a backup file with the binlog position. (Don’t worry if you’re unsure what that means; just follow the instructions below.)

At this point your server’s performance may be impacted (a little bit), but no table locking will occur. This is because the binlog writes to the filesystem as well, so the IOPS will just be a bit higher than usual, but nothing to worry about really.

mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 -A > ~/sqldump.sql

You will need to find out some super interesting information (not really) that will help you later on. Take note of the values of MASTER_LOG_FILE and MASTER_LOG_POS. (A pen and paper come in quite handy right about now.)

head -n 80 ~/sqldump.sql | grep "MASTER_LOG_POS"

Not wanting any downtime probably means you have a fair amount of traffic and this database is pretty big (or you’re paranoid about losing any potential traffic), which means it will take a while to transfer the sqldump file, so why not gzip it!?

gzip ~/sqldump.sql

The time has come to transfer the sqldump gzipped file over to the slave server.

There are a few ways you can do this, but I like to use scp (Secure copy)

scp sqldump.sql.gz root@:/tmp

And yes I did just use root user to copy the file!

While this is all happening (it will probably take quite a while, as you’re capped by the network speed between your servers) you can go ahead and edit the /etc/mysql/my.cnf file on the slave server; be sure to add the following lines.

server-id = 101
binlog-format = mixed
log_bin = mysql-bin
relay-log = mysql-relay-bin
log-slave-updates = 1
read-only = 1

Restart the MySQL slave and import the sqldump file.

cd /tmp
gunzip sqldump.sql.gz
mysql -u root -p < sqldump.sql

Log into the mysql CLI on the slave server and run the following commands to get replication on the go.
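The exact statements depend on your setup, but assuming the repl user from earlier and the MASTER_LOG_FILE / MASTER_LOG_POS values you wrote down, they look something like this (the host, log file name, and position below are placeholders):

```sql
-- Run on the slave, from the MySQL CLI:
CHANGE MASTER TO
  MASTER_HOST='master_server_ip',
  MASTER_USER='repl',
  MASTER_PASSWORD='your_strong_password',
  MASTER_LOG_FILE='mysql-bin.000002',
  MASTER_LOG_POS=107;
START SLAVE;
```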


It’s always good to check and see what the progress of the slave is
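From the MySQL CLI on the slave, something like:

```sql
-- \G formats the output vertically, which is much easier to read here:
SHOW SLAVE STATUS\G
```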


If everything went well and you’re feeling proud of yourself, do make sure to confirm that Last_Error is blank and Slave_IO_State says something like “Waiting for master to send event”.

It’s always healthy to stare at Seconds_Behind_Master for a while to find out how far behind things are.

If you were a copy-paste ninja completing the above (and I had no typos, and you didn’t forget the password or that thing I told you to write down), the slave will catch up pretty quickly.

Once you are sure the slave has caught up, you simply point your application’s SQL connection string to the new server; make sure it has a permitted user/pass and you’re away.

Gracefully reload the mysql server and you’re done! (You don’t really have to do this..)

If for some silly reason you changed some data on the slave, that means replication won’t go so well. To fix things you can use the following command.
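One common way out is to skip the offending event and resume replication. Use this with care, since it silently drops a statement; that’s acceptable here only because the slave isn’t serving traffic yet:

```sql
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;
```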


I found it healthy to keep the previous master around (turned off) for a few hours/days for my own sanity, in case I found any problems later on where I needed to quickly switch the application back or export some missing data. (Which I didn’t, but it did make me feel safer.)


Goals get a makeover

Published on February 27, 2015 by in General

We’ve always thought that goals should have the ability to be sent via various methods such as email or webhook as they are triggered.
This is now a reality with Statvoo.
Go take a look!


We started 2015 off by sprinting

Published on January 15, 2015 by in News

We have been planning a new infrastructure for some time now and have finally decided to set it all into action.

So what do you gain by this, you ask? Well, what you gain is a higher level of stability and the ability to throw a lot more traffic at us without us blinking an eyelid.

Other than that, it also allows us to introduce our new (coming soon) server analytics tools to your account(s). We promise to not let you down and to keep the nerds among you (us) happy and interested in all the lovely data filters that are about to hit you in the first quarter of this year.


End of 2014

Published on January 14, 2015 by in General

2014 saw a huge shift in how Statvoo was used, notably the shift from individuals using it to companies trying it out for the first time.
We have received a lot of great feedback which helped push some fantastic new features out.

We are really looking forward to the new year as we have so much planned and some great deals on the table which will most definitely change things once again.

Stay tuned for some great software!


A slightly newer layout

Published on January 14, 2015 by in General

Recently we decided to mix up the layout a bit; this was because the older look didn’t cater so well for multiple sites (over 7-ish).

Let us know if you are experiencing any issues with it. We do have a major overhaul planned that will be coming out in the next few months. So stay tuned..


Announcing the new Analytics Panel

Published on January 14, 2015 by in General

Hello new Analytics Panel!

For months now we have been thinking of re-inventing the Trends Panel.

We decided to go ahead with it, it looks great, works great and provides a lot more information in the same space.
Tell us if we missed anything during the move and if there are any improvements you would like to see.

This means that the older Trends Panel has been dropped, so if you’re looking around for it, or if you’ve followed an old link to it, it won’t be around anymore.


Date ranges are the key to a better index

Published on January 14, 2015 by in General

Since Statvoo originally went live in February 2013, we had been focused on individual time-based reporting; it worked well and we were able to sort through indexes without doing full table scans.

After much feedback from our users as well as data usage patterns we made the decision to move to date ranges.

This replaced our entire retrieval algorithm, and therefore copious amounts of work had to go into regression-testing backward compatibility while introducing the new system.

We feel everything went well and our User Interface currently uses the ‘new and improved’ version.

As always, we welcome feedback, whether it be positive or negative, so give it a try and tell us what you think.


How to use Google Analytics in AngularJS

Published on January 14, 2015 by in General

It seems to be quite a popular question, with not enough solid answers to hold the weight.

Google Analytics has always been geared towards pageview-type tracking, and up until a few years ago that was all anyone ever really did anyway. However, the web has become a lot more complicated with the rise of Javascript-heavy, Ajax-based sites and web applications.

This has caused quite a change in how Analytics Tools are used and how they are expected to gather statistics.

Luckily there are ways to achieve this and Arnaldo Capo has already written about it over at his blog. We have included his code below for clarity as well as in case the site is down for some reason in the future.

So let’s get into the gory details.

Basically it’s all about setInterval and DOM listeners, or, in AngularJS, state changes.

In the script provided by Google, comment out the last line, as shown in the following example.

(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-XXXXXXXX-X', 'auto');
//ga('send', 'pageview');

Then in your app bootstrap, register a $rootScope event named $stateChangeSuccess. $stateChangeSuccess will fire every time a state changes. See code below.

angular.module('yourApp', [])
.run(['$rootScope', '$location', '$window', function($rootScope, $location, $window){
  $rootScope.$on('$stateChangeSuccess', function(event){
    if (!$window.ga) return;
    $window.ga('send', 'pageview', { page: $location.path() });
  });
}]);

If all this is just too much for you, then you might be quite pleased to learn that Statvoo takes away the hassle of tracking javascript sites by handling it all itself, with zero additional implementation required.


Track your Alexa Rank History

Published on January 14, 2015 by in General

The Alexa Rank is a way of determining a website’s popularity. Alexa are in the game of Internet popularity ranking and have been doing a very good job of it for quite a long time now.

They have a system where the most visited website on the planet is rated as #1 and the second is #2 and so on.

If your site is within the Alexa top 100,000, it is watched more closely and general usage statistics are calculated about it.

The Alexa Ranking therefore comes in rather handy for tracking your website’s overall performance over time, whether against competitors or as a whole.

If your site is added to Statvoo, this data is collected daily to build up a ranking history over time. It is particularly useful for sites that aren’t within the Alexa 100,000 watch list range, but can be just as useful for sites that have campaigns running against them in one way or another.

Add your site today to start automatically collecting data. Did we mention it’s free?

© Copyright 2012 · Web Development London UK · All rights reserved · Site hosted on the network
By using the site and/or making use of any of our services you agree to our Terms and Conditions