Archive for the ‘Blog’ Category

Browserbite Recorder private beta

Posted by

Our users have been pushing us to release a tool that would help them mix user interactions with our computer vision difference detection. Until now, our tool has only provided the capability to test pages that have their own unique URL. Get ready for something different in the video below. Our goal with this tool is to get out of the way of “normal” testing of new features and give you a really fast way to send your actions for cross browser testing. We hope to reduce an hour’s work to mere minutes. The beta will support IE8, IE9, Chrome and Firefox on Windows. We will add more platforms once we are happy with the performance of the “core” browsers.

Most importantly, let us know: what do you think is a fair and simple pricing model for this kind of service? We’re looking for something that works for solo coders and well-established businesses alike.


How to treat the color blind on your website?

Posted by

How does color blindness relate to sales

Most people have probably found this out on their own: statistics show that about 8% of males and 0.5% of females are color blind. Why does that matter in cross browser testing and conversion in general? Quite simply – if you use color combinations that cannot be distinguished by color-blind people, you lose conversion. Winning back those extra sales may mean the difference between going broke and breaking even for some companies.

Here’s a quick test to see if you’re color blind

How to test the web for the challenged?

The first obvious reference is the WAI standard. It covers not just the color blindness aspect but nearly every other accessibility guideline as well. When focusing just on the color (blindness) aspect, zoom in on Guideline 1.4, which talks about making your page distinguishable.
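Guideline 1.4 is backed by a concrete formula: the WCAG 2.x contrast ratio between text and background colors. Here is a minimal sketch in Python (the function names are our own; the luminance constants and thresholds come from the WCAG definition):

```python
# WCAG 2.x contrast check: relative luminance of each color,
# then the ratio (lighter + 0.05) / (darker + 0.05).
def relative_luminance(rgb):
    """Relative luminance of an (R, G, B) color with 0-255 channels."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background gives the maximum ratio of 21:1;
# WCAG AA requires at least 4.5:1 for normal-size text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

If the ratio for your text and background pair falls below 4.5, the combination is likely to be hard to read for color-blind (and many other) visitors.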

The color blind checklist

We try to keep it simple here – just 3 points to follow. (more…)

Featured user: Gabriele Renzi

Posted by

We’re starting a new series on our blog to give our users a little showroom and share how they use Browserbite.

Meet Gabriele Renzi, an Italian living in Budapest, Hungary. Gabriele was also the lucky winner of our year-end raffle, winning a full year of Browserbite usage for free.

So, how did you find Browserbite?

Well, I don’t remember actually, but I probably read about it in an article and then tried the service out myself.

I consider myself more of a back-end developer, but I have done quite a bit of front-end work as well. I don’t feel that strong at it, though. This is where Browserbite comes in for me – when I make changes to the front end, I can quickly check that I did not mess anything up in the layout. Cross browser testing is really boring and can mean long hours of setup. Sure, you can have your own virtual machines and emulators, but the install time could be spent otherwise.

So what are you working on right now?

Currently I’m working on a social network type of web app that keeps you informed about the things and hobbies in your life. Let’s say your favourite band is coming to play in a city near you – you’ll get a notification if that band is in your profile.

What is your background in general?

I come from near Rome, studied Computer Science nearby and have worked mostly as a web developer. I did my master’s thesis on the semantic web while working with the team on Linked Data discovery and extraction. That was a really interesting project, though the output wasn’t very usable for ordinary people.

After meeting my girlfriend I followed her to Budapest and started working on

Favourite development tools?

I would use Ruby on Rails for the web stack. Whatever the world out there says about Ruby being slow, I’ve rarely found it to be a constraint. Usually the bottleneck has been the database. I also appreciate Heroku, since it takes a lot of the sysops burden off our backs (for a price, of course).

Why do you like Browserbite?

Mostly because it makes the differences really visual right on the first page of the test results. That is really intuitive and sets Browserbite apart from other cross browser testing tools.

Your biggest feature request?

Scripting support so that I can test web apps and user flows.

Great – we’re working on that already! Any last words?

Just this:

Gabriele Renzi loves Browserbite Cross Browser Testing Service

Interviewed by Kaspar Loog


Winner of the user survey lottery

Posted by

Earlier in December we started a user survey to improve our service and listen to our customers. We got tons of feedback – thank you to all respondents! We will make a separate post on what our users wanted.

We are committed to creating the most easy-to-use web testing service in the world, and user feedback is the only way to get there.

Part of the package was that we would give out one full-year package to one of the respondents. We are keeping our promise.

The winner is: (more…)

Swedish patient portal fails to support IE9 – to save money on cross browser testing

Posted by

We recently stumbled upon an article in Computer Sweden, thanks to a friend.

NPO, the Swedish National Medical History portal, does not work in Internet Explorer 9 – the default browser for most Windows 7 users. The reason is quite simple: the vendor of the web application was only contracted to support three browsers: IE 7, IE 8 and Firefox 3. The choice was made by the buyer of the system to avoid the costs of cross browser testing and compatibility.

The incident happened when a system upgrade for Gotland municipality workers resulted in everyone’s browsers being updated to Internet Explorer 9. Since the portal is supplied from the national side, nobody knew that an upgrade would result in loss of service.

Obviously the users are up in arms and the story has made it into the nationwide media. The representative of the system’s vendor, Inera, says that “it’s all about budgets” – and they are right. Manual cross browser testing is expensive, and supporting a variety of browsers means extra development time, especially on legacy systems.

Read the original article (in Swedish – use Google Translate if needed) here:

HTML5 Definition is out – but cross browser testing will not go anywhere

Posted by

The W3C has announced that the HTML5 definition is now complete, so that browser vendors (and all other vendors that use HTML for their purposes) can go on with implementation. The Consortium expects worldwide adoption in 2014.

However, the release also mentions that “device fragmentation” still remains a big issue. Here at Browserbite, we still believe that the standard is a step in the right direction. However, the industry is always one step ahead of a standards committee. HTML4 was supposed to bring relief to the mess that HTML3 had created. That did not stop Flash, Silverlight and many other technologies from popping up and changing our browser experience.

Cross browser testing is not going anywhere: history has shown that there is still a lot of legacy around, and by the time HTML5 gets proper adoption, there will be new innovation. On one hand it’s sad that the industry cannot deliver on its promise. On the other, innovation is about bringing new products to the market, even if it disrupts the existing standards.

What do we at Browserbite think is going to happen? First, Opera, Chrome and later Firefox will announce support in the first half of 2013, but our usual suspects Internet Explorer and Safari will lag behind. Since tablets and smartphones are quickly taking over the browser experience, the fragmentation will get worse, not better. For the web developer it means that you will still need fallback methods in place for older browsers for a minimum of two years. So cross browser testing and development will give plenty of hours of work to many of us! We’re just here to ease that.

Please find the full release at the following link:

Cross browser testing bookmarklet!

Posted by

It’s holiday season in the western world and we like giving out gifts!

When you’re doing cross browser testing, nothing is more cumbersome than copy-pasting addresses from the address bar to the testing tool. We have a solution for that.

Drag the link below to your bookmark bar and surf to your website under test. Click the Test ✔ Browserbite button on your bookmark bar and voilà – you can submit a test immediately. Just make sure you’re still logged in to Browserbite before you click.

Hint: Drag the button to your bookmarks to test any page you’re viewing immediately at Browserbite. No need to copy-paste anymore! Test ✔ Browserbite

Merry Christmas and happy cross browser testing! We’re planning some more good news for the new year.

The whole Browserbite team.

Manual cross browser testing? – prepare to miss 30% of bugs!

Posted by

While we are developing our computer vision technology to be more awesome, there is always a need to benchmark it. How well does it compare to a real person, a test team or other alternatives? While we were running some smaller experiments with testers from Knowit (a Scandinavian consultancy), we also got the chance for an independent review: a student, Anne-Liis Tamm, wrote her Bachelor’s thesis on benchmarking cross browser testing tools against manual testing. That research shed some interesting light on where people actually spend their time (and how inefficient manual cross browser testing is). Please note that the research paper focused on layout bugs (which are usually strongly connected with functionality issues).

If you need to test your site in various browsers then you need to do the following things:

  1. Select and set up environments (most companies opt for virtual machines and stick with around 4-5 environments)
  2. Test the site manually (or using exploratory testing techniques)
  3. Do cross browser checking against the other browsers

Anne-Liis focused on the third point in her empirical analysis. How well do people detect layout bugs, and how do the tools compare? It is a classic man vs machine experiment. On the human side she used a 5-member team from Playtech that does cross browser testing on a daily basis. On the tool side she went for Browserbite. The goal of the experiment was to identify which approach finds the most bugs and what the false negative and false positive rates of the testing are. She also set clear criteria for “what is a bug” in order to evaluate the results.

The experiment was quite clear-cut: 2 browsers, 50 URLs, and the screenshots were already made (just imagine that a service like Browsershots had been used). No tester was allowed to spend more than 4 minutes per comparison. The groups were as follows:

  1. Average tester – how well an average tester alone (without the help of the peers) performs. That is the normal use case in business.
  2. Machine – Browserbite tool without human follow-up
  3. Group of testers – the combined result of the work of 5 people. If 2 people found 2 different defects, it would add up to 2 defects found
  4. Tool + tester – Browserbite tool with human review.

Before Anne-Liis started her tests, we had some expectations. Namely, we expected our algorithm to perform worse than people, but not by much. Secondly, we guessed that humans would produce very few false positives (differences flagged that are not actually bugs), while the computer vision algorithm would have a higher spam rate. Some of these expectations were proven wrong.

Detection rate aka quality of work

The first thing that comes up in any man vs machine comparison is intelligence. Machines that do a man’s work are usually deemed basic or simply dull. We think that layout testing is a boring job and that people are actually not fit for it. Think about working on a conveyor line for a whole day trying to spot cracked packages, for instance. After 15 minutes your attention will drop. Layout comparison fits this bill quite well – it is something that “only people can do intelligently but hate doing”.

Here are the results, split across positional and visual defects. A positional defect is one where the element is present in the browser under scrutiny but positioned outside the agreed thresholds. A visual defect means that the element is missing, distorted or otherwise does not match at all.
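To make the distinction concrete, here is a minimal sketch (not Browserbite’s actual algorithm) of how a positional check against a pixel threshold might look, assuming each element is reported as an (x, y, width, height) bounding box:

```python
def positional_defect(box_a, box_b, threshold=5):
    """Flag a positional defect when an element's bounding box shifts
    or resizes by more than `threshold` pixels between two browsers."""
    return any(abs(a - b) > threshold for a, b in zip(box_a, box_b))

# The same element, nudged 2px down in the second browser: within tolerance.
print(positional_defect((10, 10, 100, 20), (10, 12, 100, 20)))  # False

# Shifted 20px to the right: flagged as a positional defect.
print(positional_defect((10, 10, 100, 20), (30, 10, 100, 20)))  # True
```

A visual defect is the case this check cannot cover: the element’s pixels themselves are missing or distorted, so image comparison rather than coordinate comparison is needed.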
Detection rate for manual and computer vision

False positives and false negatives

One of the big caveats of automated test tools is false positives and false negatives. False positive = a defect was reported, but proved not to be a defect. False negative = a defect that was present was missed. As you can see in the following graph, the surprising part is that the tool produces nearly as many false positives as humans do. And the tool skips a few more bugs than an average tester. The latter is the more surprising result, since our practical experience has always been that Browserbite usually detects more issues than humans.
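In these terms, the rates being compared can be computed like this (the counts below are made up purely for illustration):

```python
def rates(true_positives, false_positives, false_negatives):
    """Detection rate = share of real defects that were found;
    false-positive share = share of reported defects that were not real."""
    detection_rate = true_positives / (true_positives + false_negatives)
    false_positive_share = false_positives / (true_positives + false_positives)
    return detection_rate, false_positive_share

# A tester who found 14 of 20 real defects and raised 2 false alarms:
print(rates(14, 2, 6))  # (0.7, 0.125)
```
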

Speed comparison

There is no graph here, but during the experiment an average tester spent 2.5 hours to complete the test set. Browserbite needed a little less than 15 minutes with the infrastructure we had back then. The good news? Browserbite can run the test set against 15 browsers at the same time with no added time penalty.


Browserbite is nearly on par with a professional layout tester. Compared to other informal experiments that we have done, we can clearly say that professional layout testers are professional in the sense that they detect nearly twice as many defects as an “average” tester. Speed-wise, there is no match. So the best results seem to come from combining the two worlds – a tool along with a review by a human.

Introducing paid plans

Posted by

We’re glad to announce that we have entered a new stage of development at Browserbite by introducing paid plans. We will still keep the free service available with the newest browsers until further notice. The computer vision will still detect the differences for you, even on the free plan.

What’s in the pipeline?

First up is performance tuning. After the nice accidental publicity thanks to ArcticStartup, we need to upgrade our frontend to be lean, mean and fast! We’re proud that Browserbite is already quite fast at delivering test results, but we think it should stay that way when serving hundreds of simultaneous users.

Then we will turn our attention to getting the full breadth of platforms onto Browserbite – including a decent range of iPhone, iPad and Android devices. If you have a particular interest in some platform, please drop us a note at

When we’re happy with the platform coverage, we’ll continue developing “flow testing support”, since that has been requested by many. We will announce the private beta stage of flow test support in a separate message where you can opt in to participate.

Thank you all for your support so far!