We have all come to expect that websites and web applications simply work on any device in any browser. It has gone so far that I often find myself pressing the back button on my phone the moment I land on a page that isn't responsive or optimized for mobile. But while users take this for granted, it's usually the result of hard work by the development and QA teams behind the website or application. So what makes this easy-sounding goal so very hard to achieve? Let me introduce you to the five biggest challenges anyone making websites will encounter while doing cross-browser testing:
#1 Testing All Combinations Is Impossible
In theory, creating a web application is write once, run everywhere: if it works in one browser it should work in all of them, because HTML and CSS are standardized, after all! Nope, this is absolutely not how it works. Browsers have different quirks, bugs and feature sets, and on top of this they often support different features and render differently depending on the operating system.
Let's say you are targeting Internet Explorer 10, Internet Explorer 11, Chrome, Firefox, Opera and Safari on Linux, OS X and Windows. That sounds like a lot of work, but still doable, as it comes to 12 different combinations you'd have to test:
Windows: 5 browsers
OS X: 4 browsers
Linux: 3 browsers
----------------------
Sum: 12 combinations
But while it is fair to assume that users always run the latest version of an evergreen browser like Chrome or Firefox, they are much less likely to always run the latest version of their operating system. So let's widen our net of combinations to include different OS versions. Now we are at 33 different combinations:
Windows 7: 5 browsers
Windows 8: 5 browsers
Windows 8.1: 5 browsers
OS X Yosemite: 4 browsers
OS X Mavericks: 4 browsers
OS X Mountain Lion: 4 browsers
Ubuntu 12: 3 browsers
Ubuntu 14: 3 browsers
-------------------------------
Sum: 33 combinations
You can already see where this is going. We could multiply the combinations further by including both 32- and 64-bit operating systems, different flavors of Linux, installed plugins and so on.
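The tallies above are easy to reproduce in code. Here is a quick back-of-the-envelope sketch, assuming (as in the tables) that Safari only ships on OS X and Internet Explorer only on Windows; everything here is plain standard-library Python:

```python
# Enumerate the browser/OS test matrix and count valid combinations.
from itertools import product

browsers = ["IE 10", "IE 11", "Chrome", "Firefox", "Opera", "Safari"]
systems = ["Windows 7", "Windows 8", "Windows 8.1",
           "OS X Yosemite", "OS X Mavericks", "OS X Mountain Lion",
           "Ubuntu 12", "Ubuntu 14"]

def available(browser, system):
    """Filter out pairings that don't exist in the real world."""
    if browser.startswith("IE") and not system.startswith("Windows"):
        return False
    if browser == "Safari" and not system.startswith("OS X"):
        return False
    return True

matrix = [pair for pair in product(browsers, systems) if available(*pair)]
print(len(matrix))  # -> 33
```

Adding one more OS version or one more browser grows the list multiplicatively, which is the whole point of the section above.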
So I think it is safe to say that most of us will never be able to test every combination our users are running. Assuming the website can be fully tested in roughly an hour, a complete test run would take 33 hours, not even counting any bug fixing. That draws out the testing process quite a bit, but luckily you don't have to retest again and again if you don't change the code, do you?
#2 Browsers Are Moving Targets
Thanks to auto-updates and the rapid release cycles of Chrome and Firefox, browsers are not frozen in time. Roughly every 6 to 8 weeks a new browser version is released, and most existing users will auto-update (sometimes even without their knowledge), bringing new features, bugs and quirks along. So in theory you'll need to retest every 6 to 8 weeks to make sure that nothing is broken.
This sounds like a nightmare for every testing manager, and it is. More than once I've worked on year-long projects where the browsers to test were simply frozen in time. So after six months you would still be testing with Firefox 32 while Firefox 37 was already out in the wild.
So how can we work around this? Two different solutions come to mind: you can either choose to ignore new versions until a customer reports a bug, or you can try to automate the testing to keep up with new browser versions.
This reminds me of a story that happened to me not long ago:
I once worked with a small but successful eCommerce store to build a new site. Roughly six months after launch, sales and revenue came to an abrupt halt. Nobody knew why, and shortly thereafter the owner had to let the first people go as he could no longer afford their salaries. The reason? Chrome had removed SSL 3.0 to protect its users from the POODLE attack. Users who visited the eCommerce site were greeted with a scary "this page is not secure" dialog, understandably assumed the worst, and did not buy. We were able to fix this by updating the server in less than an hour, but the damage was done.
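The server-side fix itself is tiny once you know what's wrong. As a hedged sketch, assuming an nginx server (the story doesn't name the actual stack), dropping SSL 3.0 and allowing only TLS comes down to one directive:

```nginx
# Hypothetical nginx server block snippet -- disable SSLv3, keep TLS only.
# Clients that can't speak at least TLS 1.0 will no longer connect,
# which is exactly the point after POODLE.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
```

Other servers (Apache, IIS) have an equivalent one-line protocol setting; the hard part was noticing the problem, not fixing it.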
Now automation sounds much more appealing, doesn’t it? But that’s not so easy either…
#3 Testing Is Hard To Automate
To everybody who has never tried to fully automate cross-browser testing, this sounds like an easy solution to a hard problem: we'll simply automate the testing process and be able to test early, often and on every combination under the sun. But if you look a little closer, you'll see that it's not so easy.
When we speak of test automation, most of the time we really mean testing the page's functionality through automation. While this can be achieved using tools like Selenium, DalekJS or BrowserStack, it's not trivial to do right and usually needs quite a bit of development work to pull off.
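To give an idea of the development work involved, here is a minimal sketch of such a functional check using Selenium. The hub URL, the browser list and the example site are all placeholder assumptions, not something from the article, and the `desired_capabilities` call is the Selenium 2/3-era API:

```python
# A minimal functional smoke test run against a browser matrix.
# Assumes a running Selenium Grid hub; requires `pip install selenium`.
BROWSERS = ["chrome", "firefox", "internet explorer", "opera", "safari"]

def check_homepage(driver, url="https://example.com"):
    """The actual test: load the page and verify something basic about it."""
    driver.get(url)
    assert "Example" in driver.title, "homepage title looks wrong"

def run_matrix(hub="http://localhost:4444/wd/hub"):
    # Imported here so the test logic above stays plain Python.
    from selenium import webdriver
    for name in BROWSERS:
        driver = webdriver.Remote(command_executor=hub,
                                  desired_capabilities={"browserName": name})
        try:
            check_homepage(driver)
            print(name, "OK")
        finally:
            driver.quit()
```

Even this toy version hints at the real work: provisioning machines for every OS, keeping drivers in sync with auto-updating browsers, and writing checks that are meaningful for your site.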
But what about automating layout tests? While this can be done by detecting changes in layout using screenshots, it's rather more complicated than it seems. Screenshots depend on the resolution at which they were taken and on the different UI elements each browser has. Yes, tools that can do this do exist (Google Quality Bot, ImageMagick), but this is not a trivial problem to solve.
As you can see, cross-browser testing is still not a solved problem in 2015, and I would very much like to see a tool that fully automates this process without me doing more than entering a website URL. Do you know of such a fine thing? Please let me know in the comments, or better yet, build one for me to use!
Photo by Patrik Theander