Hello and welcome, everybody, to today’s webinar about how you can jump-start your automated testing efforts with the BrowseEmAll record and replay feature. Before we get started, just as a quick reminder: you will get an email with the recording of the webinar after this session, and of course you can ask any questions you might have during the webinar by using the GoToWebinar questions UI, normally to the right of the screen. If there are any, we will do a Q&A session at the end of this webinar. Our goal for today is to take a look at how you can get started with automated cross-browser testing without having to learn a lot of things like browser automation with Selenium, or really any coding skills. With the BrowseEmAll record and replay feature, we can create automated tests and then run them later, either manually or through the command-line interface on a CI server, or something like that, without any coding skills necessary.
So, let’s get started right away. Here you can see my screen; we have the cross-browser testing tool BrowseEmAll open already. This tool runs on Windows, macOS, or Linux. I will show it to you on Windows, but the feature set is the same on every platform. BrowseEmAll contains a feature called the test recorder, which makes it possible to record a test script in one browser and later run it again on any browser that’s supported by BrowseEmAll, which means any popular desktop browser like Google Chrome, Mozilla Firefox, Internet Explorer, Microsoft Edge, and Safari.
Now let’s start right away by creating a new test case. For this example, we will assume that we need a few different test cases to test the search engine of our blog. After we click on the test recorder here in the top navigation, we can record a new test. Keep in mind this is a desktop application; everything we do here is running directly on my local machine. That means we can record test scripts not only for pages that are available on the public internet, but also for test environments like an intranet staging environment, or even a local page which runs on my local development server and is not reachable from the public internet at all.
Here we will select recording a test and give the test a quick name. Let’s assume we want to create a test case for the blog search engine. We will need to tell BrowseEmAll at which URL the test should start; just as an example, we’ll use our home page. Now if I click on start recording, we can see that Google Chrome opens and automatically navigates to the page we want to test. BrowseEmAll will now record all user interactions we do with this browser. We can also see, at the bottom right, an injected BrowseEmAll recorder element which helps us create validations for our tests.
We don’t want to only execute our steps; we also want to validate that everything actually worked as expected. If you take a look at the recorder, you will see that you can also instruct the test script to take a screenshot at any point you might need one. If you run the test again later and something fails, you can inspect the screenshot to see what the error might be, a missing button for example. Okay, now we will just browse through the blog in question and fire off a search. Let’s use Selenium as a search keyword. We can clearly see here that it works: we now have lots of different blog posts which all contain the word “Selenium” somewhere, either in the title or in the content of the post.
If we later run this test script in an automated fashion, we don’t want to sit in front of the screen and manually validate that this has worked, mainly because the test will happen so fast that you will most likely not be able to validate it correctly, and of course we don’t want to tie up the tester’s time while running automated tests. Instead, we want to validate automatically that our search results here include the word “Selenium”. We can do this using the validate text function. If I click on it, it tells me that I now have to select the text we want to validate; in this example, it’s “Selenium” here. If we take a look back at BrowseEmAll, we can see the last recorded steps: we entered “Selenium” into the search field, we clicked on searchsubmit, which is the button to submit the search, and then we instructed the test script to verify the text “Selenium” on the page.
We can, of course, go one step further and click on the first search result, which will load the post in question. Again, we could validate some text here, or we could validate an element. Let’s say, for example, we want to validate that there is a picture here. We can just click on validate element and then click on the element we want to validate. This could be a text element, or here, the picture element. As the last step of our test script, we also want to take a screenshot, so if this test fails later on, we can take a look at the screenshot and see, for example, that the image is missing, something along those lines.
I would say we can at this point consider our test script done. It’s always a good idea not to create one really big test script that tests all the functionality of our blog. Instead, we want to create small, contained test scripts that only test a specific feature of the page, so that if a test fails later on, the title of the test script will already give away which feature might not be working correctly. In this example, we have named the test “blog search”, so if the blog search test fails later on, we can assume that most likely there’s something wrong with the search, and not with some unrelated feature.
If I hit stop recording now, BrowseEmAll will create a new test script. We have now created our first test, which we called “blog search”, a nicely descriptive name. Of course, we can go ahead and record another test. Maybe we also want to test the blog archive functionality. Again, we will use our home page as the starting point. Recording another test works the same as before, of course.
We get a Google Chrome instance, and all user interactions are recorded right away. We go to our blog again; here on the right we see the archive, and we will first validate that the archive is actually available in the sidebar. Then we will click on the first archive entry. Next, we can validate that there is at least one blog post in our archive, for this example, May. We can validate this by telling it to validate an element on the page. Our test script should also take a screenshot so that we can inspect it if the test fails later on.
If I stop recording again, we can now see the two different tests in our suite. I didn’t save the suite before, but if we click here on save, we can save our test suite for rerunning later. We can also take a look at the file, because this is nothing more than a set of JSON files. We named it “example suite”, and if we take a look here, this is just a JSON file which stores the suite name and which test cases it contains. Here, for example, we have two different test cases: search and archive. If we take a look at the test cases themselves, each is again a JSON file which contains all the test steps: we navigate to our page, then we click on an element, we select some text, which is the validation of text, and so on, and so on.
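To make this concrete, a recorded test case file could look roughly like the sketch below. This is only an illustration: the actual field names and step format BrowseEmAll uses may differ, and the URL and element selectors shown are assumptions based on the recorded steps in the demo.

```json
{
  "name": "blog search",
  "startUrl": "https://www.example-blog.com/",
  "steps": [
    { "action": "navigate",   "url": "https://www.example-blog.com/" },
    { "action": "sendKeys",   "target": "id=s",           "value": "Selenium" },
    { "action": "click",      "target": "id=searchsubmit" },
    { "action": "verifyText", "value": "Selenium" },
    { "action": "screenshot" }
  ]
}
```

Because the file is plain JSON, it also diffs cleanly in version control, which is one reason committing the suite to your repository works so well.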
Okay, these JSON files are great because they’re very small and contained, so we could save them in, for example, our repository, whether that’s a Git repository, Mercurial, or whatever you’re using, so that every developer has access to these tests and can run them on their own. If you already have an existing test suite, you can also select open test suite and pick your suite, so you get it back from your repository, for example. Okay, now what we can do here is, of course, run these test suites again. You can either run all tests in the test suite, or you can run an individual test.
Just as an example, we will run all the tests. We can select the different browsers our tests should run in. Here, for example, we have selected Google Chrome and Firefox in the latest version. There are many more browsers available, but it takes some time to run all these tests, so we will limit ourselves to two browsers here. If we start this, we will now see the different browsers opening and closing and executing our tests as we defined them. Here, for example, is the archive test; we also have another test that tests the categories page, which is similar to the archives page, as you can see here.
Of course, we also have the third test case, which tests our search. You can see on the right that “Selenium” was entered. That’s the reason we don’t want to sit around and wait for the tests to execute: it’s not that much fun to watch, and it goes quite fast, so you might not catch all the things you need to validate manually. We can see here that Firefox works as well. If you run your tests in, let’s say, Internet Explorer, it’s a little less snappy, but it’s still fast enough that we can use it without any problems. Just a second and we’ll be finished. Alright, all tests have executed. I’ve deliberately introduced a problem into one of the tests in Firefox so that we can see what it looks like when a test fails.
Here we can see that in Chrome all tests succeeded, while in Firefox one test did not. If we click on the failed test, we can see why: Firefox was unable to find the search ID field. This, of course, is the problem I introduced [inaudible 00:14:37]. In a real-world testing scenario, this is what it would look like if a test actually failed. Now we can take a look at the screenshot, for example, and see what does not work in Firefox, or we could go ahead and test it manually in Firefox to reproduce the issue and use the Firefox developer tools to fix it.
Okay, another thing you can do here: if you later want to start creating more sophisticated tests, or you get somebody on the team who knows a lot about Selenium and prefers to create tests in Selenium code directly, you don’t need to scrap all your tests and restart from scratch. You can actually export all your test cases to Selenium code: click on the export button and select the programming language you want to use. Here we have C#, Java, Python, and Ruby. This way we can export our tests to code and then run them against the integrated Selenium Grid, which is also provided by BrowseEmAll.
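To give an idea of what such an export looks like, here is a rough Python sketch of the recorded blog search test written against the Selenium WebDriver API. The URL and element IDs are placeholders assumed from the demo, the real exported code will be structured differently, and running it requires a matching browser driver on your machine.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Assumes chromedriver is installed and available on PATH.
driver = webdriver.Chrome()
try:
    driver.get("https://www.example-blog.com/")            # placeholder start URL
    driver.find_element(By.ID, "s").send_keys("Selenium")  # assumed search field id
    driver.find_element(By.ID, "searchsubmit").click()     # assumed submit button id
    assert "Selenium" in driver.page_source                # the recorded text validation
    driver.save_screenshot("blog-search.png")              # the recorded screenshot step
finally:
    driver.quit()
```

Having the tests as plain code like this is what makes it possible to hand them over to a developer or run them on a Selenium Grid later.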
Okay, so this looks nice and easy. What can we do to automate this even further? We might not want to have a tester open BrowseEmAll on their machine, select the test suite in question, and then wait until the test execution is finished. It’s advisable not to use your machine during the test execution, as new browser windows keep popping up and closing again; you would not get anything done. Instead, you could set up a dedicated testing machine, like a CI server or a dedicated browser testing server, to do this in the background. For example, you could run your test suite every time somebody commits to your repository, or whenever you deem necessary: every night, for every new release, whatever works for you [inaudible 00:16:59]. This is, of course, possible. We can use the command-line interface of BrowseEmAll to execute one or more specific tests automatically. BrowseEmAll will then automatically create the necessary results files so that we can interpret the results in an automated fashion on the CI server, and only get notified if any tests fail. The command line for this is quite easy: we give BrowseEmAll the run tests command parameter, and of course we tell it the path to our test suite. If the test suite lives in your Git repository, you can reference it here as a file, so that the up-to-date tests always run.
We also need to tell BrowseEmAll, with the browsers flag, in which browsers we want our tests to run. Again, here we have Google Chrome and Firefox in the latest version. If we fire this off, we’ll see BrowseEmAll starting automatically and executing our tests, as we’ve seen before when running manually. The differences are that, for one, we will get an XML file at the end that tells us which tests failed and which didn’t, and the application will automatically close again after the tests have been executed. We can see again that it works quite nicely in Google Chrome, testing our search, archives, and categories pages.
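Put together, the invocation could look something like the following on a Windows CI agent. The executable name and exact flag syntax here are assumptions for illustration only, based on the run tests parameter and browsers flag mentioned in the demo; check the BrowseEmAll documentation for the precise command-line options.

```shell
# Hypothetical BrowseEmAll CLI call -- executable name, flags, and paths
# are all assumptions, not the documented interface.
BrowseEmAll.exe -runtests "C:\repo\tests\example-suite" ^
                -browsers "Chrome,Firefox" ^
                -resultpath "C:\ci\results"
```

A CI server would run a command like this on every commit or nightly build and then pick up the results files from the output directory.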
This includes taking screenshots, validating elements on the page, things like that. Here is Firefox, which executes our tests as well, and where we can get a glimpse of whether the blog search test works this time. Categories work, archives work, and now the blog search; actually, it did work just fine. Okay. After the test execution, we have a results XML file, which can be interpreted by your CI server automatically. Let’s take a look at how it looks. Just like that, we can see the different test cases: all have been executed, and all have the result “success”. You can even see how much time it took to execute each individual test case, and the overall time, just over a minute. It will also tell us how many asserts are in each test case, because a test case that doesn’t validate or assert anything is most likely not very effective.
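Even without a full CI server, a results file in this style is easy to consume yourself. The snippet below parses a small, hand-written JUnit-style XML document; the element and attribute names are assumptions modeled on the common JUnit report format, not necessarily BrowseEmAll’s exact output.

```python
import xml.etree.ElementTree as ET

# A made-up results file in the common JUnit report style, for illustration only.
results_xml = """
<testsuite name="example suite" tests="3" failures="1" time="64.2">
  <testcase name="blog search" assertions="1" time="21.4">
    <failure>Unable to find element with id 's'</failure>
  </testcase>
  <testcase name="archive" assertions="2" time="20.3"/>
  <testcase name="categories" assertions="2" time="22.5"/>
</testsuite>
"""

root = ET.fromstring(results_xml)
# A test case failed if it contains a <failure> child element.
failed = [tc.get("name") for tc in root.iter("testcase")
          if tc.find("failure") is not None]
print(f"{root.get('tests')} tests ran, failed: {failed}")
# → 3 tests ran, failed: ['blog search']
```

This is essentially what a CI server’s test-report plugin does for you: it reads the per-test timings, assertion counts, and failure messages and turns them into a pass/fail notification.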
Without any asserts, the page could be broken and the test might not even recognize it. In addition, we can also see the screenshots for the executed pages; the screenshots taken during the tests have been saved automatically. Here, for example, is the screenshot for the page with the post details. If anything were wrong with, for example, the validated image, or if the image were missing, we would see it on the screenshot right away. Okay, with this simple process and just a few steps, you can already get a lot of automated tests created and ready to execute, without having to hire a tester who knows sophisticated Selenium, Java, or any other programming language, even though such skills of course help later when figuring out what’s wrong on the actual page. Creating the tests themselves is really easy to do. I’m already at the end of our demonstration for today. Does anybody have any questions? Then please use the GoToWebinar questions UI to enter them. Otherwise, thank you very much for attending.