
Automated responsive website testing with Galen Framework

Introduction

In this post I outline how to implement automated responsive website testing.

However, from my point of view it is also important to test a website with real devices, to get a feel for its behavior and whether it looks good on those devices. This can be conducted as part of a bug hunt, where different testers test the website with various devices such as smartphones, tablets, or desktop browsers.

That said, I will now focus on automated responsive website testing. These were the reasons why I looked for a tool:

  • For this project we wanted to implement Continuous Delivery (CD) – therefore we needed something that gives fast feedback after every deployment on the different stages.
  • The pages of the website consist of multiple fragments, e.g. the header and footer are provided by other services. We also needed a test to verify whether all fragments are loaded and displayed.

After some research we found the Galen Framework (http://galenframework.com). With the Galen Framework it is possible to test the location of objects relative to each other, and you can also check whether certain elements are displayed (or not) at different browser sizes. Furthermore, the Galen Framework can check texts on the website or compare screenshots. I will have a look at the screenshot comparison feature sometime and probably update my post Compare Screenshots with Selenium WebDriver.

Implementation

With the Galen Framework it is very easy to implement automated tests. You only need a little HTML/CSS know-how, and the Galen Framework provides good documentation on its website.

The website I had to test had different layouts depending on the browser size. I wanted to test each layout as well as the different pages of the website. I organized the tests with the following files:

Test Files

Different tests can be stored in a test file with the suffix .gspec. It starts with the object definition, where you give the HTML elements needed for the test a name and specify the locator (id, css, or xpath) of each object. This section starts with the @objects keyword. The examples below refer to this testandwin.net website and are stored in the file taw.gspec.

@objects
	header     	div.header-image
	content    	div.entry-content
	headline	//h1
	sub-headline	//h2
	menu		#menu-toggle

It is also possible to import the object definitions in case you need them more than once.
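For example, object definitions shared between several spec files can be moved to a common file and pulled in with the @import statement at the top of a spec (the file name common.gspec is just an illustration):

```
@import common.gspec
```

This way the object names and locators are maintained in one place.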

To keep the tests clear, they can be divided into different sections (each starting and ending with a =). In a section you can check the position of HTML elements relative to each other (as in the section ‘site set-up’), check whether a certain text is displayed or not (‘content’), check whether an element is present or not (‘menu’), check the height or width of an element, and so on.

= site set-up =
	headline:
		below header
	content:
		below headline
 
= content =
	headline:
		text is "Software Testing"
 
	sub-headline:
		text is "About us"
 
= menu =
	@on desktop
		menu:
			absent
 
	@on mobile
		header:
			below menu
 
= failing =
	@on desktop
		header:
			below headline

In case a certain check should only be executed under a certain condition (e.g. browser size), the keyword @on followed by the name of the condition can be used.

In addition, I have added in the section ‘failing’ a test which should fail, because I would like to demonstrate the report feature.

There are many more options and functions in the Galen Framework; please refer to the online documentation.

The test can be started with the following command:

galen.bat check taw.gspec --url "http://testandwin.net" --size "1024x800" --include "desktop"

Test Suites

Another good feature of the Galen Framework is the ability to define test suites. In a test suite, the pages to be checked can be specified along with the options for the check. The example below shows a test suite stored in the file taw.test. In this example the website testandwin.net is tested with two different browser sizes.

@@ parameterized
    | viewport 	| size     |
    | mobile	| 360x640  |
    | desktop	| 1024x800 |
 
TestAndWin on ${viewport} viewport
    http://testandwin.net ${size}
    	check taw.gspec --include "${viewport}"

A test suite can be started with the following command:

galen.bat test taw.test --htmlreport "."

Add the command line argument --htmlreport to store an HTML report of the test run in the specified directory.

There are also more command line arguments available, e.g. to specify the number of threads to run the tests in parallel.
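For example, the suite above could be run in two threads (assuming your Galen version supports the --parallel-tests option):

```
galen.bat test taw.test --htmlreport "." --parallel-tests 2
```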

Reporting

The Galen Framework also provides different reports. When running in a Continuous Integration/Delivery environment, JUnit or TestNG reports can be helpful to visualize the test result directly in the CI/CD environment or to let the build fail. The Galen Framework also provides an HTML report giving a good overview of the tests and test failures.

Automated responsive website testing - Galen Test Report

For each test the executed checks are displayed and the difference in case of a failing test.

Automated responsive website testing - Galen Test Report

 

Finally the Heat Map shows the misaligned elements.

Automated responsive website testing - Galen Heat Map

Summary

In summary, the Galen Framework is a good tool for automated responsive website testing. It is very easy to develop tests that check the structure of the website: for example, whether certain elements are displayed in a viewport or not, how the elements are positioned relative to each other, and what content these elements have. I also found it well suited for testing a static website. For functional tests, I would rather use other tools like Selenium / WebDriver.

Another plus of Galen is that you need very little programming knowledge for the development of tests, only some HTML / CSS knowledge.

Nevertheless, in addition to automated responsive website testing, it is also important to me to test a website with real devices to experience the look and feel. However, the Galen Framework can help automate regression tests, which is very important in the context of CD.

 

Acceptance testing with SpecFlow and Selenium WebDriver


Introduction

Today I write about my experiences with the introduction of a solution for developing and performing acceptance testing for a web application using SpecFlow.

The test scenarios should be developed textually in the language of the stakeholders, not in a programming language. Another requirement was that the tests could be created with a minimum of support from the developers.

The solution had to be supported by the testers as well as by the developers and other project participants.

Tool Selection & Set-Up

The web application is developed in .NET. For the tool selection it was important to me to rely on tools that can also be used with .NET. The reason is that the developers of the project could provide better support, because they can use their familiar development environment, and the existing Continuous Integration environment could be reused.

SpecFlow (website) was chosen as the tool because it allows the development of the tests to be separated from the technical implementation. The format SpecFlow uses to describe test scenarios is the Gherkin language. With this description language it is possible to write test scenarios textually in the Given / When / Then format.
Selenium WebDriver implements the access to the browser, and NUnit runs the tests. SpecFlow also provides a commercial test runner (SpecFlow+ Runner).

As IDE, Visual Studio was used with the SpecFlow and NUnit extensions.

Implementation

First, I created a project of the type NUnit 3 Unit Test Project in Visual Studio and then added the packages listed in the screenshot below to the project using the NuGet Package Manager:

Acceptance Testing with SpecFlow - NuGet Package Manager

Please excuse it if the source code (you will find it attached to this post) does not correspond to the .NET conventions; I come from the Java world. If existing solutions are already available for what I describe here, I also apologize.

BaseSteps.cs

The aim was to implement as little as possible in a programming language when creating the different test scenarios.

For the web application I had to test, I mainly needed steps to click buttons or links, to enter values into input fields, and to verify the value of a field. I created generic steps for this which can be reused multiple times in different test scenarios.

The class BaseSteps.cs (please see attachment) implements these steps. However, when developing these generic steps it must be kept in mind that the implementation stays readable and maintainable.

BaseSteps.cs also contains methods to start and stop the browser automatically and to take a screenshot automatically in case a test scenario fails.

I could imagine further generic steps in this BaseSteps.cs class, e.g. selecting an element from a select box, verifying whether a field contains the required value, …

C# Attributes

Most of the methods in BaseSteps.cs carry declarative tags (called attributes) to associate run-time information.

BeforeTestRun – This method is called before starting the first test scenario, e.g. to start the browser. AfterTestRun stops the browser.

AfterScenario – This method is called after the execution of a test scenario. In my example, the method tagged with this attribute takes a screenshot in case the test scenario has run into an error.

Given / When / Then – These attributes bind the step definition to the implementation.

Browser.cs

I encapsulated the calls to access the browser (open, close, …) in the class Browser.cs.

App.config

In order to find HTML elements with Selenium WebDriver you need to specify an element locator.

When setting up the generic steps you could use the locators directly in the test scenarios. I did not pursue this idea further because, from my point of view, the test scenarios would become less readable, and in case a locator changes you would have to adapt all test scenarios using this locator.

In order to keep the test scenarios readable and maintainable I put the locators into a config file. This has the advantage that a readable name can be used in the test scenario (I would suggest using the same name as shown on the page) and the locator is maintained in only one place.

In the example below, the key Search … maps to the CSS locator input.search-field.

 <appSettings>
   <add key="Search ..." value="cssselector:input.search-field"/>
   <add key="Search" value="cssselector:button.search-submit"/>
 </appSettings>

Test Scenarios

Below you will find two very simple test scenarios executing tests against this website, although this website was not the web application I had to test ;-). The first one clicks a link and checks the page content, and the second one executes a search.

Feature: TestAndWin

Scenario: Click link and check page content
 Given I am on the page "http://testandwin.net"
 When I click the link "Compare Screenshots with Selenium WebDriver"
 Then the text "Compare screenshots implementation" is displayed

Scenario: Search
 Given I am on the page "http://testandwin.net"
 When I enter the value "screenshots" in "Search ..."
 And I click the button "Search"
 Then the text "useful to compare screenshots" is displayed

The screenshot below shows the test run of the second test scenario.

Acceptance Testing with SpecFlow - Test Run

Conclusion

With this approach it was possible, with little effort, to provide a good solution for acceptance testing. Using the generic steps, many test scenarios could be developed without repeatedly implementing source code. When developing generic steps it must be ensured that the tests remain comprehensible, which I would regard as given in this case.

With this procedure many tests have been developed for the web application. In some places it was sensible to implement specific steps for the web application, in particular Given steps to jump directly to a page without having to call various steps beforehand.

The local Visual Studio installation executes the tests. In the next post I will describe the integration of the acceptance testing in the Microsoft Team Foundation Server.

Example files

 

Test Case Documentation

Motivation

From my point of view the test case documentation of automated tests is important. It helps to maintain the tests in the future, and the documentation is useful for co-workers who would like to understand the tests or gain more information about a test. It is also important that the documentation is easily available, e.g. from the test report.

We distinguish between log outputs that are written during test execution and the documentation of the test case itself. This post will outline how we cover both of these topics.

Environment

We implement our tests for our internet portal with Java, TestNG and Selenium WebDriver. We execute our automatic tests with the continuous integration system Jenkins (with testng-plugin).

Test case documentation

To document the test cases we are using Javadoc. The Javadoc for each test method contains a brief description what the test is doing and which steps are executed. The Javadoc class summary section outlines which area of the software is covered with the tests of the class.

One of the steps during building and executing the tests is to generate the Javadoc, which can for example be done with a Gradle task (or Ant, Maven).
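A minimal sketch of such a Gradle configuration; it assumes the java plugin is applied and writes the Javadoc to build/docs/javadoc, matching the link path used in the snippet further below:

```
// build.gradle - the javadoc task is provided by the java plugin
apply plugin: 'java'

javadoc {
    // the default is already build/docs/javadoc; set explicitly for clarity
    destinationDir = file("$buildDir/docs/javadoc")
}
```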

Test execution documentation

I would recommend adding log output statements to the source code of the tests. The log statements are written to a log file during the test run. This helps in analyzing why a test failed. The log statements can be accessed from the Jenkins TestNG Results view. This includes:

  • Log statements,
  • the current URL, written to the log when a test fails, and
  • a screenshot of the current page, taken when a test fails

Listed below you will find some code snippets showing how we implemented this. The Java class containing all the snippets is attached to this post.

Log statements

The following example shows how to write log statements; this could be encapsulated in a utility method.

Reporter.log(new Date() + ">> Your log statement", true);

Output current URL and screenshot

In case of a test failure, the current URL is written to the log file and a screenshot is taken. In the example below you have to adapt the path of the image file to your environment. The link to the screenshot is also written to the log file, so that the screenshot can easily be accessed from the test results view. We implemented a method in our base test class with the TestNG annotation AfterMethod, which is called after every test.

...
private static final String HREF = "Last URL: <a href=\"%s\">%s</a>";
private static final String HREF_IMG = "<a href=\"%s\"><img src=\"%s\" alt=\"\" width=\"100\" height=\"100\" /></a>";
...
@AfterMethod (alwaysRun = true)
public void logResult(final ITestContext context, final ITestResult result)
 throws IOException {
  Reporter.setCurrentTestResult(result);
  if (!result.isSuccess()) {
    Reporter.log(String.format(HREF, driver.getCurrentUrl(),
      driver.getCurrentUrl()));
    File scrFile =
      ((TakesScreenshot)driver).getScreenshotAs(OutputType.FILE);
    String imagePath = result.getName() + ".jpg";
    File file = new File(imagePath);
    FileUtils.copyFile(scrFile, file);
    Reporter.log(String.format(
      HREF_IMG, file.getName(), file.getName()));
  }
}

Linking test report with Javadoc

We also include a hyperlink to the Javadoc at the beginning of the log output of each test. We find this helpful because it saves time when looking up the Javadoc. In order to get this link for every test case, we implemented a method with the TestNG annotation BeforeMethod in our base test class. This method is called before the test is executed.

import java.lang.reflect.Method;

// Adapt the javadoc path to your environment
private static final String JAVADOC_LINK = "<a href=\"build/docs/javadoc/%s/%s.html#%s\">%s</a>";
 
@BeforeMethod (alwaysRun = true)
public void logTestStart(final ITestResult result, final Method method) {
  Reporter.setCurrentTestResult(result);
  Class<?> c = this.getClass();
  String javadoc = String.format(JAVADOC_LINK,
    c.getPackage().getName().replace(".", "/"),
    c.getSimpleName(), method.getName(), 
    c.getSimpleName() + ":" + method.getName());
  Reporter.log("&gt;&gt; Start test method: " + javadoc);
}

Testing websites on mobile devices with Selenium

Use case and motivation

Nowadays many people use mobile devices; for that reason it is important to optimize your websites for them. More devices imply more bugs and a bigger test effort. To address this topic effectively, you should also execute your automated tests on mobile devices. To keep the effort low it is a good idea to use the framework Selendroid for Android devices and ios-driver for iOS devices. Selendroid is a test automation framework which drives the UI of the Android mobile web. As Selendroid tests are written using Selenium, it is possible to use the same tests for desktop and mobile web tests. Ios-driver works with Selenium in the same way as Selendroid.

In this post we explain how to work with Selendroid. Ios-driver will follow later.

Testing websites on Android devices

Prepare your system for Selendroid

To prepare your system for testing websites on mobile devices, you have to install and configure some programs. First install the Java SDK and set the JAVA_HOME variable. After that, get the latest version of the standalone Android SDK. Please follow the steps on the Android website (http://developer.android.com/sdk/index.html) to install it and set the ANDROID_HOME variable. Now it is time to get the current selendroid-standalone.jar from the Selendroid website (http://selendroid.io/). If you want to run the tests on a Selenium grid, you have to prepare the grid node in the same way.

Connect devices

Selendroid can be used on emulators and on real devices. If you want to test on a real device, you have to connect your device to your local system or to the Selenium grid node system. For Samsung devices you should install Samsung Kies because it contains a special driver for your Samsung device (http://www.samsung.com/de/support/usefulsoftware/KIES/). First activate the developer tools on your Android device and enable USB debugging. Connect your device to your computer via USB (connecting devices without a cable did not work for us with Selenium). After that, change the connection mode of your device from MTP to PTP. Now you are ready to test on your device.
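To verify the connection, you can list the attached devices with the Android Debug Bridge (adb), which ships with the Android SDK:

```
adb devices
```

A correctly connected device is listed with the state device; unauthorized or offline indicates a connection problem.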

Start Selendroid on local system

Open your command line or shell on your computer. To start Selendroid, enter java -jar selendroid-standalone-0.14.0-with-dependencies.jar (you can change the port with the parameter -port <portnumber>).

Start Selendroid on Selenium Grid

On a Selenium grid you have to start the grid hub and the node. First put the selendroid-standalone-0.14.0-with-dependencies.jar and the selendroid-grid-plugin-0.14.0.jar (http://selendroid.io/scale.html) on the grid hub. Copy both jars into the same folder where the selenium-server-standalone.jar is stored. Open your command line or shell and start the Selenium grid hub with Selendroid:

java -cp "libs/selendroid-grid-plugin-0.14.0.jar:libs/selendroid-standalone-0.14.0-with-dependencies.jar:libs/selenium-server-standalone-2.43.1.jar" org.openqa.grid.selenium.GridLauncher -capabilityMatcher io.selendroid.grid.SelendroidCapabilityMatcher -role hub > $LOG_DIR/$APP_NAME-console.log

After starting the hub you have to configure and start the grid node. Put both jars (selendroid-standalone-0.14.0-with-dependencies.jar and selendroid-grid-plugin-0.14.0.jar) into the same folder on the grid node. Create a json file in the same folder with the following node configuration:

{
  "capabilities":
  [{
    "browserName": "android",
    "maxInstances": 1,
    "seleniumProtocol": "WebDriver"
  }],
  "configuration": {
    "proxy": "io.selendroid.grid.SelendroidSessionProxy",
    "maxSession": 1,
    "register": true,
    "hubPort": <hubPortNumber>,
    "remoteHost": "http://mynode:<portNumber>",
    "hubHost": "<gridHubIpAddress>"
  }
}

After that, register the node with the following commands:
java -jar selendroid-standalone-0.14.0-with-dependencies.jar -port <portnumber>
curl -H "Content-Type: application/json" -X POST --data @selendroid-nodes-config.json http://mygridhubip/grid/register

Selendroid implementation

After you have prepared your system, you must configure your tests to run on Android devices. Selendroid can be integrated as a node into the Selenium grid, so you need two different configurations for local and grid. First you have to add the current selendroid.jar to your project. You can download it from Maven Central (http://search.maven.org/#search|ga|1|selendroid). Then you have to set the capabilities and the driver for Android.

Code snippet local configuration:

WebDriver driver = new SelendroidDriver(
    new URL("http://localhost:4444/wd/hub"), 
    SelendroidCapabilities.android());

Code snippet for grid configuration:

DesiredCapabilities capabilities = SelendroidCapabilities.android();
capabilities.setBrowserName("android");
WebDriver driver = new SelendroidDriver(
    new URL("http://myseleniumgridhub:port/wd/hub"), capabilities);

The following code snippet shows a Selendroid test example. You can use the same test for a desktop browser if you configure the capabilities accordingly.

@Test
public void myFirstSelendroidTest() {
  driver.get("http://testandwin.net");
  WebElement element = driver.findElement(
    By.xpath(".//*[@id='search-3']/form/label/input"));
  element.sendKeys("Test Automation");
}

Issues

In case umlauts cannot be set correctly when calling sendKeys, please have a look at http://selendroid.io/advanced.html#syntheticEvents. You can use the following code snippet, which works well with special characters:

Configuration configurable = (Configuration) driver;
configurable.setConfiguration(DriverCommand.SEND_KEYS_TO_ELEMENT, "nativeEvents", false);

If you change your Selendroid version to a new one, it can happen that you get this error: android.util.AndroidException: INSTRUMENTATION_FAILED: io.selendroid.io.selendroid.androiddriver/io.selendroid.server.ServerInstrumentation. This error occurs if a previous version installed on your device conflicts with the new version. To fix this issue, open a command line, step into the platform-tools folder of the Android SDK, and uninstall the package from your device with the following command:
adb shell pm uninstall io.selendroid.io.selendroid.androiddriver

Compare Screenshots with Selenium WebDriver

Use case and motivation

Sometimes it is necessary to test the design or the correct position of one or more elements on a website. In this case it can be useful to compare screenshots of the tested website with a reference screenshot.

And it is a good idea to do this kind of test automatically.

Below we will show how to compare a screenshot taken of any website with a reference screenshot, and which tools are recommendable to do this smartly.

We will also point out some pitfalls and challenges, e.g. how the test deals with changing elements like advertising media.

Compare screenshots implementation

The example below shows how an image comparison with ImageMagick® and im4java can be implemented. The source code does not follow coding standards; it serves only to illustrate the image comparison.

ImageMagick can be used to create, edit, compose, or convert bitmap images. The functionality of ImageMagick is typically utilized from the command line. Im4java is a pure-java interface to the ImageMagick command line.

The first source code snippet shows the method to compare images which we will use later. The different metrics which can be used to compare images are explained on the ImageMagick website. When the images are not equal, the compare command will throw an exception.

import org.im4java.core.CompareCmd;
import org.im4java.process.StandardStream;
import org.im4java.core.IMOperation;
...
boolean compareImages (String exp, String cur, String diff) {
  // This instance wraps the compare command
  CompareCmd compare = new CompareCmd();
 
  // For metric-output
  compare.setErrorConsumer(StandardStream.STDERR);
  IMOperation cmpOp = new IMOperation();
  // Set the compare metric
  cmpOp.metric("mae");
 
  // Add the expected image
  cmpOp.addImage(exp);
 
  // Add the current image
  cmpOp.addImage(cur);
 
  // This stores the difference
  cmpOp.addImage(diff);
 
  try {
    // Do the compare
    compare.run(cmpOp);
    return true;
  }
  catch (Exception ex) {
    return false;
  }
}

The next source code snippet is using the just introduced method to compare images. At first a webpage is opened and a screenshot is taken.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.apache.commons.io.FileUtils;
import java.io.File;
...
// Get the driver and open the page
WebDriver driver = new FirefoxDriver();
driver.get("http://testandwin.net");
 
// Take Screenshot
File scrFile = ((TakesScreenshot)driver).getScreenshotAs(OutputType.FILE);
 
String current = "c:/temp/image.png";
FileUtils.copyFile(scrFile, new File(current));
 
// Compare the images
boolean compareSuccess =
  compareImages("c:/temp/expected.png", current, "c:/temp/difference.png");
 
// Close the driver
driver.close();

It is also possible to take a screenshot of only a certain web element. If you would like to do this, you can include the following code snippet before the FileUtils.copyFile(…) call.

import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import org.openqa.selenium.Point;
import org.openqa.selenium.WebElement;
...
WebElement webElement = ...;
BufferedImage image = ImageIO.read(scrFile);
Point point = webElement.getLocation();
BufferedImage elementImage = image.getSubimage(
      point.getX(), point.getY(),
      webElement.getSize().getWidth(), webElement.getSize().getHeight());
ImageIO.write(elementImage, "png", scrFile);

How to deal with changing parts

Most websites contain dynamic elements like advertising media, version numbers, dates, etc. These elements make it almost impossible to compare screenshots. The solution we are using is to hide those web elements with the method listed below.

void hideElement(WebElement e, WebDriver d) {
  ((JavascriptExecutor)d).executeScript("arguments[0].style.visibility='hidden'", e);
}

Compare Image Size

When comparing screenshots it can be useful to compare the image size first, because when the image sizes differ the comparison will fail. For example, image sizes can differ when taking screenshots on different machines.

import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
...
BufferedImage image = ImageIO.read(new File(path));
String size = image.getWidth() + "x" + image.getHeight();
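The size check can be wrapped in a small guard method. The sketch below is self-contained and uses in-memory images for illustration; the helper name sameSize is my own, not part of any library:

```java
import java.awt.image.BufferedImage;

public class ImageSizeCheck {

    // Two screenshots are only worth a pixel comparison
    // when their dimensions match.
    static boolean sameSize(BufferedImage a, BufferedImage b) {
        return a.getWidth() == b.getWidth()
            && a.getHeight() == b.getHeight();
    }

    public static void main(String[] args) {
        // In a real test these would come from ImageIO.read(new File(...))
        BufferedImage expected = new BufferedImage(1024, 800, BufferedImage.TYPE_INT_RGB);
        BufferedImage current = new BufferedImage(1024, 768, BufferedImage.TYPE_INT_RGB);
        if (!sameSize(expected, current)) {
            System.out.println("Size mismatch - skipping pixel comparison");
        }
    }
}
```

Calling such a guard before compareImages avoids a confusing metric failure when the screenshots were simply taken at different resolutions.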

Prerequisites

Before using these examples, you have to install ImageMagick on the machine running the tests. The installation is described on the ImageMagick website.

Additionally you have to include the im4java jar in your classpath.

Example images

Below you will find an example of the screenshot comparison. The first image shows the current image, the second one the expected image, and the third one the difference image.
Compare Screenshots - Current Image

Compare Screenshots - Source Image

Compare Screenshots - Difference Image