As part of my talk at Heisenbug, I showed two examples of automation in virtual reality: Unium and Poco (Airtest). I felt the explanations weren’t very deep, so I decided to use this format to expand on them.
1) Preparing the APP to test
In the examples below I’m going to be using a VR app that I created as part of the VR foundations Udacity course. The app was built using Google VR Cardboard (which is currently sort of deprecated, but I believe it’s the best first step into VR development).
You can find the code here. It was built with Unity version 2017.1.0p4 and Google Cardboard version gvr-unity-sdk-1.0.3. I’m specifying this because Unity is sometimes funny when switching versions, and things go deprecated fast in Google VR, so if you use other versions it might not work straight away.
This application is a maze game. The user has to find a key and then the house whose door it unlocks. There are also coins scattered around the maze, and the goal is to find them. Inside the house there is a board showing the number of coins found.
By unit testing the application with the test runner, we can check some things. For example, we can check that the door’s ‘lock’ variable is on or off. However, we don’t see the real result as the user would, which could hide issues such as: the lock variable not causing anything to happen in the scene, the sound or particle system being unpleasant, the user being unable to reach the objects, or their sizes feeling wrong…
To test this properly, you would have to actually go around the maze, get all the coins, get the key, go back to the door and check everything that way. The path is always the same, and manual testing could get very time consuming. That’s the reason to look for a way to automate user behaviour, as we will see with Unium.
2) Installing and configuring Unium
Although there is a GitHub project available, I recommend you install it from the Asset Store in Unity.
Then we search for Unium and download the asset.
A folder will then appear in the project’s Assets folder. Next, we add a new empty object to the scene and attach the Unium script from that folder to it. What this script does is open a local web server, on port 8342 by default (this can be changed), that we can use to interact with the application, retrieving data from it and sending data to it. If you are curious about how this could work, check out my post about APIs. This creates a communication channel between the server and our Unity program.
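To illustrate, here is a minimal Python sketch of talking to that server. The `/q/scene/...` query path follows Unium’s GET API as I remember it, and the object name “Main Camera” is from my scene, so treat both as assumptions to adapt:

```python
import urllib.request

UNIUM = "http://localhost:8342"

def unium_url(query):
    # Build the GET URL for a Unium query such as "scene/Main Camera.name".
    return f"{UNIUM}/q/{query}"

def unium_get(query):
    # Send the query to the running app and return the raw JSON response.
    with urllib.request.urlopen(unium_url(query)) as resp:
        return resp.read().decode("utf-8")

# With the app running and Unium enabled, this would read the camera position:
# print(unium_get("scene/Main Camera.transform.position"))
```

The same URLs can of course be opened directly in a browser, which is handy for exploring before automating anything.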
3) Using Unium
Now we can make calls directly in a browser, as if we were accessing a website, such as:
This would set the camera to a rotation of 180 degrees on the x axis, 0 on the y axis and 0 on the z axis.
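A hedged sketch of issuing that same rotation call from Python rather than the browser (the assignment-style `eulerAngles=` query and the “Main Camera” object name are assumptions based on my setup):

```python
import urllib.parse

def set_euler_url(x, y, z, host="http://localhost:8342"):
    # Build the Unium query that assigns the camera's eulerAngles.
    value = urllib.parse.quote(f'{{"x":{x},"y":{y},"z":{z}}}')
    return f"{host}/q/scene/Main Camera.transform.eulerAngles={value}"

url = set_euler_url(180, 0, 0)  # the 180/0/0 rotation from the text
```

Opening the resulting URL (with `urllib.request.urlopen` or in a browser) while the app runs performs the rotation.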
Movement of the camera is crucial for VR apps, and we can already see three issues with these calls. The first one is that in VR applications we usually move by clicking objects called “waypoints” or with handset functions. With Unium we could treat waypoints as objects and click them as such:
Note that I have all waypoints under another object called “Waypoints”, the exact waypoint to click is called “Waypoint_7”, and the script that has the click method is called “Waypoint”. The call to click on the door is easier, because we only have one door object and we are not reusing names:
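A sketch of what that click call could look like when built from Python. The object path mirrors my hierarchy, and `OnClick()` is a placeholder for whatever your click method is actually called:

```python
def click_url(waypoint, host="http://localhost:8342"):
    # Invoke the click method on a waypoint's "Waypoint" script via Unium.
    # OnClick() is an assumed method name - substitute your own handler.
    return f"{host}/q/scene/Waypoints/{waypoint}.Waypoint.OnClick()"

url = click_url("Waypoint_7")  # the waypoint mentioned in the text
```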
If you are not sure of the exact name of the object you can use wildcards and queries.
The second issue is the use of eulerAngles. I was able to read the rotation with quaternions (using “rotation” instead of “eulerAngles” at the end of the call), but I was not able to set it for some reason (maybe there is support for it now, or I was doing something wrong at the time of testing).
The last issue is the rotation of the camera: a user rotating towards a point or in a direction creates a gradual movement, but the rotation done with the call happens in a single jump. This does not really emulate user behaviour, but with Unium we could write a script to emulate it, or use the web socket capabilities to run a query at a chosen frequency.
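To sketch that idea, here is a hedged Python loop that replaces the single jump with many small eulerAngles updates sent at a fixed frequency. The query syntax and object name are the same assumptions as in the earlier calls, and it needs the app running with Unium:

```python
import time
import urllib.request

def rotation_steps(target_y, steps=30):
    # Interpolated y angles from just above zero up to the target angle.
    return [target_y * i / steps for i in range(1, steps + 1)]

def rotate_smoothly(target_y, steps=30, delay=1 / 30,
                    host="http://localhost:8342"):
    for y in rotation_steps(target_y, steps):
        url = (f'{host}/q/scene/Main%20Camera.transform.eulerAngles='
               f'{{"x":0,"y":{y},"z":0}}')
        urllib.request.urlopen(url)  # send one small rotation update
        time.sleep(delay)            # ~30 updates per second
```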
Unium can be driven from many scripting setups to automate these calls, for example NUnit (the framework we would use for unit tests in C#). Here is an example of code to do this:
Unium is a powerful tool for automating Unity projects. VR automation is possible with it, even emulating user behaviour. However, this sort of automation is not fully black box, as you need to add this code into the application and possibly use some methods that already exist in the application. Ideally, a fully black-box automation would simulate the behaviour of the controllers directly.
What’s the difference between automating any game vs automating a VR game? The main difference is that instead of moving a player, you need to automate the camera movement and take into account the handsets (if present).
The translation (walking) is usually done by clicking waypoints, and this can be done easily with Unium. Rotation can be emulated by rotating the camera, as demonstrated before.
The next step would be to automate the handsets. However, if we do this directly by calling functions on the handset’s controllers, we might enter the territory of testing that the controller provider’s functions work fine. Instead, we might just want to make sure that our own functions work fine, so maybe we should call the functions that we created to control our application when the controller selects and clicks around. But that’s… well… another story…
I know, I know, this doesn’t seem like anything that has to do with testing. However, I found this concept to be very challenging and I think some people might be interested in knowing about it, especially if they are considering automating objects on VR.
You want to be able to rotate the main camera with Google VR. Google’s designers have reached the conclusion that moving the camera in virtual reality should be something the user does, not the developer (I think it was possible in earlier versions of Unity and the Google VR SDK).
So I decided to create an empty object and assign the camera as a child of it. To move the camera, I move this object. To undo the camera movement done by the user, I move the object the opposite way. Should be easy, shouldn’t it?
Well, sometimes it is… but other times it does not work. I spent a very long time searching online for other people who had this issue, and it was either working for them or not; nobody would explain why. I believe the reason to be the effect of gimbal lock.
I know, that sounds like a made-up term, but it is actually an important concept that you should be aware of, so let me explain:
Gimbal lock is an effect in which the object loses one degree of freedom of rotation, which is common when you use three-dimensional rotation vectors. At some point, two of the rotation axes get lined up in such a way that every time you move one, the other one moves as well. If you are a visual person, this video from GuerrillaCG (a YouTube channel with information about 3D modelling and animation) explains it quite clearly.
How did I understand what this meant, and how did I figure out this was the issue? I decided to encapsulate the camera object in another object, and then that object in another one. Then I assigned them some colored spheres and pointers and ran some tests. After a while, it became clear that the movements were not coordinated and that the rotation variables were not behaving as expected.
The difference between Euler angles (the usual 3-component vector that describes a rotation in 3D space) and quaternions (a 4-component vector) could help us avoid the gimbal lock effect.
I know, it still sounds like gibberish… Basically (very basically), quaternions are a mathematical way of representing 3D rotations using 4 coordinates (representing them in 4D space instead), which an Irish mathematician called William Rowan Hamilton came up with and decided to literally set in stone, on a bridge.
They are difficult to understand and visualise (we don’t intuitively know how a 4D space should look), but the maths applied to them is simpler than for 3D vectors. For example: if you want to rotate an object about the x, y and z axes, the object will not end up in the same state if you perform the entire rotation on x, then on y, then on z, as it would in a different order (because each rotation affects the axes of the following ones). Quaternions, however, represent the whole orientation in one piece and let you rotate about any axis in a single operation, which is particularly convenient when you are trying to undo a movement or automate a 3D object.
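A quick numerical check of that order-dependence claim, using plain rotation matrices (no Unity required):

```python
import math

def rot_x(a):
    # Rotation matrix for angle a (radians) about the x axis.
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    # Rotation matrix for angle a (radians) about the y axis.
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def matmul(a, b):
    # Plain 3x3 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

q = math.pi / 2
xy = matmul(rot_x(q), rot_y(q))  # rotate about y first, then x
yx = matmul(rot_y(q), rot_x(q))  # rotate about x first, then y
different = any(abs(xy[i][j] - yx[i][j]) > 1e-9
                for i in range(3) for j in range(3))
print(different)  # True: the two orders give different orientations
```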
I am linking to a lot of articles below for you to dive deeper, but the two main points to remember are:
You should keep one of the values of the quaternion fixed (or set to 1), as this is the way of telling the quaternion that there has been a rotation.
Rotation angles in quaternions are half those expected in 3D rotation (so be careful when calculating rotations in 4D). This is known as double cover, and it’s actually quite useful once you get the grasp of it.
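The half-angle rule and the double cover can be seen with a few lines of arithmetic; this assumes the usual (w, x, y, z) axis-angle construction:

```python
import math

def quat_about_z(theta):
    # (w, x, y, z) quaternion for a rotation of theta radians about the z axis.
    # Note the half angle: theta / 2, not theta.
    return (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

q1 = quat_about_z(math.pi / 2)                 # a 90 degree rotation
q2 = quat_about_z(math.pi / 2 + 2 * math.pi)   # same rotation plus a full turn

# Double cover: adding a full 2*pi turn negates every component of the
# quaternion, yet q1 and q2 describe the same physical rotation.
flipped = all(abs(a + b) < 1e-9 for a, b in zip(q1, q2))
print(flipped)  # True
```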
If you are interested in knowing more about quaternions, you can check this and this video from 3Blue1Brown (a YouTube channel full of interesting mathematical concepts with very accessible explanations; I really recommend it). I also enjoyed the explanations in this article. If you are considering working with 3D object movement at some point, you should definitely watch these videos and play around with the simulators to understand everything better.
It is not my intention to go against the design of a system such as Google VR, and you should listen to the platform owners, as, in general, it is wise not to tinker with the camera. However, sometimes I find it useful to undo a user’s movement, for example for automation purposes or when the gyroscope is drifting on its own (more about phone sensors in this interesting article). In these cases, the use of quaternions is so fundamental that it is definitely worth spending some time learning about them. The next step after the automation of the camera and other 3D objects would be to automate the hand movements, but that’s… well… another story.
Previously, I’ve written a couple of posts about how to get yourself started in VR, in which I promised some stories about testing in this world.
Why do I call it world instead of application? Because the goal of virtual reality is to create realistic synthetic worlds that we can inspect and interact with. There are different ways of interacting in these worlds depending on the device type.
Testing an application in VR is similar to testing any other application, although we have to take into account some particularities of the environment. Later on we will see different kinds of testing and think about the additional steps they need in VR, but first let’s see the characteristics of the different types of devices.
Types of devices:
Phone devices (Google Cardboard or Daydream) – these allow you to insert your phone (or tablet) into the device to play a VR app on it.
This is possible because most smartphones nowadays come with gyroscopes and accelerometers: sensors that detect the phone’s rotation and use the Earth’s gravity to determine its orientation.
Some Cardboards (or other plastic versions) can have buttons or a separate trigger for actions on the screen (as is the case for Daydream), but the click is usually not performed on the object. Instead, it is done anywhere on the screen while the sight is fixated on the object. If the device does not have a button or clicker, the developer has to rely on other information for interaction, such as entering and exiting objects or analyzing the length of time the user’s gaze stayed on the object.
Computer-connected devices (HTC, Oculus, Samsung VR…) generally come with (at least) a headset and handsets, and have a high-resolution, low-persistence OLED display embedded in the headset, so you don’t need to connect it to a screen. They detect further movement, as it is not just about the movement of the head but also movement around the room and hand gestures. This is done differently depending on the device itself.
We have moved from being able to detect the user’s head movement (with reasonably sized devices), to using sounds, to using hand gestures… so testing VR applications is getting more complicated, as it now requires testing multiple inputs. The handset devices usually have menu options as well.
Before going on, I’d like to mention AR. AR is about adding some virtual elements to the real world; with AR we do not create the world. However, AR has a lot in common with VR, starting with the development systems. Therefore, the testing of the two platforms would be very similar.
We have talked about the hardware devices in which the VR applications would run, but we should also talk about the software in which the applications are written.
Right now there are two main platforms for developing in VR: Unity and Unreal, and you can also find some VR web apps. Most things done with Unity use C# to control the program. Unreal feels a bit more drag-and-drop than Unity.
Besides this, if you are considering working on a VR application, you should also take into account the creation of the 3D objects, which is usually done with tools such as Blender, although you can also find some already created online.
But, what’s different in a VR application for testers?
Tests in VR applications:
VR applications have some specifics that we should be aware of when testing. A good general way of approaching testing in VR is to think about what could make people uncomfortable or what could be difficult for them.
For example, sounds can be very important, as, when done appropriately, they create very realistic experiences that make you look where the action is happening or help you find hidden objects.
Let’s explore each of the VR testing types and list the ways we can ensure quality in a virtual world. I am assuming you know what these types of testing are about, so I will not define them deeply, but I will give examples and talk about the barriers specific to VR.
Usability testing: It ensures that the customer can use the system appropriately. There are additional checks when testing in VR, such as verifying that the user can see and reach the objects comfortably and that these are aligned appropriately.
We are not all built the same way, so maybe we should offer some configuration before the application starts for users to be able to interact properly with the objects. For example, the objects around us might not be easily seen or reached by all our users, as our arms are not all the same length.
You should also check that colors, lighting and scale are realistic and in accordance with the specifications. This could not only affect quality but change the experience completely. For example, maybe we want the scale to be bigger than the user to give the feeling of shrinking.
It is important to verify that movement does not cause motion sickness. This is another particularly important concept for VR applications: when what you see does not line up with what you feel, you start feeling uncomfortable or dizzy. Everyone has a different threshold for this, so it is important to make sure the apps are not going to cause it when used for a long time. For example, ensure the motions are slow, place the user in a cabin-like area where the things around them are static, maintain a high frame rate and avoid blurry objects.
If there is someone on your team who is particularly sensitive to motion sickness, this person would be the best one to take the tester role for the occasion. In my case, I asked for the help of my mother, who was not at all used to any similar experience and was very confused by the whole thing.
Accessibility testing: This is a subset of usability testing that ensures the application can be used appropriately by people with disabilities such as hearing impairments or color blindness, by elderly users, and by other disadvantaged groups.
Accessibility is especially important in VR, as there are more considerations to make than in other applications, such as: mobility, hearing, cognition, vision and even olfaction.
For mobility, think about the height of the users, hand gestures, range of motion, and their ability to walk, duck, kneel and balance, as well as speed and orientation…
To ensure the inclusion of users with hearing issues, subtitles for the dialogs are a must, and they should be easily readable. The position of the subtitles should tell the user where the sound is coming from. When a VR experience requires speech, it would be nice if the user could also communicate through other, visual means.
There are different degrees of blindness, so this is something we want to take into account. It is important that the objects have good contrast and that the user can zoom into them in case they are too far away. Sounds are also a very important part of the experience, and ideally the user should be able to move around and interact with the objects based on sound.
I realized how different the experience could be depending on the user just by asking my mother to help me test one of my apps. She usually wears glasses to read, so from the very beginning she could not see the text as clearly as I did.
I mentioned before that in VR it is possible to interact with objects by focusing the camera on them for a period of time. This is a simple alternative to clicking, without the need for hand gestures, for people who have difficulty using them.
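As a sketch of how such gaze-based (“dwell time”) interaction can be implemented, here is framework-agnostic Python with hypothetical names; a Unity version would live in a MonoBehaviour’s Update():

```python
class GazeSelector:
    """Selects an object once the user's gaze has stayed on it long enough."""

    def __init__(self, dwell_seconds=2.0):
        self.dwell = dwell_seconds  # how long the gaze must stay on a target
        self.target = None          # object currently under the gaze
        self.elapsed = 0.0          # time the gaze has been on the target

    def update(self, gazed_object, dt):
        # Call once per frame with the object under the gaze (or None)
        # and the frame time dt; returns the object when it gets "clicked".
        if gazed_object != self.target:
            self.target, self.elapsed = gazed_object, 0.0  # gaze moved: reset
            return None
        if self.target is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.dwell:
            self.elapsed = 0.0  # fire once, then restart the timer
            return self.target
        return None
```

For example, with a 0.3-second dwell time, three frames of 0.1 seconds on the same object trigger the selection on the next frame.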
There are many sources online about how to make fully accessible VR experiences, and I am sure you can come up with your own tests.
Functional testing: Its purpose is to ensure that the entire application functions as it should in the real world and meets all requirements and specifications.
To test a VR application, you need to know the appropriate hardware, the target users and other design details, which we will go through with the other types of testing.
Also, in VR everything is 360 degrees in 3 coordinates, so camera movement is crucial in order to automate tests.
Besides, there might be multiple object interactions around us that we also need to verify, such as collisions, visibility, sounds, bouncing…
There are currently some testing tools, some within Unity, that could help us automate things in VR, but most are thought of from a developer’s perspective. That’s one more reason to ensure that the developers are writing good unit tests for the functions associated with the functionality and, when possible, for the objects and prefabs. In particular, unit tests should focus on three main aspects: the code, the object interaction and the scenes. If the application uses a database, or an API to control things that then change in VR, we should still test those as usual. These tests would alleviate the integration testing phase.
Many things are rapidly changing in this area, as many people have understood the need for automation in VR. When I started with Unity, I did not know these testing tools existed and tested most things manually, but there are some automated recording and playback tools around.
Performance testing: This is the process of determining the speed or effectiveness of a system.
In VR, the scale, a high number of objects, different materials and textures, and the number of lights and shadows can all affect system performance. Performance varies between devices, so the best thing to do is check on the supported ones. This is a bit expensive, which is why some apps focus on only one platform.
Many of my first apps ran perfectly well on the computer but would not even start on my phone.
It is important to strike a good balance to have an attractive and responsive application. New technologies also make good performance important so that the experience stays realistic and immersive. But sometimes, in order to improve performance, we have to give up other things, such as the quality of materials or lights, which also makes the experience less realistic.
In the case of Unity, the profiler tool will give you some idea of the performance, but there are many other tools you can use. In VR, we need to be careful with the following data: CPU usage, GPU usage, rendering performance, memory usage, audio and physics. For more information on this, you can read this article.
Also, you can check for memory leaks, battery utilization, crash reports, network impact… and use any other performance tools available on the different devices. Some of these take a snapshot of the performance over time and send it to a database for analysis, or set up alerts if anything spikes, so you can get logs and investigate the issue; others are installed directly on the device and run on demand.
Last but not least, VR applications can be multiplayer (as is the case with VRChat), so we should verify how many users can connect at the same time and still share a pleasant experience.
Security testing: Ensures that the system cannot be penetrated by hacking.
This sort of testing is also important in VR, and as the platforms evolve, new attacks could come to life. Potential future threats might include virtual property theft, especially with the evolution of cryptocurrency and the monetization of applications.
Localization testing: as with any other application, we should make sure that proper translations are available for the different markets and we are using appropriate wordings for them.
Safety testing: There are two main safety concerns with VR (although there might be others you can think of):
1. Can you easily know what’s happening around you?
Immersive applications are the goal of VR, but we are still living in a physical world. Objects around us could be harmful, and not being aware of alarms such as a fire or carbon monoxide alarm could have catastrophic results. Being able to disconnect easily when an emergency occurs is vital in VR applications. We should make sure the user stays aware of the immersion, for example by providing a reminder to move nearby objects out of the way before starting.
Every time I ask someone to try out some app with a mobile device, they start walking and I have to stop them before they hit something. And this is the mobile device, not the fully immersive experience.
Even I, being quite aware of my surroundings, once tried a device that included sound and hit my hand on several objects around me. Also, I could not hear what the person who handed me the device was telling me.
2. Virtual assaults in VR:
When you test an application in which users can interact with each other in VR, the distance allowed between users could make them feel uncomfortable. We should think about this too when testing VR.
Luckily, I haven’t experienced any of this myself, but I have read a lot from other people talking about this issue. Even in some online playthroughs of VRChat, you can see people breaking through the players’ comfort zones.
Testing WITH VR: There are some tools being developed in VR for many different purposes, such as the emulation of real scenarios. Testing could be one of them: we could, for example, have an immersive VR tool to teach people how to test by example. I have created a museum explaining the different types of testing; maybe the next level could be a VR application with issues that the users have to find.
What about the future?
We have started to see the first wireless headsets and some environmental experiences such as moving platforms and smell sensations.
We can expect the devices to get more and more accurate and complete, and therefore there will be more things to test and take into account. We also expect the devices to get more affordable with time, which will grow the market.
One of the most common questions I get when people reach out to me about virtual reality (VR) is: how do I get started? Even though I have already written an article about this, maybe I should talk about my own experience instead, so you can get more specific examples. If you are not into VR, please bear with me, as what I’m about to tell you can help you get into any topic you wish to get into.
My first experience with a VR device was during a hackathon at Microsoft, when one of the interns brought his Oculus Rift. Back then it was a very expensive device, so it was very interesting to be able to play around with one. But I found it still had some issues to solve, starting with adding hand gestures.
As life goes, sometimes you get stuck in what you are doing at work and don’t get the time to investigate interesting new things. In my case, I bought a house, and there was a lot of stress related to that. It was not until years later that I actually got the chance to try another device, this time on mobile, at a meetup called “Tech for Good” in Dublin. At this meetup, they were using mobile VR devices for social impact. It was my first experience with phone VR, and I thought: OK, now this is something that anybody can get and use, therefore it is something that is going to need testing.
After that, another hackathon (this time an open NASA hackathon) got me interested in VR and AR again. I highly recommend this hackathon, as I made really good friends there and we had a lot of fun building an AR/VR experience to navigate a satellite. My team (which won the local People’s Choice award in Dublin) created an application that simulates a satellite in orbit (in AR) and switches to the view from that satellite (in VR). If you are interested, here is our project.
When I found myself having more time, I started looking for information about VR. I found a Udacity course on VR and decided to take it. Back when I started, the course covered many topics, although they have since separated it into different specialties, which makes much more sense. If you are interested in seeing some of the projects I made during this course, check my GitHub account.
After that, I got interested in open source projects on AR and wanted to start doing testing there… However, life got in the way again when I moved to China. It’s still on my to-do list.
I was lucky enough to start working for NetEase Games in China right after, so I had enough flexibility and hardware access to do some more research in VR, including some automated testing with Google Cardboard, which should now be integrated into the Airtest project (I know, not many people are using Google Cardboard anymore but, hey, you need to start somewhere… the other research is still ongoing).
I was also lucky to have the opportunity to attend the second Sónar in Hong Kong, a music and technology festival, which showcased some cool new technologies and devices in VR (including aroma experiences and snow-surfing).
Besides that, I started to think of plans and ways of testing VR applications too (as NetEase was working on some projects like Nostos, which I had the opportunity to try myself and really enjoyed).
Around that time, I gave a talk at the Selenium conference in India gathering all this knowledge (which I talked about in this post). In order to prepare for the talk, I played around and created my own ‘conference simulator’.
Another thing I do frequently to gain knowledge in VR is watch playthroughs and online reviews, as you can learn a lot from watching others play, and it can be very good for understanding your potential users if you are working on a game. I have also read some books on the matter (shout-out to Packtpub, which gives away a free IT book every day!).
Have you found a pattern?
I know you have; if you are a reader of this blog you surely are a clever Lynx by now, but just in case, I have highlighted it in bold letters: attending (and, after a while, starting) hackathons, meetups, talks and festivals, watching or reading related online content and books, and playing around in courses, open source projects, at work and on your own projects will get you into anything you are interested in.
It sure sounds like a lot of things to do, but the tools are already around you, and I’m talking about years’ worth of experience here. Just take one thing at a time and you too will become an expert in that thing you are into. The biggest difficulty is to pick which one to take at any given time and to be honest with yourself about how much time you can spend on it. (I really regret not having put more effort into the AR open source project when I had the chance.)
Of course, if you are not really into it, then it will sound like a lot of work, in which case it’s probably better to save yourself the time and pick something else. I like to think of it as a test of passion or, in the words of Randy Pausch in his talk ‘Achieving Your Childhood Dreams’: “brick walls”. (By the way, this is one of the best motivational talks I’ve ever watched, and I actually re-watch it at least once a year to keep myself motivated. Also, it mentions VR too 🙂 )
As you can imagine, this is not the only subject I have spent time on or given my attention to; another big one for me is artificial intelligence, but that’s… well… another story.
If you are a regular reader of my blog, you probably expect a testing story. This is not exactly one, but have some patience, as I will link it with testing in upcoming posts.
Virtual reality is a field I was always curious about, ever since I was a little lynx, before it was even possible to bring it to users (as the devices were not quite portable back then).
On the other hand, I was looking to do a nanodegree from Udacity to learn something new and keep myself updated. When you have been working as a developer for a while (especially as a developer in test), you need to keep up to date. I once heard feedback about a candidate interviewed by some friends: he did not really have 10 years of experience; he had the same 1 year of experience repeated 10 times. This can easily happen to anybody. To avoid it, what I do and recommend doing is: keep learning new stuff.
And so, I decided to take the VR nanodegree course at Udacity.
Advice if you are considering the VR nanodegree course:
The first thing you need to know about virtual reality is that you will need a device for it, otherwise you won’t be able to test anything you do.
The second thing you need to know about VR: if you want to work on multiple platforms, you also need multiple devices. This might seem obvious, but most current technologies (think about mobile, for example) have emulators that let you deploy and test on different devices. For VR, this is not there yet (as of the time I’m writing this article, and to my knowledge).
So, if you are planning on getting into the nanodegree course and into actual VR development, get ready to purchase an HTC Vive or an Oculus Rift, unless you are lucky enough to be able to borrow one, or unless you prefer to take the speciality about cinematics and 360 recording. I ended up picking this speciality. Not that I did not want to spend the money on a cool VR device that would also let me play cool games, but I had recently moved countries (and continents, in fact) and did not want to carry much stuff around with me.
One more thing to take into account: VR devices may come with minimum computer specifications, so you might also need to upgrade your computer for the device to work properly.
Luckily for us, in VR we can also develop for mobile, which only requires a cheap device into which to insert your phone (you could even build your own). You can’t do as many things as with hand controllers and body movement detectors, but you can still do some cool things. For the nanodegree’s first modules, this is all you need, and the course provides a cardboard to its students (which is great, because it has a click option that some other devices lack).
However, there is another thing you could get stuck with, although I think there are some workarounds; it will at least slow you down: you cannot develop directly for an iOS phone from a Windows device, you have to do it from a Mac.
In terms of content, I would advise you to be interested in multimedia and gaming if you decide to go through the course.
Feelings about the course itself
I actually really enjoyed the course (except for the speciality I was forced to take due to lack of devices). I think the content is quite good, and the projects are challenging and open to creativity.
It’s also great to network with other people with interest in VR.
In terms of testing in VR, there is currently no module about it, but they do explain many things about performance and about what a good VR application should look like, so I believe this content is covered across the course.
Where should I start if I want to learn more about VR?
First of all, I think you can do this course for free if you don’t mind not getting the degree (you cannot access the evaluations). That could be a good starting point, and you could always join for the assessments afterwards, which might save you a bit of time and money.
However, I’d say that the best way to get started is to actually try a device and some apps. On Android, you can download the Cardboard and Expeditions apps. You can also look for VR apps or games in your phone’s store (whatever your phone’s OS). Another way could be checking Steam (with a more expensive device), YouTube, or even GitHub to see someone’s code. For example, you can check out mine.
Last but not least, you can also install Unity, as it has an emulator that might give you an idea of how the world will look, and start playing around. There is plenty of documentation about it. Another good tool to check out is Unreal; you don’t need as many development skills with that one.
So, you have checked out some VR apps and devices. You might even have created some small apps. The next step is to be able to tell whether your apps (or someone else’s) are of good quality (this is a test-focused blog, after all). For this, we should keep in mind some new considerations for every type of testing, but that’s… well… another story.