Cardboard to Oculus Quest.

Victor Paz
5 min read · Jul 23, 2021

First of all, let me explain a term that I will use throughout this writing, for convenience and clarity. Porting, as Wikipedia describes it, “is the process of adapting software for the purpose of achieving some form of execution in a computing environment that is different from the one that a given program was originally designed for.”

That is exactly the case I want to raise today: as the title suggests, the hypothetical case of an application developed for a 3DOF device such as Google Cardboard, and what the transformation and porting process to a device like the Oculus Quest would look like.

First, let’s understand the difference between these two platforms. On one hand, Google Cardboard works through a mobile device, either Android or iOS, whose main job is to display the content on its screen. It also uses the phone’s gyroscope to detect changes in orientation, that is, whether the head was rotated on the horizontal axis or tilted on the vertical axis, giving us freedom of movement of up to 3DOF. The limitation is that it does not allow any greater movement, and since the phone is inserted into the headset, there is no interaction beyond head movement. Some headsets include a small controller that can expand the experience with physical buttons or even joysticks, but since it is not included with every device (as is the case with Google Cardboard itself), it is usually not relied on during development, in order to reach as many users as possible.

On the other hand, the Oculus Quest is a totally different device. It is a standalone headset with an integrated screen of up to 2K per eye in its latest model; that is to say, beyond having better resolution, it does not depend on another device to function. It also has an array of outward-facing cameras that let it recognize the player’s position in space at every moment, which enables much greater movement: walking in any direction, leaning, crouching, traversing the virtual space practically the same way as the real one. This is known as 6DOF movement.
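
To make the 3DOF versus 6DOF difference a bit more concrete, here is a minimal sketch in Python (the class and field names are my own illustration, not part of any real SDK): a Cardboard-style headset can only report the head’s orientation, while the Quest also reports its position in space.

```python
from dataclasses import dataclass

@dataclass
class Pose3DOF:
    """What a Cardboard-style headset can report: orientation only."""
    yaw: float    # rotation around the vertical axis (looking left/right)
    pitch: float  # rotation around the horizontal axis (looking up/down)
    roll: float   # tilting the head sideways

@dataclass
class Pose6DOF(Pose3DOF):
    """What the Quest's inside-out cameras add: a position in space."""
    x: float = 0.0  # meters, left/right
    y: float = 0.0  # meters, up/down (leaning, crouching)
    z: float = 0.0  # meters, forward/backward

# With 3DOF we can only rotate the virtual camera; with 6DOF we can also
# translate it as the player walks, leans or crouches.
head = Pose6DOF(yaw=0.0, pitch=-10.0, roll=0.0, x=0.3, y=-0.2, z=1.1)
```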

In addition, the Oculus Quest includes two controllers (one for each hand) that allow a variety of interactions: a joystick to control either the player’s movement or the camera rotation, two action buttons, a trigger usually known as the index trigger, and a side trigger usually known as the grip trigger, which, as the name says, serves to simulate grabbing objects or closing the fist. The controllers also have sensors that recognize the position of the thumb and index finger without pressing any buttons.
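
As a rough picture of what each controller contributes, here is a hypothetical per-hand input state, again just a sketch with made-up names rather than the actual Oculus API:

```python
from dataclasses import dataclass

@dataclass
class QuestController:
    """Illustrative per-hand input state (not the real Oculus API)."""
    joystick: tuple[float, float] = (0.0, 0.0)  # player movement or camera rotation
    button_a: bool = False                      # primary action button
    button_b: bool = False                      # secondary action button
    index_trigger: float = 0.0                  # 0.0 (released) .. 1.0 (fully pressed)
    grip_trigger: float = 0.0                   # used to grab objects / close the fist
    thumb_touching: bool = True                 # thumb resting on the stick
    index_touching: bool = True                 # index finger resting on the trigger

# Example: gripping with the index finger lifted, e.g. to point at something.
right_hand = QuestController(grip_trigger=0.9, index_touching=False)
is_pointing = right_hand.grip_trigger > 0.5 and not right_hand.index_touching
```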

As you can see, these devices are quite different, with different capabilities and functionalities. However, since our goal is to move from a platform we could call “lower” to one with greater capabilities, or “higher”, the port gives us room for improvements and better adaptability. So let’s get right to the thesis of this blog.

The project that we are going to port from Google Cardboard to Oculus Quest is an escape room prototype: a closed room full of objects to interact with, a locked door, and a numeric keypad that unlocks the door when the correct code is entered.
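
The core puzzle logic is the same on both platforms and is independent of how the input arrives. A simplified sketch of it could look like this (the code and names are mine, not the actual project’s):

```python
class EscapeRoomDoor:
    """A locked door that only opens once the correct code has been entered."""

    def __init__(self, secret_code: str):
        self.secret_code = secret_code
        self.unlocked = False
        self.entered = ""

    def press_key(self, digit: str) -> None:
        """Called whenever the player presses one key of the numeric keypad."""
        self.entered += digit
        if len(self.entered) >= len(self.secret_code):
            self.unlocked = self.entered == self.secret_code
            self.entered = ""  # reset after each full-length attempt

    def try_open(self) -> bool:
        return self.unlocked

door = EscapeRoomDoor("4213")
for key in "4213":
    door.press_key(key)
assert door.try_open()  # correct code entered, the door is unlocked
```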

In this first scenario, the game on Google Cardboard offers the following interaction options. A crosshair located in the center of the screen lets us identify every object we can interact with, since their size stands out. For objects that can be grabbed, once the “action button” is pressed (which can be a tap on the screen, a click on an external controller, or the magnetic button included with the Cardboard), the object attaches just below the crosshair and moves with you until you release it. How do you release it? By pressing the button again, anywhere, since we practically only have one action button.
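
Put as a small sketch (my own simplification, not the project’s real script), this single-button gaze interaction is essentially a tiny state machine: the center-screen raycast reports what is being looked at, and one button toggles between grabbing and releasing.

```python
class GazeGrabber:
    """Cardboard-style interaction: one reticle, one action button."""

    def __init__(self):
        self.held_object = None

    def on_action_button(self, gazed_object):
        """gazed_object: whatever the center-screen raycast currently hits (or None)."""
        if self.held_object is not None:
            # Second press anywhere: release whatever we are carrying.
            self.held_object = None
        elif gazed_object is not None and gazed_object.get("grabbable"):
            # First press on a grabbable object: attach it below the crosshair.
            self.held_object = gazed_object

grabber = GazeGrabber()
key = {"name": "rusty key", "grabbable": True}
grabber.on_action_button(key)   # picks the key up
grabber.on_action_button(None)  # pressing again releases it, wherever we look
```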

For interactive but non-grabbable objects, the button simply executes their default action: for example, the number pad is shown or hidden, or the door is opened or closed.
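
Non-grabbable interactables can be sketched the same way: pressing the action button while gazing at them just runs their default behavior (hypothetical code, for illustration only).

```python
class ToggleInteractable:
    """Non-grabbable object: the action button just runs its default behavior."""

    def __init__(self, name: str):
        self.name = name
        self.active = False  # door open / keypad visible

    def default_action(self) -> None:
        self.active = not self.active
        state = "shown/opened" if self.active else "hidden/closed"
        print(f"{self.name}: {state}")

keypad = ToggleInteractable("numeric keypad")
keypad.default_action()  # shows the keypad
keypad.default_action()  # hides it again
```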

Finally, for movement: when the crosshair focuses on a platform you can move to, it is highlighted, and if you click, you are automatically teleported to that position.
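
Gaze teleportation reduces to one check: if the crosshair’s raycast hits a walkable platform, highlight it, and on click move the player there. A minimal sketch, with invented names:

```python
def gaze_teleport(player_position, hit):
    """
    hit: result of the crosshair raycast, e.g. {"point": (x, y, z), "walkable": True}.
    Returns the new player position (teleport) or the old one if the target
    is not a walkable platform. Highlighting would happen while hovering.
    """
    if hit is not None and hit.get("walkable"):
        return hit["point"]  # instant teleport to the gazed platform
    return player_position

pos = (0.0, 0.0, 0.0)
pos = gaze_teleport(pos, {"point": (2.0, 0.0, 3.5), "walkable": True})
print(pos)  # (2.0, 0.0, 3.5)
```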

When porting to the Oculus Quest, we have to write and use new scripts, since we will have new and more advanced interactions, but that brings usability improvements. For example, we no longer depend on a crosshair in the middle of the screen, which, whether we like it or not, gets in the way. Instead, it is enough to emulate virtual hands at the positions of the controllers, giving us better immersion, since the controllers, just like the headset itself, have a real position in the virtual world. The act of grabbing objects also becomes more comfortable and intuitive, since it is enough to use the grip trigger on an object to have it automatically in our hands.
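
Here is a sketch of how grip-trigger grabbing could replace the single-button gaze grab (again illustrative, not the real Oculus interaction SDK): each virtual hand holds at most one object, grabs it while the grip trigger is squeezed over it, and releases it when the grip is relaxed.

```python
class HandGrabber:
    """Quest-style interaction: each virtual hand grabs with the grip trigger."""

    GRIP_THRESHOLD = 0.7  # how far the grip trigger must be pressed to count as a grab

    def __init__(self):
        self.held_object = None

    def update(self, grip_value: float, touched_object):
        """Called every frame with the grip trigger value and whatever the hand overlaps."""
        if self.held_object is None:
            if grip_value >= self.GRIP_THRESHOLD and touched_object is not None:
                self.held_object = touched_object  # squeeze while touching = grab
        elif grip_value < self.GRIP_THRESHOLD:
            self.held_object = None                # relax the grip = release (or throw)

left, right = HandGrabber(), HandGrabber()
right.update(0.9, {"name": "flashlight"})  # grabbed with the right hand
right.update(0.2, None)                    # grip relaxed, the object is released
```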

With this, we can pass an object from hand to hand, bring it closer to our eyes, move it away, turn it over, inspect it, even drop it or throw it with force, all simply with the option of grabbing objects. Something similar happens with interactive objects: to open the door we can use the real gesture of opening a door, and pressing the buttons of the numeric pad is very similar to real life, since we push them with our index finger. Finally, for movement, we can use two different types: teleportation, very similar to the one used on Cardboard (this type of movement is also well known on 6DOF headsets, since it reduces the sensation of motion sickness for new users), or free movement with the analog sticks. But since we no longer have as much control over the user’s movements, we as developers have to be stricter with the limits of the map and with objects that we do not want the player to pass through, such as a closed door, a wall, or a box.
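
Free locomotion is where those developer-enforced limits come in: every frame the analog stick proposes a step, and we simply refuse any step that would put the player inside a wall, a closed door, or a box. A toy sketch under those assumptions (my own simplification, not the project’s code):

```python
def smooth_move(position, joystick, speed, dt, blocked):
    """
    Free (analog-stick) locomotion: propose a step and reject it if it would
    cross a wall, closed door or other collider. `blocked` is any callable
    that answers "is this point inside something the player must not enter?"
    """
    x, y, z = position
    dx, dz = joystick
    candidate = (x + dx * speed * dt, y, z + dz * speed * dt)
    return position if blocked(candidate) else candidate

# A toy boundary: the room is 4 x 4 meters, everything outside is blocked.
def outside_room(p):
    return not (-2.0 <= p[0] <= 2.0 and -2.0 <= p[2] <= 2.0)

pos = (0.0, 0.0, 0.0)
pos = smooth_move(pos, (1.0, 0.0), speed=2.0, dt=0.016, blocked=outside_room)
```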

Thanks to the analysis of the technical capabilities of the Oculus Quest’s software and hardware published on the UploadVR site, it is possible to appreciate the great differences in developing applications across such an impressive technical leap. That leap, however, makes the road more fun, and always aiming to reach the largest possible number of users and devices while maintaining the essence and soul of our project will always be a goal worth pursuing.

As you may have noticed, regardless of the limitations or benefits of each device, we achieve the basic, fundamental, and necessary actions on both platforms. Of course, on the Oculus Quest we get richer interaction, but on the Cardboard we can also perform those actions in a more rudimentary way; that is why the game will work for us on both platforms.

Thanks to everyone who stops to read this blog; I hope it serves as help, guidance, or inspiration…

We’ll read each other again soon, at another point in life.

Victor Paz

I’m a full-stack programmer in training at Holberton School.