Day 45-58: Creating 3 Different Types of Locomotion in VR


Alright, it’s time to get the 100 Days of VR restarted!

Let’s not kid ourselves, I took a break. Too long of a break if you personally asked me, but hey, what can a guy do? The important part is that I finally got my motivation to get back into things now.

Continuing from where we left off: we finished integrating our game with the Google Daydream and looked a bit into playing with Google’s emulator.

The original plan after this point was to start working on another game (maybe a real game), however, I realized that at this point, there are still a lot of things that could be learned. Specifically, locomotion and arm controls.

Originally, I was going to just dive in and start working like we did before with the FPS, but I’d like to explore more common problems that have been solved in VR.

I was told about a great open source framework that’s trying to provide solutions to both problems across multiple VR platforms. I’ve mentioned it before: it’s the Virtual Reality Toolkit (VRTK). However, before I start just using the framework, I’m going to implement some of their solutions myself for practice.

Why? Frameworks come and go, but the more we know about the underlying way something works (even if it’s just a crummy version) the better developers we’ll be. This is especially important if we plan to work in VR or game dev for the long term.

Eventually, we’ll have to roll out a custom solution. It might not be related to anything we’re doing now, but it might involve the things you learn from implementing solutions yourself. Knowing just how to use the frameworks will probably allow you to hack a game together, but knowing more about the underlying implementation details of the framework will help make you a better developer.

At the very least, if we learn nothing at all, we can certainly appreciate the effort that others have put into making the frameworks, so we don’t have to.

However, if you’re interested in starting to work with the VRTK, they have a lot of great Youtube video tutorials that you can go through that shows you how to setup and use VRTK.

Alongside that, if you download the toolkit (either from the Unity Asset Store or their Github Repository), they provide a lot of examples of using their toolkits.

As you can tell from the title, instead of posting on a day-by-day basis, I’m switching to a week-by-week basis. This week, I’m going to look at 4 different locomotion strategies that we can use:

  1. Teleportation
  2. Click and go
  3. Automatic walking mode
  4. Walking with

Let’s get started!

Step 1: Creating Our Work Environment

Step 1.1: Create the project

The first thing we need to do is create our project.

I’m going to assume at this point, we’re somewhat comfortable navigation around in Unity.

I’m going to call my project Locomotion Project, but you can call it whatever you want.

Now in our new project, the first thing we’re going to do is to create an environment.

Step 1.2: Creating the game environment

The first thing we need to do is to create a floor for us to roam around. We’re not going to do anything extravagant, we’re just going to create a simple flat surface and create a player for ourselves.

  1. Create a Plane game object in the hierarchy and call it Floor.
  2. Create an Empty Game Object called Player.
  3. Set Player’s position to be (0, 1, 0). We’re going to need this to offset an annoying problem later where one of the Google SDK scripts forces our camera to be at (0, 0, 0). This will be a problem for games where we have a model to represent our character, but we can address that later.
  4. Make the Main Camera a child of Player.

Next, we need to setup our VR environment. To save time for myself, I refer you to an older post about setting things up.

  1. Follow the instructions to configure your Unity settings and download the SDK all in Step 1 on Day 34

Now that we have the configuration and the SDK installed, let’s add the prefabs that we need so that we can get the game working.

  1. Add the GvrEditorEmulator prefab to our hierarchy. This prefab forces our Main Camera to be always at (0, 0, 0), forcing us to work around it. We’ll solve this problem later when we work on our walking code.
  2. Add the GvrControllerMain prefab to our hierarchy.
  3. Add the GvrReticlePointer prefab as a child of the Main Camera. Make sure that the component is set at position 0, 0, 0 or you might encounter problems.
  4. In the Main Camera add the Gvr Pointer Physics Raycaster script to it.
  5. (Optional) Change the ReticlePointer material attached to the GvrReticlePointer to red (or any other non-white color so it’ll be easier to see).

Note: we’re not going to use the GvrRecticlePointer or Unity’s event system to trigger events. The only purpose of this is to create a marker in the middle that we can use to aim.

When we’re done, here’s what we should have:

Here’s what the scene currently looks like:

Here’s what the game will look like when we play:

Remember, we use alt + mouse to emulate moving our head around the game.

With this finished, we have our game done and now we can move on to the actual locomotion that I want to play through today!

Locomotion 1: Moving with Teleportation

Step 1.1: Being able to teleport by gaze

The most basic of movement that I’m sure that all VR enthusiasts have seen is to select a point and teleport to it. Like so:

Let’s implement this and see how it’s done!

First, let’s create a new script to our Main Camera. I’m just going to call it Teleport. Here’s what it looks like:

using UnityEngine;

public class Teleport : MonoBehaviour
{
    private Camera _camera;

	void Start () {
		_camera = Camera.main;
	}
	
	void Update ()
	{
	    if (Input.GetButton("Fire1"))
	    {
            Shoot();
	    }
	}

    private void Shoot()
    {
        // shoot a raycast from the center of our screen
        Ray ray = _camera.ViewportPointToRay(new Vector3(0.5F, 0.5F, 0));
        RaycastHit hit; // output variable to get what we collided against
        if (Physics.Raycast(ray, out hit))
        {
            if (hit.transform != null)
            {
                // set our location to the point we hit
                Vector3 newLocation = new Vector3(hit.point.x, 1, hit.point.z);
                transform.position = newLocation;
            }
        }
    }
}

 Variables Used

There’s only one variable I used and that’s:

  1. _camera – A private reference to our Main Camera that we assign in Start().

Walking through the code

The first thing to note is that instead of using Unity’s Event System, we’re going to use plain raycasting to detect when we select something.

The only exception where we might not do this is when we need to detect that we’re gazing at a game object or interacting with the UI.

Anyways, walking through the code:

  1. In Update(), we check to see if the user has pressed the Fire1 input button, and if they have, we call our Shoot() function. What Fire1 maps to depends on the platform we’re running on: left click on desktop and a tap on the screen for mobile.
  2. In Shoot(), we fire a raycast from the middle of our camera (where the red dot is), and if we hit anything, we take the position of the point that we hit and set our position to it.

One important note: if we set our Y position to the point we hit (0), we would end up teleporting into the ground; that’s why we set it to 1. As you might notice, this will be a problem if we’re trying to teleport to different Y positions. We’ll address this later.
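For reference, one way we could handle varying ground heights later is to stay a fixed height above whatever surface we hit instead of hard-coding 1. This is only a sketch; PlayerHeight and TeleportTo() are hypothetical names, not part of the script above:

    // Hypothetical variation of Shoot()'s teleport lines: stand PlayerHeight
    // units above whatever surface the raycast hit, instead of always using y = 1.
    private float PlayerHeight = 1f; // illustrative field

    private void TeleportTo(RaycastHit hit)
    {
        // hit.point.y is the height of the surface we hit, so we end up standing on it
        Vector3 newLocation = new Vector3(hit.point.x, hit.point.y + PlayerHeight, hit.point.z);
        transform.position = newLocation;
    }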

Now that we have our teleportation code in, feel free to start playing around with being able to move around the floor.

Step 1.2: Adding Fading to fix headaches

If you have tried to deploy the game and test the app to your cardboard (instructions on Day 35), you’ll probably experience a bad headache from moving around. Let’s fix that. Typically, in a VR game, when we teleport, we want to:

  1. Fade in a black cover over our screen.
  2. Stay in the dark for a while.
  3. Move to our new location.
  4. Fade out our black cover.

Let’s make a simple implementation of this.

I looked around the internet for ways to fade in and out, such as using the GUI to cover the screen or using an Image. Both options would work for a non-VR game, but because we’re in VR, we must work in World Space instead of Screen Space, meaning we can’t take advantage of the UI being drawn right in front of our screen.

Instead, I’ll take the Image example and use a giant cube that’s:

  • A child of our Main Camera so that no matter where we look, it’ll be in front of us
  • very close to the camera so that our whole view is always blocked no matter where we look

Before we do that though, let’s update our teleport script. Here’s the code:

using System;
using System.Collections;
using UnityEngine;

public class Teleport : MonoBehaviour
{
    public GameObject ScreenCover; // the object that's covering our screen

    private Camera _camera;
    private Renderer _coverRenderer; // our material that contains our color
    private float _fadeInTime; // time to fade in our screen cover
    private float _fadeOutTime; // time to fade out our screen cover
    private float _timeInDark; // time we wait before we fade back in
    private float _time; // keep track of time
    private Boolean _isFading; // track if we are currently fading

	void Start () {
		_camera = Camera.main;
	    _coverRenderer = ScreenCover.GetComponent<Renderer>();
	    _fadeInTime = 0.2f;
	    _fadeOutTime = 0.2f;
	    _timeInDark = 0.4f;
	    _time = 0f;
	    _isFading = false;

        // set our starting alpha to 0, i.e. fully transparent
	    SetCoverAlpha(0f);
	}
	
	void Update ()
	{
        // Only shoot when we're not in the middle of teleporting
	    if (Input.GetButtonDown("Fire1") && !_isFading)
	    {
            Shoot();
	    }
	}

    private void Shoot()
    {
        // shoot a raycast from the center of our screen
        Ray ray = _camera.ViewportPointToRay(new Vector3(0.5F, 0.5F, 0));
        RaycastHit hit; // output variable to get what we collided against
        if (Physics.Raycast(ray, out hit))
        {
            if (hit.transform != null)
            {
                // we hit something, cover our screen and teleport to the location
                StartCoroutine(FadeIn(hit));
            }
        }
    }

    // Coroutine to cover the player's screen
    private IEnumerator FadeIn(RaycastHit hit)
    {
        _isFading = true;
        _time = 0f;
        while (_time < _fadeInTime)
        {    
            Fade(0, 1, _fadeInTime); // 1 is opaque, 0 is transparent
            yield return null; // wait until the next frame
        }
        // now that the screen is covered, set our location to the point we hit
        Vector3 newLocation = new Vector3(hit.point.x, 1, hit.point.z);
        transform.position = newLocation;

        yield return new WaitForSeconds(_timeInDark); // wait in the dark
        StartCoroutine(FadeOut()); // start fading away the cover
    }

    // Coroutine to remove the cover from the player's screen
    private IEnumerator FadeOut()
    {
        _time = 0f;
        while (_time < _fadeOutTime)
        {
            Fade(1, 0, _fadeOutTime); // 1 is opaque, 0 is transparent
            yield return null; // wait until the next frame
        }
        _isFading = false;
    }

    // Helper function to change the alpha of our screen cover
    private void Fade(float start, float end, float fadeTime)
    {
        _time += Time.deltaTime;
        float currentAlpha = Mathf.Lerp(start, end, _time / fadeTime);
        SetCoverAlpha(currentAlpha);
    }

    // Helper function to change the alpha of our cover material. We have to 
    // change the material directly, we can't hold a reference to the color variable
    private void SetCoverAlpha(float alpha)
    {
        Color color = _coverRenderer.material.color;
        color.a = alpha;
        _coverRenderer.material.color = color;
    }
}

Variables Used

It looks like there’s a lot of code added in (and you would be correct), but let’s walk through it.

First off, let’s look at the new fields that we introduced to our Teleport class. These new fields are used to help create the effect of teleporting with fading in and out.

  • public GameObject ScreenCover – The reference to the cube that is right in front of our camera
  • private Renderer _coverRenderer – As the name suggests, this is our cover’s renderer (think Mesh Renderer). This is the only way for us to directly affect the material’s color in code.
  • private float _fadeInTime – The duration it’ll take for us to fade in our cover
  • private float _fadeOutTime – The duration it’ll take for us to fade out our cover
  • private float _timeInDark – The duration we wait in the dark before we start to fade out our cover
  • private float _time – Used to keep track of how long time has passed for our “animation” effect
  • private Boolean _isFading – Used to track to see what state we’re in. Normally I would use enums, especially if we have more than 2 states, however, for now, we’ll just use a variable

Walking through the code

There’s a lot of new code that was introduced here, but everything is linear in that each function follows one after another.

Let’s start from the beginning.

  1. In Start() we initialize the fields that we use. Feel free to experiment with the fading times. The interesting part is when we call SetCoverAlpha() to change the alpha of our cube to 0 so that the player doesn’t see it.
  2. In SetCoverAlpha() we change the alpha of our material by copying the material’s color, changing the alpha on the copy, and assigning the copy back to the material.

There are 2 interesting things to note about SetCoverAlpha(). The first is that to change/access the color of our material, we go through gameObject.renderer.material.color. The second is that we can’t directly change the alpha on that color. We must assign a new Color value back to the material’s color, much like what was done above.
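To make that concrete, here’s the pattern in isolation; the commented-out line is the direct write that the compiler rejects:

    // This line will not compile: Material.color returns a copy of a struct,
    // so we can't write through it directly.
    // _coverRenderer.material.color.a = alpha;

    // Instead, copy the color out, change the copy, and assign it back:
    Color color = _coverRenderer.material.color;
    color.a = alpha;
    _coverRenderer.material.color = color;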

  3. The rest of the code is almost the same. In Update(), I also added a check for _isFading alongside our GetButtonDown check so that we can only shoot when we’re not in the middle of an animation.
  4. In Shoot(), instead of teleporting immediately when we hit something, we fade in our cover first and then teleport. The first thing we do is start a coroutine with FadeIn().

Reminder: If you have forgotten, a coroutine is similar to Update() in that you run code across multiple frames (good for things like creating an animation effect)

  • You would immediately run the code in the function until you hit a yield. When that happens, you’ll pause the function until the next frame.
  • Meanwhile, you’ll continue running the remainder of your code that happens after the coroutine
  • When the next frame comes again, you’ll resume where you left off in your coroutine.
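For example, here’s a tiny standalone coroutine (an illustration only, unrelated to our teleport code) that runs across three frames:

    // Needs "using System.Collections;", which our script already has.
    private IEnumerator CountFrames()
    {
        for (int i = 0; i < 3; i++)
        {
            Debug.Log("Frame " + i);
            yield return null; // pause here and resume on the next frame
        }
        Debug.Log("Done!"); // runs three frames after the coroutine was started
    }

    // Kick it off from any MonoBehaviour method with: StartCoroutine(CountFrames());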


  5. In FadeIn() we set _isFading to true to prevent user input. Then we enter a while loop that keeps running for the duration set by _fadeInTime; each iteration we update the alpha of our cover in Fade() and then yield until the next frame, giving us our transition effect.
  6. After we finish the fade-in effect, still in FadeIn(), we wait in the dark for the time set by _timeInDark and then start the last part of our transition by calling FadeOut().
  7. In Fade() we receive our starting alpha, our ending alpha, and the duration of the fading effect. We update our current time and use Mathf.Lerp to get a value between the starting and ending alpha based on how much time has elapsed (currentTime / maxTime). With that alpha value, we call SetCoverAlpha() to create our transition effect (a tiny worked example follows this list).
  8. In FadeOut(), we follow the same logic as FadeIn(), except we swap the alpha values we pass into Fade(). After we’re done with the transition, we set _isFading back to false so that we can resume accepting user input.
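To make the lerp concrete:

    // Halfway through a 0.2 second fade-in, _time has reached 0.1, so:
    float alpha = Mathf.Lerp(0f, 1f, 0.1f / 0.2f); // t = 0.5, alpha = 0.5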

And that’s it! Hopefully looking at the code flow isn’t as bad as it seems now!

Setting up the game environment to use Teleport.cs

Now that we have everything we need to create our transition code-wise, it’s time to create the cover that we’ll use for our code. To do this, we need to create a new cube and the material to go with it.

  1. Create a cube that’s called Cover and set it as the child of our main camera.
  2. Change the scale of the cube to be (4, 4, 1) and the position to be (0, 0, 0.82). I played around with the numbers, but we want the cover close to the camera, though not so close that we get weird stutter effects when we look around.
  3. Remove the Box Collider component, it’ll interfere with our raycasting.
  4. In the asset pane, create a new Material. Let’s call it ScreenBlack
  5. In ScreenBlack, change the Rendering Mode to Fade so that our game object can be transparent, and set Smoothness to 0 so we get a flat, pure black look.
  6. Back in Cover, in the Mesh Renderer, set our material to be ScreenBlack, and under Lighting change Cast Shadows to Off to prevent seeing out-of-place shadows when fading.
  7. Finally, attach Cover to Screen Cover in our Teleport script on our Player game object.

When we’re done, here’s what our Cover looks like:

And here’s what our ScreenBlack material looks like:

If you play the game now, we’ll have our transition effect whenever we teleport around!

Cool right? Looks kind of like what other VR games do right?

Now, there are some things that could be improved. Right now, we have a giant cube sitting in our scene. I’m not going to get rid of it; however, we could just as easily have created a prefab of our cover and instantiated it at startup.

By doing it this way, we wouldn’t have such a messy scene, but I’m just going to leave it as is.

Step 1.3: Creating the curved arc that most VR games use

We have teleportation by gaze. Great! Now, what’s next?

Before moving on to the other types of locomotion, I’d like to explore implementing a curved arc that shows us what we’re selecting, very similar to the ones that the Oculus and Vive have:

Like always, these are solved problems that we can use, but let’s see a naïve implementation. Who knows, something like this could be useful for future games!

There are a couple of things that we must think of:

  1. How do we draw a curve line?
  2. How do we detect if we hit something?

To create this curved line, we need to use the LineRenderer component and some good old math to make our line curve.

Looking at different implementation solutions

I found 2 ways we can attempt this:

  1. http://wiki.unity3d.com/index.php/Trajectory_Simulation
  2. https://developer.oculus.com/blog/teleport-curves-with-the-gear-vr-controller/

Option 1 involves breaking a line into multiple segments and calculating where each segment would be if we launched the line in a direction and applied gravity to it.

For each line segment, we do collision detection by raycasting. That’s 1 raycast per line segment per frame… which can get computationally expensive.

Option 2 discusses the 3 approaches the Oculus team tried when creating their curves.

  • Approach 1 Bezier Curve: Create a Bezier curve using a starting location, an end location, and a control point (an in-between point raised up). Using these 3 points, we can create line segments that form a curve (see the sketch after this list). Each of these line segments would do collision detection to see if we hit anything.
  • Approach 2 Parabola Curve: Very similar to the first option. We “fire” a line in a certain direction with a certain force, with gravity weighing it down. We don’t know where it lands, but the path is guaranteed to be a parabola. Like the previous approach, we check for collisions on all line segments.
  • Approach 3 Tall Raycast: At a high level, this approach has us fire a raycast from above our player downward at a certain angle. Whatever we hit is the end point, and we can create a Bezier curve from our current location to that endpoint. By doing it this way, we only fire one raycast per frame instead of one per line segment per frame.
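Here’s a minimal sketch of the Bezier sampling behind Approach 1 (not something we’ll implement; this post goes with the parabola approach instead, and the method name here is just illustrative):

    // Sample a quadratic Bezier curve into segmentCount straight segments.
    // p0 = start, p1 = raised control point, p2 = end point.
    private Vector3[] SampleBezier(Vector3 p0, Vector3 p1, Vector3 p2, int segmentCount)
    {
        Vector3[] points = new Vector3[segmentCount + 1];
        for (int i = 0; i <= segmentCount; i++)
        {
            float t = (float)i / segmentCount;
            // Quadratic Bezier formula: (1-t)^2 * p0 + 2(1-t)t * p1 + t^2 * p2
            points[i] = (1 - t) * (1 - t) * p0 + 2 * (1 - t) * t * p1 + t * t * p2;
        }
        return points;
    }

Feeding these points into a LineRenderer (and raycasting between consecutive points) would give the same kind of arc we build below with the parabola.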

The article by the Oculus team is really informative; I highly recommend reading it and playing around with the tutorial projects!

Of these choices, I only tried the Parabola approach, because most of the code has already been provided for me in the first example. Let’s give this a shot!

Implementing Our Own Curve Line

We’ll be looking at the solution that was created in the trajectory simulation example.

There’s a C# version of the script that we can use. I’m going to make some modifications to the code, but everything else would be the same.

  1. For now, just create the scripts in the Assets directory and copy and paste the code from the example. Make sure each class name is consistent with its filename. We’ll attach them to game objects later.

There will be 2 scripts that we’re going to use. The first one is PlayerFire.cs.

The class is just a container that allows us to store information regarding the launching speed for our curved line.

Here’s what it looks like:

using UnityEngine;

public class PlayerFire : MonoBehaviour
{
    public float fireStrength = 500;
    public Color nextColor = Color.red;
}

The second script we’re going to use is TrajectorySimulator.cs.

This class has the bulk of the code where we break down a LineRenderer into multiple line segments and then change their position to factor in gravity to get the parabola effect that we want.

using UnityEngine;

public class TrajectorySimulator : MonoBehaviour {

    // Reference to the LineRenderer we will use to display the simulated path
    public LineRenderer sightLine;

    // Reference to a Component that holds information about fire strength, the location of cannon, etc.
    public PlayerFire playerFire;

    // Number of segments to calculate - more gives a smoother line
    public int segmentCount = 20;

    // Length scale for each segment
    public float segmentScale = 1;

    // gameobject we're actually pointing at (may be useful for highlighting a target, etc.)
    private Collider _hitObject;
    public Collider hitObject { get { return _hitObject; } }

    private Vector3 _hitVector;

    void FixedUpdate()
    {
        simulatePath();
    }

    /// <summary>
    /// Simulate the path of a launched ball.
    /// Slight errors are inherent in the numerical method used.
    /// </summary>
    void simulatePath()
    {
        Vector3[] segments = new Vector3[segmentCount];

        // The first line point is wherever the player's cannon, etc is
        segments[0] = playerFire.transform.position;

        // The initial velocity
        Vector3 segVelocity = playerFire.transform.up * playerFire.fireStrength * Time.deltaTime;

        // reset our hit object
        _hitObject = null;

        for (int i = 1; i < segmentCount; i++)
        {
            if (_hitObject != null)
            {
                segments[i] = _hitVector;
                continue;
            }

            // Time it takes to traverse one segment of length segScale (careful if velocity is zero)
            float segTime = (segVelocity.sqrMagnitude != 0) ? segmentScale / segVelocity.magnitude : 0;

            // Add velocity from gravity for this segment's timestep
            segVelocity = segVelocity + Physics.gravity * segTime;

            // Check to see if we're going to hit a physics object
            RaycastHit hit;
            if (Physics.Raycast(segments[i - 1], segVelocity, out hit, segmentScale))
            {
                // remember who we hit
                _hitObject = hit.collider;
                // set next position to the position where we hit the physics object
                segments[i] = segments[i - 1] + segVelocity.normalized * hit.distance;
                // correct ending velocity, since we didn't actually travel an entire segment
                segVelocity = segVelocity - Physics.gravity * (segmentScale - hit.distance) / segVelocity.magnitude;
                // flip the velocity to simulate a bounce
                segVelocity = Vector3.Reflect(segVelocity, hit.normal);

                /*
				 * Here you could check if the object hit by the Raycast had some property - was 
				 * sticky, would cause the ball to explode, or was another ball in the air for 
				 * instance. You could then end the simulation by setting all further points to 
				 * this last point and then breaking this for loop.
				 */
                _hitVector = segments[i];
            }
            // If our raycast hit no objects, then set the next position to the last one plus v*t
            else
            {
                segments[i] = segments[i - 1] + segVelocity * segTime;
            }
        }

        // At the end, apply our simulations to the LineRenderer

        // Set the colour of our path to the colour of the next ball
        Color startColor = playerFire.nextColor;
        Color endColor = startColor;
        startColor.a = 1;
        endColor.a = 0;
        sightLine.SetColors(startColor, endColor);

        sightLine.SetVertexCount(segmentCount);
        for (int i = 0; i < segmentCount; i++)
            sightLine.SetPosition(i, segments[i]);
    }
}

The code itself is well documented so I won’t go too deep into it, however I did add my own changes.

The problem I found when playing around with the script in Unity is that we never stop calculating the next line segment; the calculations keep going after the first hit:

While this is great if we wanted to calculate the trajectory of a bouncing ball, it’s not so much if we want to stop at the first collision we hit.

The changes I made stopped the lines from progressing further. Like so:

Let’s dive a bit into the code.

Variables Added

I added one new variable:

  • Vector3 _hitVector – this is used to keep track of the position of our line segment where we hit an object. After we hit the object, the rest of our line segments should be at that spot.

Walking through the changes

_hitVector is used in simulatePath() where we hit an object.

After we hit an object, we’ll save the segment position we’re at and continue to the rest of the loop.

On the next iteration of the loop, we take advantage of _hitObject: if it’s not null, we know we hit something. In that case, all we need to do is set all remaining line segment positions to the position where we hit the game object. By doing this, our line only shows up to the point where we hit an object!

Setting up Game Environment

Now that we have our scripts set up and ready to go, let’s set up the game environment to use our new scripts.

At a high level, the way the code works is that we:

  • Get the vector transform.up as our starting direction
  • Apply force to our vector
  • Apply gravity to the vector to have our line fall like a parabola
  • Detect if we hit anything per line segment

What this will give us is the trajectory that we’re looking for.

Of course, transform.up will just shoot our line up and then it’ll fall down.

If we want something more interesting like our images above, we should change the rotation of the game object that holds the script to be facing forward. Like we’re shooting cannonballs out of a cannon.

Let’s get started by creating a game object that will hold our script.

For this example, I’m just going to use a small rectangle to shoot our trajectory

  1. In the Hierarchy, create a new Cube called Controller
  2. Make Controller a child of Main Camera
  3. In Controller, change Position to (0.25, -0.25, 0.25) and Scale to (0.1, 0.1, 0.4), like always feel free to play around with the values
  4. Add a Line Renderer component to Controller. Set the Width to be 0.01
  5. Add the 2 scripts we wrote (PlayerFire and TrajectorySimulator) to Controller.
  6. In the PlayerFire script, change the Fire Strength to be 350. This will be the force we will use as a velocity to launch our line.
  7. In TrajectorySimulator, attach the Line Renderer component and our PlayerFire script into their appropriate slots.
  8. Still in TrajectorySimulator, I changed the Segment Count from 20 to 40 to lengthen our line and reduced the Segment Scale from 1 to 0.25. The smaller our segments are, the more the line looks like a curve as opposed to a bunch of straight lines.

At this point, we need to do one more thing before we’re done. We need our line to shoot out at an angle, otherwise, it’ll only shoot up and then come back down.

There are 2 options that we can use to do this.

  1. Rotate the object so that the direction we’re launching is in a different direction
  2. In our code, change the direction of the vector that we’re using. Specifically, change our transform.up to fire in the direction we want it to.

The first option is straightforward, and we can easily just rotate our game object, but we would have an oddly rotated game object that would shoot out our trajectory.

The solution could be to attach an empty game object to Controller and let that have all the scripts and then just rotate that game object to shoot our line out.

However, I took a more… educational approach. I applied the rotation to our direction vector.

Here’s what I did. In TrajectorySimulator.cs inside simulatePath(), I changed how the segVelocity was calculated:

        // Create the angle we're rotating our launch direction by
        Quaternion angle = Quaternion.AngleAxis(65, playerFire.transform.right);
        // Create our new launch direction with our rotation
        Vector3 fireDirection = angle * playerFire.transform.up;

        // The initial velocity
        Vector3 segVelocity = fireDirection * playerFire.fireStrength * Time.deltaTime;

Our goal here is to rotate our vector by a certain angle. And whenever we use the word rotate, the word Quaternion usually goes with it!

Searching around to see how we rotate a vector by an angle, I ran into Quaternion.AngleAxis, which creates a Quaternion representing a rotation of the given angle around the given axis.

From experimenting with rotating the Controller game object to see where our line would shoot from, I found that changing the X rotation value tilts the line downwards.

With this in mind, we can look at the Quaternion.AngleAxis documentation and use transform.right as the axis (our local X-axis). Experimenting with some values, I chose 65 as a good angle to use.

Once we have the angle, we multiply that with our launch vector to get the new direction and we can use our new vector as we normally would.
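As a quick sanity check on how a quaternion rotates a vector (a standalone example, not part of the script):

    // Rotating "up" 90 degrees around the right axis gives "forward", so 65 degrees
    // gives a direction tilted most of the way from up toward forward.
    Vector3 rotated = Quaternion.AngleAxis(90f, Vector3.right) * Vector3.up;
    Debug.Log(rotated); // approximately (0.0, 0.0, 1.0), i.e. Vector3.forward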

Make the changes to the script and play our game. Here’s what you should see now:

Like always, play around with the values to get something you like.

Step 1.4: Combining teleportation with our pointer

Now that we have our pointer implemented, the last thing we need to do is teleport using the pointer instead of whatever is in the center of our screen.

To do this, we’re going to make some simple change. In TrajectorySimulator we must expose the vector position our line collided with a game object so that we can access it from our teleporter code.

All we need to do is add a getter to our class, like the hitObject property we saw earlier:

    public Vector3 HitVector { get { return _hitVector; } }

If this syntax looks new or confusing, don’t worry. It’s essentially a short way of writing something like:

    public Vector3 GetHitVector()
    {
        return _hitVector;
    }

There’s also another keyword set, which is what we can use for assigning our _hitVector to be something we pass in, but let’s not worry about that.

Note: one important thing about this property is that we must still check whether we actually hit anything. Vector3 is a value type and can never be null, so HitVector will always return something. As a result, if we use it carelessly, we’re going to keep teleporting to the same stale location.

Now that we have exposed the location where we hit something, let’s use it in our teleportation script.

We had to make some major changes to our code. Specifically, we’re going to remove the raycasting part and instead use our TrajectorySimulator class to get the information about what we collided with.

Here are the changes we made to Teleport:

using System;
using System.Collections;
using UnityEngine;

public class Teleport : MonoBehaviour
{
    public GameObject ScreenCover; // the object that's covering our screen
    public TrajectorySimulator TrajectorySimulator; // Calculator for where our pointer line is

    private Renderer _coverRenderer; // our material that contains our color
    private float _fadeInTime; // time to fade in our screen cover
    private float _fadeOutTime; // time to fade out our screen cover
    private float _timeInDark; // time we wait before we fade back in
    private float _time; // keep track of time
    private Boolean _isFading; // track if we are currently fading

    void Start () {
        _coverRenderer = ScreenCover.GetComponent<Renderer>();
        _fadeInTime = 0.2f;
        _fadeOutTime = 0.2f;
        _timeInDark = 0.4f;
        _time = 0f;
        _isFading = false;

        // set our starting color to be be 0 or transparent
        SetCoverAlpha(0f);
    }
    
    void Update ()
    {
        // Only shoot when we're not in the middle of teleporting
        if (Input.GetButtonDown("Fire1") && !_isFading)
        {
            Shoot();
        }
    }

    private void Shoot()
    {
        // check whether our trajectory line is currently pointing at something
        if (TrajectorySimulator.hitObject != null)
        {
            // we hit something, cover our screen and teleport to the location
            StartCoroutine(FadeIn());
        }
    }

    // Coroutine to cover the player's screen
    private IEnumerator FadeIn()
    {
        _isFading = true;
        _time = 0f;
        while (_time < _fadeInTime)
        {    
            Fade(0, 1, _fadeInTime); // 1 is opaque, 0 is transparent
            yield return null; // wait until the next frame
        }

        // now that the screen is covered, set our location to the point we hit
        Vector3 hit = TrajectorySimulator.HitVector;
        Vector3 newLocation = new Vector3(hit.x, 1, hit.z);
        transform.position = newLocation;

        yield return new WaitForSeconds(_timeInDark); // wait in the dark
        StartCoroutine(FadeOut()); // start fading away the cover
    }

    // Coroutine to remove the cover from the player's screen
    private IEnumerator FadeOut()
    {
        _time = 0f;
        while (_time < _fadeOutTime)
        {
            Fade(1, 0, _fadeOutTime); // 1 is opaque, 0 is transparent
            yield return null; // wait until the next frame
        }
        _isFading = false;
    }

    // Helper function to change the alpha of our screen cover
    private void Fade(float start, float end, float fadeTime)
    {
        _time += Time.deltaTime;
        float currentAlpha = Mathf.Lerp(start, end, _time / fadeTime);
        SetCoverAlpha(currentAlpha);
    }

    // Helper function to change the alpha of our cover material. We have to 
    // change the material directly, we can't hold a reference to the color variable
    private void SetCoverAlpha(float alpha)
    {
        Color color = _coverRenderer.material.color;
        color.a = alpha;
        _coverRenderer.material.color = color;
    }
}

Variables Used

We only introduced one new variable:

  • public TrajectorySimulator TrajectorySimulator – this is our instance of TrajectorySimulator, which lives on our Controller game object. It acts as our new raycaster now.

Walking Through the Code

With our new variable, we removed every instance of raycasting from the camera in our Teleport code. Instead, we let TrajectorySimulator handle finding something to collide with, and our teleport script just uses the results.

Here are the specific changes we made:

  1. In Shoot(), I got rid of all mention of raycasting and instead check whether hitObject from our TrajectorySimulator is null. If it’s not, that means we’re currently pointing at something, and we start our fading code.
  2. The final change is in FadeIn(): after we have faded in our cover, we use HitVector from TrajectorySimulator (where our line hit the object) and teleport to that location.

That’s it! Now we have a working example of a pointer with a laser guide like in the Oculus SDK.

Of course, we could have circumvented this whole exercise by using a straight line as a cursor, but now we know more about making curved lines!

Locomotion 2: Click to Walk To

Now that we have teleportation working, the next thing I want to look at is selecting a spot and having our character walk over to it.

To accomplish this, we’re going to re-use the TrajectorySimulator class as a selector, but instead of using our Teleporter script, we’re going to create a new script to handle this.

The first thing we’re going to do is we’re going to create a clone of our current scene.

  1. If you hadn’t saved our current scene already. Hit Ctrl + S. Save our current Scene as Teleport
  2. Next, hit Ctrl + Shift + S to create a scene with everything we already have. Let’s call this new Scene ClickToMove

From this point on, we’re going to work in the ClickToMove scene, implementing, drum rolls please: click to move!

Step 2.1 Creating the Moving Script

Next to teleporting, another common means of locomotion is selecting a location and having our character walk to it.

There are many ways we can solve this problem. We can:

  1. Write a coroutine that moves our player toward the location we clicked on at a speed that we set (a rough sketch of this follows the list).
  2. Make the character a Nav Mesh Agent that we tell where to go, and it’ll take care of the rest for us.
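Here’s what a rough sketch of option 1 might look like, assuming a hypothetical MoveSpeed field and ignoring collisions entirely (which is part of why we won’t use it):

    // Sketch of option 1 (not what we'll build): glide toward the clicked point
    // at a fixed speed with a coroutine. Needs "using System.Collections;".
    public float MoveSpeed = 2f; // illustrative field

    private IEnumerator MoveTo(Vector3 destination)
    {
        // keep stepping toward the destination until we're basically there
        while (Vector3.Distance(transform.position, destination) > 0.05f)
        {
            transform.position = Vector3.MoveTowards(
                transform.position, destination, MoveSpeed * Time.deltaTime);
            yield return null; // continue on the next frame
        }
    }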

In this case, I’m going to use the Nav Mesh Agent! There are going to be some complications and we might end up doing more pathfinding than necessary for our player, but let’s just roll with it, okay?

Before, we used Nav Mesh Agents to move our enemies toward our player, but this time, we’re going to use them on our player to move them to the location that we have selected.

We need to do 2 things:

  1. Setup the Nav Mesh Agent to move our character
  2. Create the code necessary to move our character

Both tasks shouldn’t be hard as we have done them before in the past. The difference this time is that we’ll apply them to our main character.

Adding the Nav Mesh Agent

The first step is very straightforward. In our new ClickToMove scene:

  1. In the Hierarchy, select Player.
  2. Click Add Component and add a Nav Mesh Agent. Change the Base Offset to 1 to match our Player’s height; I’m not going to play around with the other settings today and will leave them as they are.
  3. While we’re here, let’s also remove the Teleport script component. We’re going to make a new script that will use the Nav Mesh Agent instead.

Here’s what we should have:

Now there’s one more task that’s required our Agent to work and that is to create a NavMesh for our Nav Mesh Agent to walk around.

Here’s Unity’s documentation on building a navmesh.

A quick refresher: a NavMesh is a mesh that we bake over our terrain, which the Nav Mesh Agent uses to figure out how to get from point A to point B.

Let’s create our Nav Mesh so our player character can walk around it.

You might recall that we can only create or bake a Nav Mesh on static game objects. In this case, we need to make our floor static.

  1. Select Floor from the hierarchy.
  2. In the top right corner in the inspector, there will be a checkbox that says Static, check it.

When we’re done, here’s what it should look like:

Now that our floor is static, let’s create our NavMesh.

  1. In the Menu bar, select Window > Navigation
  2. In the new Navigation pane, select Bake
  3. Keep everything the same, if you’re curious about what each option does check out the Nav Mesh documentation.
  4. Click Bake to create our NavMesh
  5. After that, in our Scene, we should see a Navmesh Display pane in the bottom left corner. Select the Show NavMesh checkbox to see the NavMesh

Here’s what we should have now:

Great! So now that we have our NavMesh, we’re ready to write the script to move our character.

Creating our new ClickToMove script

Now that we have the NavMesh and a Nav Mesh Agent, it’s time to write a script that can use this!

  1. In Player, click Add Component and create a new script called ClickToMove

Our script is going to solve one big problem with the current configuration of our game. Right now we have these 2 things that are happening:

  1. GvrEditorEmulator forces Main Camera to be at 0, 0, 0
  2. Nav Mesh Agent forces Player to be grounded at 0, 0, 0

As a result, our camera is going to be stuck on the ground and we can’t see anything.

There are 3 solutions we can use to solve the problem:

  1. Modify GvrEditorEmulator to not move our camera, or make our own replacement for it.
  2. Change the Base Offset property in the Nav Mesh Agent component.
  3. In code, adjust the height of our camera.

In this case, we’re going to use option 3 and adjust our Main Camera’s height to be the height of our character, which we set as 1.

This could become problematic in a game where we might have elevation. That could be easily solved, but I’m not going to solve it here.

Here’s what our ClickToMove script looks like:

using UnityEngine;
using UnityEngine.AI;

public class ClickToMove : MonoBehaviour
{
    public float HeightOffset = 1; // adjustment for Gvr Editor Simulator

    private NavMeshAgent _navMeshAgent;
    private Camera _camera;

    void Start()
    {
        _navMeshAgent = GetComponent<NavMeshAgent>();
        _camera = Camera.main;
    }

    void Update()
    {
        _camera.transform.Translate(new Vector3(0, HeightOffset, 0)); // Gvr Editor Simulator forces us to be at 0, 0, 0, we need to fix that adjustment
        
        if (Input.GetButton("Fire1"))
        {
            WalkTo();
        }
    }

    private void WalkTo()
    {
        // shoot a raycast from the center of our screen
        Ray ray = _camera.ViewportPointToRay(new Vector3(0.5F, 0.5F, 0));
        RaycastHit hit; // output variable to get what we collided against
        if (Physics.Raycast(ray, out hit))
        {
            // If we hit something, set our nav mesh to go to it
            if (hit.transform != null)
            {
                _navMeshAgent.SetDestination(hit.point);
            }
        }
    }
}

Variables Introduced

There are 3 variables that we used in our script:

  • public float HeightOffset = 1 – A public float that we use to raise our camera, offsetting the height problem that GvrEditorEmulator forces on us.
  • private NavMeshAgent _navMeshAgent – The Nav Mesh Agent component that is attached to the current game object.
  • private Camera _camera – A reference to our Main Camera.

Walking through the code

If you’ve noticed, the code follows a very similar pattern to what we had with teleportation. We detect the user’s input and then we do our special action.

Anyways going through the code now:

  1. In Update() we first move our Main Camera up by our offset every frame to counteract the fact that GvrEditorEmulator always puts it at 0, 0, 0.
  2. Continuing in Update(), just like in Teleport, if we detect the user triggering Fire1 (a tap on a mobile device, a click on desktop), we call WalkTo() to move our character.
  3. In WalkTo() we fire a raycast from the center of our screen, and if we hit anything, we move our Nav Mesh Agent by setting its destination to the location that we hit.

That’s it with our code! Simple and similar to what was done in teleportation.

Adjusting our Nav Mesh Agent Component

Now at this point, everything should work as we intended, however, there is one problem remaining.

If we move forward, everything is fine, but what happens when we look to the left and tap? Our body turns around as we walk in that direction.

This makes our forward direction the direction that we’re facing. This might feel a bit weird in the Unity editor, but on a headset it would feel a little more natural (and probably a bit nauseating).

The other option is to have our Nav Mesh Agent never turn our parent container, so we just move toward whatever point we’re looking at without rotating.

Option 2, where we move without rotating our character, is what we’ll implement.

To do that, let’s go back to our Nav Mesh Agent component in Player.

  1. In the Nav Mesh Agent component on Player, change Angular Speed to 0

Changing Angular Speed to 0 means our Nav Mesh Agent has no turning speed, which means our character will never rotate.
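If you’d rather do this from code instead of the inspector, the agent also exposes an updateRotation flag that stops it from rotating our transform, which has roughly the same effect for our purposes:

    // In ClickToMove.Start(), after grabbing the component: stop the agent from
    // ever rotating our Player. Roughly equivalent to Angular Speed = 0 here.
    _navMeshAgent.updateRotation = false;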

When we’re done changing our Nav Mesh Agent we should have movement working!

Feel free to play around with the settings or maybe add some more terrain to see how it feels to play it in VR.

Here’s an example of what we’ll have:

Pretty neat right?!

Step 2.2: Narrowing the Field of Vision to Prevent Headaches

Before we move on to the last set of locomotion, there’s one interesting problem that we can try and solve.

When moving around (especially quickly) you might get a sense of nausea.

How can we solve it?

We can do it by narrowing our Field of Vision to the location that we’re walking to.

Looking at the Daydream Elements page, there’s a section on Tunneling for locomotion that involves movement in a first-person perspective like we’re doing right now.

Here’s what it looks like:

We won’t have a perfect solution, but let’s see if we can create a simple instance of our tunneling effect when moving! Hint: it’s going to be like how we did a screen cover for teleporting.

Creating FOV ScreenCover

The first thing we’re going to have to do is that we must create a screen cover for our game again.

As opposed to covering the whole screen, we want to cover only most of it. We’re not going to have a perfect copy of the Daydream effect, but our solution will be somewhat similar.

There isn’t an easy way for us to block out everything except for a certain spot in Unity.

The only way to get something like this is to use a model from 3D modeling software or the Asset Store that has the shape we want.

In our case, I’m just going to create a simple square FOV for us made up of cubes.

Here’s what it looks like in the scene and by our camera:

As you can see it’s 4 cubes stacked on top of each other to create this cover.

Here’s how we can build something like this

  1. In the Hierarchy, create an Empty Object called TunnelCover as a child of Main Camera.
  2. Create a Cube as a child of TunnelCover; we’re going to configure this cube and then duplicate it.
  3. Change the Scale to be (0.25, 0.1, 0.1). It’s important to keep Z small, otherwise we’ll see some depth on the cube when we stack them later.
  4. Remove the Box Collider.
  5. In the Mesh Renderer, uncheck Receive Shadows and set Cast Shadows to Off.
  6. Set our Material to be ScreenBlack, the same material that we used for our teleportation.
  7. Now hit Ctrl + D 3 times to duplicate our cube. We’re going to position our cubes now, but feel free to move them as you please.
  8. TunnelCover: Position: (0.85, 0.2, 0.5)
  9. Cube: Position (-0.25, 0.90, 0)
  10. Cube (1): Position (0.1, 0.9, 0)
  11. Cube (2): Position (0, 0.15, 0)
  12. Cube (3): Position (0, 1.6, 0)

Here’s what the hierarchy should look like when we’re done:

Great! So now we’re done with setting up our FOV cover.

At this point, you might notice that we have everything inside the TunnelCover. There’s a reason for this.

There are 2 main benefits:

  1. We keep everything together so if we ever had to move our cover we could just move our container object
  2. Like our previous cover code, we need to be able to make our cover fade, however this time we have 4 cubes instead of just 1. By having a parent object, we can make a script that will have access to all 4 cubes at once.

Point #2 is the bigger reason. Before, we had the fading code inside our teleportation script, and while it was okay there, let’s see if we can abstract that part of the code out into something separate that we can call.

To do this:

  1. In TunnelCover add a new script and let’s call it ScreenCoverRenderer

ScreenCoverRenderer will be very similar to the code we wrote in Teleport that faded things, however, there will be some key differences.

This will be the flow of our gameplay effect:

  1. We will grab all the renderers of our cube to make them disappear all at once instead of just one cube.
  2. We will show our cover when we start moving for a set amount of time.
  3. We won’t get rid of our cover until we stop moving.
  4. After we stop, we’ll start fading out our cover.

This is the working copy of the code; however, getting to this point was an iterative process where I first edited the ClickToMove script, refactored, and refactored some more before reaching the final script you see here. Don’t feel bad if you’re looking at this and wondering how I came up with it, because I didn’t come up with it all at once either: it was built up through an iterative process.

ScreenCoverRenderer.cs:

using System.Collections;
using UnityEngine;

public class ScreenCoverRenderer : MonoBehaviour
{
    public float FadeOutTime = 0.2f;
    public float FadeInTime = 0.2f;

    private Renderer[] _screenCoverRenders; // array that contains all of our screen covers
    private float _time; // keeps track of the fading of our covers
    private enum CoverState { FadingIn, FadingOut, None } // enum that we use to keep track of what state we're in
    private CoverState _gameState; // The current state we're in

    void Start()
    {
        _screenCoverRenders = GetComponentsInChildren<Renderer>(); // get all the renderer of our child game objects
        SetCoverAlpha(0);
        _gameState = CoverState.None;
    }

    // Returns true if we're in the middle of a coroutine animation. False otherwise.
    public bool IsAnimating()
    {
        return _gameState != CoverState.None;
    }

    // Starts the FadeOut coroutine if we're not already in the middle of a coroutine. Otherwise, nothing happens.
    public void StartFadeOut()
    {
        if (_gameState == CoverState.None)
        {
            _gameState = CoverState.FadingOut;
            StartCoroutine(FadeOut());
        }
    }


    // Starts the FadeIn coroutine if we're not already in the middle of a coroutine. Otherwise, nothing happens.
    public void StartFadeIn()
    {
        if (_gameState == CoverState.None)
        {
            _gameState = CoverState.FadingIn;
            StartCoroutine(FadeIn());
        }
    }

    // Coroutine to cover the player's screen
    private IEnumerator FadeIn()
    {
        _time = 0f;
        while (_time < FadeInTime)
        {
            Fade(0, 1, FadeInTime); // 1 is opaque, 0 is transparent
            yield return null; // wait until the next frame
        }
        _gameState = CoverState.None; // now that we're done, go back to the none state
    }

    // Coroutine to remove the cover from the player's screen
    private IEnumerator FadeOut()
    {
        _time = 0f;
        while (_time < FadeOutTime)
        {
            Fade(1, 0, FadeOutTime); // 1 is opaque, 0 is transparent
            yield return null; // wait until the next frame
        }
        _gameState = CoverState.None; // now that we're done, go back to the none state
    }

    // Helper function to change the alpha of our screen cover
    private void Fade(float start, float end, float fadeTime)
    {
        _time += Time.deltaTime;
        float currentAlpha = Mathf.Lerp(start, end, _time / fadeTime);
        SetCoverAlpha(currentAlpha);
    }

    // Helper function to change the alpha of all the cover game objects. We have to 
    // change the material directly, we can't hold a reference to the color variable
    private void SetCoverAlpha(float alpha)
    {
        foreach (Renderer screenCoverRender in _screenCoverRenders)
        {
            Color color = screenCoverRender.material.color;
            color.a = alpha;
            screenCoverRender.material.color = color;
        }
    }
}

Variables Used

Some of the variables I’ve already covered above, so I won’t talk about them again. As for the new ones, here’s what we’re working with:

  • private Renderer[] _screenCoverRenders – Because this script is attached to the container of our cubes that we want to fade, we have to hold an array of all 4 of our cubes
  • private enum CoverState { FadingIn, FadingOut, None } – Before, we just had to know whether our coroutine was running so we didn’t start a new one. While we could still make things work with booleans, I’ve decided to use an enum to keep track of our state: FadingIn, FadingOut, and None.
  • private CoverState _gameState – gameState will be used to keep track of what state we’re currently in.

Walking Through the Code

For the most part, our code here is mostly helper functions that something else (our ClickToMove code) will call to fade in our cover.

  1. In Start() we get all of our Cubes that we use to make a cover and we use SetCoverAlpha() to set their alpha to be 0 so the player can’t see it during runtime.
  2. SetCoverAlpha() has been changed to update the alpha value of all our cover materials at the same time, as opposed to the single one we had originally.
  3. In StartFadeIn() we run our FadeIn() coroutine if we’re not in the middle of another coroutine. The code works almost the same as what we had in teleport. The only major difference is that we fade in our cover and we don’t get rid of it until StartFadeOut() is called.
  4. StartFadeOut() is the same as StartFadeIn().
  5. IsAnimating() is a public function that we can call in our ClickToMove script to tell us if we’re in the middle of a coroutine.

That’s about it for our Cover Fading code. Hopefully not too different from what we have.

Updating our ClickToMove Script

Now let’s look at how we add our changes to ClickToMove.cs:

using System.Collections;
using UnityEngine;
using UnityEngine.AI;

public class ClickToMove : MonoBehaviour
{
    public float HeightOffset = 1; // adjustment for Gvr Editor Simulator
    public ScreenCoverRenderer ScreenCoverRenderer;

    private NavMeshAgent _navMeshAgent;
    private Camera _camera;
    private bool _startedMoving; // keeps track if we started moving so we can stop

    void Start()
    {
        _navMeshAgent = GetComponent<NavMeshAgent>();
        _camera = Camera.main;
        _startedMoving = false;
    }

    void Update()
    {
        _camera.transform.Translate(new Vector3(0, HeightOffset, 0)); // Gvr Editor Simulator forces us to be at 0, 0, 0, we need to fix that adjustment
        
        if (Input.GetButton("Fire1"))
        {
            WalkTo();
        }

        if (_navMeshAgent.velocity.magnitude <= 0.1f && !ScreenCoverRenderer.IsAnimating() && _startedMoving)
        {
            _startedMoving = false;
            ScreenCoverRenderer.StartFadeOut();
        }
    }

    private void WalkTo()
    {
        // shoot a raycast from the center of our screen
        Ray ray = _camera.ViewportPointToRay(new Vector3(0.5F, 0.5F, 0));
        RaycastHit hit; // output variable to get what we collided against
        if (Physics.Raycast(ray, out hit))
        {
            // If we hit something, set our nav mesh to go to it
            if (hit.transform != null)
            {
                // If we're already moving, we don't want to start another Fade In
                if (!_navMeshAgent.hasPath)
                {
                    ScreenCoverRenderer.StartFadeIn();
                    _startedMoving = true;
                }
                _navMeshAgent.SetDestination(hit.point);
            }
        }
    }
}

Variables Used

Here’s what we introduced to our code:

  • public ScreenCoverRenderer ScreenCoverRenderer – This will be the reference to our ScreenCoverRenderer script: an abstracted, bundled-up way for us to make our cover fade without worrying about the details.
  • private bool _startedMoving – This will be used later to tell us if we started moving so we’ll know when we can call our FadeOut code. More on this later.

Walking Through the Code

For the majority, we didn’t get rid of any code and we’re just adding code on top of what we already have.

  1. In Start(), we set _startedMoving to false.
  2. When the player clicks on a location and we start moving, we start our cover fade-in effect by calling StartFadeIn() from our ScreenCoverRenderer object and set _startedMoving to true. It’s important to note that if our player is already moving and we click another location, we don’t want to trigger the fade-in again; that’s why we only do this when the agent doesn’t already have a path (!_navMeshAgent.hasPath).
  3. The tricky part is knowing when to make our cover fade away. In Update() I check whether the magnitude of our Nav Mesh Agent’s velocity is below a certain threshold. If the velocity is below the threshold, the ScreenCoverRenderer isn’t in the middle of a fade, and we had actually started moving, then we fade the cover away. It’s important to have all of these checks; otherwise, we’d constantly call FadeOut, because when we first start we wouldn’t be animating anything yet!

On this note, I really like how we abstracted the fading code away into a different class. Our code now has a much cleaner structure!

Putting our Scripts together

With our scripts updated and in place, the last thing we need to do is set up the public variables in our script.

  1. In the ClickToMove script component on Player, drag and drop TunnelCover into the Screen Cover Renderer slot so it grabs the script from that game object.

With all of this in place, here’s what we should see when we play the game.

Go ahead and try to play it on your phone and see how it feels! Not perfect, but it’s great to see how we’re building on things we did previously to do even more!

Locomotion 3: Automatically Walk Forward

For the last thing to look at for locomotion, I want to implement auto walking.

To do this, we’re going to save our current scene and make a new scene to work with.

  1. Hit Ctrl + S to save any changes to the current scene.
  2. In Unity, hit Ctrl + Shift + S and name our new scene AutoWalk.

In our new scene, let’s get rid of the old components that we don’t need anymore:

  1. In Player remove the Click To Move script component.
  2. Remove TunnelCover and all the child game objects that we’ve attached to it in the game hierarchy.

Now that we have a fresh new scene, let’s move on to working on the auto movement scene.

Step 3.1: Designing what we want to make

Auto movement, like the name suggests, is just the motion of moving forward without any user inputs.

What we want to implement is just a simple switch that we can turn on to move our character forward based on where we’re looking.

Just like when we were working with click to move, there are 2 ways that we can do this:

  1. Manually translate the player’s position forward
  2. Use the NavMeshAgent to set the destination of our player so that we can move forward.

Looking at this, maybe we should have done auto walk first and then moved on to Click to Move!

Like we discussed earlier, if we decided to use option 1, we would have to handle collision detection. I would rather not deal with that problem so I’m going to take advantage of the NavMesh Agent.

However, by using the Nav Mesh Agent, we have to accept the overhead of it recalculating the AI pathfinding for us every frame in Update().

Luckily, since we’re always moving directly forward it won’t be as bad!
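For comparison, here’s roughly what option 1 (manually translating the player) could look like. This hypothetical ManualAutoWalk script isn’t part of the project; notice that nothing stops us from walking straight through obstacles, which is exactly the work the Nav Mesh Agent does for us:

using UnityEngine;

public class ManualAutoWalk : MonoBehaviour // hypothetical, for comparison only
{
    public float Speed = 2f; // movement speed in units per second

    private Camera _camera;
    private bool _isWalking;

    void Start()
    {
        _camera = Camera.main;
    }

    void Update()
    {
        // toggle walking on every click, like our planned AutoWalk script
        if (Input.GetButtonDown("Fire1"))
        {
            _isWalking = !_isWalking;
        }

        if (_isWalking)
        {
            // move along the camera's forward direction, flattened so we don't fly when looking up
            Vector3 forward = _camera.transform.forward;
            forward.y = 0;
            transform.position += forward.normalized * Speed * Time.deltaTime;
            // no collision handling here: we'd happily walk through walls and off the floor
        }
    }
}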

Step 3.2: Implementing our Designs

We know what we want to build, let’s build it!

Having done Click to Move already, this shouldn’t be that large of a challenge anymore.

We don’t need to set up our Nav Mesh Agent, since it carried over from the previous scene that we copied.

We just need to create a new script for our Player.

  1. On Player in the game hierarchy, create a new script called AutoWalk.

AutoWalk will be the code that will move us forward by using the NavMeshAgent. We only want to do this when we click a button.

Here is what AutoWalk will look like:

using UnityEngine;
using UnityEngine.AI;

public class AutoWalk : MonoBehaviour
{
    private NavMeshAgent _navMeshAgent;
    private Camera _camera;
    private bool _isWalking;

    void Start ()
    {
        _navMeshAgent = GetComponent<NavMeshAgent>();
        _camera = Camera.main;
        _isWalking = false;
    }
    
    void Update ()
    {
        _camera.transform.Translate(new Vector3(0, 1, 0)); // Gvr Editor Simulator forces us to be at 0, 0, 0, we need to fix that adjustment

        // switch the state we are in whenever we click
        if (Input.GetButtonDown("Fire1"))
        {
            _isWalking = !_isWalking;
        }

        // if we are walking, we want to set a direction for our Nav Mesh Agent
        if (_isWalking)
        {
            // set the direction to be our current location + whereever our camera is facing
            _navMeshAgent.SetDestination(transform.position + _camera.transform.forward);
        }
    }
}

Variables Used

Looking at the code, there shouldn’t be anything new or unexpected that we didn’t know about before.

To be consistent, here are the variables we use for our script:

  • private NavMeshAgent _navMeshAgent – the Nav Mesh Agent attached to our player that we use to move.
  • private Camera _camera – a saved instance of our camera for us to access.
  • private bool _isWalking – a boolean to tell us if we should move or not.

Walking Through the Code

Going through the code now….

  1. Like always, in Start() we initialize our variables.
  2. In Update() we set our camera’s height because of the problem where the GvrEditorEmulator sets the camera to position (0, 0, 0) while our NavMeshAgent keeps our character snapped to the ground.
  3. Continuing in Update(), if the player clicks their mouse (or taps their phone), we toggle _isWalking. While it’s true, we tell the NavMeshAgent to keep going forward by setting its destination to our current location plus the forward vector of our main camera.

And that’s it!

Really straightforward, especially compared to what we have done already.

Here’s what the script will look like in action:

Note: At this point, we might add another screen cover like what we did for Click To Move. That should be even easier to do than Click To Move, since we know exactly when the user decides to stop moving. However, seeing as how similar it is, I’ll leave that as an exercise for the reader to figure out.
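If you want to try that exercise, here’s one possible way to wire it up. This is just a sketch: it assumes we’ve copied the TunnelCover setup from the Click to Move scene and added a public ScreenCoverRenderer field to AutoWalk, hooked up in the Inspector like before:

// inside AutoWalk, assuming we added: public ScreenCoverRenderer ScreenCoverRenderer;
void Update()
{
    _camera.transform.Translate(new Vector3(0, 1, 0)); // same Gvr Editor Simulator height fix as before

    if (Input.GetButtonDown("Fire1"))
    {
        _isWalking = !_isWalking;

        // unlike Click to Move, we know exactly when the player starts and stops,
        // so we can fade the cover right when the state changes
        if (_isWalking)
        {
            ScreenCoverRenderer.StartFadeIn();
        }
        else
        {
            ScreenCoverRenderer.StartFadeOut();
        }
    }

    if (_isWalking)
    {
        _navMeshAgent.SetDestination(transform.position + _camera.transform.forward);
    }
}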

Conclusion

Phew, that was long! I intended this little write up to take only a week to do, but it ended up taking 2 weeks!

I intended to publish the article as I went along, but I got lazy and it ended up not happening… oh well!

I added the whole project to a GitHub repository: VRLocomotionExample. You can download/clone everything and open it in Unity, or you can just download VRLocomotionExample.unitypackage and import the package into a Unity project.

We have made a lot of progress with locomotion and how we might move around in VR. Of course, ideally, most of these solutions will already be “professionally” taken care of for us in the form of a library or a framework, but now we understand a little bit more about the underlying code that’s involved with what we see today!

In the end, we learned to:

  1. Move by teleporting
  2. Move by clicking on a location to walk to it
  3. Move by going forward and looking at the location we want to go to

Not only that, we also learned how to implement some other features that are seen a lot in VR:

  1. Creating a curved line renderer that we can use as a pointer
  2. A screen cover that can help the player focus on only what’s ahead so that they don’t get a headache.

In the upcoming week, I’m going to move on from locomotion to focus on more complex topics: hand models and interactions!

We’re going to jump back into the Daydream and learn how we can use our controller to interact with other game objects.

Stay tuned to see what we’re going to work on next!


Comments (2)

  1. Geovanny

    Hello Josh,

    Your tutorials are awesome! I was wondering if you will now focus on using the Google Daydream. I personally do not own one, so I feel that I won’t be able to follow along anymore. Are you planning to focus your tutorials back on Google Cardboard? Or do you recommend I get a Google Daydream and a Daydream-ready phone to follow along?

    Wanted to suggest to continue with tutorials on what is possible to do with Google Cardboard. From teleportation, object interaction to how to use the one button on the cardboard. Just a suggestion though. Thank you!

    1. Josh (Post Author)

      Hey Geovanny,

      Thanks for the suggestions! I’ll definitely keep it in mind for the next series of tutorials. Currently, I’m not really sure what I’ll focus on, there’s so much out there right now my goal is to learn, play, and share my discoveries in a tutorial form. Eventually I’ll probably settle down and specialize in something, but that’s TBD for now.

      At this point, I’ve covered a lot of content regarding setting up and using the Google Cardboard so at the very least I want to explore the Daydream controls. What I plan to do afterwards will depend a lot on what people are interested in.

      My current plan after the Daydream controls is to look into gameplays of other VR apps and then show how we might be able to implement something similar ourselves. I think there can be a lot of value from that. Now whether that’s with the Cardboard or the Daydream it’ll depend, but I’ll keep your suggestions in mind on what’s next.
