First came Desktop, then came Web, then Mobile, and now we have Virtual Reality. I don’t know what the future holds, but I’m going to make a bet: VR will explode in the next 3-5 years, and I want to help contribute to this growth. As of this writing, I don’t know much about Unity, so I’m going to document my progress as I learn in what I call my 100 Days of Unity VR Development.
As any VR enthusiast knows, we’re not going to have a repeat of the 1990s. We have better hardware ranging from the Google Cardboard to the Oculus Rift, we have large tech companies like Facebook, Google, and Microsoft placing bets on AR/VR, and finally, we have better development kits available for everyday developers like you and me to create our dream applications!
I’m a software developer who wants to become part of the VR revolution and help fill the gap of missing applications in the app store. VR apps might not become the next mobile apps, but VR is currently at the forefront of technology, and it’s extremely exciting to see what the future will bring.
So to keep myself from losing motivation, what better way is there than to force myself to document my progress? Armed only with my programming experience, follow along in my 100 days of VR development as I learn how to create VR applications with Unity.
Disclaimer: I have a day job and *sometimes* have a social life after work, so the days most likely won’t be consecutive. My goals are simple:
- Document what I learn in a consumable way that’s useful to others interested in developing VR applications.
- Post at least 3 updates a week.
- Develop and release something presentable to the app store.
Will I make it? Read on to find out!
Learning how to Develop in Unity
We start off Day 1 by getting everything set up. We install Unity and our editor of choice: Visual Studio.
Next we start going through Unity’s Roll-a-Ball tutorial, learning how to navigate around Unity along with some basic game development concepts like the Camera, Colliders, and GameObjects.
In Day 2, I move on to the next Unity tutorial, the Space Shooter, where I learn how to import assets from the tutorial, animate a player ship, and shoot projectiles!
In Day 3, we resume where we left off in Day 2 by cleaning up the lasers we created the previous day, creating new waves of enemies, and learning how to use Coroutines.
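The coroutine pattern from the tutorial is worth a quick sketch. This is a hedged approximation rather than the tutorial’s exact code — the field names are my own — but it shows the core idea: `yield` pauses a method mid-execution and Unity resumes it later.

```csharp
using System.Collections;
using UnityEngine;

// A minimal sketch of spawning waves with a coroutine.
public class WaveSpawner : MonoBehaviour
{
    public GameObject hazard;      // prefab to spawn (illustrative name)
    public int hazardCount = 10;   // hazards per wave
    public float spawnWait = 0.5f; // delay between hazards
    public float waveWait = 4f;    // delay between waves

    void Start()
    {
        StartCoroutine(SpawnWaves());
    }

    IEnumerator SpawnWaves()
    {
        while (true)
        {
            for (int i = 0; i < hazardCount; i++)
            {
                Instantiate(hazard, transform.position, Quaternion.identity);
                // yield suspends the coroutine; Unity resumes it after the delay
                yield return new WaitForSeconds(spawnWait);
            }
            yield return new WaitForSeconds(waveWait);
        }
    }
}
```

The key benefit is that the spawning logic reads top-to-bottom instead of being scattered across timers in `Update()`.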
In Day 4, we finally finish the Unity Space Shooter tutorial by creating more UI and adding in the audio effects provided by the tutorial.
In Day 5, we start the final tutorial I’m going to look at: the Survival Shooter. We spend a lot of time setting up the environment for our character to interact with, creating player animations, creating the enemy, and learning about Unity’s powerful Nav Mesh Agent component and how it creates an AI for us to use.
In Day 6, we finish the Survival Shooter tutorial by implementing shooting enemies with raycasts, the UI, the enemy animations, and different game states such as game over.
Creating a Simple First Person Shooter
In Day 7, we finally get started creating our own simple game: a first person shooter. I intend to use it as a skeleton to practice what I’ve learned, then add some VR capabilities to it before moving on to creating a real VR app. Today, we create a simple terrain to use in our game.
In Day 8, we went on to create our first person character and added some controls to move our character around the terrain with our keyboard and mouse.
In Day 9, we create a simple gun for our player and then use a raycast to shoot bullets at other objects in our game.
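Raycast shooting boils down to casting an invisible line from the camera and asking what it hit. Here’s a rough sketch of the technique — the `EnemyHealth` component and its `TakeDamage` method are hypothetical names for illustration, not the series’ actual code:

```csharp
using UnityEngine;

// Sketch: fire a raycast from the camera and damage whatever it hits.
public class GunShooter : MonoBehaviour
{
    public Camera fpsCamera;   // the player's first-person camera
    public float range = 100f; // max shooting distance
    public int damage = 10;

    void Update()
    {
        if (Input.GetButtonDown("Fire1"))
        {
            Shoot();
        }
    }

    void Shoot()
    {
        RaycastHit hit;
        // Cast a ray from the camera straight ahead
        if (Physics.Raycast(fpsCamera.transform.position,
                            fpsCamera.transform.forward, out hit, range))
        {
            // EnemyHealth is an assumed component name
            EnemyHealth enemy = hit.transform.GetComponent<EnemyHealth>();
            if (enemy != null)
            {
                enemy.TakeDamage(damage);
            }
        }
    }
}
```

Because the ray is instantaneous, there’s no bullet to simulate — which is why a visual “fired bullet” effect has to be faked separately, as we do later in the series.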
In Day 10, we create a simple enemy cube and added the code that’s related to an enemy getting attacked.
In Day 11, we look through the Unity Asset Store to find assets to use as our enemy. We also learn the true power of the Mecanim system, which allows us to use the same animations for different models! This is really interesting stuff!
In Day 12, we use Nav Mesh Agents to create an AI for our enemy knight so that it can chase after the player. We talk about how to bake a NavMesh and what’s necessary to get our AI working.
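The chase behavior itself is surprisingly little code once the NavMesh is baked. A minimal sketch, assuming the player’s Transform is wired up in the Inspector:

```csharp
using UnityEngine;
using UnityEngine.AI;

// Sketch of chase AI: requires a baked NavMesh in the scene and a
// NavMeshAgent component on the enemy.
public class EnemyChase : MonoBehaviour
{
    public Transform player;  // assigned in the Inspector
    NavMeshAgent agent;

    void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        // Re-path toward the player every frame; the agent handles
        // steering and obstacle avoidance on the baked NavMesh for us.
        agent.SetDestination(player.position);
    }
}
```

All the pathfinding heavy lifting — routing around walls, speed, stopping distance — is configured on the NavMeshAgent component rather than written by hand.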
In Day 13, we re-add our shooting code to our player, so now, after we deplete our enemy knight’s HP, it’ll play its death animation.
In Day 14, we added a push-back mechanic for when the enemy knight gets hit. Then we implemented walking sounds for the enemy using audio we found on the Unity Asset Store.
In Day 15, we added the remaining sound effects: shooting, the enemy getting hit, and the player walking.
In Day 16, we found a gun asset on the Unity Asset Store to use in our game. We added the animations for shooting, reloading, and idling, along with the code needed to shoot our gun and transition to the shooting state.
In Day 17, we finish setting up our gun. Along with shooting a raycast directly at the middle of our screen, we create the illusion of firing bullets from our gun toward the center of the screen for a more realistic shooting effect.
Continuing to develop our weapon system, today I decided to implement an ammo system and the UI involved so we can start reloading our weapon.
After today, we can shoot our gun until we run out of bullets and then reload so we can shoot again.
Now that we’ve finished creating our weapon, we move on to creating a health system for our player. Currently, our player doesn’t take damage, so we’re fixing that.
At the end of today, our player will have health and take damage. We will also have the UI to show our player’s health.
Now that we can take damage, we loop back to a strange problem: if we don’t move, our enemy can’t damage us again! In Day 20, we look more into colliders and why our collision events aren’t firing.
At the end of today, we will be able to get pushed around by the enemy and take damage when they hit us.
After creating a health system for our player and making sure we can take damage properly, it’s finally time to create a game over state in our game. Today we look into making a game over panel UI and creating animations to make it appear.
After today is over, we’ll have the game over UI to show to our player when they lose.
The day before, we created a game over UI panel that stays in front of our player. Today we’re going to write the code that shows our panel when the player loses.
Now that we have a way to lose, we need a way to win our game. We’re going to do that by learning how to create an enemy wave spawning system.
At the end of Day 23, we’ll have the code to spawn new Knight enemies into our game.
The day before, we added the code to spawn enemies; today we continue making changes to our spawning system so that we can generate multiple waves for players to challenge.
At the end of the article, we’ll have a complete wave system that creates new Knight enemies.
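To make the idea concrete, here’s an illustrative sketch of multi-wave logic — the class and method names are my own assumptions, not the series’ actual code: each wave spawns more knights than the last, and the next wave starts once the current one is cleared.

```csharp
using UnityEngine;

// Sketch of a wave manager: waves grow over time, and clearing a wave
// triggers the next one.
public class SpawnManager : MonoBehaviour
{
    public GameObject knightPrefab;
    public Transform[] spawnPoints;
    public int waves = 3;

    int currentWave = 0;
    int enemiesAlive = 0;

    void Start()
    {
        SpawnWave();
    }

    void SpawnWave()
    {
        currentWave++;
        int count = currentWave * 2;  // grow the wave each round
        for (int i = 0; i < count; i++)
        {
            Transform point = spawnPoints[i % spawnPoints.Length];
            Instantiate(knightPrefab, point.position, point.rotation);
            enemiesAlive++;
        }
    }

    // Assumed hook: an enemy's death code would call this.
    public void OnEnemyDeath()
    {
        enemiesAlive--;
        if (enemiesAlive <= 0 && currentWave < waves)
        {
            SpawnWave();
        }
        // else: all waves cleared — the player wins
    }
}
```

Tracking a live-enemy count like this is also what makes a win condition possible later: when the last wave hits zero, the game is won.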
Now that we have a fully working Wave System, we can finally win the game. Today we’re going to implement a victory system by re-using our game over UI and code.
At the end of Day 25, we’ll have a “complete” game that allows us to lose and win.
In Day 26, we did some tidying up of things we didn’t completely address.
By the end of today, we’ve added some audio for when the enemies hit us, added a new crosshair UI, and made it so that when the player loses, the enemies enter their idle state.
Currently, our game has an enemy spawn system that only creates Knight enemies. Boring! Today, we’re going to add the first of two new enemies: the bandit.
When we’re done today, we’ll have a bandit enemy that’s faster but has less health than our Knight enemy. We’ll also learn how easy it is to share animations with Unity’s animator controllers.
Next up in our series of adding new enemies, we added a new zombie enemy that’s slower, but has more health than our Knight enemy.
Today we explored the power of the animator override controller, which lets us keep the same game states from another animator while swapping in different animations for our character.
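An override controller is normally assigned to the model in the Inspector, but the same swap can be done from code. A minimal sketch (the asset name is an assumption):

```csharp
using UnityEngine;

// Sketch: an Animator Override Controller keeps the original controller's
// states and transitions but substitutes different animation clips.
public class UseZombieAnimations : MonoBehaviour
{
    // An override controller asset built from the knight's base controller,
    // with the zombie's clips swapped in (illustrative name).
    public AnimatorOverrideController zombieOverride;

    void Start()
    {
        // The zombie now walks, attacks, and dies through the exact same
        // state machine logic as the knight.
        GetComponent<Animator>().runtimeAnimatorController = zombieOverride;
    }
}
```

This is why adding a new enemy type is cheap: the state machine is written once, and each enemy just supplies its own clips.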
Now that we have 2 new enemies, in Day 29 we’ll add them to our Spawn System, giving us a more fun game with different types of enemies each wave.
We also explored an interesting problem where dead enemies (their colliders, specifically) block our player from shooting other enemies.
Now that we have working enemies, the next thing to do is create a score system.
Learn how we make a time-based scoring system where players try to clear the game as quickly as possible. Today we’re going to create the code and UI for our scoring system.
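A time-based score is just an accumulating clock driven by `Time.deltaTime`. Here’s a minimal sketch of the idea — the names and UI wiring are assumptions, not the series’ exact code:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch of a time-based score: lower elapsed time = better score.
public class ScoreManager : MonoBehaviour
{
    public Text scoreText;  // a UI Text element assigned in the Inspector
    float elapsed;
    bool running = true;

    void Update()
    {
        if (!running) return;
        // Time.deltaTime is the seconds since the last frame, so elapsed
        // accumulates real playtime regardless of frame rate.
        elapsed += Time.deltaTime;
        scoreText.text = "Time: " + elapsed.ToString("F1");
    }

    // Called when the game ends so the clock stops ticking.
    public void StopScore()
    {
        running = false;
    }
}
```

The `StopScore()` hook matters for the next step: without it, the clock would keep running through the game over and victory screens.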
After adding the time-based score system for the game in Day 30, we need to be able to stop our score and the rest of the game.
See how we stop our time score when the game is over and how we clean up all of our animations and sounds.
Before we move on to the final part of the game where we save our score, we will explore the pros and cons of 3 different options to save our progress: PlayerPrefs, Data Serialization, and a separate server.
We explored the different strategies for saving our game data. Now it’s time to start implementing the high score system in our game. We’ll take the simple approach and use PlayerPrefs.
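PlayerPrefs is a simple key-value store that Unity persists between sessions. A sketch of how a best-time high score might look — the key name and the lower-is-better convention are my assumptions for a time-based score:

```csharp
using UnityEngine;

// Sketch: persist the best (lowest) clear time with PlayerPrefs.
public static class HighScore
{
    const string Key = "HighScore";  // assumed key name

    public static void Save(float time)
    {
        // Default to "infinity" so any first run counts as a new record.
        float best = PlayerPrefs.GetFloat(Key, float.MaxValue);
        if (time < best)
        {
            PlayerPrefs.SetFloat(Key, time);
            PlayerPrefs.Save();  // flush to disk immediately
        }
    }

    public static float Load()
    {
        return PlayerPrefs.GetFloat(Key, float.MaxValue);
    }
}
```

This is exactly the trade-off from the options post: PlayerPrefs is trivially easy, but it’s plain text on the device, so it’s fine for a casual high score and wrong for anything that needs to be tamper-proof.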
Adding VR Into Our Simple First Person Shooter
Now that we’re starting to work in VR, I’ve made a GitHub repository of the final version of the Simple First Person Shooter so people who are only interested in VR can start here.
We’ve come a long way. We completed our Simple FPS, and now it’s time to start adding some VR mechanics to it. Before we can do that, though, we have to learn how.
Today, we’re going to look into setting up the Google VR SDK in Unity and learn about some of the prefabs and scripts Google provides that help make developing VR apps easier.
After exploring the Google VR SDK and the sample app, it’s time to run the demo app on our phone. Today, we’re going to learn everything we need to do to be able to successfully run our VR application on our Android device.
Now that we know the high-level prefabs needed to work with the Google VR SDK and how to build the sample app, today we’re going to do some cleanup in our game and deploy our own simple FPS to our phone with VR enabled.
Unfortunately, when building for Google Cardboard, we face long build times whenever we want to test the app on our phone.
When we play our app, we’ll encounter a lot of problems, some technical and some VR-related. Don’t worry, we’ll resolve them all!
The day before, we deployed our game in VR. A lot of things were missing or broken, but being able to play a game we created ourselves in VR was really exciting!
Today, we went in and made some changes to get our gun working again by using Google’s implementation of Unity’s Event Trigger system.
Now that we’ve finished applying everything we learned earlier about the Google VR SDK, it’s time to move on to resolving problems that Google didn’t provide a solution for.
We went and changed the game to make it work in VR. Specifically, we moved the player to the middle of the map so they’ll be surrounded by enemies. We also got rid of the ammo system, because the player only has one input, and that’s to shoot.
Next in our series of fixes for our VR game, we’re going to fix the remaining gameplay bugs.
We look back at the Event Trigger system we set up. The Event Trigger only fires when we click the input button over and over, not when we hold it down.
We also went in and fixed other problems, like weird pushing behavior and badly colored enemies, and we slowed the enemies down to make the game easier.
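One way to support holding the button down — a sketch of the general technique, not necessarily the series’ exact fix — is to record pointer down/up events and fire continuously in `Update()` while the button is held:

```csharp
using UnityEngine;
using UnityEngine.EventSystems;

// Sketch: convert discrete pointer events into continuous fire.
public class HoldToShoot : MonoBehaviour, IPointerDownHandler, IPointerUpHandler
{
    public float fireRate = 0.25f;  // seconds between shots
    bool isHolding;
    float nextShot;

    public void OnPointerDown(PointerEventData eventData) { isHolding = true; }
    public void OnPointerUp(PointerEventData eventData) { isHolding = false; }

    void Update()
    {
        // Fire repeatedly while held, throttled by fireRate.
        if (isHolding && Time.time >= nextShot)
        {
            nextShot = Time.time + fireRate;
            Shoot();
        }
    }

    void Shoot()
    {
        // the raycast shooting logic would go here
    }
}
```

Because the Google VR SDK routes gaze and button input through Unity’s Event System, the same `IPointerDownHandler`/`IPointerUpHandler` interfaces work in both the editor and on the device.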
Now that we’ve brought the gameplay back to where it was before, we go back to fix the final piece of our game: the UI. We learn that we can’t use a normal Screen Space Overlay canvas for VR apps; we have to put our UI into World Space.
We’ve also talked a bit about what good UI should be for VR.
Continuing to fix our UI, we add back our game over panels along with their animations. We hit a problem where the animations appear in the wrong location; we fix that and learn how to interact with UI elements in Unity.
It turns out we already have everything we needed from our original game!
With our UI done, we complete the transformation of our game to support the Google Cardboard.
Now that we’ve added Google Cardboard support to our FPS, we should do the same with the Daydream, specifically using the Daydream controller.
Today, we looked at how to use the utility prefabs in the Google VR SDK to get our game working with the Google Daydream and its controller. It turns out we can leverage a lot of the work we did for the Google Cardboard to make our Daydream implementation “just work”!
On Day 43, we finished adding support for the Google Daydream by adding the necessary scripts for our controller to interact with our UI in the game.
Then we looked into Spatial Audio in Unity, which adjusts the audio of our environment based on our location, letting us tell where the enemies are coming from by their footsteps.
Once we complete this task, we’ll have officially created a simple VR FPS. Congratulations! We’ve come a long way since we first started on Day 1!
GitHub repository: TBD