Saturday 8 February 2014

Shun Goku Satsu Assignment: Game Engines Part 1

Game Engines Part 1
Introduction
Everybody loves to play games and enjoy features like the story, art or even the combat mechanics, but there are others who like to reflect on how well the game has been produced in terms of graphics, sound and maybe even non-playable characters. In reality we are usually too consumed in the game itself to dwell on the masterpiece that is the game engine, and on why it can be considered the heart of the game.

The Purpose
So what exactly is a game engine, what is its purpose and how does it differ from a 'game' itself? Basically, a game engine is a system designed for the creation and development of video games. The leading game engines provide what's called a software framework that developers use to create games for various platforms like consoles, computers and mobile devices. An engine is a set of tools and mechanisms prepared separately and added in as the engine is being built, meaning that every engine's code and scripting is unique and can't simply be lifted into another engine, hence why game engines take a considerable amount of time to develop. The framework is a code base that handles important aspects of games such as hardware interfacing and input, and it always comes with a predetermined rule set. A game, however, is a piece of entertainment media that runs on a game engine but includes other pieces as well, like design, story and art direction, and can actually be played.

For example, we can't start making a game engine until we actually focus on the concept (or more precisely the game design document and pre-production document) of what the game is going to be about. Only once that's decided and both documents have been completed and cleared can we move onto modelling and assets, and then onto building up the game engine itself, which is why there can be a huge debate about what counts as the heart, the most important piece, of a game.


Difference between Engines
As we know, gaming itself has developed a lot over the years, which means the game engines themselves have come to produce more realistic effects and have branched out to newer devices such as mobile phones and tablets.
One of the earliest game engines is NovaLogic's Voxel Space engine. It combined volumes and pixels into voxels, creating 3D bitmap imagery rather than vectors. In the early 90s it was used mainly for flight simulators, and the same voxel approach was also used to render Command & Conquer's vehicles. Games often pointed to for a similar blocky, voxel look include Q*bert and even Minecraft.

In 1996 id Software produced the first of its impressive id Tech 3D engines (the Quake engine), brilliant at its rendering methods and supporting features like brushes and z-buffering. It also used its own scripting language known as QuakeC; another game built on it was Team Fortress. The company kept developing the engine further: in 1999 they introduced shaders, curved surfaces and network capabilities, meaning rendering itself became increasingly demanding but more of a masterpiece, until 2004 when id Tech 4 appeared. Back then it was considered a groundbreaking development, surviving a full code overhaul and the switch from C to C++. id Tech has so far reached number 5 in the series, offering volumetric lighting, post-processing, virtual texturing and HDRR (High Dynamic Range Rendering). Upcoming games using it are Wolfenstein: The New Order and Doom 4.
It also managed to add more realism to the graphics itself with its lighting and depth, compared to an equally successful engine called Gamebryo (released in 2003). Gamebryo was a successful multi-platform engine with great dynamic collision detection and a strong particle system, which games like Fallout 3 and Oblivion took full advantage of.

In 2004 two powerful engines appeared, known as Source and CryENGINE. Source was the follow-on from GoldSrc (known since 1998 for the C++ work that gave Half-Life and Counter-Strike their cross-platform features), adding more shaders, dynamic lighting, reflective water and real-time motion blur. It was heavily modded, though, meaning it had to be constantly updated. As the original Half-Life and Counter-Strike used its predecessor, there's no surprise that Source became the target engine for their sequels. CryENGINE, compared to Source, was a very demanding piece of technology reliant on high-end visuals and hardware, which delivered more beautiful graphics to players and may be the reason Far Cry and Crysis were such big hits at the time. Nowadays CryENGINE has become an even bigger success; unlike Unreal, the engine is additive, adding more to the level design and world of the game rather than hindering it or taking away the 'perfect' effect. CryENGINE 3 uses a WYSIWYP ('What You See Is What You Play') approach, adding more advanced lighting, a particle system, normal maps and a modular AI system.
Even though CryENGINE may seem to have won the battle of the better gaming engine, Unreal Engine 3 was not something to be frowned upon in the 2007 era. Considered the most popular and well-known game engine, famous for being multi-platform and DirectX 10 compatible, it powered games like the Mass Effect series, Devil May Cry and Batman: Arkham City. Since then the Unreal series has reached number 4, which promises real-time global illumination using voxel cone tracing, soon to be present in games like Fable Legends, various MMORPGs, Project Awakened and Fortnite.

Speaking of famous titles, if we look at Battlefield 3, Medal of Honor and Need for Speed, we can see that the engine behind them, Frostbite, is a heavy shader engine, making it graphically impressive through its multiple shaders. It also lets players witness destructible environments, and has been multi-platform compatible since 2008. Upcoming titles that plan to keep using this engine are Dragon Age: Inquisition, Mirror's Edge 2 and a Mass Effect game. Compare that to the Anvil/Scimitar engine (famous for the Assassin's Creed series), which gives its games that much more. One drawback is that games using it usually need to be built from the ground up, because it's a multi-threading engine allowing dynamic world-creation techniques. It also incorporates more middleware components than many engines had during 2008, including the Autodesk tools, HumanIK and the Euphoria animation system.

Looking back at most of these engines, we can see that even though they work across different devices they're mainly console-based. The Unity game engine is the opposite, used mainly for games that rely on web and mobile platforms. Using interfaces and APIs like OpenGL, OpenGL ES, DirectX and proprietary API support, it can accommodate a number of platforms. It also offers dedicated scripting languages and a powerful toolset, including middleware integration.
Functions in an Engine
A handful of key systems are important to include in a game engine, as they can be the difference between your game being a huge success or an amazing flop, both for gamers and for your investment.
Rendering:
Rendering is the process of creating an image from a model, with assets built in computer-aided programs such as the Autodesk collection (Maya, 3ds Max, Mudbox etc.), giving more realistic detail where performance is paramount. It's also the final stage of the pipeline and one of the most important, allowing us to see characters, environments and even the art design, being the first thing a player observes while the game is still loading. Yet there are a number of stages to go through for visibility to be handled properly, and they can strain the CPU/GPU budget, which is where techniques like culling come in. Rendering can be integrated into a game engine or handled by a dedicated middleware component, capable of processing a number of operations like normal/texture mapping, radiosity, shading and reflection, among many others. Nowadays 3D games require real-time rendering, where the engine sits in a constant update cycle: the CPU keeps the game simulation running, but if it does not work harmoniously with the GPU the update cycle stalls and the game will glitch severely or stop running entirely.
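As a rough sketch of that update cycle (the function names here are hypothetical stand-ins, not any real engine's API), the core of most real-time engines looks something like this C++ loop:

#include <chrono>
#include <cstdio>

// Hypothetical stand-ins for real engine subsystems.
void processInput() {}
void updateGameState(double dt) { (void)dt; /* AI, physics, game logic on the CPU */ }
void renderFrame() { /* submit draw calls to the GPU */ }

int main() {
    using clock = std::chrono::steady_clock;
    auto previous = clock::now();
    for (int frame = 0; frame < 3; ++frame) {   // a real engine loops until quit
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - previous).count();
        previous = now;
        processInput();
        updateGameState(dt);  // simulation step, scaled by elapsed time
        renderFrame();        // if this can't keep pace, the whole cycle stalls
        std::printf("frame %d, dt = %f s\n", frame, dt);
    }
}

The key design point is that the CPU work (update) and GPU work (render) happen every cycle, which is why one stalling drags the other down with it.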
When rendering there are two main types of illumination. Direct illumination on its own gives a very flat, unnatural lighting scenario, so artists attempt to create realistic, natural light using a number of separate sources, meaning the programmers have to really consider where they put these lights, otherwise it can look like a mess. Light doesn't bounce effectively off reflective surfaces, so radiosity can't do its work, and lighting and shadows end up diffused within the graphics.
Ray tracing, though, gives a more realistic approach where radiosity can work at its best: people can actually see colours meld into one another, making the visuals richer. It also allows better lighting and reflective effects, but one major drawback is its heavy, costly usage of the GPU. Another important thing to mention is ray casting, which only traces primary rays and can never trace secondary (reflected) rays, which is one reason people avoid putting a house of mirrors in a game: rendering all those bounced rays would hammer the GPU and be very time-consuming.
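To make the direct-illumination idea concrete, here's a minimal C++ sketch of the classic Lambert diffuse term (the vector values are invented for the example; real engines run this per pixel in a shader):

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

int main() {
    Vec3 normal  = normalize({0.0, 1.0, 0.0});  // surface faces straight up
    Vec3 toLight = normalize({1.0, 1.0, 0.0});  // light off to one side
    // Lambert's cosine law: full brightness head-on, none when facing away.
    double diffuse = std::max(0.0, dot(normal, toLight));
    std::printf("diffuse intensity = %f\n", diffuse);  // ~0.707 here
}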
Now, I mentioned culling earlier: it removes the parts of a scene that don't contribute to the final image. So during gameplay part of the environment might fade into the background behind some mysterious fog. For example in Kingdom Hearts: Dream Drop Distance, as you advance through worlds you'll see distant environments are simple-looking shapes until you walk up to them, when they become something like a castle or some trees. Compare this to The Legend of Zelda: Twilight Princess, where rather than a mysterious fog, the distant backdrops appear and disappear on an opacity scale, which is quite common in most games. This is so the CPU and GPU don't have to work overtime on models that players aren't going to see, saving effort and time. There are various culling techniques people use to rid the render of unnecessary assets, and they can take place at the application, geometry or rasterizer stage (provided it's hardware-assisted).

>Binary Space Partitioning (BSP)
BSP is a way of subdividing a space into convex sets; the subdivision gives rise to a representation of the objects within the space by means of a tree data structure. The tree allows 3D information about the objects in a scene that's useful in rendering, such as their back-to-front ordering from a given location, to be accessed rapidly, as well as supporting shape operations in CAD, collision detection in models, ray tracing and other applications that involve handling complex 3D scenes.
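As a hedged sketch of the data structure (heavily simplified: the splitting 'plane' here is a single x value and each node holds one polygon, where a real BSP stores arbitrary planes and polygon lists), the back-to-front ordering falls straight out of a tree walk:

#include <cstdio>
#include <memory>

// Simplified BSP node: the split "plane" is just x = splitX.
struct BspNode {
    double splitX;
    int polygon;                       // polygon stored at this node
    std::unique_ptr<BspNode> front, back;
};

// Visit polygons back-to-front relative to the camera: recurse into the
// far side of the plane first, then this node, then the near side.
void drawBackToFront(const BspNode* node, double cameraX) {
    if (!node) return;
    bool cameraInFront = cameraX > node->splitX;
    drawBackToFront(cameraInFront ? node->back.get() : node->front.get(), cameraX);
    std::printf("draw polygon %d\n", node->polygon);
    drawBackToFront(cameraInFront ? node->front.get() : node->back.get(), cameraX);
}

int main() {
    BspNode root{5.0, 1, nullptr, nullptr};
    root.front = std::make_unique<BspNode>(BspNode{8.0, 2, nullptr, nullptr});
    root.back  = std::make_unique<BspNode>(BspNode{2.0, 3, nullptr, nullptr});
    drawBackToFront(&root, 10.0);  // camera far on the +x side: prints 3, 1, 2
}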

>View Frustum
The view frustum is the region of space (in the shape of a frustum, hence the name) in the game's environment that can potentially appear on the screen. The planes that cut the frustum perpendicular to the viewing direction are called the near plane and the far plane; anything nearer than the near plane or beyond the far plane is clipped, and objects are only rendered when the frustum's planes actually contain them.
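A minimal sketch of the standard frustum test, assuming inward-pointing plane normals and bounding spheres (the plane values below are invented): an object can be culled the moment it lies entirely behind any one plane.

#include <cstdio>

struct Vec3   { double x, y, z; };
struct Plane  { Vec3 n; double d; };  // points inside satisfy dot(n,p) + d >= 0
struct Sphere { Vec3 c; double r; };

double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// A sphere is culled as soon as it lies fully behind one plane.
bool insideFrustum(const Plane planes[6], Sphere s) {
    for (int i = 0; i < 6; ++i)
        if (dot(planes[i].n, s.c) + planes[i].d < -s.r)
            return false;  // completely outside this plane -> cull it
    return true;           // potentially visible, so render it
}

int main() {
    // Invented axis-aligned "frustum" (really a box) just to exercise the test.
    Plane planes[6] = {
        {{ 1, 0, 0}, 10}, {{-1, 0, 0}, 10},   // left / right
        {{ 0, 1, 0}, 10}, {{ 0,-1, 0}, 10},   // bottom / top
        {{ 0, 0, 1}, -1}, {{ 0, 0,-1}, 100},  // near / far
    };
    std::printf("%d\n", insideFrustum(planes, {{0, 0, 50}, 1}));   // 1: inside
    std::printf("%d\n", insideFrustum(planes, {{0, 0, 200}, 1}));  // 0: past far plane
}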

>Back Face Culling
Back-face culling decides whether a face of a model/mesh is visible by testing whether its vertices appear in clockwise or counter-clockwise order from the player's viewpoint; if a face's winding shows it is pointing away from the player's perspective, it is not drawn.
This method helps reduce the number of polygons the program has to draw. As an example, if we look at Grand Theft Auto's city designs, there wouldn't be any point in drawing the faces of buildings that point away from the camera within the game, as they're obstructed by the sides of the buildings that face the camera.
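In practice the winding test reduces to a sign check. Here's a sketch, assuming counter-clockwise means front-facing (the convention varies between engines): project the triangle to screen space and look at the sign of its signed area.

#include <cstdio>

struct Vec2 { double x, y; };  // vertex already projected to screen space

// Signed area of a screen-space triangle: positive when the vertices wind
// counter-clockwise (front-facing under our assumed convention), negative
// when they wind clockwise (back-facing, so skip drawing it).
double signedArea(Vec2 a, Vec2 b, Vec2 c) {
    return 0.5 * ((b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y));
}

int main() {
    Vec2 a{0, 0}, b{1, 0}, c{0, 1};
    std::printf("front-facing: %d\n", signedArea(a, b, c) > 0);  // 1: draw it
    std::printf("front-facing: %d\n", signedArea(a, c, b) > 0);  // 0: cull it
}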


>Occlusion Culling

Another technique, similar to back-face and view frustum culling, is occlusion culling, which skips the drawing of models hidden from the viewpoint behind other visible models. The technique only matters for single-sided polygons, whereas double-sided polygons are rendered from both sides, so they can't be removed by methods such as back-face culling. Occlusion culling typically uses z-buffering, and an object only benefits if it hasn't already been discarded from the rendering pipeline.
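As a toy illustration of the z-buffer idea (a tiny software version with made-up depths; real engines usually issue hardware occlusion queries against the actual depth buffer): before drawing an expensive model, check whether any pixel it would cover could be nearer than what's already stored.

#include <cstdio>

// Toy 4-pixel depth buffer; smaller depth = closer to the camera.
double depthBuffer[4] = {0.3, 0.3, 0.9, 0.9};  // a wall covers pixels 0-1

// Conservative occlusion test over the pixels an object's bounding box
// would cover: if the object's nearest depth is behind every stored depth,
// nothing it draws can be visible, so skip it entirely.
bool isOccluded(int firstPixel, int lastPixel, double nearestDepth) {
    for (int i = firstPixel; i <= lastPixel; ++i)
        if (nearestDepth < depthBuffer[i])
            return false;  // at least one pixel would show through
    return true;
}

int main() {
    std::printf("%d\n", isOccluded(0, 1, 0.5));  // 1: hidden behind the wall
    std::printf("%d\n", isOccluded(2, 3, 0.5));  // 0: visible, must be drawn
}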

>Portal Culling
This technique divides the scene into cells connected by portals. During the rendering stage, the camera sits in one room, which is rendered normally, but for each portal visible to the player a view frustum is set up the size of the portal and the room behind it is rendered through it, rather like standing between two mirrors where looking into one shows you the other. Hence why portals are used in indoor scenes; however, they can be FoV-dependent.
>Contribution Culling
Finally, the last technique is used when objects are too far away to contribute much to the final image, especially if their screen projection is tiny. Instead, additional features are thrown in to preserve the illusion of distance, which could be the reason we see mists or fog during gameplay.
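A minimal sketch of the test (the screen size and threshold are invented example values): estimate how many pixels the object's projection covers, and cull it once that drops below a few pixels.

#include <cstdio>

// Rough projected size of an object: its radius shrinks with distance.
// screenHeight and fovScale are illustrative values, not from any engine.
double projectedPixels(double radius, double distance) {
    const double screenHeight = 1080.0, fovScale = 1.0;
    return radius / (distance * fovScale) * screenHeight;
}

int main() {
    const double minPixels = 4.0;  // below this, the object contributes nothing
    for (double dist : {10.0, 100.0, 1000.0}) {
        bool drawn = projectedPixels(0.5, dist) >= minPixels;
        std::printf("distance %6.0f -> %s\n", dist, drawn ? "draw" : "cull");
    }
}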
Texturing, Shaders and Anti-Aliasing
So far we've been through quite a bit about rendering, but there's plenty left to learn. For example, games and mods wouldn't be very detailed if we didn't texture the models, whether to put a certain colour on them, turn them into fire or glass, or distinguish body parts from clothing. Texture dimensions should always be powers of two (there's a quick check for this sketched below, after the screenshots), so they don't put the modelling program into overdrive, are easier to work with when UV mapping and won't look a mess when rendered. Texturing can add a lot of realism to a game and will obviously change to suit a certain theme or style. If we compare the first Legend of Zelda to its much later sequel Twilight Princess, you'll see the texturing has added a more realistic and adult approach.

Legend of Zelda gameplay
Legend of Zelda: Twilight Princess gameplay
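As promised above, the power-of-two rule is easy to check with a classic bit trick, sketched here (the function name is my own):

#include <cstdio>

// A positive integer is a power of two exactly when one bit is set,
// i.e. n & (n - 1) clears the lowest set bit and leaves zero.
bool isPowerOfTwo(unsigned n) { return n != 0 && (n & (n - 1)) == 0; }

int main() {
    std::printf("512x512: %s\n", isPowerOfTwo(512) ? "ok" : "resize it");
    std::printf("600x600: %s\n", isPowerOfTwo(600) ? "ok" : "resize it");
}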

Another great thing about texturing, and one that ties in with culling, is the ability to create shadows and fog within the game, giving an immersive sense of realism to the environment through a variety of techniques and shaders that produce high-quality shadows. Shadows are produced using shadow mapping, a process whereby shadows are added to 3D models in both pre-rendered and real-time scenes. A depth image (z-buffer) of the scene is rendered from the light source's point of view and stored as a texture; each pixel is then tested against it to decide whether it's lit or in shadow. Without this an asset can look unreal, as though there were a constant light all around the object. One of the easiest methods is stencil shadows, where an outline of, let's say, a body is projected onto the ground as a shadow, like it would be in real life around midday.
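Stripped to its core, the shadow-map comparison looks something like this sketch (a toy one-dimensional depth map with invented values; real engines do this per pixel on the GPU):

#include <cstdio>

// Depth image rendered from the light's point of view, stored as a texture.
// Each entry is the distance from the light to the nearest surface it saw.
double shadowMap[4] = {2.0, 2.0, 5.0, 5.0};  // an occluder covers texels 0-1

// A point is in shadow if something sits between it and the light, i.e.
// its own distance to the light exceeds the depth the light "saw" there.
bool inShadow(int texel, double distanceToLight) {
    const double bias = 0.01;  // avoids self-shadowing ("shadow acne")
    return distanceToLight > shadowMap[texel] + bias;
}

int main() {
    std::printf("%d\n", inShadow(0, 4.0));  // 1: occluder at depth 2 blocks it
    std::printf("%d\n", inShadow(2, 4.0));  // 0: nothing nearer, fully lit
}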

Now, with that in mind, shaders give the pixels themselves colour and additional attributes, which adds more for the player to see; combined with shadows and lighting effects (plus texturing) we get very realistic objects displayed during the game, hence why whenever a new generation of game or console comes out people always like to compare the graphics.


Anti-aliasing is basically smoothing or removing jagged edges within the game. To be honest, not a lot of objects take part in the process, because even though it gives a polished result it lowers the FPS and its code is intensive. Without actually looking for it, people can't see the difference between what is 'AA-ed' and what isn't. Below is a scene taken from Final Fantasy XIV where you can clearly see the difference in the edging with AA involved.
Collision Detection:
This term refers to the computational problem of detecting the intersection of two or more objects. Objects in games interact with the player, the environment and each other, and are typically represented by two meshes: a highly detailed model rendered for the player's eye, while the CPU only recognises a simple shape. For example, a treasure chest might register as a cuboid, making it easy to test another mesh colliding against it. Usually these simple meshes are collision geometry, perhaps a bounding box, sphere or convex hull. Testing against these simple shapes first keeps collision detection cheap and narrows down the number of expensive tests against a costly detailed mesh.
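For example, here's the classic axis-aligned bounding-box test used for that cheap first pass, sketched in C++: two boxes can only collide if their extents overlap on every axis.

#include <cstdio>

struct AABB { double minX, minY, minZ, maxX, maxY, maxZ; };

// Two axis-aligned boxes collide only if their intervals overlap on
// all three axes; one separated axis is enough to rule a collision out.
bool intersects(const AABB& a, const AABB& b) {
    return a.minX <= b.maxX && a.maxX >= b.minX &&
           a.minY <= b.maxY && a.maxY >= b.minY &&
           a.minZ <= b.maxZ && a.maxZ >= b.minZ;
}

int main() {
    AABB chest  {0, 0, 0, 1, 1, 1};          // the treasure chest's cuboid
    AABB player {0.5, 0, 0.5, 1.5, 2, 1.5};  // the player's bounding box
    std::printf("colliding: %d\n", intersects(chest, player));  // 1
}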

Artificial Intelligence (AI)
In video games AI is the behaviour of NPCs, though the term can also refer to a set of algorithms. Game AI is centred on the appearance of intelligence and good gameplay, a different approach from traditional AI; workarounds and cheats are acceptable and, in many cases, the computer's abilities must be toned down to give human players a sense of fairness.

Constraints and Boundaries on AI
So we know AI is the behaviour of NPCs, but what exactly constrains them in a game? NPCs react to different things differently, meaning they'll have dynamic responses depending on how well they've been animated and programmed to respond. Also, certain NPCs might stay in one part of the level, accompany playable characters throughout the game, or turn up basically everywhere. Those that stay in one level or part of the game could be, say, members of a village. For example in Abe's Oddysee, Paramites (known as enemies/friends) only appear on the Paramonia platforms, and their response to Abe depends on how many of them are around at once. On their own they tend to help the protagonist, or at least leave him alone and flee back to the pack; in a group they tend to hunt you down through the platforms until you reach higher ground or have been killed by them or something else.
NPCs that accompany the main playable character could be companions or have some relationship with the protagonist. In Okami, Issun is Ammy's companion and stays with her until she reaches the Celestial Plain. He's an NPC who basically helps you throughout the whole game, compared to another NPC, Waka (a slight guardian role), who comes and goes throughout the levels and challenges you along the way. His limelight was restricted until you were supposed to meet him in the story, whereas Issun was never really restricted in being around you, but had to stay on Ammy's head unless in cutscenes.
My last example, NPCs that are everywhere, would be enemies or 'friendly enemies' like guards. Their paths are set a certain way, so that if you step near them they surround you or follow your path through the game. In Kingdom Hearts: Dream Drop Distance, enemies pop into existence as soon as you set foot in a certain area of the dream lands, then disappear if you run away from them, as they can't leave a set space.

Compare that to The Legend of Zelda: Twilight Princess, where in some places foes spot you and continuously follow you around, grouping together until you've turned round and defeated them before they get you. So how exactly do creators give these characters set paths to follow that the game engine can understand?

Waypoints
Objects placed on waypoints are extremely limited in functionality and don't allow characters any dynamic adjustment, as they're placed on a fixed route. A polygon is placed on a navigation mesh allowing the bot to move only in those areas; if it were on a bridge that gets blown up, when it reached the hole it couldn't find a way around it or turn back. All that happens is it glitches slightly, then respawns back where it started or simply carries on over to the other side of the bridge along the waypoint route. This isn't very effective or realistic, but it was used in early games, which is why some games you play nowadays still do this.
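A small sketch of why waypoint movement is so rigid (the coordinates are invented): the NPC does nothing but step toward the next fixed point on its route, with no notion of anything in between.

#include <cmath>
#include <cstdio>

struct Point { double x, y; };

int main() {
    Point route[3] = {{0, 0}, {5, 0}, {5, 5}};  // fixed patrol route
    Point npc = route[0];
    int target = 1;
    const double speed = 1.0;
    // Step blindly toward the current waypoint; if the bridge between two
    // waypoints were destroyed, nothing here could react to it.
    while (target < 3) {
        double dx = route[target].x - npc.x, dy = route[target].y - npc.y;
        double dist = std::sqrt(dx*dx + dy*dy);
        if (dist <= speed) { npc = route[target]; ++target; }
        else { npc.x += speed * dx / dist; npc.y += speed * dy / dist; }
        std::printf("npc at (%.1f, %.1f)\n", npc.x, npc.y);
    }
}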

Path Finding
This is another way for NPCs to move around; being the more modern approach, it works almost the opposite of waypoints, and more effectively. Path finding allows a character to move less constrained within given areas, so they have a more dynamic response to things. Going back to the bridge idea, these characters would probably find a different way around the hole to carry on between locations, or simply return to where they started; you might even get a particular reaction from some of them, so movement isn't a problem. Still, there are limitations, as it can put a constant strain on CPU/GPU usage.
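As a hedged sketch of the simplest form of path finding, here's a breadth-first search over a small invented grid (real engines usually run A* over a navigation mesh instead): unlike a waypoint route, it finds a detour around whatever happens to be blocked.

#include <cstdio>
#include <queue>
#include <utility>

// 0 = walkable, 1 = blocked (say, the hole left by a destroyed bridge).
const int W = 5, H = 5;
int grid[H][W] = {
    {0,0,0,0,0},
    {1,1,1,1,0},   // the direct route south is blocked...
    {0,0,0,0,0},   // ...so the search detours around the edges
    {0,1,1,1,1},
    {0,0,0,0,0},
};

// Breadth-first search: the first time a cell is reached is via a
// shortest path, so dist[][] holds minimum step counts.
int bfs(int sx, int sy, int gx, int gy) {
    static int dist[H][W];
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) dist[y][x] = -1;
    std::queue<std::pair<int,int>> q;
    dist[sy][sx] = 0;
    q.push({sx, sy});
    const int dx[4] = {1,-1,0,0}, dy[4] = {0,0,1,-1};
    while (!q.empty()) {
        auto [x, y] = q.front(); q.pop();
        if (x == gx && y == gy) return dist[y][x];
        for (int i = 0; i < 4; ++i) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx >= 0 && nx < W && ny >= 0 && ny < H &&
                grid[ny][nx] == 0 && dist[ny][nx] == -1) {
                dist[ny][nx] = dist[y][x] + 1;
                q.push({nx, ny});
            }
        }
    }
    return -1;  // no route at all
}

int main() {
    std::printf("steps to goal: %d\n", bfs(0, 0, 4, 4));
}

A* works the same way but adds a heuristic so it explores toward the goal first, which is part of how engines keep that CPU strain down.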
Physics
Physics are extremely important in games: by bringing physical laws from reality into the gaming world, people will feel as though the game itself is better. We need physics for everything that needs to move, which is what animation is there for, and even animation is limited by physical laws. We wouldn't let characters simply walk through heavy wind without being slowed down or blown away, and a high jump can only reach certain ledges, not every tall ledge. In the Spyro the Dragon series, Spyro could only glide a certain distance as a young dragon and do a quick hover to get across ledges in order to reach treasure, eggs, orbs etc. As a young dragon he wouldn't have developed the wing muscles to carry him long distances, which limits the player to certain actions but challenges them to reach places by finding another way or completing side quests, adding a more natural effect.
Two well-known physics engines, Havok and PhysX, are designed for games, allowing real-time collision and dynamics of rigid bodies in 3D. Havok provides different types of dynamic constraints between rigid bodies using what's known as dynamical simulation, creating more realistic virtual worlds in games. Its companion product, Havok Animation, provides efficient playback and compression of character animations, featuring inverse kinematics. Games that use this engine include Darksiders 2, Ni no Kuni and Super Smash Bros. Brawl.
PhysX, meanwhile, is classed as a proprietary real-time physics engine middleware SDK. Games that support hardware acceleration by PhysX can be sped up by either a PhysX PPU or a CUDA-enabled GPU, allowing physics calculations to be off-loaded from the CPU, which can perform other tasks instead, giving a smoother gaming experience and room for additional visual effects. Middleware physics engines let game developers avoid writing their own code to handle the complex physics interactions that appear in modern games, hence why PhysX is vastly popular and is available for Microsoft Windows, Mac OS X, Linux, PlayStation 3, Xbox 360 and the Wii. Games that have used this engine include Borderlands 2, Mirror's Edge and Alice: Madness Returns.
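At its simplest, the rigid-body simulation these engines perform boils down to integrating forces over time. Here's a toy sketch (semi-implicit Euler with invented values), nowhere near the full constraint solvers in Havok or PhysX:

#include <cstdio>

int main() {
    // A falling crate: integrate velocity from gravity, then position
    // from velocity, once per fixed timestep (semi-implicit Euler).
    double y = 10.0, vy = 0.0;           // height (m) and vertical speed (m/s)
    const double g = -9.81, dt = 1.0 / 60.0;
    while (y > 0.0) {
        vy += g * dt;                    // accumulate acceleration
        y  += vy * dt;                   // move by the new velocity
    }
    std::printf("hit the ground at %.2f m/s\n", vy);  // roughly -14 m/s
}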

Sound
Game sounds are extremely important to the player, as they help make the game seem more genuine and give a set of audio cues to help during play. For instance, people who have played Zelda always know when they've opened a chest or found something useful, as the tune goes 'den-nah-nah-nah', compared to when they've found a rupee in a chest; and when an enemy is about to attack, alarming music immediately takes over the original soundtrack pieces set in each part of Hyrule until it fades away again.
Sound also gives characters a voice, which is why voice acting is important to games like Persona, Final Fantasy, Professor Layton etc., as they contain cinematic/anime cutscenes triggered throughout the game. Sound can define our actions and the game's reactions, so we don't get confused about the rules.

Networking
Usually we like to keep updated with the world and tell everybody we know what we're doing throughout the day, while hearing what our friends have done through social networking sites. Since most consoles now let us connect to the internet and even have famous networking sites like Facebook, Twitter and Bebo integrated into the system, we can play games and stay updated at the same time. Networking can handle game interaction over the internet easily nowadays, which is what MMO and MMORPG games like World of Warcraft, Howrse and Pottermore rely on. It means players across the world control their own personal avatars and interact with each other, so the game doesn't need as many NPCs as a console game normally would. It also means we receive updates and DLC, patches for our consoles and games, and messages from across the world.
