Introduction
Right now, we are at the start of an AI revolution, and you may have heard people talk about AI threatening to take over our jobs. Most people assume that the arts and creative fields are beyond the reach of AI, but they couldn't be more wrong.
But what about creative AI doing something genuinely useful, something that could save millions of dollars and move an entire industry forward by leaps and bounds? That is exactly what is happening: AI's creativity is now helping mobile app design companies with 3D visual art, and it could revolutionize video game production and graphics.
In this blog, we will take a look at some ways in which the AI revolution is seeping into gaming.
Real-Time Ray Tracing
Since Crysis in 2007, the gaming industry hasn't seen a major step forward in the visual quality of games. Yes, games look marginally better each year, but there has been no incredible leap forward. That is about to change with a technique called real-time ray tracing.
Ray tracing, without going into much detail, is a technique that simulates the way light reflects off and interacts with surfaces in the real world. It does this by calculating the path each ray of light would take through a particular scene.
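To make that idea concrete, here is a minimal sketch, not any engine's actual code, of the core calculation a ray tracer repeats billions of times: testing whether a single ray of light hits a surface, in this case a sphere.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the first sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c  # direction is assumed normalized, so a = 1
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

# Cast one ray straight down the z-axis at a sphere 5 units away.
hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(hit)  # 4.0 -- the ray strikes the near surface of the sphere
```

A real renderer then bounces the ray off the hit point toward lights and other objects, which is exactly why the workload explodes and dedicated hardware becomes necessary.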
With ray tracing, game developers finally have the element of realism that was missing from recent games. Real-time performance was achieved by producing hardware tailored for AI to do all of the heavy lifting. This technology is called NGX, and it uses processors custom-built for matrix multiplication, the mathematical heart and soul of modern AI.
With this, the hardware-accelerated AI learns how light should bounce and simulates it, placing the paths of light where it predicts they should go. The result is a realistic, real-time image that, for the first time, reaches the consumer level.
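Matrix multiplication itself is simple to state; what the custom processors contribute is performing millions of these products in parallel. A plain-Python sketch of the operation at the heart of every neural network:

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

# A neural-network layer is essentially activations x weights:
activations = [[1.0, 2.0]]            # one input sample with two features
weights = [[0.5, -1.0], [0.25, 1.0]]  # a 2x2 weight matrix
print(matmul(activations, weights))   # [[1.0, 1.0]]
```

Every layer of a trained network is some variation of this one loop, which is why building silicon around it pays off so dramatically.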
Not long ago, ray tracing was integrated into development tools like Unreal Engine, and to highlight just how much of a difference it makes, there are plenty of videos on YouTube showing even the simplest of games, like Minecraft, looking more realistic than ever before.
Machine Learning and High-Quality Motion
AI is also being used to create more realistic smoke and fluid simulations without the heavy processing that used to come with them. On top of this, machine learning systems have been developed that are trained by watching a wide variety of motion capture clips showing different kinds of movement.
Such a system then generates animation based on the motions it has learned, anything from a jog to a run. Even the motion of waves crashing on a rock can be simulated and reproduced accurately in real time.
To train such a system, game developers first feed it long sequences of raw locomotion data at a variety of speeds, facing directions, and turning angles. The motions of stepping, climbing, and running over different obstacles are fed in the same way, captured across different terrains and conditions.
The result is that, using a game controller or any element in the virtual environment, one can get the character to crouch and move under obstacles in a wide variety of ways. This approach takes the tedious work out of creating realistic movement for video game characters.
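The production systems described above use trained neural networks, but the underlying idea, matching the player's input against a library of captured motions, can be illustrated with a toy nearest-neighbor lookup. All clip names and numbers below are hypothetical, not taken from any real system:

```python
def closest_clip(query_speed, query_turn, clips):
    """Pick the capture clip whose speed/turn best matches the controller input."""
    return min(clips, key=lambda c: (c["speed"] - query_speed) ** 2
                                    + (c["turn"] - query_turn) ** 2)

# Hypothetical library of captured locomotion clips.
clips = [
    {"name": "walk",   "speed": 1.5, "turn": 0.0},
    {"name": "jog",    "speed": 3.0, "turn": 0.0},
    {"name": "sprint", "speed": 6.0, "turn": 0.0},
    {"name": "pivot",  "speed": 1.0, "turn": 1.2},
]

# Player pushes the stick for a medium run with a slight turn:
print(closest_clip(3.2, 0.1, clips)["name"])  # jog
```

A neural network generalizes this by blending and generating poses rather than just selecting the nearest clip, but the input-to-motion mapping is the same shape of problem.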
AI’s Impact on Cost of Production
In a free market, whatever is cheaper, faster, and better eventually becomes the standard, and the same applies to the video game industry. It is no secret that the cost of making video games is going up, but why is this?
Once upon a time, major games could be made by a small number of people, because the assets, the 3D components of the game, were simple to create. But as games strove to chase realism, those components became ever more complex. It has got to the point where it takes a large team and a blockbuster movie budget to complete a major video game title.
One detailed 3D building can take 22 hours just to get started: 12 hours for the modeling stage and another 10 hours for texturing, and the job is far from complete at that point. By the end of the process, a single building within a game can represent 44 to 88 hours of work.
If the wage rate is $60 per hour, then the average cost of one building comes to around $4,000. Now multiply that out and consider how much it costs to create a complete triple-A, open-world environment.
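The arithmetic behind that figure is easy to check. The 500-building world at the end is a hypothetical number for illustration, not a figure from the article:

```python
HOURLY_RATE = 60                  # dollars per hour, from the figures above
HOURS_LOW, HOURS_HIGH = 44, 88    # total hours for one finished building

avg_hours = (HOURS_LOW + HOURS_HIGH) / 2
cost_per_building = avg_hours * HOURLY_RATE
print(cost_per_building)          # 3960.0 -- roughly the $4,000 quoted above

# Scale up to a hypothetical open world with 500 unique buildings:
print(500 * cost_per_building)    # 1980000.0 -- nearly $2 million on buildings alone
```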
Creating a new 3D asset, be it a car or a building, requires the designer to start almost completely from scratch. This is where AI's neural networks are well suited to a kind of workflow called procedural content generation.
In this approach, a neural network is trained on vast volumes of data to create realistic buildings, cars, or audio, and over time it learns what looks good and what is realistic. With procedural content generation, a game designer simply types in certain parameters, the AI generates a whole batch of options, and the designer picks one.
This is where the cost advantage comes in: when a 3D asset is needed, instead of starting from scratch, the designer can let the AI do the heavy lifting, saving countless development hours.
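A trained generative network is beyond the scope of a blog snippet, but the parameters-in, options-out workflow can be sketched with a simple seeded generator. Every parameter name below is hypothetical, chosen just to illustrate the shape of the interaction:

```python
import random

def generate_buildings(seed, count, min_floors, max_floors, styles):
    """Generate candidate building specs from a few designer-chosen parameters."""
    rng = random.Random(seed)  # seeded, so the same inputs reproduce the same options
    return [{"floors": rng.randint(min_floors, max_floors),
             "style": rng.choice(styles),
             "width_m": round(rng.uniform(8, 30), 1)}
            for _ in range(count)]

# The designer types in parameters and picks from the generated options.
options = generate_buildings(seed=42, count=3, min_floors=2, max_floors=12,
                             styles=["brick", "glass", "concrete"])
for opt in options:
    print(opt)
```

A real procedural system replaces the random choices with a network that has learned which combinations look plausible, but the designer-facing loop, tweak parameters and pick a candidate, stays the same.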
Conclusion
This blog shines a light on a side of AI that really isn't talked about much. AI is not only creative but is also proving to be practically useful. Combining all of the elements we have observed here, it is clear that AI will have a drastic effect on the game development industry.