Computer graphics under the microscope: from polygons to procedural generation

Published by Nazar Yavorskyi

Realistic computer games are taken for granted today, but it wasn't always that way. They have come a long way from the first image on an oscilloscope to procedural and neural generation. The word "graphics" actually hides a huge stack of technologies that together give us whole worlds. In this article, we will talk about them in more detail.

How it all started

The first video game, Tennis for Two, was created by the American physicist William Higinbotham. He developed it in three weeks and never imagined it would become so popular, yet already at its first public demonstration on October 18, 1958, a huge line of people formed to see the novelty. Tennis for Two laid the foundation for computer games, and it used an oscilloscope as its display. The game was perceived as a real hit. Today it is kept at Brookhaven National Laboratory.

Some people may laugh at the primitivism of Tennis for Two, but before doing so, I advise you to look at the documentation left by the developers of that era. It turns out that behind the simplicity of the form hides an extraordinary design talent.

The first 3D game is considered to be Maze War, released in 1974. It was created by Steve Colley at NASA's Ames Research Center for the Imlac PDS-1. It is a first-person shooter set in a maze with rectangular walls, a fixed perspective, and hidden lines removed.

The Imlac PDS-1 on which Maze War ran was a 16-bit graphics minicomputer with 8 KB of memory. Interestingly, it used a so-called vector display: the image is built by moving the beam arbitrarily across the screen rather than scanning it line by line, as in the raster CRT monitors of the time.

Why bring up this old story? It's simple. When we come across an old project and automatically label it as having "outdated graphics", it's worth remembering that development is a gradual process, and what seems outdated now looked like the future yesterday.

Between the first game on an oscilloscope and the 3D graphics we're used to lies a gulf of time and technology: the era of 2D, which we won't cover in this article, although many of its elements have migrated into 3D.

The sum of 3D technologies

The world of 3D graphics, and therefore the gaming world, is full of various terms. When it comes to graphics engines, most people don’t really understand what the developers are talking about when they promise to achieve true photorealism.

Of course, to fully understand all these processes, you would need to be a serious specialist who didn't skip higher and discrete mathematics at university. Nevertheless, we will try to cover the basic technologies of 3D graphics at least superficially, to lift the veil of secrecy and see what is "under the hood" of a typical game.

Polygons

In the mathematical world, a point is just a term for a location in geometric space. Since it has no physical size, it is convenient for defining the exact position of any object. In the 3D world this information is extremely important: it determines whether your character stays in the right place and does not fall through the textures, and it is the basis for calculating start and end coordinates.

You might find it difficult to understand what is actually happening in this scene. That’s because it consists only of lines

Each point in a scene involves a large number of calculations. If we process points in groups, especially in the form of triangles, we can achieve significant performance gains. And so we come to the conclusion that a triangle is nothing more than three segments connecting three points, called vertices. Since we want to see a 3D world, we need a coordinate system with three values (x, y, z). All vertex data is stored in a continuous block of memory called a vertex buffer. In a rendering API such as Direct3D, the shapes formed from vertices are called primitives and are represented as lists of points, lines, and triangles. If they are combined correctly, for example as strips, several primitives can share the same vertices.
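To make this a bit more concrete, here is a minimal sketch in Python (the arrays and names are illustrative, not tied to any particular engine or API) of how a vertex buffer and a list of triangle primitives might be laid out:

```python
# A "vertex buffer": one continuous list of (x, y, z) positions.
vertex_buffer = [
    (0.0, 0.0, 0.0),   # vertex 0
    (1.0, 0.0, 0.0),   # vertex 1
    (1.0, 1.0, 0.0),   # vertex 2
    (0.0, 1.0, 0.0),   # vertex 3
]

# An "index buffer": each triple of indices is one triangle primitive.
# The two triangles share vertices 0 and 2, so this square needs only
# four stored vertices instead of six.
index_buffer = [
    (0, 1, 2),
    (0, 2, 3),
]

for tri in index_buffer:
    corners = [vertex_buffer[i] for i in tri]
    print("triangle:", corners)
```

Sharing vertices through indices is exactly why triangles are such a convenient primitive: the GPU transforms each vertex once and reuses the result for every triangle that references it.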

This is how we slowly arrive at the concept of a polygon. Most often, triangles are used as polygons, although polygons with more sides are sometimes used as well. The shader processors of the video card calculate the positions of the polygons' vertices and then connect them with straight lines. Many such polygons are used to draw the wireframe of a 3D model. Textures, effects, and lighting are then applied to it, resulting in the image we see in the game.

High-poly and low-poly models

Even a child will tell you that something is wrong with a character or object when they see a low-poly model. But the reason for this is not a conspiracy of cunning developers; it is the performance of the hardware the game is running on.

Twenty-five or more years ago, video cards were simply unable to process a large number of polygons, so in an early Need for Speed, for example, you could see rectangular wheels on some car in the background.

Textures

Model without texture

Textures are essentially digital "wallpaper" for 3D objects that makes them look alive and realistic. Imagine playing a game without textures: all the trees, walls, and characters would look like smooth plastic figures. Textures are like makeup for the virtual world, because without them everything looks boring and unnatural. They can range from simple raster images to complex procedural patterns generated by the computer from algorithms.

In The Last of Us Part II, for example, the textures are so detailed that you can see every scratch on a weapon or every crease in the characters' clothes. The developers at Naughty Dog used thousands of textures to make the post-apocalyptic world as believable as possible.

To create them, they scanned real people to capture the smallest details, such as pores and wrinkles. This makes the characters look almost as if they were alive.

High and low resolution textures

Textures are not only about beauty, but also about optimization. Imagine that in a game like Assassin's Creed Valhalla, with its huge open world, every blade of grass or pebble had to be drawn separately: the computer simply would not be able to handle it! That's why developers often use repeating textures and procedural methods to create details on the fly.
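As a rough illustration of how a small repeating texture can cover a large surface, here is a sketch in Python; the tiny "texture" and the wrap logic are simplified assumptions for the example, not the actual pipeline of any specific game:

```python
# A tiny 4x4 "texture": each number stands in for a pixel colour.
texture = [
    [10, 20, 10, 20],
    [20, 30, 20, 30],
    [10, 20, 10, 20],
    [20, 30, 20, 30],
]
TEX_SIZE = 4

def sample_wrapped(u, v):
    """Sample the texture with repeat (wrap) addressing.

    u and v can go far beyond 1.0; the modulo brings them back into
    range, so the same small image tiles endlessly across a surface.
    """
    x = int(u * TEX_SIZE) % TEX_SIZE
    y = int(v * TEX_SIZE) % TEX_SIZE
    return texture[y][x]

# A "large" strip of 16 samples is covered by the 4x4 texture.
row = [sample_wrapped(u / 4.0, 0.0) for u in range(16)]
print(row)
```

The same idea, with far bigger images and smarter blending to hide the seams, is what lets open-world games stretch a handful of ground textures over kilometres of terrain.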

Textures in DOOM

Interestingly, the first games, such as Doom in 1993, had very simple textures due to the limitations of technology, but even then they created an atmosphere. Today, textures are a whole art form that combines technology and creativity so that we can immerse ourselves in virtual worlds.

Tessellation

Modern graphics cards can process tens of millions of polygons per second. That’s why game models are becoming more and more realistic. At the beginning of the century, before the advent of various relief texturing technologies, the only way to increase the geometric complexity of a scene was to add polygons to it, which negatively affected the performance of graphics processors of those years.

On the other hand, creating highly detailed models for rendering is a very painstaking process, and if the model sits at some distance from the camera, all that detail is simply wasted.

Somehow, you need to tell the GPU how to split a large flat primitive, such as a single triangle, into a set of smaller triangles located inside the original one. This process is called tessellation.
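A minimal sketch of the idea, one level of midpoint subdivision written in plain Python rather than on the GPU, might look like this:

```python
def midpoint(a, b):
    """Point halfway between two 3D vertices."""
    return tuple((a[i] + b[i]) / 2.0 for i in range(3))

def tessellate(triangle):
    """Split one triangle into four smaller ones using edge midpoints."""
    a, b, c = triangle
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [
        (a, ab, ca),
        (ab, b, bc),
        (ca, bc, c),
        (ab, bc, ca),   # the central triangle
    ]

flat_triangle = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
for small in tessellate(flat_triangle):
    print(small)
```

Real tessellation hardware also displaces the new vertices (for example along a height map) so that the extra triangles actually add visible relief, and it can vary the subdivision level with distance from the camera.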

ATI TruForm

ATI was the first to tackle this problem. In 2001 it released TruForm, essentially the first implementation of tessellation in computer games. It was used in games such as Half-Life, Counter-Strike, Quake, Return to Castle Wolfenstein, and Unreal Tournament. Tessellation technology evolved over time and was eventually made a standard part of DirectX 11.

Shaders

Modern computer games are characterized by realistic graphics: high-quality shadows, reflections, dynamic lighting, and many other effects. Behind all this is a technology known as shaders. Shaders are special programs that process data about pixels, vertices, and textures to create the final image. There are several types of them:

  • Pixel shaders. Run after vertex processing and work with individual pixels. They determine the color of an object's surface and are responsible for lighting, texturing, and visual effects.
  • Vertex shaders. Process the vertices of 3D objects. They convert vertex coordinates from model space to screen space and also calculate normals and other attributes.
  • Compute shaders. Used for calculations that are not directly related to graphics, for example data processing, physics simulation, or general mathematical calculations.
  • Geometry shaders. Work with primitives (for example triangles, points, and lines). They take already processed vertices and create new ones, for example for shadows, volumetric lighting, or geometry simplification.

Vertex shaders

Vertex shaders are programs that run on the graphics processor to process data about the vertices of geometric objects in 3D graphics. They are responsible for transforming vertex coordinates from model space to screen space, as well as for calculating parameters such as position, normals, texture coordinates, and other attributes, which are then passed to the next stages of the graphics pipeline.
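To make this more tangible, here is a tiny CPU-side sketch in Python of the kind of transform a vertex shader performs; the simplified translation plus perspective divide stands in for the real 4x4 matrix math, and all the parameters are invented for the example:

```python
def vertex_shader(position, model_offset, screen_w, screen_h, focal=1.0):
    """Toy stand-in for a vertex shader: model space -> screen space."""
    # Model -> view space: move the object in front of the camera.
    x = position[0] + model_offset[0]
    y = position[1] + model_offset[1]
    z = position[2] + model_offset[2]

    # Perspective divide: points farther away shrink toward the centre.
    px = focal * x / z
    py = focal * y / z

    # Map the [-1, 1] range onto pixel coordinates.
    sx = (px + 1.0) * 0.5 * screen_w
    sy = (1.0 - (py + 1.0) * 0.5) * screen_h
    return sx, sy

# A vertex half a unit up and to the right, three units in front of the camera.
print(vertex_shader((0.5, 0.5, 0.0), (0.0, 0.0, 3.0), 800, 600))
```

A real shader does this for millions of vertices per frame, in parallel, which is exactly why the work is moved to the GPU.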

Vertex shaders allow developers to create complex deformation, animation, and lighting effects at the geometry processing stage, which greatly enhances visualization capabilities. It is one of the key tools in modern computer graphics that provides flexibility in working with 3D objects.

Vertex shaders also allow you to process skeletal animation, where each vertex of the character model is linked to a specific bone in the skeleton, and these vertices dynamically change their position during movement. This creates smooth and natural movements that look realistic even with complex actions such as running, jumping, or fighting. Without vertex shaders, such effects would require significantly more computing resources or be impossible in real time.
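The core of skeletal skinning is a weighted mix of bone transforms applied per vertex. A heavily simplified sketch (2D, two bones rotating around the origin, made-up weights) could look like this:

```python
import math

def rotate(point, angle):
    """Rotate a 2D point around the origin."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * point[0] - s * point[1], s * point[0] + c * point[1])

def skin_vertex(vertex, bone_angles, weights):
    """Linear blend skinning: mix the vertex position as transformed
    by each bone, weighted by how strongly that bone influences it."""
    x, y = 0.0, 0.0
    for angle, w in zip(bone_angles, weights):
        bx, by = rotate(vertex, angle)
        x += w * bx
        y += w * by
    return (x, y)

# A vertex near an "elbow" is influenced half by the upper-arm bone
# (kept straight) and half by the forearm bone (bent by 45 degrees).
print(skin_vertex((1.0, 0.0), [0.0, math.pi / 4], [0.5, 0.5]))
```

In a game engine the same blend runs in the vertex shader with full bone matrices, so the skin bends smoothly at joints instead of breaking apart.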

Shaders in water surface visualization.

Another example is the simulation of natural phenomena, such as the movement of grass or leaves in the wind, as seen in games like Horizon Zero Dawn. Vertex shaders allow you to dynamically change the positions of the vertices of plant models depending on external factors such as wind direction and strength, creating a swaying effect.

This adds depth and realism to the scenes, as each blade of grass or branch moves independently, reacting to the virtual environment. This approach saves resources compared to using pre-recorded animations for each object.

Vertex shaders are also often used to create surface deformation effects, such as simulating waves on water or the deformation of the ground during battles. Thanks to real-time vertex processing, surfaces can change their shape depending on player actions or events in the game, creating a more interactive and lively world. Such effects are achieved without the need to create separate models for each state of the object, which makes vertex shaders an indispensable tool for modern games with a high level of detail.
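A rough sketch of such a displacement, a sine-based sway applied per vertex on the CPU instead of in a real vertex shader, is shown below; the wind parameters are invented for illustration:

```python
import math

def sway(vertex, time, wind_strength=0.2, wind_speed=2.0):
    """Displace a vertex sideways by a sine wave.

    Higher vertices (larger y) sway more, so the base of a grass blade
    stays anchored while its tip moves the most.
    """
    x, y, z = vertex
    offset = wind_strength * y * math.sin(wind_speed * time + x + z)
    return (x + offset, y, z)

blade_tip = (0.0, 1.0, 0.0)
for t in (0.0, 0.5, 1.0):
    print(t, sway(blade_tip, t))
```

Because the formula depends only on the vertex position and the current time, the GPU can evaluate it independently for every blade of grass without storing any animation data.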

Pixel shaders

Pixel shaders are programs that are executed by the video chip during rasterization for each pixel of the image. They perform texture sampling and mathematical operations on the color and depth (Z-buffer) values of pixels. All pixel shader instructions are processed for each pixel separately, after the geometry transformation and lighting stages are complete. What are they used for?

  • Stylized lighting. In many games, physically accurate lighting is less interesting from a style perspective than a deliberately stylized look, and pixel shaders make that stylization possible.
  • Deformation effects. Realistic waves, rain effects, or a surface denting where a bullet hits.
  • Object animation. Games look more alive and interesting when plants react to a character or trees sway in the wind. Vertex shaders are also used for this purpose.

As a result of its work, the pixel shader produces the final color value of the pixel and the Z-value for the next stage of the graphics pipeline, blending. The simplest example of a pixel shader is ordinary multitexturing, that is, blending two textures and applying the result to a pixel.
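As a toy illustration of multitexturing, here is a Python sketch that blends two "textures" per pixel; the colours and the blend factor are arbitrary examples, not values from any real game:

```python
def blend_pixel(base, detail, factor=0.5):
    """Mix two RGB colours, as a pixel shader would when combining
    a base texture with a detail or lightmap texture."""
    return tuple(
        round(b * (1.0 - factor) + d * factor)
        for b, d in zip(base, detail)
    )

base_texture   = [(120, 90, 60), (130, 95, 65)]   # e.g. a wood colour
detail_texture = [(255, 255, 255), (40, 40, 40)]  # e.g. a lightmap

final_pixels = [blend_pixel(b, d) for b, d in zip(base_texture, detail_texture)]
print(final_pixels)
```

A real pixel shader does this kind of arithmetic for every pixel on screen, every frame, which is why even "simple" effects add up to serious GPU work.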

With the advent of pixel shaders in DirectX, not only multitexturing but much more became possible. Today, the capabilities of pixel shaders have reached a level where they can even be used to implement ray tracing.

Procedural textures

Procedural textures, unlike conventional textures, are generated algorithmically using mathematical functions rather than being created manually or from raster images. They allow you to create detailed surfaces, such as wood, marble, stone, or clouds, without having to store large amounts of graphic data.

Thanks to this, procedural textures save memory and provide high flexibility, as they can be scaled, changed, or adapted to different objects in real time. This technology has become a real revolution in computer graphics, especially in games and movies, where detail and optimization are important.
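To show how a surface pattern can come from a formula instead of a stored image, here is a small Python sketch that generates marble-like stripes procedurally; the exact formula is just one common textbook choice, not the method any particular game uses:

```python
import math

def marble(u, v, stripes=6.0, turbulence=2.0):
    """Procedural 'marble' value in [0, 1] computed from coordinates.

    No image is stored anywhere: the pattern exists only as a formula,
    so it can be sampled at any resolution or scale.
    """
    wave = math.sin(stripes * u + turbulence * math.sin(stripes * v))
    return 0.5 + 0.5 * wave

# Print a tiny 8x8 sample of the pattern as characters.
for y in range(8):
    row = ""
    for x in range(8):
        value = marble(x / 8.0, y / 8.0)
        row += "#" if value > 0.5 else "."
    print(row)
```

Because the texture is a function rather than a bitmap, zooming in never reveals pixels, and changing a couple of parameters produces an entirely new material for free.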

No Man's Sky

One of the most striking examples of the use of procedural textures is the creation of realistic landscapes in the game No Man's Sky. Procedural algorithms generate textures for planetary surfaces, such as grass, sand, or rocks, depending on the type of biome and environmental conditions. In this way, unique worlds are created for each planet without the need to manually draw or store thousands of textures.

Another interesting example can be seen in Minecraft, where procedural textures are used to create the effect of pixelated detail on surfaces such as wood or stone, even at low resolution.

Although the game has stylized graphics, procedural algorithms allow adding small details that mimic natural textures, such as cracks in stone or fibers in wood. This creates a feeling of greater depth and realism without overloading the system, and allows the game to run even on weak devices.

What’s next?

The new generation of NVIDIA RTX 50 graphics cards features neural shader technology, which we have already written about before. Generative AI will help game developers dynamically create diverse landscapes, implement real-time generation of more complex NPC behavior, and much more. It will also change professional 3D modeling applications in the near future: developers will be able to create designs much faster than ever before, based on predefined criteria.

As you can imagine, the topic of 3D graphics in games is not only fascinating, but also very extensive. We will try to tell you more about it in a series of articles that will cover both the theoretical part and go back in time to recall the legendary game engines that opened the door to an incredible world for all gamers.