How graphics are generated, or rather, how a graphics card works, is a mystery to many users. However, the GPU follows a well-defined series of steps to produce the graphics you see on screen, and we will explain them here. Want to know what they are? Let’s continue.
First of all, you should know the fundamental components involved in generating graphics on the graphics card or GPU.
- Graphics API is a set of functions and protocols that allow software developers to interact with the graphics card (GPU) and create graphics, images and animations in applications and games. Some examples of graphics APIs are OpenGL, Vulkan and Direct3D (part of DirectX).
- Graphics Engine is software that uses a graphics API to process and render images, graphics, and scenes in applications and games. Some examples of well-known engines are Source, Unity, Rockstar Advanced Game Engine (RAGE), CryEngine, Unreal Engine, id Tech (the engine behind DOOM), Godot, etc.
- GPU driver is software that acts as an intermediary between the operating system and the graphics card, translating API calls into commands the hardware understands.
Thanks to these elements, a graphics app can send a series of commands to the GPU so that they are processed and the appropriate graphics are displayed.
How does the graphics card work to create the images?
From the moment the commands to draw a graphic are sent until the finished frame reaches the framebuffer and appears on the screen or monitor, a series of basic steps occur that you should know. The thing is, all the graphics you can see, the operating system interface, programs, video games, videos, etc., are nothing more than texture, color, lighting, geometric data, and so on. All of this is processed by the GPU in fractions of a second to display dozens of frames per second (FPS). And it all starts in the simplest way…
The CPU, through the software stack (applications, APIs, and drivers), issues drawing commands and data via the graphics API and sends them to the GPU for processing. Once the vertices have been sent to the GPU and are ready to be processed, the graphics API manages the so-called “rendering pipeline”, which consists of six stages:
- Per-Vertex Operation: This is the initial phase of the process, where the vertices of the geometry are processed by a Vertex Shader, which runs on a shading unit of the GPU. This computing unit performs mathematical operations with integer and floating-point numbers. Each vertex is multiplied by a transformation matrix, converting it from the 3D model coordinate system to a projective (clip-space) coordinate system, which is essential for further processing.
- Primitive Assembly: after the vertex shader has transformed the vertices, they are assembled into primitives, which consists of connecting the vertices in a specific order; for example, three vertices connected together form a triangle.
- Primitive Processing: before the generated primitive advances to the next stage of the process, trimming or clipping must be performed. This means that a generated scene can be larger than the screen, but only what falls within the visible region, known as the “view volume”, will be displayed. During the clipping process, everything outside the visible area is clipped away and ignored in the following stages.
- Rasterization: everything done so far cannot be displayed on the screen directly, since pixels are required to form the final frame. It is in this rasterization stage that those pixels are generated. Only primitives that survived the clipping stage are processed in this phase, where they are converted into fragments, pixel-sized candidates that will be used to build the final image.
- Fragment Processing: The fragments produced by rasterization are now processed by another shader, the Fragment Shader, which is responsible for applying color and texture to each fragment. Essentially, the Fragment Shader determines how individual pixels will look in the final image by applying the appropriate colors and textures based on the properties of the scene.
- Per-Fragment Operation: In this phase, the fragments that have already been colored and textured go through final per-fragment tests (such as depth and stencil tests) and blending, and the surviving pixels are written to the framebuffer. In this buffer, the pixels are grouped into the default framebuffer, which will be the final image displayed on the screen.
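The six stages above can be sketched in miniature as a tiny software rasterizer. This is a simplified illustration under many assumptions, not how a real GPU or any real API works internally: every function name here is invented for the example, the transformation matrix is the identity, the “fragment shader” is just a flat red color, and the per-fragment operation is reduced to the framebuffer write.

```python
# Per-Vertex Operation: multiply each homogeneous vertex (x, y, z, w)
# by a 4x4 transformation matrix (identity here, for simplicity).
def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

IDENTITY = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Primitive Assembly: these three vertices, taken in order, form one triangle.
verts = [(-0.5, -0.5, 0, 1), (0.5, -0.5, 0, 1), (0.0, 0.5, 0, 1)]
clip = [mat_vec(IDENTITY, v) for v in verts]

# Primitive Processing: a vertex is inside the view volume when
# |x|, |y|, |z| <= w (full polygon clipping is omitted for brevity).
def inside(v):
    x, y, z, w = v
    return abs(x) <= w and abs(y) <= w and abs(z) <= w

assert all(inside(v) for v in clip)  # our triangle is fully visible

# Rasterization: an edge function tells on which side of edge a->b point p lies.
def edge(a, b, p):
    return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])

W = H = 8  # a tiny 8x8 "screen"
framebuffer = [[(0, 0, 0)] * W for _ in range(H)]

# Map normalized device coordinates [-1, 1] to screen pixels.
def to_screen(v):
    return ((v[0] / v[3] + 1) * 0.5 * W, (1 - (v[1] / v[3] + 1) * 0.5) * H)

a, b, c = (to_screen(v) for v in clip)
for y in range(H):
    for x in range(W):
        p = (x + 0.5, y + 0.5)  # sample at the pixel center
        w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
        # The pixel is covered when all edge functions share a sign.
        if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
            # Fragment Processing: a trivial "shader" outputs flat red.
            # Per-Fragment Operation: write the shaded pixel to the framebuffer.
            framebuffer[y][x] = (255, 0, 0)
```

After the loop, pixels covered by the triangle (such as the center of the grid) hold red, while pixels outside it keep the black clear color, which is exactly the role the framebuffer plays in the last stage.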
Finally, the generated frame is transmitted through the output of the GPU, which is connected to the screen, allowing the image to be displayed. The frame rate, measured in FPS (frames per second), depends on the capacity of the GPU, while the screen’s refresh rate (in Hz) limits how many of those frames can actually be shown. Together, these determine how quickly frames update and, therefore, the fluidity and smoothness of the animation or visual content displayed on the screen.
The pixels are then mapped onto the display matrix or panel, switching each pixel on, off, or to the required color, to finally display the triangle and its texture…