Fundamentals of 3D Computer Graphics in 2018

One of the most basic functions of mobile devices is also one of the hardest to explain: how your phone or tablet translates the binary 1s and 0s in its memory into 3D graphics on its screen. While we could never hope to cover all of computer graphics in a single article, we can learn the fundamentals of what makes text appear on your phone’s screen, or what allows your car’s infotainment display to render turn-by-turn directions for you. The whole process starts with connecting dots.

3D Computer Graphics Sample

Connecting The Dots

At some point, probably during your childhood, you’ve completed a ‘connect the dots’ worksheet. You start with a sheet with a series of numbered dots spread across the page. In their original form, they don’t create a discernible image. But as you connect the dots in the correct sequence, a picture starts to form. Computer graphics work in a similar manner.

Objects on your screen are composed of points called vertices. The computer stores lists of vertices in a specific order. However, instead of forming a flat 2D image on a page, these points are linked together in 3D space to form triangles.

This is one of the essential concepts underlying modern computer graphics: Nearly everything is made of triangles. It doesn’t matter if it’s a space marine, an architectural model of a building, or a sports car. All of them are stored as lists of triangles.
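To make that concrete, here is a minimal sketch in C++ of how a renderer might store this data. The type and field names are our own choices for illustration, not taken from any particular engine:

```cpp
#include <vector>

// A vertex is simply a point in 3D space.
struct Vertex {
    float x, y, z;
};

// A triangle doesn't store points itself; it refers to three entries
// in the vertex list by index, so shared corners aren't duplicated.
struct Triangle {
    unsigned int a, b, c;
};

// A mesh -- a space marine, a building, a sports car -- ultimately
// boils down to these two lists.
struct Mesh {
    std::vector<Vertex>   vertices;
    std::vector<Triangle> triangles;
};
```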

Vertices in computer graphics explained

“The best option,” said graphics programmer John Chapman in a Computerphile interview, “is always to divide [objects] into triangles because any 3D surface in the universe is able to be approximated by fitting together triangles.”

Let’s take a two-dimensional square as an example. A typical square is made of four points, each one representing a corner of the figure. The computer maintains this list of vertices in memory. It connects the dots in the correct order, forming two adjacent triangles. You can see in the diagram how the vertices and their coordinates are stored.
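Using the hypothetical `Mesh` type sketched above, the square could be stored like this (the coordinates are purely illustrative):

```cpp
// The square's four corners, and the two triangles that connect them.
// Triangles {0, 1, 2} and {0, 2, 3} share the diagonal edge between
// vertices 0 and 2.
Mesh makeSquare() {
    Mesh square;
    square.vertices = {
        {0.0f, 0.0f, 0.0f},  // 0: bottom-left
        {1.0f, 0.0f, 0.0f},  // 1: bottom-right
        {1.0f, 1.0f, 0.0f},  // 2: top-right
        {0.0f, 1.0f, 0.0f},  // 3: top-left
    };
    square.triangles = {
        {0, 1, 2},  // lower-right half
        {0, 2, 3},  // upper-left half
    };
    return square;
}
```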

More complex models and shapes can have tens of thousands or even hundreds of thousands of vertices defining their shape. Trying to enter all these vertices by hand would be an exercise in patience, so artists typically use specialized tools like Maya, 3ds Max, or Blender instead. The same programs also usually include facilities for 3D animation, which works by altering the coordinates of the model’s vertices over time to produce motion.
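As a rough illustration of the principle (not how any of those packages actually work internally), an animation step simply produces new vertex positions for each frame:

```cpp
#include <cmath>

// Produce this frame's pose from the unmodified base mesh by nudging
// every vertex up and down over time. Real tools interpolate between
// artist-authored keyframes or deform vertices with a skeleton, but
// the end result is the same: motion comes from moving vertices.
Mesh animate(const Mesh& base, float timeSeconds) {
    Mesh posed = base;
    for (Vertex& v : posed.vertices) {
        v.y += 0.05f * std::sin(timeSeconds);
    }
    return posed;
}
```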

Understanding Texture Projection

Even the most highly detailed model will appear off if it doesn’t have a convincing skin put over it. This is where textures come in. Textures are 2D images that are mapped to all of a model’s triangles using what are referred to as UV coordinates. In other words, the 2D texture is projected onto the 3D mesh of vertices, almost like wrapping paper drawn over a box. It’s a complicated process, but we can again turn to our example diagram to show how it works:

2D texturing explained

As the UV coordinates of the triangle change, the position of the texture mapped to the polygon changes as well. In this way, texture artists can map segments of their texture to the model. An interesting side effect of this method is that the raw textures often look distorted until they are mapped onto a 3D figure. Texture artists must also work within strict memory budgets that vary depending on the platform they’re targeting. This brings us to one of the most crucial aspects of 3D graphics development: the hardware that makes it all possible.
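Extending the earlier sketch, a textured vertex carries a UV coordinate alongside its position. The pairing below, which pins each corner of the square to a corner of the texture image, is illustrative:

```cpp
// Each vertex now records where it sits on the 2D texture as well as
// where it sits in 3D space. (0,0) and (1,1) are opposite corners of
// the texture image.
struct TexturedVertex {
    float x, y, z;  // position in 3D space
    float u, v;     // position on the 2D texture
};

// The same square, with each corner pinned to a corner of the texture.
// Changing the u/v values slides or stretches the image across the mesh.
TexturedVertex texturedSquare[4] = {
    {0.0f, 0.0f, 0.0f,  0.0f, 0.0f},
    {1.0f, 0.0f, 0.0f,  1.0f, 0.0f},
    {1.0f, 1.0f, 0.0f,  1.0f, 1.0f},
    {0.0f, 1.0f, 0.0f,  0.0f, 1.0f},
};
```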

Standards and Practices

When reading 3D graphics literature, you’ll often encounter the notion that the GPU, or graphics processing unit, is a “black box.” Geometry, textures, and drawing commands are sent in, and pixels appear on the screen. The programmers and 3D artists aren’t privy to what occurs within the GPU. Instead, the results are guaranteed by a set of standards that hardware manufacturers agree to support. Some of the most common graphics APIs include:

  • OpenGL
  • OpenGL ES
  • Direct3D
  • Vulkan
  • Metal

Each of these standards has its own methods of rendering graphics to the screen. Depending on which platform developers are targeting, they might have to choose one framework over another, or support multiple frameworks to ensure cross-platform compatibility. While it’s becoming increasingly rare, some platforms resort to software-based rendering, meaning the bulk of rendering calculations are carried out by the device’s CPU. However, the majority of graphics processing is performed on dedicated graphics hardware, even in embedded systems.
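For a taste of what talking to one of these APIs looks like, here is a heavily abridged OpenGL sketch that reuses the textured square from earlier. Context creation, shader setup, and vertex attribute configuration are omitted; the OpenGL calls are real, but the surrounding names are our own:

```cpp
#include <GL/gl.h>  // in practice, use a loader such as GLEW or glad

GLuint vbo = 0, ibo = 0;

// One-time setup: copy the vertex and index data into GPU-side buffers.
void uploadSquare() {
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(texturedSquare), texturedSquare,
                 GL_STATIC_DRAW);

    GLuint indices[] = {0, 1, 2,  0, 2, 3};  // the same two triangles
    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices,
                 GL_STATIC_DRAW);
}

// Every frame: ask the GPU to draw the triangles. What happens inside
// the chip from here on is the "black box" described above.
void drawSquare() {
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);
}
```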

Further Reading

While this article covered a lot of ground, we’ve only discussed a sliver of 3D graphics programming as a whole. However, you now have a grasp of the fundamentals of computer graphics and why the low-level hardware that displays them is worth billions of dollars to the industry’s largest companies. If this crash course in 3D graphics has piqued your interest, we’ve assembled a list of much more in-depth explorations of computer graphics: