How does 3D scanning technology work?

Feb. 05, 2020

Starting out in the world of 3D scanning can be intimidating, but everything becomes clear once you take a moment to understand the technology behind it. From your own eyes (the original scanner!) to the latest 3D scanner on the market, here’s how it all works!

Types of scanners: 4+
Accuracy range: 0.1 – 5 mm
Resolution range: 1 – 1,000 cm

Dimensions of our world, for viewing, scanning, and more

Before we plunge into the fascinating universe of 3D scanning, with its legions of laser scanners, structured light scanners, design software, 3D models, and more, let’s take a minute to better understand what we’re talking about when it comes to the three dimensions that surround us and describe us wherever we go. Everyone knows that we live in a 3D world, even people who have no idea what a 3D scanner is. But what does a “3D world” actually mean? It means that the space around us has three dimensions, and that the position of anything can be described using three numbers, also referred to as parameters or coordinates. There are various ways to specify these three parameters, and the rules for doing so are known as a coordinate system.

fig.1a

The most widely used coordinate system is the Cartesian coordinate system.

When we talk about the width, height, and depth of things around us, we are using the terms of a Cartesian coordinate system – which can be either the right-hand system (RHS) or the left-hand system (LHS). The only difference between the two is the direction of the z-axis, the one that refers to the depth of something.
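
If you’d like to see that difference in action, here’s a minimal sketch (in Python, with NumPy chosen purely as an assumption about tooling) that checks the handedness of a set of axes with the cross product: in a right-hand system, x × y points along +z, while in a left-hand system it points the opposite way.

```python
import numpy as np

def handedness(x_axis, y_axis, z_axis):
    """Return 'right-handed' if z = x cross y, 'left-handed' if z = -(x cross y)."""
    cross = np.cross(x_axis, y_axis)
    if np.allclose(cross, z_axis):
        return "right-handed"
    if np.allclose(cross, -np.array(z_axis)):
        return "left-handed"
    return "not an orthogonal frame"

# Standard axes with z pointing "out of the screen" form a right-hand system
print(handedness([1, 0, 0], [0, 1, 0], [0, 0, 1]))   # right-handed
# Flipping the z-axis (depth pointing "into the screen") gives a left-hand system
print(handedness([1, 0, 0], [0, 1, 0], [0, 0, -1]))  # left-handed
```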

fig.1b

fig.1c

There are a few other coordinate systems, including the spherical coordinate system (left) and the cylindrical coordinate system (right).

What all 3D coordinate systems share in common is that they use three independent parameters to unambiguously describe the position of any point in space, whether it lies on a surface or not. This seems like a simple enough point, but when it comes to 3D scanners and scanning, it’s basic, fundamental principles such as this that will truly help you grasp and successfully work with this world-changing technology.
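
As a concrete illustration of “three numbers describing one point,” here is a small Python sketch (the function names are just for this example) that converts the same point between Cartesian and spherical coordinates. Whichever system you pick, exactly three parameters are needed.

```python
import math

def cartesian_to_spherical(x, y, z):
    """Convert Cartesian (x, y, z) to spherical (r, theta, phi).

    r     - distance from the origin
    theta - polar angle measured from the +z axis
    phi   - azimuthal angle in the x-y plane
    """
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r > 0 else 0.0
    phi = math.atan2(y, x)
    return r, theta, phi

def spherical_to_cartesian(r, theta, phi):
    """Convert spherical (r, theta, phi) back to Cartesian (x, y, z)."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z

# The same point, described by two different sets of three parameters
print(cartesian_to_spherical(1.0, 1.0, 1.0))
print(spherical_to_cartesian(*cartesian_to_spherical(1.0, 1.0, 1.0)))
```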

Any discussion of dimensions becomes even more important when we take into account the accuracy and resolution of today’s professional 3D scanning solutions and software. Operating at levels of detail well beyond what’s visible to the human eye, the latest 3D scanners depend on reliable, repeatable coordinate systems implemented in both hardware and software.

A very brief intro about objects, and 3D scanning them

Besides their positions in space, all physical objects have dimensions. Objects can be 0D, 1D, 2D, or 3D.

0D & 1D objects

2D objects

3D objects

Let’s think about something extremely small, like an atom, which we can say takes up almost no space at all, and so we can call it a point. When a point object has a position in space which can be described by its x, y, and z coordinates but has no dimensions, it is referred to as a 0D object. And while you can certainly find scanners for 1D, 2D, and 3D objects, the same can’t be said for 0D objects.

A very thin chain is an example of a 1D object. Each link, except for the first and last, has only two "neighbors," or adjacent links.

A thin sheet of paper is a 2D object, as its third dimension (thickness) is insignificant compared to its width and height.

A simple example of a 3D object is a box, which has a width, height, and depth and takes up a certain amount of space in all 3 dimensions.

In terms of professional 3D scanners available on the market today, manufacturers clearly state the optimal object sizes for scanning on their product pages, as well as in their product documentation. 3D scanners range from automated desktop scanners for tiny and small objects, to handheld structured light scanners for small and medium-sized objects, to larger systems, such as 3D laser scanners, for large and even massive objects. In turn, the 3D models created with these scanners can be resized as needed, including in professional CAD design software.

How you and I perceive (and mentally scan) the world in 3D

Most information about distant objects reaches us with the help of light, which is simply electromagnetic radiation racing through space at the fastest possible speed. Coming mostly from the sun, light bounces off surfaces and continues to travel onward unless it is absorbed. Along the way, it can be reflected, refracted, scattered, or absorbed, and after hitting objects in its path, it can even change its properties: its color, intensity, direction, and so on.

The human eye is a sense organ that can detect the direction, intensity, and color of visible light. The eye has a crystalline lens that focuses light passing through it onto the retina. The retina contains special light-sensitive cells: around 120 million rods and 6-7 million cones. The rods let us perceive brightness (black and white), while the cones let us see colors. To see those colors, our eyes collect rays of light from our surroundings and channel them to our retinas.

Our eyes can't see objects in sharp focus at all distances at the same time, so when we look at something close to us, objects farther away will appear blurry, and vice versa. A special focusing process referred to as “the accommodation reflex” lets us see clearly at distances from 6-7 centimeters (2.5 inches) out to infinity. Most of the time, accommodation works like a reflex, yet it can also be consciously controlled.

fig.2

Focusing: a. on something faraway…

b. on something nearby

One aspect of accommodation is that the corresponding muscles adjust the lens of the eye, as shown in fig. 2, to focus at different distances.

fig.3a

fig.3b

Along with how it helps the eye to focus, accommodation also lets us distinguish near objects from those farther away, even though a single human eye can’t perceive depth that well. This is where having a second eye makes all the difference.

Human 3D vision is based on the so-called stereoscopic effect: viewing an object from two different positions, so that the image seen by each of your eyes is similar, yet slightly shifted. The size of the shift depends on the depth (the distance) between you and the object, with the image shifting more for objects located closer to you. This phenomenon is referred to as retinal (binocular) disparity, aka binocular parallax.
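
Depth-from-disparity is exactly what stereo vision systems compute. As a rough illustration (the numbers below are illustrative assumptions, not the specs of any particular scanner or of the eye), the classic rectified-stereo relation says depth = focal length × baseline ÷ disparity, so a larger shift means a closer object:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Classic rectified-stereo relation: depth = f * B / d.

    focal_length_px - camera focal length expressed in pixels
    baseline_m      - distance between the two viewpoints, in meters
    disparity_px    - horizontal shift of the same point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

# Illustrative values only: a 700 px focal length and a 6.5 cm baseline
# (roughly the spacing of human eyes). A large shift means a nearby object.
print(depth_from_disparity(700, 0.065, 91))   # ~0.5 m away
print(depth_from_disparity(700, 0.065, 9.1))  # ~5 m away
```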

The eye’s resolution, however, is not uniform across everything it’s looking at. The density of cones is highest at the center of the retina, so for good resolution and depth perception, both eyes should be aimed directly at the object. Convergence, the turning of both eyes toward a closer object (see fig. 3b), relies on the extraocular muscles, and the convergence angle is significantly smaller when focusing on faraway objects.

After the two pictures (one from each eye) are projected upon the retinas, they pass through the optic nerves and on to various visual brain systems. Different parts of the brain analyze the image simultaneously. Some parts detect simple surface geometry, while some register motion, and others compare the image to previously learned images, etc.

Finally, in only about 50 milliseconds, all this information bubbles up into our conscious awareness and we notice the color, depth, movement, and shape of what we’re looking at. Artec 3D scanners work in almost the same way, but are far more precise in terms of depth measurement than the human visual system can ever be, and this applies to Artec laser scanners as well as structured light scanners.

The human eye, 3D perception, and 3D scanners

Because light behaves differently according to the circumstances, 3D visual perception doesn't always work so well.

Even though, in reality, every physical object larger than a nanometer is 3D, it’s rather difficult for the human eye or a modern scanner to see all sides of an object at the same time, because the view is often blocked by other objects or by the object itself. Non-transparent, complex objects, for example, have their rear surfaces visually blocked by their front surfaces.

It’s crucial to observe (and scan) an object from multiple points of view in order to see the entire 3D shape, especially when the shape is unknown. It can also be challenging to perceive large objects with uniform colors and simple geometries in 3D, including those with flat or very smooth surfaces.

Key point

A good example of this is when you try to park a car in a parking space that's painted entirely in a light color. If the background is all the same color, with no visible features to contrast with the lines of the parking space, your eyes and brain will have a very hard time perceiving the depth of the space.

This happens because our vision needs contrasting images for the eyes to focus on, and uniformly colored surfaces without irregularities are seen as having no contrast at all. The same also applies to black surfaces.

Many professional 3D scanners have a difficult time scanning black or even dark surfaces and colors because of the differentiation challenges described above. For many technicians and 3D scanning specialists, this has posed a serious hurdle, often requiring different scanning strategies or even entirely different scanners. So if darker surfaces will be 3D scanned at least occasionally, it’s worth testing a 3D scanner’s performance on such objects before purchase, whenever possible. Choosing the best scanner for the job goes well beyond accuracy and resolution.

3D models via scanning, CAD, and more

The current generation of professional 3D scanning solutions, including structured light scanners, laser scanners, and software, is closely connected to computer technology. Computer technology has also made it possible to develop computer-controlled machinery, known as CNC (Computer Numerical Control). With CNC technologies, manufacturing has taken a great step forward in producing objects of many shapes (sometimes called free-form surfaces).

The main idea of CNC is that a computer controls the machine tools instead of a person. Computers can do this with the utmost accuracy, precision, and efficiency. That said, computers need special commands to tell them exactly what operations to perform. These commands are generated by software systems known as computer-aided manufacturing (CAM) and computer-aided design (CAD). Let’s briefly take a look at how computers deal with 3D objects.
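
First, though, to give a feel for what those CAM-generated commands can look like, here is a minimal, hedged sketch in Python that emits a few G-code-style moves tracing a square toolpath. Real CAM output is far richer (tool changes, spindle speeds, safety heights), and the exact dialect depends on the machine, so treat this as illustration only.

```python
def square_toolpath(side_mm, feed_mm_per_min=300):
    """Emit a minimal G-code-style toolpath tracing a square.

    G0 = rapid positioning move, G1 = linear cutting move at a feed rate.
    This is an illustrative sketch, not the output of any real CAM package.
    """
    commands = [
        "G21 ; use millimeters",
        "G90 ; absolute coordinates",
        "G0 X0 Y0 ; rapid move to the start corner",
    ]
    corners = [(side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]
    for x, y in corners:
        commands.append(f"G1 X{x} Y{y} F{feed_mm_per_min} ; cut to the next corner")
    return "\n".join(commands)

print(square_toolpath(40))
```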

What is a vertex, and how does it relate to 3D scanning?

A vertex, in the world of computer graphics and 3D scanners, refers to a data structure that describes the attributes of a point. The main attribute of any point is its position, but other attributes can include color, reflectance, texture coordinates, normal and tangent vectors, etc.

Ordinarily, a vertex is taken to be a point where lines, curves, or edges come together, so this basic geometric feature is quite often used to describe other, more complex geometries, such as an edge, a face, a mesh, or a surface. This is why some vertex attributes describe not just the point itself, but the surface around or near it.

A point cloud is an array of vertices normally produced by 3D scanners, especially 3D laser scanners.
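
To make that concrete, here is a minimal sketch (in Python; the particular attribute set is just an assumption for illustration) of a vertex record and a point cloud built from such records:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Vertex:
    """A vertex: a position in space plus optional per-point attributes."""
    position: Tuple[float, float, float]
    color: Tuple[int, int, int] = (255, 255, 255)          # RGB color of the point
    normal: Tuple[float, float, float] = (0.0, 0.0, 1.0)   # surface normal near the point

# A point cloud is simply an array of such vertices, as produced by a 3D scanner
point_cloud: List[Vertex] = [
    Vertex(position=(0.0, 0.0, 0.0)),
    Vertex(position=(1.2, 0.4, 0.8), color=(200, 64, 64)),
]
print(len(point_cloud), "points")
```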

What is an edge? A description in 17 words.

An edge is any straight line that connects two points (vertices). Can be part of a face.

A handful of sentences about faces, polygons, and other stuff.

A face is a closed sequence of edges. Within a face, each vertex has two connected edges. A triangular face has three edges, while a quad face has four.

triangle / trigon

quadrilateral / tetragon

pentagon

hexagon

heptagon

octagon

Faces with three or more edges are called polygons, named with a Greek-derived prefix for the number of edges plus the ending “-gon.” A pentagon has 5 edges, a hexagon 6, a heptagon 7, and an octagon 8.

Any polygon with more than four edges can be replaced by a corresponding set of triangles or quads that together make up its shape.
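
One simple way to do this for convex polygons is “fan” triangulation, which splits the polygon into triangles that all share its first vertex. A minimal sketch (the function name is just for this example):

```python
def fan_triangulate(polygon_indices):
    """Split a convex polygon (an ordered list of vertex indices) into
    triangles fanning out from the first vertex.

    A quad [0, 1, 2, 3] becomes (0, 1, 2) and (0, 2, 3):
    an n-sided polygon yields n - 2 triangles.
    """
    first = polygon_indices[0]
    return [(first, polygon_indices[i], polygon_indices[i + 1])
            for i in range(1, len(polygon_indices) - 1)]

print(fan_triangulate([0, 1, 2, 3]))        # quad -> 2 triangles
print(fan_triangulate([0, 1, 2, 3, 4, 5]))  # hexagon -> 4 triangles
```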

Meshes in the world of 3D scanning

A mesh in 3D technology (including in models created via 3D scanners) refers to the way surfaces are represented in computer graphics software. Simply put, a mesh is a collection of vertices and faces, together with information on how the vertices make up the faces and how they are connected to each other.

In principle, faces can be any type of polygon, but in most cases triangles are used, because they are the easiest to handle on graphics processing units (GPUs). Different kinds of mesh representations organize vertices, edges, and faces in different ways, and the choice between them is application-specific:

Face-vertex — a list of vertices plus a set of faces, each of which stores the vertices it uses (see the sketch after this list).

Winged-edge — each edge stores its two vertices, its two adjacent faces, and the four edges that touch them.

Quad-edge — this consists of edges, half-edges, and vertices, without any reference to polygons.

Corner-tables — these store vertices in a predefined table to define the polygons. This is, in essence, a triangle fan as used in hardware graphics rendering. The representation is more compact and retrieving polygons is more efficient, but any operations that change polygons are slow. Furthermore, a single corner-table doesn’t fully represent a mesh; multiple corner-tables (triangle fans) are needed to represent most meshes.

Vertex-vertex meshes — these use only vertices that point to other vertices. This is a very size-efficient format, although the range of operations that can be performed efficiently is limited.
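
Here is a minimal Python sketch of the first of these, a face-vertex mesh: a list of vertex positions plus faces that store indices into that list (the helper function is just for illustration):

```python
# A minimal face-vertex mesh: a list of vertex positions, plus faces that
# store indices into that list. Here, a unit square built from two triangles.
vertices = [
    (0.0, 0.0, 0.0),  # 0
    (1.0, 0.0, 0.0),  # 1
    (1.0, 1.0, 0.0),  # 2
    (0.0, 1.0, 0.0),  # 3
]
faces = [
    (0, 1, 2),  # first triangle
    (0, 2, 3),  # second triangle
]

# Because faces share vertices by index, each position is stored only once,
# and connectivity is recovered by looking up which faces reference a vertex.
def faces_using_vertex(vertex_index):
    return [f for f in faces if vertex_index in f]

print(faces_using_vertex(0))  # both triangles touch vertex 0
```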

vertices

faces

mesh

Triangle mesh (a polygon mesh consisting of triangles)

Simple meshes can be created manually, while more complex meshes can be modeled via mathematical equations, algorithms, or by digitally capturing real objects with 3D scanners. One of the most important characteristics of a mesh is its simplicity: there are multiple ways in which the same surface can be captured and digitally represented.

A few words about voxels and 3D scanning

The entire volume of a Cartesian coordinate system can be divided up into small rectangular parallelepipeds (box-like 3D figures with six rectangular faces) called voxels. If the dimensions along the x, y, and z axes are the same, the voxels are cubes. With this simplification, any solid object can be approximated by a set of voxels, and the smaller the voxel, the more exact the approximation.

pixel

voxel

Voxel coordinates are defined by their positions in the data array. The uniform character of the data and the simple shape of voxels make processing both simple and reliable, but this usually demands extra disk space for storage and more memory for processing. And just as in 2D digital images, where curved shapes are built from square pixels, non-rectangular surfaces end up represented by discrete, rectangular voxel faces.

For a non-rectangular model to be precise, it should contain very small voxels. Since this requires a significant amount of disk space, voxels are not commonly used for representing these kinds of objects. Voxels are most effective for representing complex, non-uniform objects, which makes them well suited for use in 3D scanning, imaging, and CAD solutions.
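
To see how voxel size trades precision against storage, here is a minimal Python sketch (the function and the values are illustrative assumptions) that bins scanned points into cubic voxels; smaller voxels approximate the shape better, but there are more of them to store:

```python
def voxelize(points, voxel_size):
    """Map each (x, y, z) point to the integer index of the cubic voxel
    that contains it. Smaller voxel_size gives a finer approximation
    at the cost of more voxels (and therefore more memory)."""
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // voxel_size),
                      int(y // voxel_size),
                      int(z // voxel_size)))
    return occupied

points = [(0.12, 0.03, 0.40), (0.14, 0.02, 0.41), (0.90, 0.88, 0.10)]
print(len(voxelize(points, voxel_size=0.1)))   # coarse grid: 2 occupied voxels
print(len(voxelize(points, voxel_size=0.01)))  # fine grid: 3 occupied voxels
```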

What about solids and 3D geometry from scanning and elsewhere?

Any kind of real-life object takes up a certain amount of volume in space and consists of some type of material. There are various ways to model a solid object: sweeping, surface mesh modeling, cell decomposition, etc. Every object has its own boundaries (surfaces), and the boundaries of a solid object separate space into two parts: the interior and the exterior of the solid. In this way, a solid object can be represented by its boundaries, stored as data such as a mesh, which separate the interior from the exterior.

Constructive solid geometry (CSG) result

Another approach is used in constructive solid geometry, or CSG, where the basic elements are already solids (spheres, cones, cubes, tori, etc.), and more advanced solids are built from these primitives via Boolean operations: union (fusion), subtraction, intersection, etc.
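
One common way to implement these Boolean operations (a sketch, not how any particular CAD package does it) is with signed distance functions, where union, intersection, and subtraction reduce to simple min/max rules:

```python
import math

# Signed distance functions (SDFs): negative inside the solid, positive outside.
def sphere(center, radius):
    cx, cy, cz = center
    return lambda x, y, z: math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - radius

# CSG Boolean operations expressed on SDFs
def union(a, b):
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def intersection(a, b):
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))

def subtraction(a, b):
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A sphere with a smaller sphere carved out of its side
solid = subtraction(sphere((0, 0, 0), 1.0), sphere((0.8, 0, 0), 0.5))
print(solid(0, 0, 0) < 0)    # True: the origin is inside the result
print(solid(0.8, 0, 0) < 0)  # False: this point was removed by the subtraction
```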

Texture and how it applies to 3D scanning

In computer graphics and 3D scanning terms, a texture is an image painted onto a surface. A texture image is stored in a special file where each pixel, addressed by its U and V coordinates, has a corresponding color. Applying a texture onto a surface is called texture mapping or UV mapping.
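
As a minimal illustration of a UV lookup (nearest-neighbor sampling; the function and the tiny “texture” below are assumptions for this example), U and V values in the range 0–1 simply pick a pixel out of the texture image:

```python
def sample_texture(texture, u, v):
    """Nearest-neighbor texture lookup.

    texture - a 2D list of RGB tuples (rows of pixels)
    u, v    - texture coordinates in the range [0, 1], where (0, 0) is one
              corner of the image and (1, 1) the opposite corner
    """
    height = len(texture)
    width = len(texture[0])
    col = min(int(u * width), width - 1)
    row = min(int(v * height), height - 1)
    return texture[row][col]

# A tiny 2x2 "checkerboard" texture
texture = [[(0, 0, 0), (255, 255, 255)],
           [(255, 255, 255), (0, 0, 0)]]
print(sample_texture(texture, 0.25, 0.25))  # black corner
print(sample_texture(texture, 0.75, 0.25))  # white corner
```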

Considering that the human brain mostly relies on shadows, colors, and color gradients to visually perceive the world around it, texture is a highly effective way to emulate a shape without having to change its geometry, and it’s often used by computer game developers to render graphics faster and more efficiently.

Manufacturers of 3D scanners may equip them with a special camera for capturing texture, called a texture camera. Obtaining high-quality texture images requires bright and uniform lighting, unless the scanner itself is equipped with a flash.

Surface without texture colors

Surface with texture colors

Texture file

Conclusions about 3D technology, scanning, current use, and future trends

Understanding the various components of 3D technology not only helps us more clearly comprehend some thought-provoking aspects of the world around us, but also gives us an idea about how 3D solutions, including 3D scanning, actually work.

Particularly over the past two decades, 3D technologies have taken part in many challenging and crucial scientific projects, from east to west. These include 3D laser scanners and software being used to preserve cultural heritage sites and objects on the brink of destruction; engineers with handheld structured light 3D scanners reverse engineering parts with complex surfaces and shapes, then finishing the work on the resulting 3D models in CAD design software; and doctors and medical professionals 3D scanning their patients for a variety of applications, including prosthesis design, dermatological diagnoses, and much more.

As for the usefulness of a solid understanding of 3D technology, this knowledge is becoming more important every day, all around the world. The ever-growing use of 3D technology across society has led some experts to predict that, in the future, home, school, and work will all embrace far-reaching use of 3D technologies.

Key point

Currently, the adoption of 3D technologies is expanding in fields as diverse as aerospace, engineering, digital manufacturing, healthcare, CGI, and more. In the future, 3D specialists with scanning experience can look forward to even greater demand for their skills and expertise.

Today’s younger generations are growing up to see 3D scanners as not merely something found in a laboratory or, as was the case in past decades, in science fiction movies and novels. With each passing year, professional 3D scanning is moving closer to our everyday lives, and manufacturers of such technologies have made it a point to seamlessly integrate their solutions across all levels of society. The result is that even children are becoming comfortable with using structured light scanners in their classrooms, as well as seeing 3D scanners in use in medical and dental offices. What was once strictly limited to the realm of professionals is now becoming an irreplaceable aspect of our everyday lives.

Manufacturers of professional 3D scanners and software, including laser scanners and structured light scanners, have been making great strides in heightening the accuracy as well as the resolution of their scanners. At the same time, design professionals and other technical specialists are adopting 3D scanning more readily than ever before, realizing that such scanners, together with the resulting 3D models, ease the burdens of their work and allow them to accomplish some tasks previously either impossible or extremely difficult.

Technologists are envisioning widespread use of solutions such as VR/AR educational applications, where children will visit Amazonian rainforests or craggy Himalayan peaks or anyplace else, all from the safe comfort of their classrooms; digital design engineers will make extensive use of 3D scanning and modeling, employing VR/AR environments for viewing and interacting with the objects they've designed, and then 3D printing them in a range of materials, as desired; doctors will quickly be able to 3D scan your body and then 3D print lifelike replacement organs and other anatomical structures using your very own stem cells, entirely removing the possibility of immune system rejection... and more. When it comes to tapping the vast potential of 3D scanning, this is only the beginning. 

Written by: Natalia Kivolya, tech reporter
