So far, we have been looking at objects located in an object plane, with the lens focused such that the image appears perfectly sharp. In the real world, however, three-dimensional objects are hardly ever confined to a single plane; they extend somewhat before and behind it. What happens in these cases?

The purple rays indicate an object which is perfectly in focus, i.e. for a given focal length **f**, the object distance **g** and the image distance **h** are adjusted according to the lens equation (D1)

1/f = 1/g + 1/h    (D1)

In this case, the picture of a point (in the object plane) is again a point (in the image plane).
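As a quick numerical illustration (a small Python sketch; the function name and sample values are illustrative, not from the text), the lens equation can be solved for the image distance **h**:

```python
# Solve the lens equation 1/f = 1/g + 1/h for the image distance h.
def image_distance(f, g):
    """Image distance h for focal length f and object distance g (same unit as f and g)."""
    return 1.0 / (1.0 / f - 1.0 / g)

# Illustrative values: a 50 mm lens focused on an object 2000 mm away.
h = image_distance(50.0, 2000.0)
print(round(h, 2))  # h is slightly larger than f for a distant object (2000/39 mm)
```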

Now, imagine an object a little closer to the camera, while the settings of the lens remain unchanged. This situation is illustrated by the green rays. According to the lens equation

1/f = 1/g_{near} + 1/h_{near}    (D2a)

the rays now converge somewhere *behind* the original image plane. On the image plane, where our image sensor is located, the rays therefore illuminate a small spot. It has the same shape as the aperture of the lens, i.e. in most cases it will be (more or less) circular.

Next, imagine an object a little farther away from the camera, again with the same settings of the lens. This situation is illustrated by the yellow rays. According to the lens equation

1/f = 1/g_{far} + 1/h_{far}    (D2b)

the rays now converge somewhere *in front of* the image plane, again resulting in a small spot on the image sensor. (Note: **g**_{far} is not drawn to scale, but somewhat shortened to fit on the page.)

In both cases, the image of a point is no longer a point but a small circular spot. In other words, the image is blurred; it appears slightly unsharp.

## Minimum acceptable sharpness

Instead of moving an object back and forth and observing how the size of the blur spot changes, we can look at it the other way around: define a maximum size of the spot, and see how far we can move our object such that the blur spot never exceeds this size. The maximum diameter that we are willing to accept as sharp is called the *(acceptable) circle of confusion* **c**.

There is much debate on how the circle of confusion should be determined, which we will ignore for the moment. Just think of it as a small value such as 0.033 mm (one thirtieth of a mm).

## Near and far limits

The range from **g**_{near} to **g**_{far}, as shown in the figure above, for which the blur spot does not exceed a given circle of confusion **c** is called the *depth of field (DoF)*. It describes the range of object distances that are rendered with at least the minimum acceptable sharpness. **g**_{near} and **g**_{far} are known as the *near* and *far limit*, respectively.

The depth of field can be derived as follows. By similar triangles along the green and yellow rays behind the lens, we get

c / (h_{near} - h) = a / h_{near}    (D3a)

c / (h - h_{far}) = a / h_{far}    (D3b)

Solving these equations for **h**_{near} and **h**_{far} respectively gives

h_{near} = h a / (a - c)    (D4a)

h_{far} = h a / (a + c)    (D4b)

Note that the aperture **a** of the lens is a diameter, which can be measured in mm, inches, or any other unit of length. With equation (A1), we can express **a** in terms of the commonly used f-number **A** as

a = f / A    (D5)

Substituting equation (D5) into equations (D4a) and (D4b) gives

h_{near} = h f / (f - A c)    (D6a)

h_{far} = h f / (f + A c)    (D6b)

We can now use the lens equations (D2a) and (D2b) to calculate the distances in front of the lens:

1/f = 1/g_{near} + (f - A c) / (h f)    (D7a)

1/f = 1/g_{far} + (f + A c) / (h f)    (D7b)

With the basic lens equation (D1), which gives **h** = **f** **g** / (**g** - **f**), we can eliminate **h** and obtain the simpler form

g_{near} = g f^2 / (f^2 + A c (g - f))    (D8a)

g_{far} = g f^2 / (f^2 - A c (g - f))    (D8b)

Note that these formulas measure the near and far limits from the lens. However, to keep the resulting figures consistent with the focusing distance **d**, it makes sense to measure these distances from the image plane. This can be achieved simply by adding **h** (not **h**_{near} or **h**_{far}). Thus, we define

d_{near} = g f^2 / (f^2 + A c (g - f)) + h    (D9a)

d_{far} = g f^2 / (f^2 - A c (g - f)) + h    (D9b)

Both forms, (D8) and (D9), can be found in the literature. The depth of field calculator uses the formulas given in equations (D9a) and (D9b).
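The near and far limits according to (D9a) and (D9b) translate directly into code. Below is a minimal Python sketch; the function name and the sample values are illustrative, not from the text, and all lengths are in mm.

```python
# Near and far limits per equations (D9a)/(D9b), measured from the image plane.
def dof_limits(f, g, A, c):
    """Return (d_near, d_far) for focal length f, object distance g (from the lens),
    f-number A and circle of confusion c. All lengths in mm."""
    h = 1.0 / (1.0 / f - 1.0 / g)        # image distance from the lens equation (D1)
    k = A * c * (g - f)
    d_near = g * f**2 / (f**2 + k) + h   # (D9a)
    d_far  = g * f**2 / (f**2 - k) + h   # (D9b)
    return d_near, d_far

# Illustrative example: 50 mm lens at f/8, focused at 2 m, c = 0.033 mm.
near, far = dof_limits(f=50.0, g=2000.0, A=8.0, c=0.033)
print(round(near, 1), round(far, 1))
```

As a sanity check, the focused distance g + h always lies between the two limits, and stopping down (larger **A**) widens the range.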

## Focal length, distance, aperture and depth of field

Equations (D9a) and (D9b) do not make it exactly intuitive how the depth of field depends on its various parameters. To gain better insight, we consider the range

dof = d_{far} - d_{near}    (D10)

With equations (D9a) and (D9b), we get

dof = 2 g f^2 A c (g - f) / (f^4 - (A c (g - f))^2)    (D11)

This doesn’t look any better yet. However, a number of approximations can be applied. If the object is reasonably far away, i.e. the focusing distance **d** is much bigger than **f**, both **g** and (**g** – **f**) can be approximated by **d**. Furthermore, if the focal length is not too small, the second term of the denominator can be neglected. Thus, we get

dof ≈ 2 d^2 A c / f^2    (D12)
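To see how good this approximation is, here is a small Python sketch comparing the exact expression (D11) with the approximation (D12); the sample values are illustrative, not from the text.

```python
# Compare the exact depth of field (D11) with the approximation (D12).
def dof_exact(f, g, A, c):
    k = A * c * (g - f)
    return 2.0 * g * f**2 * k / (f**4 - k**2)   # (D11)

def dof_approx(f, d, A, c):
    return 2.0 * d**2 * A * c / f**2            # (D12)

# Illustrative: 50 mm lens at f/8, c = 0.033 mm, object at 3 m.
exact  = dof_exact(f=50.0, g=3000.0, A=8.0, c=0.033)
approx = dof_approx(f=50.0, d=3000.0, A=8.0, c=0.033)
print(round(exact), round(approx))  # exact ≈ 2070 mm, approximate ≈ 1901 mm
```

At this moderate distance the approximation already lands within roughly 10 % of the exact value; the agreement improves as the object moves farther away.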

This is now much easier to understand. For a deep depth of field, e.g. in landscape photography,

- use a wide angle lens
- move away from your object
- close the aperture (use a large f-number)

For a shallow depth of field, e.g. for a portrait,

- use a telephoto lens
- get up close to your object
- open the aperture (use a small f-number)

It’s interesting to note that the distance to your object has a greater impact on the depth of field than the aperture. This is great news if you have a slow lens, but want to take a portrait with a nicely blurred background (bokeh). However, the part of the scene that you can capture this way may become quite small. For example, you may be able to cover the face, but not the shoulders or even the whole body [Wegner 2017, ch. 4.2].

## Magnification and depth of field

As stated above, a telephoto lens gives a shallower depth of field than a wide angle lens. But we should take a second look here. If you want to take a frame-filling picture of someone or something, the magnification is always the same, no matter what focal length you are using; you just have to move a little closer or farther away. So how does the magnification come in?

With the approximation of the magnification (M9)

m ≈ f / d    (D13)

which is also valid under the conditions mentioned above, approximation (D12) becomes

dof ≈ 2 A c / m^2    (D14)

In other words, the depth of field increases linearly with the f-number, but decreases quadratically with the magnification. This explains why you can easily capture a scene with a great depth of field in landscape photography, but get a very shallow depth of field in macro photography (unless techniques such as focus stacking are used).
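A small Python sketch of approximation (D14) makes the contrast concrete; the two magnifications are illustrative, roughly 1:100 for a distant landscape subject and 1:1 for a life-size macro shot.

```python
# Depth of field from magnification alone, per approximation (D14).
def dof_from_magnification(A, c, m):
    return 2.0 * A * c / m**2   # (D14)

# Illustrative: f/8, c = 0.033 mm.
landscape = dof_from_magnification(A=8.0, c=0.033, m=0.01)  # distant scene, ~1:100
macro     = dof_from_magnification(A=8.0, c=0.033, m=1.0)   # life-size macro, 1:1
print(landscape, macro)  # ≈ 5280 mm versus ≈ 0.53 mm
```

The same lens at the same f-number yields metres of depth of field in the landscape case, but only about half a millimetre at life-size magnification.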

Likewise, the smaller your image sensor, the smaller the magnification, and the deeper the depth of field. A tiny smartphone camera renders everything more or less sharp, while you can easily create sharp portraits with a lovely blurred background with a medium format sensor (although applications of computational photography such as Apple’s *Portrait Mode* are increasingly successful in bokeh simulation even with small sensors).

The depth of field also provides the basis to calculate the hyperfocal distance, which is often recommended for landscape photography.