CHAPTER 14. THE REFERENCE PATH TRACER
ABSTRACT
The addition of ray tracing to a real-time renderer makes it possible to create
beautiful dynamic effects—such as soft shadows, reflections, refractions,
indirect illumination, and even caustics—that were previously not possible.
Substantial tuning and optimization are required for these effects to run in
real time without artifacts, and this process is simplified when the real-time
rendering engine can progressively generate a “known-correct” reference
image to compare against. In this chapter, we walk through the steps to
implement a simple, but sufficiently featured, path tracer to serve as such
a reference.
14.1 INTRODUCTION
With the introduction of the DirectX Raytracing (DXR) and Vulkan Ray Tracing
(VKR) APIs, it is now possible to integrate ray tracing functionality into
real-time rendering engines based on DirectX and Vulkan. These new APIs
provide the building blocks necessary for ray tracing, including the ability to
(1) quickly construct spatial acceleration structures and (2) perform fast ray
intersection queries against them. Critically, the new APIs provide ray tracing
operations with full access to memory resources (e.g., buffers, textures) and
common graphics operations (e.g., texture sampling) already used in
rasterization. This creates the opportunity to reuse existing code for geometry
processing, material evaluation, and post-processing and to build a hybrid
renderer where ray tracing works in tandem with rasterization.
© NVIDIA 2021
A. Marrs, P. Shirley, I. Wald (eds.), Ray Tracing Gems II, https://doi.org/10.1007/978-1-4842-7185-8_14
Figure 14-1. An interior scene from Evermotion’s Archinteriors Vol. 48 for the Blender
package [5] rendered with our reference path tracer using 125,000 paths per pixel.
Figure 14-2. Monte Carlo Unidirectional Path Tracing. Left: light energy bounces around the
environment on its way to the camera. Right: the path tracer’s model of the light transport from
the left diagram. Surface material BRDFs are stochastically sampled at each point where a ray
intersects a surface.
14.2 ALGORITHM
When creating reference images, it is important to choose an algorithm that
best matches the use case. A path tracer can be implemented in many ways,
each with its own trade-offs. An excellent overview of path tracing can be
found in the book
Physically Based Rendering [11]. Since this chapter’s focus is simplicity and
high image quality, we’ve chosen to implement a Monte Carlo Unidirectional
Path Tracer. Let's break down what this means—starting from the end and
working backward:

> Path Tracer: light transport is simulated by constructing paths, i.e., sequences of ray segments connecting the camera to light sources through one or more surface interactions.
> Unidirectional: paths are constructed in a single direction, starting at the camera and working toward the lights.
> Monte Carlo: the light transport integral is estimated with random sampling, converging to the correct result as more samples are accumulated.
14.3 IMPLEMENTATION
Before implementing a reference path tracer in your engine, it is helpful to
understand the basics of modern ray tracing APIs. DXR and VKR introduce
new shader stages (ray generation, closest-hit, any-hit, and intersection),
acceleration structure building functions, ray dispatch functions, and shader
management mechanisms. Since these topics have been covered well in
previous literature, we recommend Chapter 3, “Introduction to DirectX
Raytracing,” [19] of Ray Tracing Gems [6], the SIGGRAPH course of the same
name [18], and Chapter 16 of this book to get up to speed. For a deeper
understanding of how ray tracing works agnostic of API specifics, see
Shirley’s Ray Tracing in One Weekend series [13].
The code sample accompanying this chapter is implemented with DXR and
extends the freely available IntroToDXR sample [10]. At a high level, the steps
necessary to perform GPU ray tracing with the new APIs are as follows:
> At startup: compile the ray tracing shaders into pipeline state objects, build bottom- and top-level acceleration structures for the scene geometry, and write shader identifiers into shader tables.
> Every frame: update or rebuild acceleration structures for dynamic geometry, bind resources, and call DispatchRays() to launch a ray generation shader thread for every pixel.
With the basic execution model in place, the following sections describe the
implementation of the key elements needed for a reference path tracer.
Memory for built acceleration structures can be managed using the same
system as memory for geometry, since their lifetime is coupled with the
meshes they represent. Scratch buffers, on the other hand, are temporary
and may be resized or deallocated entirely once acceleration structure builds
are complete. This presents an opportunity to reduce total memory use
through more careful management of scratch buffers.
Listing 14-1. HLSL code to generate primary rays that match a rasterizer’s output.
Primary rays are constructed as shown in Listing 14-1. Note that the aspect
ratio and field of view are extracted from the projection matrix, and the
camera basis right (gData.view[0].xyz), up (gData.view[1].xyz), and forward
(gData.view[2].xyz) vectors are read from the view matrix.
If the view and projection matrices are only available on the GPU as a single
combined view-projection matrix, the inverse view-projection matrix (which is
typically also available) can be applied to “unproject” points on screen from
normalized device coordinate space. This is not recommended, however, as
near and far plane settings stored in the projection matrix cause numerical
precision issues when the transformation is reversed. More information on
constructing primary rays in ray generation shaders can be found in
Chapter 3.
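As a host-side illustration of this construction, the following Python sketch builds a primary ray direction from the camera basis vectors, aspect ratio, and tangent of the half field of view described above (the function name and argument layout are ours, not the chapter's HLSL):

```python
import math

def primary_ray_direction(px, py, width, height, camera_basis,
                          tan_half_fov, aspect):
    right, up, forward = camera_basis
    # Map the pixel center to normalized device coordinates in [-1, 1].
    ndc_x = 2.0 * (px + 0.5) / width - 1.0
    ndc_y = 1.0 - 2.0 * (py + 0.5) / height  # flip so +y points up
    # Scale by the field of view and aspect ratio, then combine the basis.
    d = [forward[i]
         + ndc_x * aspect * tan_half_fov * right[i]
         + ndc_y * tan_half_fov * up[i]
         for i in range(3)]
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c / length for c in d)
```

With this convention, the center of the image maps to the forward vector, and all returned directions are unit length.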
Figure 14-3. Visualizing geometry instance indices (left) and triangle barycentrics (right) is useful
when testing that primary ray tracing is working properly.
Now that primary rays are traversing acceleration structures and intersecting
geometry in the scene, the next step is to load the geometry and material
properties of the intersected surfaces. In our code sample, we use a bindless
approach to access resources on the GPU in both ray tracing and
rasterization. This is implemented with a set of linear buffers that contain all
geometry and material data used in the scene. The buffers are then marked
as accessible to any invocation of any shader.
Figure 14-4. Bindless access of geometry and material data using an encoded instance ID.
The main difference when ray tracing is that the index and vertex data of the
geometry (e.g., position, texture coordinates, normals) must be explicitly
loaded in the shaders and interpolated manually. To facilitate this, we encode
and store the buffer indices of the geometry and material data in the 24-bit
instance ID parameter of each TLAS geometry instance descriptor. Illustrated
in Figure 14-4, the index into the array of buffers containing the geometry
index and vertex data is packed into the 14 most significant bits of the
instance ID value, and the material index is packed into the 10 least significant
bits (note that this limits the number of unique geometry and material entries
to 16,384 and 1,024 respectively). The encoded value is then read with
InstanceID() in the closest (or any) hit shader, decoded, and used to load the
proper material and geometry data. This encoding scheme is implemented in
HLSL with two complementary functions shown in Listing 14-2.
Listing 14-2. HLSL to encode and decode the geometry and material indices.
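The same packing scheme can be sketched in Python (an illustrative port of the bit layout described above; the names are ours, and the chapter's own helpers are written in HLSL):

```python
GEOMETRY_BITS = 14   # geometry index occupies the 14 most significant bits
MATERIAL_BITS = 10   # material index occupies the 10 least significant bits

def encode_instance_id(geometry_index, material_index):
    # The limits below are the 16,384 and 1,024 unique entries noted in the text.
    assert 0 <= geometry_index < (1 << GEOMETRY_BITS)
    assert 0 <= material_index < (1 << MATERIAL_BITS)
    return (geometry_index << MATERIAL_BITS) | material_index

def decode_instance_id(instance_id):
    geometry_index = instance_id >> MATERIAL_BITS
    material_index = instance_id & ((1 << MATERIAL_BITS) - 1)
    return geometry_index, material_index
```

The encoded value always fits in the 24-bit instance ID field of a TLAS instance descriptor, and decoding recovers both indices exactly.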
Figure 14-5. Visualizations of various properties output by the ray generation shader. From left to
right: base color, normal, world-space position, and texture coordinates.
To confirm that the geometry and material data have been loaded and
interpolated correctly, it is useful to output G-buffer–like visualizations from
the ray generation shader of geometric data (e.g., world-space position,
geometric normal, texture coordinates) and material properties (e.g., albedo,
shading normal, roughness, etc.). These images can be directly compared
with visualizations of a G-buffer generated with a rasterizer, as shown in
Figure 14-5.
At the core of every path tracer, there is a random number generator (RNG).
Random numbers are necessary to drive the sampling of materials, lights,
procedurally generated textures, and much more as the path tracer simulates
light transport. A high-quality RNG with a long period is an essential tool in
path tracing. It ensures that each sample taken makes meaningful progress
toward reducing noise and improving its approximation of the rendering
equation without bias.
Listing 14-3. Initialization of the random seed for an xorshift-based RNG for the given pixel and
frame number. The seed provides an initial state for the RNG and is modified each time a new
random number is generated.
uint jenkinsHash(uint x) {
  x += x << 10;
  x ^= x >> 6;
  x += x << 3;
  x ^= x >> 11;
  x += x << 15;
  return x;
}

uint initRNG(uint2 pixel, uint2 resolution, uint frame) {
  uint rngState = dot(pixel, uint2(1, resolution.x)) ^ jenkinsHash(frame);
  return jenkinsHash(rngState);
}
Our sample's first RNG option is an xorshift generator seeded with a hash
function [12]. First, the random seed for the RNG is
established by hashing the current pixel’s screen coordinates and the frame
number (see Listing 14-3). This hashing ensures a good distribution of
random numbers spatially across neighboring pixels and temporally across
subsequent frames. We use the Jenkins’s one_at_a_time hash [8], but other
fast hash functions, such as the Wang hash [17], can be used as well.
Next, each new random number is generated by advancing the RNG state with
the xorshift function and converting the result to a floating-point number
(see Listing 14-4). Notice how the rand function modifies the RNG's state.
Since the generated number is a sequence of random bits forming an unsigned
integer, we need to convert it to a floating-point number in the range
[0, 1). This is achieved by placing the integer's 23 most significant bits
into the mantissa of a float whose exponent bits encode the value 1.0 (the
constant 0x3F800000), which yields a number in [1, 2); subtracting one then
maps it to [0, 1).
Listing 14-4. Generating a random number using the xorshift RNG. The rand function invokes
the xorshift function to modify the RNG state in place and then converts the result to a random
floating-point number.
float uintToFloat(uint x) {
  return asfloat(0x3f800000 | (x >> 9)) - 1.f;
}

uint xorshift(inout uint rngState)
{
  rngState ^= rngState << 13;
  rngState ^= rngState >> 17;
  rngState ^= rngState << 5;
  return rngState;
}

float rand(inout uint rngState) {
  return uintToFloat(xorshift(rngState));
}
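These helpers port directly to host code for testing. A Python sketch with explicit 32-bit masking to emulate uint wraparound (note that uintToFloat is arithmetically equivalent to taking the top 23 bits of the integer as a binary fraction):

```python
MASK32 = 0xFFFFFFFF  # emulate 32-bit unsigned wraparound

def jenkins_hash(x):
    x = (x + (x << 10)) & MASK32
    x ^= x >> 6
    x = (x + (x << 3)) & MASK32
    x ^= x >> 11
    x = (x + (x << 15)) & MASK32
    return x

def init_rng(pixel, resolution, frame):
    # dot(pixel, uint2(1, resolution.x)) linearizes the 2D pixel coordinate.
    state = (pixel[0] + pixel[1] * resolution[0]) & MASK32
    return jenkins_hash(state ^ jenkins_hash(frame))

def xorshift(state):
    state ^= (state << 13) & MASK32
    state ^= state >> 17
    state ^= (state << 5) & MASK32
    return state

def uint_to_float(x):
    # Equivalent to asfloat(0x3f800000 | (x >> 9)) - 1.0: the top 23 bits
    # become the mantissa of a float in [1, 2), so the result is in [0, 1).
    return (x >> 9) / float(1 << 23)
```

Every value produced this way is guaranteed to lie in [0, 1), never reaching 1.0 exactly.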
Listing 14-5. Implementation of the PCG4D RNG [7]. An input vector of four numbers is
transformed into four random numbers.
uint4 pcg4d(uint4 v)
{
  v = v * 1664525u + 1013904223u;

  v.x += v.y * v.w;
  v.y += v.z * v.x;
  v.z += v.x * v.y;
  v.w += v.y * v.z;

  v = v ^ (v >> 16u);

  v.x += v.y * v.w;
  v.y += v.z * v.x;
  v.z += v.x * v.y;
  v.w += v.y * v.z;

  return v;
}
The four inputs PCG4D requires are readily available in the ray generation
shader; however, they must be passed into other shader stages (e.g., closest
and any-hit). For best performance, the payload that passes data between ray
tracing shader stages should be kept as small as possible, so we hash the
four inputs to a more compact seed value before passing it to the other
shader stages. Different strategies for hashing these parameters, and for
updating the hash when drawing subsequent samples, are discussed in the
recent survey by Jarzynski and Olano [7].
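For reference, the PCG4D transformation from Listing 14-5 is easy to port and sanity-check on the host. A Python sketch with explicit 32-bit masking (the function name mirrors the HLSL; the port itself is ours):

```python
MASK32 = 0xFFFFFFFF  # emulate 32-bit unsigned wraparound

def pcg4d(v):
    # Component-wise LCG step followed by two rounds of cross-component
    # mixing and a xorshift, as in Jarzynski and Olano's PCG4D.
    v = [(x * 1664525 + 1013904223) & MASK32 for x in v]
    v[0] = (v[0] + v[1] * v[3]) & MASK32
    v[1] = (v[1] + v[2] * v[0]) & MASK32
    v[2] = (v[2] + v[0] * v[1]) & MASK32
    v[3] = (v[3] + v[1] * v[2]) & MASK32
    v = [x ^ (x >> 16) for x in v]
    v[0] = (v[0] + v[1] * v[3]) & MASK32
    v[1] = (v[1] + v[2] * v[0]) & MASK32
    v[2] = (v[2] + v[0] * v[1]) & MASK32
    v[3] = (v[3] + v[1] * v[2]) & MASK32
    return v
```

Even a single-bit change in any input component produces a completely different output vector, which is exactly the decorrelation property a per-pixel, per-frame seed needs.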
With the ability to trace rays and a dependable random number generator in
hand, we now have the tools necessary to sample a scene, accumulate the
results, and progressively refine an image. Shown in Listing 14-6, progressive
refinement is implemented by adding the image generated by each frame to
an accumulation buffer and then normalizing the accumulated colors before
displaying the image on screen. Figure 14-6 compares a single frame's
result with the normalized accumulated result from many frames.
Figure 14-6. A single path traced frame (left) and the normalized accumulated result from one
million frames (right).
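Progressive refinement itself is just a running average. A minimal Python sketch, assuming a hypothetical sample_pixel callback that returns one noisy path-traced sample per pixel per frame:

```python
def progressive_render(sample_pixel, frames, width):
    """Accumulate one noisy sample per pixel per frame, then normalize."""
    accum = [0.0] * width                  # the accumulation buffer
    for _ in range(frames):
        for i in range(width):
            accum[i] += sample_pixel(i)    # add this frame's sample
    return [c / frames for c in accum]     # normalize for display
```

As the frame count grows, the normalized result converges to the expected value of the per-pixel estimator, which is why the accumulated image in Figure 14-6 is noise-free.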
Figure 14-7. The basic path tracing loop.

ray = generatePrimaryRay();
throughput = 1.0;
radiance = 0.0;
for bounce ∈ {1 . . . MAX_BOUNCES} do
    Trace(ray);
    if hit surface then
        brdf, brdfPdf, ray = SampleBrdf();
        throughput *= brdf / brdfPdf;
    else
        radiance += throughput * skyColor;
        break;
return radiance;
Now we are ready to trace rays beyond primary rays and create full paths
from the camera to surfaces and lights in the environment. To accomplish
this, we extend our existing ray generation shader and implement a basic
path tracing loop, illustrated in Figure 14-7. The process begins by initializing
a ray to be cast from the camera into the environment (i.e., a primary ray).
Next, we enter a loop and ray tracing begins. If a ray trace misses all surfaces
in the scene, the sky’s contribution is added to the result color and the loop is
terminated. If a ray intersects geometry in the scene, the intersected
surface’s material properties are loaded and the associated bidirectional
reflectance distribution function (BRDF) is evaluated to determine the
direction of the next ray to trace along the path. The BRDF accounts for the
composition of the material, so for rough surfaces, the reflected ray direction
is randomized and then attenuated based on object color. Details on BRDF
evaluation are described in the next section.
For a simple first test of the path tracing loop implementation, set the sky’s
contribution to be fully white (float3(1, 1, 1)). This creates lighting
conditions commonly referred to as the white furnace because all surfaces in
the scene are illuminated with white light equally from all directions. Shown
in Figure 14-8, this lighting test is especially helpful when evaluating the
energy conservation characteristics of BRDF implementations.
Figure 14-8. The interior scene rendered using the white furnace test with white diffuse
materials applied to all surfaces.
After the white furnace test, try loading values from an environment map in
place of a fixed sky color.
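The energy conservation argument can be demonstrated with a toy version of the loop. In this Python sketch the scene is reduced to a coin flip per bounce, and brdf/brdfPdf is assumed to reduce to the albedo (as it does for a cosine-sampled Lambertian lobe, discussed in the next section); with an albedo of one and a white sky, every escaping path returns exactly 1:

```python
import random

MAX_BOUNCES = 8

def trace_white_furnace(albedo, rng, hit_probability=0.5):
    """One sample of the basic path tracing loop against a constant white sky.

    The toy 'scene' is just a coin flip deciding whether the ray hits a
    surface; each hit multiplies throughput by albedo (== brdf / brdfPdf
    for a cosine-sampled Lambertian lobe).
    """
    throughput = 1.0
    radiance = 0.0
    for _ in range(MAX_BOUNCES):
        if rng.random() < hit_probability:   # ray hit a surface
            throughput *= albedo
        else:                                # ray escaped to the sky
            radiance += throughput * 1.0     # white furnace: skyColor = 1
            break
    return radiance
```

A path that never escapes within MAX_BOUNCES contributes zero, which slightly darkens the estimate; an energy-conserving BRDF must never return more than 1.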
Note the two variables, radiance and throughput, declared and initialized at
the beginning of the path tracing loop in Figure 14-7. Radiance is the final
intensity of the light energy presented on screen for a given path. Radiance is
initially zero and increases as light is encountered along the path being
traced. Throughput represents the amount of energy that may be transferred
along a path’s ray segment after interacting with a surface. Throughput is
initialized to one (the maximum) and decreases as surfaces are encountered
along the path. Every time the path intersects a surface, the throughput is
attenuated based on the properties of the intersected surface (dictated by the
material BRDF at the point on the surface). When a path arrives at a light
source (i.e., an emissive surface), the throughput is multiplied by the intensity
of the light source and added to the radiance. Note how simple it is to support
emissive geometry using this approach!
SURFACE MATERIALS
Figure 14-9. Importance sampling of a BRDF involves selecting a direction of the reflecting ray,
based on the incident ray direction and surface properties. The image shows several possible ray
directions for the specular lobe (black arrows) and one selected direction highlighted in blue.
For diffuse surfaces, our sample importance-samples the Lambertian lobe by
generating ray directions with a cosine-weighted distribution over the
hemisphere (see the sample code) and using a PDF of brdfPdf = cos ω/π. This
distributes rays across the entire hemisphere above the surface
proportionally to the cosine term found in the rendering equation. The
Lambertian term brdfWeight = diffuseColor/π can be pre-divided by the PDF
and multiplied by the cosine term, so brdfWeight · cos ω / brdfPdf
simplifies to the diffuse color of the surface.
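The two pieces just described can be sketched as follows, assuming the standard polar mapping for cosine-weighted hemisphere sampling (the helper names are ours):

```python
import math

def sample_cosine_hemisphere(u1, u2):
    """Map two uniform random numbers to a cosine-weighted direction in the
    local frame where z is the surface normal (pdf = cos(theta) / pi)."""
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    z = math.sqrt(max(0.0, 1.0 - u1))  # cos(theta)
    return (r * math.cos(phi), r * math.sin(phi), z)

def lambertian_weight_over_pdf(diffuse_color, cos_theta):
    """(diffuseColor/pi) * cos(theta) / (cos(theta)/pi) == diffuseColor."""
    brdf = diffuse_color / math.pi   # Lambertian BRDF
    pdf = cos_theta / math.pi        # cosine-weighted sampling PDF
    return brdf * cos_theta / pdf
```

The cancellation in lambertian_weight_over_pdf is why the path tracing loop can simply multiply throughput by the diffuse color at each diffuse bounce.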
Many material models are composed of two or more BRDF lobes, and it is
common to evaluate separate lobes for the specular and diffuse components
of a material. In fact, when rendering semitransparent or refractive objects,
an additional bidirectional transmittance distribution function (BTDF) is also
evaluated. This is discussed in more detail in Chapter 11.
pSpecular = getBrdfProbability();
if rand(rngState) < pSpecular then
    brdfType = SPECULAR;
    throughput /= pSpecular;
else
    brdfType = DIFFUSE;
    throughput /= (1 - pSpecular);
Figure 14-10. Importance sampling and BRDF lobe selection. We select lobes using a random
number and the probability p_specular. Throughput is divided by either p_specular or
1 − p_specular, depending on the selected lobe.
In our code sample, we implement BRDF lobe selection using the importance
sampling algorithm shown in Figure 14-10. The probability of choosing the
specular or the diffuse lobe is based on estimates of their respective
contributions, e_specular and e_diffuse:

    p_specular = e_specular / (e_specular + e_diffuse).    (14.1)
This approach is simple and fast, but can generate lobe contribution estimates
that are very small in some cases (e.g., rough materials with a slight specular
highlight or an object that is barely semitransparent). Low-probability
estimates cause certain lobes to be undersampled and can manifest as a “salt
and pepper” pattern in the image. Dividing by small probability values
introduces numerical precision issues and may also introduce firefly artifacts.
To solve these problems, we clamp p_specular to the range [0.1, 0.9]
whenever it is not equal to zero or one. This ensures a reasonable minimum
sampling frequency for each contributing BRDF lobe when the estimates are
very small.
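The clamped selection probability can be sketched as follows (a Python illustration of Equation 14.1 plus the clamp; the contribution estimates are taken as given inputs here):

```python
def specular_probability(e_specular, e_diffuse):
    """Equation 14.1 with the clamp to [0.1, 0.9] described in the text."""
    total = e_specular + e_diffuse
    if total == 0.0:
        return 0.0                      # nothing to sample
    p = e_specular / total
    if p == 0.0 or p == 1.0:
        return p                        # single-lobe materials are not clamped
    return min(0.9, max(0.1, p))        # guarantee a minimum sampling rate
```

Because throughput is divided by the same (clamped) probability when a lobe is selected, the estimator stays unbiased; the clamp only trades a little extra variance in one lobe for far fewer undersampling artifacts in the other.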
Listing 14-8. Using a Russian roulette approach to decide if paths should terminate.
SURFACE NORMALS
Since ray segments along a path may intersect a surface on either side—not
only the side facing the camera—it is best to disable backface culling when
tracing rays.
Initialization...
for bounce ∈ {1 . . . MAX_BOUNCES} do
    Trace ray and evaluate direct lighting...
    russianRoulette = luminance(throughput);
    if russianRoulette < rand() then
        break;
    else
        throughput /= russianRoulette;
return radiance;
Figure 14-11. The improved path tracing loop with Russian roulette path termination.
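The termination test in Figure 14-11 can be sketched as a small helper. Note that the figure uses the raw throughput luminance as the survival probability; the sketch below clamps it to one so it is a valid probability, a common refinement and our own addition:

```python
def russian_roulette(throughput_luminance, u):
    """Decide whether a path survives and how to reweight it.

    Surviving paths are divided by the survival probability p, keeping the
    estimator unbiased: p * (1/p) + (1 - p) * 0 == 1.
    """
    p = min(1.0, throughput_luminance)  # clamp to a valid probability
    if u >= p:
        return False, 0.0               # terminate the path
    return True, 1.0 / p                # scale factor applied to throughput
```

Dim paths (low luminance) are killed early with high probability, while the reweighting of the survivors preserves the expected value of the image.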
float3 V = -ray.Direction;
if (dot(geometryNormal, V) < 0.0f) geometryNormal *= -1;
if (dot(geometryNormal, shadingNormal) < 0.0f) shadingNormal *= -1;
Figure 14-12. The offsetRay method in our code sample calculates a new ray origin P′
originating at the ray/surface intersection point P to minimize self-intersections and light leaks
caused by insufficient numerical precision.
Since point lights are just a position in space without any area, the
contribution of the point light increases to infinity as the distance between the
light and a surface approaches zero. This creates a singularity that causes
invalid values (NaNs) and firefly artifacts that are important to mitigate in a
path tracer. We use a method by Yuksel to avoid this obstacle when evaluating
point lights [20]. Listing 14-10 shows how point light evaluation is
implemented in our sample code. Our code sample supports point and
directional lights, but it is straightforward to implement more light types as
necessary. For debugging use, we automatically place one directional light
source (the sun) and one point light source attached to the camera (a
headlight) in the scene. To improve performance when evaluating lights, we
Initialization...
for bounce ∈ {1 . . . MAX_BOUNCES} do
    Trace(ray);
    if hit then
        lightIntensity, lightPdf = SampleLight();
        radiance += throughput * EvalBrdf() * lightIntensity *
            CastShadowRay() / lightPdf;
        if bounce == MAX_BOUNCES then
            break;
        brdf, brdfPdf, ray = SampleBrdf();
        throughput *= brdf / brdfPdf;
    else
        radiance += throughput * skyColor;
        break;
    Russian roulette ray termination...
return radiance;
Figure 14-13. The path tracing loop modified to support virtual lights.
Figure 14-14. An illustration of the path tracing loop with virtual lights. A primary ray (yellow) is
generated and cast. At every surface intersection, a shadow ray (green) is cast toward selected
light sources in addition to the BRDF lobe ray (blue).
create a dedicated DXR hit group for shadow ray tracing—as these rays only
need to determine visibility—and skip the closest-hit shader in the ray tracing
pipeline.
Listing 14-10. Sampling a light and evaluating its contribution at a surface hit.

Light light;
float lightWeight;
sampleLight(rngState, hitPosition, geometryNormal, light, lightWeight);

// Prepare data needed to evaluate the light.
float lightDistance = distance(light.position, hitPosition);
float3 L = normalize(light.position - hitPosition);

// Cast shadow ray toward the light to evaluate its visibility.
if (castShadowRay(hitPosition, geometryNormal, L, lightDistance))
{
    // Evaluate BRDF and accumulate contribution from sampled light.
    radiance += throughput * evalCombinedBRDF(shadingNormal, L, V, material)
        * (getLightIntensityAtPoint(light, lightDistance) * lightWeight);
}
SELECTING LIGHTS
Shown in Listing 14-11, a simple solution is to randomly select one light from
the list of lights using a uniform distribution, with a probability of selection
equal to 1/N, where N is the number of lights (the lightWeight variable in
Listing 14-10 is equal to the reciprocal of this PDF, analogous to the way we
handle BRDF sampling). This approach is straightforward but produces noisy
results because it does not consider the brightness of or distance to the
selected light. On the opposite end of the spectrum, an expensive solution
may evaluate the surface’s BRDF for all lights to establish the probability of
selecting each light, and then use importance sampling to select lights based
on their actual contribution.
Listing 14-11. Random selection of a light from all lights with a uniform distribution.
uint randomLightIndex =
    min(gData.lightCount - 1, uint(rand(rngState) * gData.lightCount));
light = gData.lights[randomLightIndex];
lightSampleWeight = float(gData.lightCount);
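The uniform strategy of Listing 14-11 is easy to verify as unbiased: weighting the selected light's contribution by N makes the average over many samples converge to the sum over all lights. A Python sketch (names are ours; lights are reduced to scalar intensities):

```python
import random

def sample_light_uniform(lights, rng):
    """Pick one light uniformly; the returned weight N is the reciprocal of
    the 1/N selection PDF, mirroring Listing 14-11."""
    n = len(lights)
    index = min(n - 1, int(rng.random() * n))  # min() guards the u == 1.0 edge
    return lights[index], float(n)
```

In expectation, intensity * weight equals the total contribution of all lights, which is exactly what summing over every light would produce, just with more variance.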
TRANSPARENCY
Listing 14-12. Implementation of the alpha test transparency using the any-hit shader. Note the
relatively large number of operations performed on every tested surface while searching for the
closest hit.
14.4 CONCLUSION
The path tracer described in this chapter is relatively simple, but suitable for
rendering high-quality references for games and as an experimentation
platform for the development of real-time effects. It has been used during the
development of an unreleased AAA game and has proved to be a helpful tool.
The code sample accompanying this chapter is freely available, contains a
functional path tracer, and can be extended and integrated into game engines
without any restrictions.
There are many ways this path tracer can be improved, including support for
refraction, volumetrics, and subsurface scattering as well as denoising
techniques and performance optimizations to produce noise-free results
while running at real-time frame rates. The accompanying chapters in this
book, its previous volume [6], and Physically Based Rendering [11] are
excellent sources of inspiration for improvements as well. We recommend the
blog posts “Effectively Integrating RTX Ray Tracing into a Real-Time
Rendering Engine” [14] and “Optimizing VK/VKR and DX12/DXR Applications
Using Nsight Graphics: GPU Trace Advanced Mode Metrics” [3] for guidance
on the integration of real-time ray tracing into existing engines and on
optimizing performance.
REFERENCES
[1] Akenine-Möller, T., Crassin, C., Boksansky, J., Belcour, L., Panteleev, A., and Wright, O.
Improved Shader and Texture Level of Detail Using Ray Cones. Journal of Computer
Graphics Techniques, 10(1):1–24, 2021. http://jcgt.org/published/0010/01/01/.
[2] Akenine-Möller, T., Nilsson, J., Andersson, M., Barré-Brisebois, C., Toth, R., and
Karras, T. Texture Level of Detail Strategies for Real-Time Ray Tracing. In E. Haines and
T. Akenine-Möller, editors, Ray Tracing Gems, chapter 20, pages 321–345. Apress, 2019.
[3] Bavoil, L. Optimizing VK/VKR and DX12/DXR Applications Using Nsight Graphics: GPU
Trace Advanced Mode Metrics. NVIDIA Developer Blog,
https://developer.nvidia.com/blog/optimizing-vk-vkr-and-dx12-dxr-applications-
using-nsight-graphics-gpu-trace-advanced-mode-metrics, March 30, 2020. Accessed
March 17, 2021.
[6] Haines, E. and Akenine-Möller, T., editors. Ray Tracing Gems: High-Quality and
Real-Time Rendering with DXR and Other APIs. Apress, 2019.
[7] Jarzynski, M. and Olano, M. Hash Functions for GPU Rendering. Journal of Computer
Graphics Techniques, 9(3):20–38, 2020. http://jcgt.org/published/0009/03/02/.
[9] Kahn, H. and Marshall, A. Methods of Reducing Sample Size in Monte Carlo
Computations. Journal of the Operations Research Society of America, 1(5):263–278, 1953.
DOI: 10.1287/opre.1.5.263.
[11] Pharr, M., Jakob, W., and Humphreys, G. Physically Based Rendering: From Theory to
Implementation. Morgan Kaufmann, third edition, 2016.
[14] Sjoholm, J. Effectively Integrating RTX Ray Tracing into a Real-Time Rendering Engine.
NVIDIA Developer Blog, https://developer.nvidia.com/blog/effectively-integrating-rtx-
ray-tracing-real-time-rendering-engine, October 29, 2018.
[15] Talbot, J., Cline, D., and Egbert, P. Importance Resampling for Global Illumination. In
Eurographics Symposium on Rendering, pages 139–146, 2005. DOI:
10.2312/EGWR/EGSR05/139-146.
[16] Wächter, C. and Binder, N. A Fast and Robust Method for Avoiding Self-Intersection. In
E. Haines and T. Akenine-Möller, editors, Ray Tracing Gems, chapter 6, pages 77–85.
Apress, 2019.
[18] Wyman, C., Hargreaves, S., Shirley, P., and Barré-Brisebois, C. Introduction to DirectX
Raytracing. ACM SIGGRAPH 2018 Courses, http://intro-to-dxr.cwyman.org, 2018.
[20] Yuksel, C. Point Light Attenuation Without Singularity. In ACM SIGGRAPH 2020 Talks,
18:1–18:2, 2020. DOI: 10.1145/3388767.3407364.
Open Access This chapter is licensed under the terms of the Creative Commons
Attribution-NonCommercial-NoDerivatives 4.0 International License
(http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits any
noncommercial use, sharing, distribution and reproduction in any medium or format, as long as you give
appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license
and indicate if you modified the licensed material. You do not have permission under this license to share
adapted material derived from this chapter or parts of it.
The images or other third party material in this chapter are included in the chapter’s Creative Commons
license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s
Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the
permitted use, you will need to obtain permission directly from the copyright holder.