Games101 Ray Tracing: A Walkthrough of the Code Framework

Published: 2023-03-24 21:15:30 · Author: Dba_sys

1 Introduction

Unlike the previous four assignments, this one is a complete overhaul. The framework has been rebuilt from scratch with its own math library, and it makes heavy use of C++17 features.

Armed with the ray-tracing algorithm from lecture plus what you already know from rasterization, the ray-triangle intersection test rayTriangleIntersect() is relatively approachable. Generating the initial camera rays, however, is harder: the answer is only two lines of code, but the difficulty lies in the conventions that ray tracing assumes.

Fortunately, this assignment's framework is essentially identical to the sample code at https://www.scratchapixel.com/ray-tracing-generating-camera-rays. scratchapixel happens to be a computer-graphics teaching site, and a very high-quality one at that.

So this page, https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-generating-camera-rays/definition-ray.html, will be an important reference for understanding ray tracing, and it helps a great deal with the assignment as well.

Finally, the code you actually write for this assignment is fewer than 10 lines, but the framework bundles quite a bit of fairly deep material:

  1. A lot of C++17 syntax
    • [[nodiscard]]
    • <optional>
    • ....
  2. The math derivations and physical laws behind reflection and refraction.
  3. Recursive functions.

This post gives a walkthrough of each.

2 main.cpp

#include "Scene.hpp"
#include "Sphere.hpp"
#include "Triangle.hpp"
#include "Light.hpp"
#include "Renderer.hpp"

// In the main function of the program, we create the scene (create objects and lights)
// as well as set the options for the render (image width and height, maximum recursion
// depth, field-of-view, etc.). We then call the render function().
int main()
{
    Scene scene(1280, 960);

    auto sph1 = std::make_unique<Sphere>(Vector3f(-1, 0, -12), 2);
    sph1->materialType = DIFFUSE_AND_GLOSSY;
    sph1->diffuseColor = Vector3f(0.6, 0.7, 0.8);

    auto sph2 = std::make_unique<Sphere>(Vector3f(0.5, -0.5, -8), 1.5);
    sph2->ior = 1.5;
    sph2->materialType = REFLECTION_AND_REFRACTION;

    scene.Add(std::move(sph1));
    scene.Add(std::move(sph2));

    Vector3f verts[4] = {{-5,-3,-6}, {5,-3,-6}, {5,-3,-16}, {-5,-3,-16}};
    uint32_t vertIndex[6] = {0, 1, 3, 1, 2, 3};
    Vector2f st[4] = {{0, 0}, {1, 0}, {1, 1}, {0, 1}};
    auto mesh = std::make_unique<MeshTriangle>(verts, vertIndex, 2, st);
    mesh->materialType = DIFFUSE_AND_GLOSSY;

    scene.Add(std::move(mesh));
    scene.Add(std::make_unique<Light>(Vector3f(-20, 70, 20), 0.5));
    scene.Add(std::make_unique<Light>(Vector3f(30, 50, -12), 0.5));

    Renderer r;
    r.Render(scene);

    return 0;
}

As you can see, main adds two spheres (Sphere) to the scene: one is set up as a transparent bead, the other as a rough diffuse surface. These are the two objects you can see in the render.

It also sets up a MeshTriangle. Judging from its vertices, every y = -3, i.e. the floor lies in a single plane. But since ray tracing inherently produces a perspective projection, the floor appears to tilt upward. And although it looks like a grid of small squares about 10 on a side, it is really a single large quad whose colors come from interpolation; we won't go into the details here.

In summary:

SCENE: 4 objects
1. sph1  DIFFUSE_AND_GLOSSY
2. sph2  REFLECTION_AND_REFRACTION
3. MeshTriangle1 {-5,-3,-6}, {5,-3,-6}, {-5,-3,-16}
4. MeshTriangle2 {5,-3,-6}, {5,-3,-16}, {-5,-3,-16}

Note scene.Add(std::make_unique<Light>(Vector3f(-20, 70, 20), 0.5));: the light intensity here is 0.5, whereas assignment 3 used intensity = 500. The intensity is tiny and the lights sit far from the objects, and although render.cpp uses a Phong-style illumination model, notice that it never divides by the distance. As a result the whole image has roughly uniform brightness with no obvious shading falloff. Then again, precisely because the lights are far away, the intensity arriving at every point is nearly the same, so skipping the distance division is acceptable here.

3 render.cpp

render.cpp's job is to shoot rays, test for intersections, and compute colors.

3.1 Reflection (reflect) and refraction (refract)

inline float deg2rad(const float &deg)
{ return deg * M_PI/180.0; }

An inline function: at compile time the compiler substitutes the function body directly at the call site, so at run time execution proceeds sequentially rather than jumping into a function. This keeps the readability and convenience of a function while also improving execution speed. [1]
https://www.geeksforgeeks.org/inline-functions-cpp/

That said, with modern compiler optimizations, a function body of only one or two lines will be inlined whether you ask for it or not, so the keyword is used less and less.

When computing reflection and refraction, one convention differs from the rasterized lighting model: the incident ray's direction runs from the ray's origin toward the hit point on the surface, so its dot product with the normal is negative.

Both functions below follow this convention.

// Compute reflection direction
Vector3f reflect(const Vector3f &I, const Vector3f &N)
{
    return I - 2 * dotProduct(I, N) * N;
}
// Compute refraction direction using Snell's law
//
// We need to handle with care the two possible situations:
//
//    - When the ray is inside the object
//
//    - When the ray is outside.
//
// If the ray is outside, you need to make cosi positive cosi = -N.I
//
// If the ray is inside, you need to invert the refractive indices and negate the normal N
Vector3f refract(const Vector3f &I, const Vector3f &N, const float &ior)
{
    float cosi = clamp(-1, 1, dotProduct(I, N));
    float etai = 1, etat = ior;
    Vector3f n = N;
    if (cosi < 0) { cosi = -cosi; } else { std::swap(etai, etat); n= -N; }
    float eta = etai / etat;
    float k = 1 - eta * eta * (1 - cosi * cosi);
    return k < 0 ? 0 : eta * I + (eta * cosi - sqrtf(k)) * n;
}

\[\begin{align} r &= r_{\parallel} + r_{\bot}\\ &= \frac{\eta_{1}}{\eta_{2}}(i + \cos\theta_{i}\,n) - \sqrt{1 - \left(\frac{\eta_{1}}{\eta_{2}}\right)^2(1 - \cos^2\theta_{i})}\,n\\ &= \frac{\eta_{1}}{\eta_{2}}i + \left(\frac{\eta_{1}}{\eta_{2}}\cos\theta_{i} - \sqrt{1 - \left(\frac{\eta_{1}}{\eta_{2}}\right)^2(1 - \cos^2\theta_{i})}\right)n \end{align}\]

By the trigonometric identity:

\[\sin^2\theta_{i} + \cos^2\theta_{i} = 1 \]

\[r = \frac{\eta_{1}}{\eta_{2}}i + \left(\frac{\eta_{1}}{\eta_{2}}\cos\theta_{i} - \sqrt{1 - \left(\frac{\eta_{1}}{\eta_{2}}\right)^2 \sin^2\theta_{i}}\right)n \]

Total internal reflection occurs when light travels from a medium with a higher refractive index into one with a lower index: once the incident angle exceeds a critical angle, the refracted ray disappears. At exactly the critical angle, the refraction angle is \(\theta_{r} = \pi/2\). From \(\eta_1\sin\theta_i = \eta_2\sin\theta_r\) we get \(\sin\theta_i = \frac{\eta_2}{\eta_1}\), at which point \(k = 0\). As \(\theta_i \in [0, \pi/2]\) increases further, the growing \(\sin\theta_i\) drives k negative, which matches the physics of total internal reflection.

Note that when the ray enters the object from outside, \(\cos\theta_{i}\) is positive but \(i \cdot n\) is negative. In the code, \(\cos\theta_{i} = i \cdot n\), so the sign of \(\cos\theta_{i}\) is checked to make the code agree with the formula above.


An alternative formulation:

\[\cos\theta_{i} = -i \cdot n \]

\[r = \frac{\eta_{1}}{\eta_{2}}i - (\frac{\eta_{1}}{\eta_{2}}(i \cdot n) + \sqrt{1 - (\frac{\eta_{1}}{\eta_{2}})^2(1 - (i \cdot n)^2)})n \]

[2] Derivation of the reflection and refraction vectors, by CJT (M.S. in Computer Science, Rochester Institute of Technology)

[3] Stanford CS148, 2006-11-13

3.2 Fresnel equation

// [comment]
// Compute Fresnel equation
//
// \param I is the incident view direction
//
// \param N is the normal at the intersection point
//
// \param ior is the material refractive index
// [/comment]
float fresnel(const Vector3f &I, const Vector3f &N, const float &ior)
{
    float cosi = clamp(-1, 1, dotProduct(I, N));
    float etai = 1, etat = ior;
    if (cosi > 0) {  std::swap(etai, etat); }
    // Compute sini using Snell's law
    float sint = etai / etat * sqrtf(std::max(0.f, 1 - cosi * cosi));
    // Total internal reflection
    if (sint >= 1) {
        return 1;
    }
    else {
        float cost = sqrtf(std::max(0.f, 1 - sint * sint));
        cosi = fabsf(cosi);
        float Rs = ((etat * cosi) - (etai * cost)) / ((etat * cosi) + (etai * cost));
        float Rp = ((etai * cosi) - (etat * cost)) / ((etai * cosi) + (etat * cost));
        return (Rs * Rs + Rp * Rp) / 2;
    }
    // As a consequence of the conservation of energy, transmittance is given by:
    // kt = 1 - kr;
}

An interesting question: given that we can already compute reflection and refraction directions, why do we still need the Fresnel equation? The problem appears with REFLECTION_AND_REFRACTION: one incoming ray splits into two, and since energy is conserved, the two outgoing rays must split the incoming ray's energy in some ratio.

The Fresnel equation gives the fraction of reflected light as a function of the incidence angle, e.g. kr = 0.7 or 0.9.

This matches everyday experience. Roughly: at small incidence angles, the refracted share is larger (by reversibility of light paths: looking straight down, you see underwater objects clearly); at large incidence angles, the reflected share is larger (again by reversibility: at a glancing angle the water surface mirrors what is above it).

Fresnel Equation (菲涅尔方程), by 王江荣

3.3 Recursive ray tracing


// [comment]
// Returns a hit_payload wrapped in std::optional if the ray intersects an
// object, and an empty optional otherwise.
//
// \param orig is the ray origin
// \param dir is the ray direction
// \param objects is the list of objects the scene contains
//
// The returned payload stores tNear (the distance to the closest intersected
// object), index (the index of the intersected triangle if the intersected
// object is a mesh), uv (the u and v barycentric coordinates of the
// intersection point), and hit_obj (a pointer to the intersected object,
// used to retrieve material information, etc.)
// [/comment]
std::optional<hit_payload> trace(
        const Vector3f &orig, const Vector3f &dir,
        const std::vector<std::unique_ptr<Object> > &objects)
{
    float tNear = kInfinity;
    std::optional<hit_payload> payload;
    for (const auto & object : objects)
    {
        float tNearK = kInfinity;
        uint32_t indexK;
        Vector2f uvK;
        if (object->intersect(orig, dir, tNearK, indexK, uvK) && tNearK < tNear)
        {
            payload.emplace();
            payload->hit_obj = object.get();
            payload->tNear = tNearK;
            payload->index = indexK;
            payload->uv = uvK;
            tNear = tNearK;
        }
    }

    return payload;
}

// [comment]
// Implementation of the Whitted-style light transport algorithm (E [S*] (D|G) L)
//
// This function is the function that compute the color at the intersection point
// of a ray defined by a position and a direction. Note that this function is recursive (it calls itself).
//
// If the material of the intersected object is either reflective or reflective and refractive,
// then we compute the reflection/refraction direction and cast two new rays into the scene
// by calling the castRay() function recursively. When the surface is transparent, we mix
// the reflection and refraction color using the result of the fresnel equations (it computes
// the amount of reflection and refraction depending on the surface normal, incident view direction
// and surface refractive index).
//
// If the surface is diffuse/glossy we use the Phong illumination model to compute the color
// at the intersection point.
// [/comment]
Vector3f castRay(
        const Vector3f &orig, const Vector3f &dir, const Scene& scene,
        int depth)
{
    if (depth > scene.maxDepth) {
        return Vector3f(0.0,0.0,0.0);
    }

    Vector3f hitColor = scene.backgroundColor;
    if (auto payload = trace(orig, dir, scene.get_objects()); payload)
    {
        Vector3f hitPoint = orig + dir * payload->tNear;
        Vector3f N; // normal
        Vector2f st; // st coordinates
        payload->hit_obj->getSurfaceProperties(hitPoint, dir, payload->index, payload->uv, N, st);
        switch (payload->hit_obj->materialType) {
            case REFLECTION_AND_REFRACTION:
            {
                Vector3f reflectionDirection = normalize(reflect(dir, N));
                Vector3f refractionDirection = normalize(refract(dir, N, payload->hit_obj->ior));
                Vector3f reflectionRayOrig = (dotProduct(reflectionDirection, N) < 0) ?
                                             hitPoint - N * scene.epsilon :
                                             hitPoint + N * scene.epsilon;
                Vector3f refractionRayOrig = (dotProduct(refractionDirection, N) < 0) ?
                                             hitPoint - N * scene.epsilon :
                                             hitPoint + N * scene.epsilon;
                Vector3f reflectionColor = castRay(reflectionRayOrig, reflectionDirection, scene, depth + 1);
                Vector3f refractionColor = castRay(refractionRayOrig, refractionDirection, scene, depth + 1);
                float kr = fresnel(dir, N, payload->hit_obj->ior);
                hitColor = reflectionColor * kr + refractionColor * (1 - kr);
                break;
            }
            case REFLECTION:
            {
                float kr = fresnel(dir, N, payload->hit_obj->ior);
                Vector3f reflectionDirection = reflect(dir, N);
                Vector3f reflectionRayOrig = (dotProduct(reflectionDirection, N) < 0) ?
                                             hitPoint + N * scene.epsilon :
                                             hitPoint - N * scene.epsilon;
                hitColor = castRay(reflectionRayOrig, reflectionDirection, scene, depth + 1) * kr;
                break;
            }
            default:
            {
                // [comment]
                // We use the Phong illumination model in the default case. The Phong model
                // is composed of a diffuse and a specular reflection component.
                // [/comment]
                Vector3f lightAmt = 0, specularColor = 0;
                Vector3f shadowPointOrig = (dotProduct(dir, N) < 0) ?
                                           hitPoint + N * scene.epsilon :
                                           hitPoint - N * scene.epsilon;
                // [comment]
                // Loop over all lights in the scene and sum their contribution up
                // We also apply the lambert cosine law
                // [/comment]
                for (auto& light : scene.get_lights()) {
                    Vector3f lightDir = light->position - hitPoint;
                    // square of the distance between hitPoint and the light
                    float lightDistance2 = dotProduct(lightDir, lightDir);
                    lightDir = normalize(lightDir);
                    float LdotN = std::max(0.f, dotProduct(lightDir, N));
                    // is the point in shadow, and is the nearest occluding object closer to the object than the light itself?
                    auto shadow_res = trace(shadowPointOrig, lightDir, scene.get_objects());
                    bool inShadow = shadow_res && (shadow_res->tNear * shadow_res->tNear < lightDistance2);

                    lightAmt += inShadow ? 0 : light->intensity * LdotN;
                    Vector3f reflectionDirection = reflect(-lightDir, N);

                    specularColor += powf(std::max(0.f, -dotProduct(reflectionDirection, dir)),
                        payload->hit_obj->specularExponent) * light->intensity;
                }

                hitColor = lightAmt * payload->hit_obj->evalDiffuseColor(st) * payload->hit_obj->Kd + specularColor * payload->hit_obj->Ks;
                break;
            }
        }
    }

    return hitColor;
}

A disclaimer up front: implementing this recursive function yourself is, as the meme goes, a job for a "brain running at 105°C". In plain words: it is really, really hard, so don't worry if you can't fully follow it.

Hard as recursion is to write, it is actually easy to read, because there is a legendary "heretical" method that lets you understand it perfectly.

Vector3f castRay(const Vector3f &orig, const Vector3f &dir, const Scene& scene, int depth);

First, the heretical reading of this declaration: "I shoot out a ray (very fast!), and you tell me what color it produces in this scene. If it hits nothing, it's the sky color~ If it hits the glass bead, it's color A~ If it hits the glass, it's color B~ If it hits anything else, it's color C~"

Is there any recursion in that reading? Where does the function call itself? Nowhere: it reads like an ordinary function call. Of course, colors A and B are in fact obtained by recursive calls.

The two most important parts of designing a recursive function are abstracting a unified operation and choosing a termination condition.
The unified operation: casting a ray.
The termination conditions: the ray hits nothing, or the bounce depth exceeds 5.

tips: don't ask why that paragraph is written in such a cutesy tone — it's the influence of 小彭老师.

......

4 On precision

Many people don't realize that a computer (or a calculator) doesn't always compute exactly, especially with floating-point numbers.

That is why we routinely allow a tolerance of 1e-6, i.e. one part in a million.

If your assignment 5 renders the floor unshaded, make sure the ray-triangle intersection test tolerates a deviation as large as 1e-6 in every variable (especially the three you update); otherwise you will get serious artifacts where the hypotenuses of the two floor triangles meet.

Python 3.8.8 (default, Apr 13 2021, 15:08:03) [MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license()" for more information.
>>> 0.1+0.2
0.30000000000000004

5 Camera rotation in ray tracing

Finally, we want to be able to render an image of the scene from any particular point of view. After you have moved the camera from its original position (centered at the origin of the world coordinate system and aligned along the negative z-axis) you can express the translation and rotation values of the camera with a 4x4 matrix. Usually, this matrix is called the camera-to-world matrix (and its inverse is called the world-to-camera matrix). If we apply this camera-to-world matrix to our points O and P then the vector ||O'P'|| (where O' is the point O and P' is the point P transformed by the camera-to-world matrix) represents the normalized direction of the ray in world space (figure 8). Applying the camera-to-world transform to O and P transforms these two points from camera space to world space. Another option is to compute the ray direction while the camera is in its default position (the vector OP), and apply the camera-to-world matrix to this vector.

Note how the camera coordinate system moves with the camera. Our pseudo code can easily be modified to account for camera transformation (rotation and translation, scaling a camera are not particularly recommended):

float imageAspectRatio = imageWidth / imageHeight; // assuming width > height
float Px = (2 * ((x + 0.5) / imageWidth) - 1) * tan(fov / 2 * M_PI / 180) * imageAspectRatio;
float Py = (1 - 2 * ((y + 0.5) / imageHeight)) * tan(fov / 2 * M_PI / 180);
Vec3f rayOrigin = Point3(0, 0, 0);
Matrix44f cameraToWorld;
cameraToWorld.set(...); // set matrix
Vec3f rayOriginWorld, rayPWorld;
cameraToWorld.multVectMatrix(rayOrigin, rayOriginWorld);
cameraToWorld.multVectMatrix(Vec3f(Px, Py, -1), rayPWorld);
Vec3f rayDirection = rayPWorld - rayOriginWorld;
rayDirection.normalize(); // it's a direction so don't forget to normalize

To compute the final image we will need to create a ray for each pixel of the frame using the method we have just described and test if any one of these rays intersects the geometry from the scene. Unfortunately, we are not to a point yet in this series of lessons where we can compute the intersection between rays and objects but this will be the topic of the next two lessons.

The passage above is excerpted from scratchapixel's generating-camera-rays article.