Homework 3: Pipeline and Shading
Assignment Requirements
In this programming assignment, we go further in simulating modern graphics techniques. The code adds an Object Loader (for loading 3D models), a Vertex Shader, and a Fragment Shader, and supports texture mapping.
In this assignment, your tasks are:

- Modify rasterize_triangle(const Triangle& t) in rasterizer.cpp: implement an interpolation algorithm similar to Homework 2, interpolating normals, colors, and texture colors.
- Modify get_projection_matrix() in main.cpp: fill in the projection matrix you implemented in the previous assignment; you can then run ./Rasterizer output.png normal to check the normal-vector visualization.
- Modify phong_fragment_shader() in main.cpp: implement the Blinn-Phong model to compute the fragment color.
- Modify texture_fragment_shader() in main.cpp: building on Blinn-Phong, use the texture color as kd in the formula to implement the texture fragment shader.
- Modify bump_fragment_shader() in main.cpp: building on Blinn-Phong, read the comments in that function carefully and implement bump mapping.
- Modify displacement_fragment_shader() in main.cpp: building on bump mapping, implement displacement mapping.
rasterize_triangle(const Triangle& t)
Depth Interpolation
First, auto [alpha, beta, gamma] = computeBarycentric2D(i + 0.5, j + 0.5, t.v); gives the screen-space barycentric coordinates $\alpha, \beta, \gamma$. As mentioned before, barycentric coordinates are not preserved under perspective projection, so attributes defined in 3D space should be interpolated with barycentric coordinates computed in 3D space.
Suppose the three vertices of the projected (2D) triangle have depths $Z_1', Z_2', Z_3'$, and a point inside it has screen-space barycentric coordinates $(\alpha', \beta', \gamma')$. In 3D space, the same triangle's vertices have depths $Z_1, Z_2, Z_3$, and the point has barycentric coordinates $(\alpha, \beta, \gamma)$. How do we obtain $(\alpha, \beta, \gamma)$ from $(\alpha', \beta', \gamma')$, or recover the true depth $Z$?

The derivation below yields the true depth $Z$. Start from

$$Z=\alpha Z_1+\beta Z_2+\gamma Z_3,\qquad Z'=\alpha' Z_1'+\beta' Z_2'+\gamma' Z_3'$$

Perspective projection relates the two sets of barycentric coordinates: each screen-space weight is the view-space weight scaled by the depth ratio,

$$\alpha=\frac{Z\alpha'}{Z_1},\qquad \beta=\frac{Z\beta'}{Z_2},\qquad \gamma=\frac{Z\gamma'}{Z_3}$$

$$\because\quad \alpha+\beta+\gamma=1\qquad\therefore\quad Z\left(\frac{\alpha'}{Z_1}+\frac{\beta'}{Z_2}+\frac{\gamma'}{Z_3}\right)=1\qquad\therefore\quad Z=\frac{1}{\dfrac{\alpha'}{Z_1}+\dfrac{\beta'}{Z_2}+\dfrac{\gamma'}{Z_3}}$$

Similarly, for any attribute $V$:

$$\begin{aligned}V&=\alpha V_1+\beta V_2+\gamma V_3\\&=\frac{Z\alpha'}{Z_1}V_1+\frac{Z\beta'}{Z_2}V_2+\frac{Z\gamma'}{Z_3}V_3\\&=\left(\frac{\alpha'}{Z_1}V_1+\frac{\beta'}{Z_2}V_2+\frac{\gamma'}{Z_3}V_3\right)\bigg/\left(\frac{\alpha'}{Z_1}+\frac{\beta'}{Z_2}+\frac{\gamma'}{Z_3}\right)\end{aligned}$$

Now let's look at the code provided by the framework:
float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w()
+ gamma * v[2].z() / v[2].w();
zp *= Z;
In this code, Z is the true depth: since the last row of the perspective projection matrix is $[0,0,1,0]$, the $w$ component of a projected point equals the $z$ coordinate of the point in 3D space. The zp *= Z step seems intended to compute the true depth, i.e., to substitute $Z$ for $V$ in the formula above; but why does it use the post-projection $z$ coordinate? Moreover, according to the following code,
std::array<Vector4f, 3> Triangle::toVector4() const
{
std::array<Vector4f, 3> res;
std::transform(std::begin(v), std::end(v), res.begin(), [](auto& vec) { return Vector4f(vec.x(), vec.y(), vec.z(), 1.f); });
return res;
}
the $w$ of every triangle vertex after projection has already been reset to 1, so after all these operations the computed depth is

$$Z=\alpha' Z_1'+\beta' Z_2'+\gamma' Z_3'$$

exactly the same as the uncorrected computation; it is hard to see what this is trying to achieve.
The subsequent interpolated_color, interpolated_normal, interpolated_texcoords, and interpolated_shadingcoords are all approximated directly with the 2D screen-space barycentric coordinates.
Reference 1, Reference 2
Rasterization
The rasterization step is basically the same as in Homework 2. I originally wanted to try MSAA here, but found it had some problems.
void rst::rasterizer::rasterize_triangle(const Triangle& t, const std::array<Eigen::Vector3f, 3>& view_pos)
{
// TODO: From your HW3, get the triangle rasterization code.
auto v = t.toVector4();
// TODO : Find out the bounding box of current triangle.
// Compute the bounding box; note rounding down (floor) and up (ceil)
int xmin = std::floor(std::min(v[0].x(), std::min(v[1].x(), v[2].x())));
int xmax = std::ceil(std::max(v[0].x(), std::max(v[1].x(), v[2].x())));
int ymin = std::floor(std::min(v[0].y(), std::min(v[1].y(), v[2].y())));
int ymax = std::ceil(std::max(v[0].y(), std::max(v[1].y(), v[2].y())));
// iterate through the pixel and find if the current pixel is inside the triangle
for (int i = xmin; i <= xmax; ++i)
for (int j = ymin; j <= ymax; ++j) {
if (insideTriangle(i, j, t.v)) {
// TODO: Inside your rasterization loop:
// * v[i].w() is the vertex view space depth value z.
// * Z is interpolated view space depth for the current pixel
// * zp is depth between zNear and zFar, used for z-buffer
auto [alpha, beta, gamma] = computeBarycentric2D(i + 0.5, j + 0.5, t.v);
float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
zp *= Z;
if (zp < depth_buf[get_index(i, j)]) {
// TODO: Interpolate the attributes:
auto interpolated_color = interpolate(alpha, beta, gamma, t.color[0], t.color[1], t.color[2], 1);
auto interpolated_normal = interpolate(alpha, beta, gamma, t.normal[0], t.normal[1], t.normal[2], 1).normalized();
auto interpolated_texcoords = interpolate(alpha, beta, gamma, t.tex_coords[0], t.tex_coords[1], t.tex_coords[2], 1);
auto interpolated_shadingcoords = interpolate(alpha, beta, gamma, view_pos[0], view_pos[1], view_pos[2], 1);
fragment_shader_payload payload(interpolated_color, interpolated_normal.normalized(), interpolated_texcoords, texture ? &*texture : nullptr);
payload.view_pos = interpolated_shadingcoords;
auto pixel_color = fragment_shader(payload);
set_pixel(Vector2i(i, j), pixel_color);
depth_buf[get_index(i, j)] = zp;
}
}
}
}
Result:
phong_fragment_shader()
To implement the Blinn-Phong model for the fragment color, it suffices to compute the vectors $\mathbf{l}, \mathbf{n}, \mathbf{h}, \mathbf{v}$ and the distance $r$:
$$\begin{aligned} L &=L_{a}+L_{d}+L_{s} \\ &=k_{a} I_{a}+k_{d}\left(I / r^{2}\right) \max (0, \mathbf{n} \cdot \mathbf{l})+k_{s}\left(I / r^{2}\right) \max (0, \mathbf{n} \cdot \mathbf{h})^{p} \end{aligned}$$
Eigen::Vector3f phong_fragment_shader(const fragment_shader_payload& payload)
{
Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = payload.color;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);
auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};
float p = 150;
Eigen::Vector3f color = payload.color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;
Eigen::Vector3f result_color = {0, 0, 0};
for (auto& light : lights)
{
// TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
// components are. Then, accumulate that result on the *result_color* object.
Vector3f l = (light.position - point).normalized(),
n = normal.normalized(),
v = (eye_pos - point).normalized(),
h = (v + l).normalized(),
I = light.intensity;
float r2 = (light.position - point).dot(light.position - point);
Vector3f Ld = kd.cwiseProduct(I / r2) * std::max(0.0f, n.dot(l)),
Ls = ks.cwiseProduct(I / r2) * std::pow(std::max(0.0f, n.dot(h)), p);
result_color += (Ld + Ls);
}
result_color += ka.cwiseProduct(amb_light_intensity);
return result_color * 255.f;
}
Result:
texture_fragment_shader()
Use the getColor method of the texture in payload to fetch the texture color, then compute the lighting exactly as before.

What is payload? It should be a custom data structure holding the extra information needed for intersection or shading computations; a bare function call or the ray alone does not carry enough context to shade a point, so this extra information is passed along.

A minor issue: when building the project in Visual Studio and running output.png texture in Debug mode, you may hit the error "Microsoft C++ exception: cv::Exception":
Eigen::Vector3f getColor(float u, float v)
{
auto u_img = u * width;
auto v_img = (1 - v) * height;
auto color = image_data.at<cv::Vec3b>(v_img, u_img);
return Eigen::Vector3f(color[0], color[1], color[2]);
}
This is because v_img or u_img in the function above hits a boundary value; I don't know how to solve it. Strangely, running in Release mode skips right past the error.
Eigen::Vector3f texture_fragment_shader(const fragment_shader_payload& payload)
{
Eigen::Vector3f return_color = {0, 0, 0};
if (payload.texture)
{
// TODO: Get the texture value at the texture coordinates of the current fragment
return_color = payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y());
}
Eigen::Vector3f texture_color;
texture_color << return_color.x(), return_color.y(), return_color.z();
Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = texture_color / 255.f;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);
auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};
float p = 150;
Eigen::Vector3f color = texture_color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;
Eigen::Vector3f result_color = {0, 0, 0};
for (auto& light : lights)
{
// TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
// components are. Then, accumulate that result on the *result_color* object.
Vector3f l = (light.position - point).normalized(),
n = normal.normalized(),
v = (eye_pos - point).normalized(),
h = (v + l).normalized(),
I = light.intensity;
float r2 = (light.position - point).dot(light.position - point);
Vector3f Ld = kd.cwiseProduct(I / r2) * std::max(0.0f, n.dot(l)),
Ls = ks.cwiseProduct(I / r2) * std::pow(std::max(0.0f, n.dot(h)), p);
result_color += (Ld + Ls);
}
// Add the ambient term once, outside the per-light loop, consistent with phong_fragment_shader.
result_color += ka.cwiseProduct(amb_light_intensity);
return result_color * 255.f;
}
Result:
bump_fragment_shader()
This function implements bump/normal mapping: the texture encodes relative height changes of the surface, which perturb the normal at every pixel. The actual geometry does not change; only the visual appearance does.
- The initial surface normal is $n(p)=(0,0,1)$.
- Compute the partial differences of $p$:

$$\frac{dp}{du}=c_1\cdot[h(u+1)-h(u)],\qquad \frac{dp}{dv}=c_2\cdot[h(v+1)-h(v)]$$

- The perturbed normal is $n=\left(-\frac{dp}{du},\ -\frac{dp}{dv},\ 1\right).normalized()$.
For tangent space, see the reference; from the formulas above, we only need to compute dU and dV. As for why the step is 1/w and 1/h rather than 1: every time the texture coordinates u, v increase by 1, u_img and v_img increase by a whole width and height.
The function $h$ above is, in the code, the norm of the texture color, i.e. getColor().norm(), which serves as the height.
Eigen::Vector3f bump_fragment_shader(const fragment_shader_payload& payload)
{
Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = payload.color;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);
auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};
float p = 150;
Eigen::Vector3f color = payload.color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;
float kh = 0.2, kn = 0.1;
// TODO: Implement bump mapping here
// Let n = normal = (x, y, z)
// Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
// Vector b = n cross product t
// Matrix TBN = [t b n]
// dU = kh * kn * (h(u+1/w,v)-h(u,v))
// dV = kh * kn * (h(u,v+1/h)-h(u,v))
// Vector ln = (-dU, -dV, 1)
// Normal n = normalize(TBN * ln)
float x = normal.x(), y = normal.y(), z = normal.z();
Vector3f t(x * y / sqrt(x * x + z * z), sqrt(x * x + z * z), z * y / sqrt(x * x + z * z)),
b = normal.cross(t);
Matrix3f TBN;
TBN << t.x(), b.x(), normal.x(),
t.y(), b.y(), normal.y(),
t.z(), b.z(), normal.z();
float u = payload.tex_coords.x(), v = payload.tex_coords.y(),
h = payload.texture->height, w = payload.texture->width;
float dU = kh * kn * (payload.texture->getColor(u + 1.0 / w, v).norm() - payload.texture->getColor(u, v).norm()),
dV = kh * kn * (payload.texture->getColor(u, v + 1.0 / h).norm() - payload.texture->getColor(u, v).norm());
Vector3f ln(-dU, -dV, 1);
normal = TBN * ln;
Eigen::Vector3f result_color = {0, 0, 0};
result_color = normal.normalized();
return result_color * 255.f;
}
Result:
displacement_fragment_shader()
On top of bump mapping, this displaces the shading point along the normal and adds the lighting computation.
Eigen::Vector3f displacement_fragment_shader(const fragment_shader_payload& payload)
{
Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
Eigen::Vector3f kd = payload.color;
Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);
auto l1 = light{{20, 20, 20}, {500, 500, 500}};
auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
std::vector<light> lights = {l1, l2};
Eigen::Vector3f amb_light_intensity{10, 10, 10};
Eigen::Vector3f eye_pos{0, 0, 10};
float p = 150;
Eigen::Vector3f color = payload.color;
Eigen::Vector3f point = payload.view_pos;
Eigen::Vector3f normal = payload.normal;
float kh = 0.2, kn = 0.1;
// TODO: Implement displacement mapping here
// Let n = normal = (x, y, z)
// Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
// Vector b = n cross product t
// Matrix TBN = [t b n]
// dU = kh * kn * (h(u+1/w,v)-h(u,v))
// dV = kh * kn * (h(u,v+1/h)-h(u,v))
// Vector ln = (-dU, -dV, 1)
// Position p = p + kn * n * h(u,v)
// Normal n = normalize(TBN * ln)
float x = normal.x(), y = normal.y(), z = normal.z();
Vector3f t(x * y / sqrt(x * x + z * z), sqrt(x * x + z * z), z * y / sqrt(x * x + z * z)),
b = normal.cross(t);
Matrix3f TBN;
TBN << t.x(), b.x(), normal.x(),
t.y(), b.y(), normal.y(),
t.z(), b.z(), normal.z();
float u = payload.tex_coords.x(), v = payload.tex_coords.y(),
h = payload.texture->height, w = payload.texture->width;
float dU = kh * kn * (payload.texture->getColor(u + 1.0 / w, v).norm() - payload.texture->getColor(u, v).norm()),
dV = kh * kn * (payload.texture->getColor(u, v + 1.0 / h).norm() - payload.texture->getColor(u, v).norm());
Vector3f ln(-dU, -dV, 1);
point += (kn * normal * payload.texture->getColor(u, v).norm());
normal = TBN * ln;
Eigen::Vector3f result_color = {0, 0, 0};
for (auto& light : lights)
{
// TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular*
// components are. Then, accumulate that result on the *result_color* object.
Vector3f l = (light.position - point).normalized(),
n = normal.normalized(),
v = (eye_pos - point).normalized(),
h = (v + l).normalized(),
I = light.intensity;
float r2 = (light.position - point).dot(light.position - point);
Vector3f Ld = kd.cwiseProduct(I / r2) * std::max(0.0f, n.dot(l)),
Ls = ks.cwiseProduct(I / r2) * std::pow(std::max(0.0f, n.dot(h)), p);
result_color += (Ld + Ls);
}
// Add the ambient term once, outside the per-light loop, consistent with phong_fragment_shader.
result_color += ka.cwiseProduct(amb_light_intensity);
return result_color * 255.f;
}
Result:
Other Models
Just change the model path in the main function; some models will throw errors.
Bilinear Interpolation Sampling
Linear interpolation (1D)
$$\operatorname{lerp}\left(x, v_{0}, v_{1}\right)=v_{0}+x\left(v_{1}-v_{0}\right)$$

Two helper lerps:

$$u_{0}=\operatorname{lerp}\left(s, u_{00}, u_{10}\right),\qquad u_{1}=\operatorname{lerp}\left(s, u_{01}, u_{11}\right)$$

Final vertical lerp, to get the result:

$$f(x, y)=\operatorname{lerp}\left(t, u_{0}, u_{1}\right)$$

You only need to add the function Eigen::Vector3f getColorBilinear(float u, float v) to Texture.hpp:
Eigen::Vector3f getColorBilinear(float u, float v) {
float u_00 = int(u * width), v_00 = int((1 - v) * height),
u_01 = u_00 + 1, v_01 = v_00,
u_10 = u_00, v_10 = v_00 + 1,
u_11 = u_00 + 1, v_11 = v_00 + 1;
Eigen::Vector3f color_00, color_01, color_10, color_11, color_u0, color_u1, color;
color_00 = getColor(u_00 / width, 1 - v_00 / height);
color_01 = getColor(u_01 / width, 1 - v_01 / height);
color_10 = getColor(u_10 / width, 1 - v_10 / height);
color_11 = getColor(u_11 / width, 1 - v_11 / height);
color_u0 = color_00 + (color_01 - color_00) * (u * width - u_00);
color_u1 = color_10 + (color_11 - color_10) * (u * width - u_00);
color = color_u0 + (color_u1 - color_u0) * ((1 - v) * height - v_00);
return color;
}
Before bilinear interpolation:

After bilinear interpolation:

The transitions are clearly smoother.