Texture Mapping and Environment Mapping
Why Texture Mapping?
- Cheap way to enhance object appearance: Textures provide a cost-effective method for making objects look more complex and detailed without increasing the number of vertices and edges.
- Reduced computational cost: Using textures reduces the computational load compared to rendering highly detailed models.
Texture Mapping
- Texture mapping involves applying a 2D image onto the surface of a 3D object.
- Texture images are typically defined by coordinates S and T, which range from 0 to 1.
- The surface of a 3D object is a 2D area residing in a 3D space (with X, Y, and Z coordinates).
- The goal is to map texture elements (texels) to pixels on the screen.
- Texels are the elements of a texture image.
Texture Coordinates
- Texture coordinates are represented as S and T, with values between 0 and 1.
- The coordinates specify how the texture image is mapped onto the object's surface.
Example: Mapping a Texture onto a Sphere
- A sphere can be mapped with a 2D texture image by correlating points in 3D space to points on the texture.
- 3D points in space can be defined using polar angles:
- \theta (theta): the angle between the radius vector r and the z-axis.
- \phi (phi): the angle between the projection of r onto the xy-plane and the x-axis.
Equations for a Sphere
- Given radius r, the Cartesian coordinates can be defined as:
- z = r \cos(\theta)
- x = r \sin(\theta) \cos(\phi)
- y = r \sin(\theta) \sin(\phi)
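The spherical-to-Cartesian equations above can be sketched as a small Python helper (the function name is illustrative, not from the original notes):

```python
import math

def sphere_point(r, theta, phi):
    """Convert spherical angles (theta measured from the z-axis, phi in the
    xy-plane from the x-axis) to a Cartesian point on a sphere of radius r,
    matching the equations above."""
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return x, y, z
```

For example, theta = 0 gives the north pole (0, 0, r), and theta = pi/2 with phi = 0 gives the point (r, 0, 0) on the equator.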
Mapping Texture Coordinates to Sphere Points
- Establish a linear relationship between angles \theta and \phi and texture coordinates S and T.
- \phi ranges from 0 to 2\pi, and \theta ranges from 0 to \pi.
- The relationships are:
- S = \frac{\phi}{2\pi}
- T = \frac{\theta}{\pi}
- To find the correlation between 3D points and 2D texture coordinates, extract \phi and \theta from x, y, and z:
- \theta = \arccos(\frac{z}{r})
- \phi = \arctan(\frac{y}{x}) (in practice, atan2(y, x), which resolves the correct quadrant)
- Substituting these relations into the equations above:
- S = \frac{\arctan(\frac{y}{x})}{2\pi}
- T = \frac{\arccos(\frac{z}{r})}{\pi}
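A minimal Python sketch of this point-to-texture-coordinate mapping (function name is illustrative); atan2 is used instead of a bare arctan so the angle lands in the correct quadrant, and the result is wrapped into [0, 2*pi):

```python
import math

def sphere_to_st(x, y, z, r):
    """Map a point (x, y, z) on a sphere of radius r to texture
    coordinates (S, T) in [0, 1], per the relations above."""
    theta = math.acos(z / r)
    phi = math.atan2(y, x) % (2 * math.pi)  # wrap into [0, 2*pi)
    return phi / (2 * math.pi), theta / math.pi
```

For instance, the point (0, r, 0) on the equator gives S = 0.25, T = 0.5.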
Handling Texture Coordinates Outside the [0, 1] Range
- GL_REPEAT: Repeats the texture across the surface.
- GL_MIRRORED_REPEAT: Mirrors the texture at integer boundaries, which is useful for textures with contrast changes to avoid visible seams.
- GL_CLAMP_TO_EDGE: Clamps the texture coordinates to the edge, repeating the edge texels.
- GL_CLAMP_TO_BORDER: Assigns a default border color to coordinates outside the [0, 1] range.
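These wrap modes can be emulated on the CPU for a single coordinate; a sketch of the first three (GL_CLAMP_TO_BORDER just substitutes a border color, so it is omitted):

```python
def repeat(s):
    """GL_REPEAT: keep only the fractional part of the coordinate."""
    return s % 1.0

def mirrored_repeat(s):
    """GL_MIRRORED_REPEAT: reflect the coordinate at integer boundaries."""
    t = s % 2.0
    return 2.0 - t if t > 1.0 else t

def clamp_to_edge(s):
    """GL_CLAMP_TO_EDGE: pin the coordinate into [0, 1]."""
    return min(max(s, 0.0), 1.0)
```

For example, a coordinate of 1.25 becomes 0.25 under repeat, 0.75 under mirrored repeat, and 1.0 under clamp-to-edge.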
Defining Texture Maps
- Pelting: Unstitches and unfolds the 3D object's surface to create a 2D texture map; may introduce distortions for complex objects.
- Fast Textures: Defines texture maps by patching local textures together, which is more computationally efficient.
Forward and Inverse Mapping
- Forward Mapping: Starts from the texture space, maps the texture onto the object, and then projects the object onto the screen.
- Inverse Mapping: Starts from the screen space and determines which point in the texture image corresponds to each pixel.
Forward Mapping Steps
- Start with texture coordinates S and T.
- Map S and T to the object's surface in 3D space using the sphere equations.
- Project the 3D points onto the 2D screen space using a simple orthographic projection, such as xs = x and ys = z.
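The forward-mapping steps for the sphere, sketched in Python (assuming the simple orthographic projection xs = x, ys = z from the notes):

```python
import math

def forward_map(s, t, r):
    """Forward mapping: texture (S, T) -> point on a sphere of
    radius r -> screen coordinates (xs, ys) via xs = x, ys = z."""
    phi = 2 * math.pi * s
    theta = math.pi * t
    x = r * math.sin(theta) * math.cos(phi)
    z = r * math.cos(theta)
    return x, z  # (xs, ys)
```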
Inverse Mapping Steps
- Start with screen coordinates xs and ys.
- Use the inverse of the projection to recover the corresponding 3D point: x = xs, z = ys, and y = \sqrt{r^2 - x^2 - z^2} (choosing the visible hemisphere).
- Calculate texture coordinates:
- \theta = \arccos(\frac{z}{r})
- \phi = \arctan(\frac{y}{x})
- S = \frac{\phi}{2\pi}
- T = \frac{\theta}{\pi}
Interpolation
- Since a pixel on the screen maps to a point inside a face of the 3D object, and faces are typically triangles, barycentric interpolation is used.
- Barycentric interpolation estimates the texture coordinates (S and T) at a point P inside a triangle from the texture coordinates of the triangle's vertices.
- The interpolation coefficients are the barycentric coordinates of P: they sum to 1, and a point closer to a vertex gets a higher coefficient for that vertex.
- If the interpolated texture coordinate does not fall exactly on a texel:
- GL_NEAREST: Chooses the nearest texel.
- GL_LINEAR: Linearly averages the neighboring texels to smooth the texture.
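Barycentric interpolation of texture coordinates can be sketched as follows (a standard dot-product formulation; the helper names are illustrative):

```python
def barycentric_weights(a, b, c, p):
    """Barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c);
    the weights sum to 1, and each grows as p approaches its vertex."""
    v0 = (b[0] - a[0], b[1] - a[1])
    v1 = (c[0] - a[0], c[1] - a[1])
    v2 = (p[0] - a[0], p[1] - a[1])
    d00 = v0[0] * v0[0] + v0[1] * v0[1]
    d01 = v0[0] * v1[0] + v0[1] * v1[1]
    d11 = v1[0] * v1[0] + v1[1] * v1[1]
    d20 = v2[0] * v0[0] + v2[1] * v0[1]
    d21 = v2[0] * v1[0] + v2[1] * v1[1]
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def interpolate_st(a, b, c, st_a, st_b, st_c, p):
    """Interpolate texture coordinates (S, T) at p from the vertex values."""
    u, v, w = barycentric_weights(a, b, c, p)
    return (u * st_a[0] + v * st_b[0] + w * st_c[0],
            u * st_a[1] + v * st_b[1] + w * st_c[1])
```

A vertex of the triangle gets weight 1 for itself and 0 for the other two, so interpolation reproduces the vertex texture coordinates exactly.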
Texel and Pixel Size Mismatch
- Oversampling: one texel covers multiple pixels (texture magnification).
- Undersampling: one pixel covers multiple texels (texture minification).
- In these cases, the texture image must be either super-sampled or down-sampled to match the screen resolution.
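Down-sampling can be sketched with a simple 2x2 box filter (one of many possible filters; the image is represented as a plain 2D list of intensities with even dimensions):

```python
def downsample(img):
    """Down-sample an image by averaging each 2x2 block into one
    value -- a box filter for the case where one pixel covers
    multiple texels."""
    h, w = len(img), len(img[0])
    return [[(img[i][j] + img[i][j + 1] +
              img[i + 1][j] + img[i + 1][j + 1]) / 4.0
             for j in range(0, w, 2)]
            for i in range(0, h, 2)]
```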
Forward vs. Inverse Mapping
- Forward Mapping: Common approach, starting from the texture and projecting onto the screen.
- Inverse Mapping: Useful when the texture is computed on the fly, since only the texels that are actually visible need to be evaluated.
OpenGL Functions for Texture Mapping
- glGenTextures: Generates texture objects.
- glBindTexture: Binds a texture object to a texture target (e.g., GL_TEXTURE_2D).
- glTexImage2D: Sets the texture data.
- Specifies the target (e.g., GL_TEXTURE_2D), level of detail (for mipmapping), internal format (e.g., GL_RGBA), width, height, border, format, type, and data.
Shader Code
- Texture coordinates are passed as 2D vectors.
- The uniform sampler2D is used to sample the texture.
- The texture function samples the texture at a given coordinate.
Activating and Binding Textures
- glActiveTexture: Activates a texture unit (e.g., GL_TEXTURE0).
- glGetUniformLocation: Retrieves the location of a uniform variable (e.g., the sampler) in the shader.
- glUniform1i: Sets the sampler uniform to the index of the texture unit to sample from (e.g., 0 for GL_TEXTURE0).
- glBindTexture: Binds the texture object to the active texture unit.
Environment Mapping
- Environment mapping simulates reflective surfaces by applying a texture that represents the environment around the object.
- Useful when dealing with local illumination models where object-to-object interactions are not computed.
- Basic idea: Place the reflective object at the center of a sphere, take a picture of the environment's reflection on the sphere, and then map the sphere's surface onto the reflective object.
Cube Mapping
- Instead of a sphere, a cube is used to capture the environment.
- A camera is conceptually positioned at the center of the cube, capturing six images (top, bottom, left, right, front, back) corresponding to the cube's faces.
- These six images are used as a cube map, which is then applied to the object's surface.
- This process must be redone for dynamic scenes where the environment changes.
- Frame Buffers: Frame buffers are used to collect the textures from all six faces of the surrounding cube.
OpenGL Functions for Cube Mapping
- glGenFramebuffers / glBindFramebuffer: Create and bind a framebuffer object for rendering the environment into textures.
- GL_COLOR_ATTACHMENT0 and GL_DEPTH_ATTACHMENT: Attachment points for the framebuffer's color and depth textures.
- The six faces of the cube are specified with the targets:
- GL_TEXTURE_CUBE_MAP_POSITIVE_X
- GL_TEXTURE_CUBE_MAP_NEGATIVE_X
- GL_TEXTURE_CUBE_MAP_POSITIVE_Y
- GL_TEXTURE_CUBE_MAP_NEGATIVE_Y
- GL_TEXTURE_CUBE_MAP_POSITIVE_Z
- GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
- glTexImage2D: Assigns data to an ordinary 2D texture; for cube maps it is called once per face, passing one of the six face targets above instead of GL_TEXTURE_2D.
Shader Code for Environment Mapping
- Uses a samplerCube uniform instead of sampler2D.
- Calculates the reflected view vector from the camera position and the surface normal; GLSL's reflect function computes this vector.
- Samples the cube map using the reflected vector to determine which part of the environment is reflected at each point on the object.
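The reflection formula behind GLSL's reflect can be sketched in Python; it takes the incident view direction d and a unit surface normal n, and the result is the direction used to index the cube map:

```python
def reflect(d, n):
    """Reflect incident direction d about unit normal n:
    r = d - 2 (d . n) n, the same formula GLSL's reflect() uses."""
    dot = d[0] * n[0] + d[1] * n[1] + d[2] * n[2]
    return (d[0] - 2 * dot * n[0],
            d[1] - 2 * dot * n[1],
            d[2] - 2 * dot * n[2])
```

For example, a ray traveling down-and-right that hits an upward-facing surface bounces up-and-right: reflect((1, -1, 0), (0, 1, 0)) gives (1, 1, 0).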
Texture Mapping vs. Environment Mapping
Texture Mapping:
- Uses a sampler2D for 2D textures, or a samplerCube for six-faced cube-map textures.
- The texture coordinates (S, T) give the point to sample.
Environment Mapping:
- The cube map replaces the 2D texture image.
- samplerCube replaces sampler2D in the shader.
- The reflected direction replaces the texture coordinates; it is computed from the camera position and the normal at each point.