Immersive Experiences · Notes

Wed, 09 Aug 2023 06:33:22 GMT
Bump vs. normal vs. displacement maps

Bump maps and normal maps are essentially the same thing. The primary difference is that normal maps carry more information (they use an RGB input) and so give a more accurate bump effect. The RGB channels of a normal map correspond to the x, y and z axes.

Bump maps use only a grayscale (black-to-white) map to encode depth.

Both bump and normal maps are essentially a way to cheat the shading to give the effect of depth, meaning no resolution is added to the geometry in any way. To actually add geometric detail we use displacement maps, which, unlike the other two, modify the geometry at render time.
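The relationship between a grayscale bump map and a normal map can be sketched directly: the normals are derived from the slopes of the height field. This is a minimal pure-Python illustration; the height grid, the `strength` parameter, and the finite-difference scheme are assumptions for demonstration, not any engine's actual implementation.

```python
import math

def height_to_normals(height, strength=1.0):
    """Return per-texel (x, y, z) normals for a 2D grid of heights."""
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences, clamped at the borders.
            left  = height[y][max(x - 1, 0)]
            right = height[y][min(x + 1, w - 1)]
            up    = height[max(y - 1, 0)][x]
            down  = height[min(y + 1, h - 1)][x]
            dx = (left - right) * strength
            dy = (up - down) * strength
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            row.append((dx / length, dy / length, 1.0 / length))
        normals.append(row)
    return normals

heights = [
    [0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0],
]
normals = height_to_normals(heights)
# A flat region yields the straight-up normal (0, 0, 1);
# texels next to the bump get tilted normals.
print(normals[0][0])
```

In a real normal map these vectors would then be remapped from [-1, 1] to [0, 255] and stored as RGB, which is why flat areas of a tangent-space normal map look uniformly blue.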

Wed, 09 Aug 2023 06:33:47 GMT
Quads vs. tris

Why don't you like 12-faced cubes? Having Face4 just doubles the number of code paths for processing faces, because they all get converted to triangles before being sent to the GPU.

Also, quads are generally inferior because a quad isn't a planar primitive: its four corners need not lie in one plane, so the surface will have either a peak or a valley fold between two opposite corners, and which one is indeterminate. So you may or may not get the face folded the way you want from your exporter. Using triangles gets rid of that problem. It also means exporters otherwise have to support separate code paths for tris vs. quads.
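The ambiguity above can be made concrete: a non-planar quad has two valid triangulations, one per diagonal, and they describe different surfaces. This is a stdlib-only sketch; the vertex coordinates are made up for illustration.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

# A quad with one corner lifted out of the plane of the other three.
quad = [(0, 0, 0), (1, 0, 0), (1, 1, 1), (0, 1, 0)]

# Planarity test: the scalar triple product of the three edges leaving
# vertex 0 is zero only if all four corners lie in one plane.
v01, v02, v03 = (sub(quad[i], quad[0]) for i in (1, 2, 3))
planar = dot(v01, cross(v02, v03)) == 0
print("planar:", planar)  # prints "planar: False"

# The two triangulations an exporter might emit -- a peak fold or a
# valley fold, depending on which diagonal it happens to pick:
split_a = [(0, 1, 2), (0, 2, 3)]  # fold along diagonal 0-2
split_b = [(0, 1, 3), (1, 2, 3)]  # fold along diagonal 1-3
```

With triangles there is nothing to choose: three points always define exactly one plane.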

Wed, 09 Aug 2023 06:34:08 GMT
World (or global) vs. local space

World space is the global coordinate system (relative to the origin of your scene). Local space is the local coordinate system relative to the origin of the object itself.
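A minimal sketch of going from local to world space, assuming the object's transform is just a rotation about the z axis followed by a translation (no scale). The function name and example values are illustrative, not any particular engine's API.

```python
import math

def local_to_world(point, rotation_z, position):
    """Rotate a local-space point about z, then translate into world space."""
    x, y, z = point
    c, s = math.cos(rotation_z), math.sin(rotation_z)
    rx, ry = x * c - y * s, x * s + y * c
    return (rx + position[0], ry + position[1], z + position[2])

# A point one unit along local +x, on an object rotated 90 degrees
# and placed at world position (5, 0, 0):
world = local_to_world((1, 0, 0), math.pi / 2, (5, 0, 0))
print(world)  # approximately (5, 1, 0): local +x now points along world +y
```

The same point has different coordinates in the two systems; only the world-space result depends on where the object sits in the scene.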

Wed, 09 Aug 2023 06:35:20 GMT
Augmented Reality (AR)

How it works:

  1. The camera captures video of the real-world view and sends it to the computer.
  2. Software on the computer searches each video frame for square shapes (square markers).
  3. If a square marker is found, and the pattern embedded inside the square is matched and identified, the software calculates the position and orientation of the marker relative to the camera.
  4. Once the position and orientation of the camera relative to the marker are known, a computer graphics model is drawn at an offset from the calculated position, with a matching orientation.
  5. This model is drawn in the foreground of the captured video and tracked against the movement of the background video, causing the model to appear attached to the background.
  6. The final output is shown on the display, so when the user looks through the viewer they see the rendered graphics model over the real-world video stream, seemingly homogeneous with the camera view.

Wed, 09 Aug 2023 06:41:15 GMT

“Content” has become a fungible resource to be consumed by our eyeballs and earholes, which transforms it into a value-added product called “engagement,” and which the platform owners in turn package and resell to advertisers as a service called “impressions”.