Three.js is probably the most widely used library for working with GPU-accelerated 3D graphics in the browser. Many other web-3D technologies use Three.js under the hood (e.g. A-Frame).
Three.js builds upon the WebGL API (and WebGL2, WebGPU, etc.) available in recent browsers, which in turn derives from OpenGL, one of the oldest and most established 3D hardware graphics APIs. Whereas the WebGL (and OpenGL) APIs are relatively low-level and state-based, Three.js offers a more mid-level, object-based interface, including a "scene graph", "materials", and other elements you might find in a modern game engine. Unlike a game engine, however, we create a scene not through a drag & drop user interface, but by typing code (as we might with Processing or P5.js, etc.). The structure of a Three.js code document might look something like this:
At the heart of any Three.js project is an animation loop -- an animate() function that defines the core update, running at 60fps (desktop) or higher (VR) -- in which a THREE.WebGLRenderer takes a THREE.Camera and a THREE.Scene to actually draw to the screen:
function render() {
// update members & properties of the scene here for animation
// TODO
// now render the scene:
renderer.render(scene, camera);
}
renderer.setAnimationLoop(render);
Before this loop there will be setup code to define the renderer, the camera, but most of all, the actual contents of the scene. The scene is much like the scene graph of game engines: a tree-like structure in which each node can contain child objects.
The Three.js ontology is roughly as follows:
- Renderer (THREE.WebGLRenderer), typically created with antialias: true.
- Camera (THREE.PerspectiveCamera, THREE.OrthographicCamera, THREE.StereoCamera, etc.). For VR/XR we will always be using THREE.PerspectiveCamera.
- Scene (THREE.Scene) -- root object of a scene graph tree.
  - Every object in the tree inherits from the base class THREE.Object3D, which provides a position, rotation, scale, and a list of children. To group objects under a shared parent without drawing anything extra, use an empty THREE.Group for that (see the sketch after this list).
  - layers: an object is only rendered if it has a layer tag in common with the camera. Also used to filter raycasting.
  - Objects that never move can set object.matrixAutoUpdate = false; so that their transform matrix is not recomputed every frame.
  - The most typical kind of object is a Mesh (usually THREE.Mesh). A Mesh has:
    - a Geometry, usually a THREE.BufferGeometry (actually everything uses this under the hood). Geometry is based on Javascript Typed Arrays, which are more flexible and faster native memory blocks.
    - a Material, ranging from MeshBasicMaterial and MeshStandardMaterial to customized ShaderMaterial, etc. Materials may refer to textures; if you change a texture's image data, set texture.needsUpdate = true; so it is re-uploaded to the GPU.
  - To efficiently render many objects with the same geometry, use THREE.InstancedMesh, with InstancedBufferAttribute, InstancedBufferGeometry, etc.
  - Lights (HemisphereLight, AmbientLight, SpotLight, PointLight, DirectionalLight, etc.)
  - Possibly other scene entities.
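As a quick sketch of a few of these ideas (grouping, layers, and matrixAutoUpdate), assuming a scene and camera have been created as in the setup code below; the group/ball/box names here are just for illustration:
// group two meshes under a parent so they can be moved together:
const group = new THREE.Group();
const ball = new THREE.Mesh(new THREE.SphereGeometry(0.2), new THREE.MeshStandardMaterial());
const box = new THREE.Mesh(new THREE.BoxGeometry(0.3, 0.3, 0.3), new THREE.MeshStandardMaterial());
box.position.x = 0.5; // positions are relative to the parent group
group.add(ball, box);
group.position.y = 1.5; // moving the group moves both children
scene.add(group);
// layers: put the box on layer 1 only; it is drawn only if the camera also enables layer 1:
box.layers.set(1);
camera.layers.enable(1); // comment this out and the box disappears
// a static object that never moves can skip automatic matrix updates:
box.updateMatrix();
box.matrixAutoUpdate = false;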
Note: there can be many cameras and many scenes, but only one of each is used to render each frame.
For postprocessing, see the docs here -- but be careful, as many screen-space post-processing effects do not work well for VR/XR.
Three.js also has classes for animation, raycasting, physics, positional audio, and many more.
Three.js code is written in Javascript, embedded within a normal HTML5 page.
For online code sketching, I recommend signing up for an account on CodePen or StackBlitz.com.
Here's the initial HTML boilerplate. It's mostly standard, with a little CSS to help the canvas fill the page, and a Javascript <script> element with an import to pull in the Three.js library:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<style>
/* remove extra spacing around elements so we can fill the available page */
* { margin: 0; }
</style>
<script type="importmap">
{
"imports": {
"three": "https://unpkg.com/three@0.164.1/build/three.module.js",
"three/addons/": "https://unpkg.com/three@0.164.1/examples/jsm/"
}
}
</script>
</head>
<body>
<script type="module">
// import the Three.js module:
import * as THREE from "three";
// Our Javascript will go here.
</script>
</body>
</html>
Everything from now on will be Javascript code inside that final <script> element.
To render anything, we need a renderer. We also need an HTML <canvas> to render to, which the renderer can create for us. Here we configure the renderer to use better-than-default quality:
// create a renderer with better than default quality:
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setPixelRatio(window.devicePixelRatio);
// make it fill the page
renderer.setSize(window.innerWidth, window.innerHeight);
// create and add the <canvas>
document.body.appendChild(renderer.domElement);
The renderer also needs a camera, which defines the optics used as well as the viewpoint and view direction. Here we use a perspective camera (this is always the type used for VR/XR).
// create a perspective camera
const camera = new THREE.PerspectiveCamera(
75, // this camera has a 75 degree field of view in the vertical axis
window.innerWidth / window.innerHeight, // the aspect ratio matches the size of the window
0.05, // anything less than 5cm from the eye will not be drawn
100 // anything more than 100m from the eye will not be drawn
);
// position the camera
// the X axis points to the right
// the Y axis points up from the ground
// the Z axis points out of the screen toward you
camera.position.y = 1.5; // average human eye height is about 1.5m above ground
camera.position.z = 2; // let's stand 2 meters back
One more addition: if the browser window gets resized, we need to update the renderer and camera accordingly:
window.addEventListener("resize", function() {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
// bugfix: don't resize renderer if in VR
if (!renderer.xr.isPresenting) renderer.setSize(window.innerWidth, window.innerHeight);
});
The renderer also needs a scene to know what it should draw. This scene will contain all the objects in the world.
// create the root of a scene graph
const scene = new THREE.Scene();
To actually create an object we can see, we need to define both its Geometry (its shape) and its Material (how its surfaces respond to light). Together, the geometry and material are combined as a Mesh, which is also an Object3D.
Here we make a very simple cube, with a standard material in blue color, add it to the scene and position it 1.5m above ground:
const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshStandardMaterial({ color: 0x008ff0 });
const cube = new THREE.Mesh( geometry, material );
// position the cube 1.5m above ground, and add it to the scene:
cube.position.y = 1.5;
scene.add(cube);
But, since materials respond to light, we also need a light source! A light is also an Object3D, and needs to be added to the scene. Here we use the generic HemisphereLight:
const light = new THREE.HemisphereLight(0xfff0f0, 0x606066);
scene.add(light);
Or for a simple directional light:
const light = new THREE.DirectionalLight(0xffffff, 3);
light.position.set(1, 1, 1).normalize();
scene.add(light);
Finally, we can add our animation loop -- this is a function that is called on every frame. Here we can update the scene for animation, and finally use the renderer, camera, and scene to draw the world to the <canvas>.
Here we added a little code to rotate the cube, so we can see that animation is working:
function animate() {
// first, update any changes to the scene:
// here let's rotate our cube around its central anchor
cube.rotation.x += 0.01;
cube.rotation.y += 0.01;
// then, use the renderer and camera to draw the scene:
renderer.render(scene, camera);
}
// tell the renderer how to update & render our scene:
renderer.setAnimationLoop(animate);
If you want to know how well the page is performing, you can add the Stats module:
// load in the module:
import Stats from "three/addons/libs/stats.module";
// add a stats view to the page to monitor performance:
const stats = new Stats();
document.body.appendChild(stats.dom);
// wrap everything in the animate function with stats.begin() and stats.end():
function animate() {
// start monitoring this frame:
stats.begin();
//... everything as it was before ...
// finish monitoring this frame:
stats.end();
}
For a WebXR scene, we need to add a few more lines of code:
// load in the XRButton module for the "Enter XR" button
import { XRButton } from "three/addons/webxr/XRButton.js";
// (the controller model factory will be used later for rendering controller models)
import { XRControllerModelFactory } from "three/addons/webxr/XRControllerModelFactory.js";
// enable XR option in the renderer
renderer.xr.enabled = true;
// add a button to enable WebXR:
document.body.appendChild(XRButton.createButton(renderer));
https://codepen.io/grrrwaaa/pen/NWmVeBr?editors=0011
(TODO 2024 verify:) If you want to load static resources such as models, image textures, etc., they will need to be on a public server URL with appropriate access sharing (CORS), as Stackblitz itself does not currently have good support for static files.
For static files I can recommend using a Github Pages account. Any Github repository can serve static files by adding a branch called gh-pages. Note that there is a file size limit of around 25 MB -- but you should avoid files even this large anyway, to prevent slow downloads of the site!
For animation it is useful to know the current time and the time elapsed since the last frame. A THREE.Clock provides both:
const clock = new THREE.Clock();
function animate() {
// get current timing:
const dt = clock.getDelta(); // seconds since the last call to getDelta()
const t = clock.getElapsedTime(); // seconds since the clock started
// ...
}
Three.js also provides a variety of visual helpers (axes, grids, bounding boxes, arrows, etc.) that can make debugging a scene easier: https://threejs.org/examples/#webgl_helpers
Most things in Three.js update automatically, but sometimes you need (or want) to do this manually:
https://threejs.org/docs/index.html#manual/en/introduction/How-to-update-things
One place in which it is necessary to do things manually is to clean up memory (on CPU and GPU) when removing things:
https://threejs.org/docs/index.html#manual/en/introduction/How-to-dispose-of-objects
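For example, here's a minimal sketch of cleaning up when permanently removing a mesh from the scene (assuming mesh is the object to remove, and that nothing else shares its geometry, material, or texture):
// remove the mesh from the scene graph:
scene.remove(mesh);
// free the CPU/GPU resources it was using
// (only do this if nothing else shares them!):
mesh.geometry.dispose();
mesh.material.dispose();
// if the material had a texture, dispose of that too:
if (mesh.material.map) mesh.material.map.dispose();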
In Three.js, a Mesh is a combination of a Geometry and a Material.
There are many kinds of Geometry (search "Geom" in the Three.js docs), such as BoxGeometry, OctahedronGeometry, ConeGeometry, etc. They are all subclasses of the more generic BufferGeometry. That means anything you can do with a BufferGeometry, you can do with a BoxGeometry, etc., and they all share the same underlying data structure.
It also means you can create your own geometries directly by manipulating a BufferGeometry.
Here's a minimal example:
// create an empty geometry
const geometry = new THREE.BufferGeometry();
// how many random points to create (pick any count you like):
const NUM_POINTS = 1000;
// create an array to hold our point data:
const pts = [];
// fill it with random values:
for (let i = 0; i < NUM_POINTS; i++) {
let x = Math.random() - 0.5;
let y = Math.random() - 0.5;
let z = Math.random() - 0.5;
pts.push(x, y, z); // add to end of the array
}
// create a raw memory block of floating point numbers for the vertices
// (because that's what BufferGeometry wants)
// fill it with the data from our javascript array:
const vertices = new Float32Array(pts);
// add an "attribute" to the geometry, in this case for the vertex "position"
// itemSize = 3 because there are 3 values (components) per vertex
geometry.setAttribute("position", new THREE.BufferAttribute(vertices, 3));
// for simplicity, we'll render with points:
const material = new THREE.PointsMaterial({
color: 0x888888,
size: 0.02
});
const points = new THREE.Points(geometry, material);
scene.add(points);
Now you can create any geometric shape if you know the mathematics/algorithm to specify the vertex positions! (For example, how about trying one of the parametric geometries on Paul Bourke's amazing website).
Alternatively, rather than building a geometry from scratch, you can take any existing geometry constructor (such as CylinderGeometry) and then modify the attributes it already has. Use console.log() to inspect the geometry and its attributes to understand them first.
For example, you can access the raw positions array under geometry.attributes.position.array. After modifying it, you need to tell Three.js to update the GPU using geometry.attributes.position.needsUpdate = true;
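Here's a minimal sketch of this idea, under the assumption that we just want to randomly jitter the vertices of a CylinderGeometry a little:
const geometry = new THREE.CylinderGeometry(0.5, 0.5, 1, 32);
const mesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({ color: 0x888888 }));
scene.add(mesh);
// the raw positions: a flat Float32Array of [x0, y0, z0, x1, y1, z1, ...]
const positions = geometry.attributes.position.array;
for (let i = 0; i < positions.length; i++) {
	positions[i] += (Math.random() - 0.5) * 0.05; // jitter each component slightly
}
// tell Three.js to re-upload the modified attribute to the GPU:
geometry.attributes.position.needsUpdate = true;
// since the surface changed shape, recompute normals for correct lighting:
geometry.computeVertexNormals();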
Surfaces
To draw surfaces rather than points, it gets a little more complex. Surfaces are made of triangle faces.
The simplest way to do this is to insert positions in triplets, for each point of each triangle face. Example: https://threejs.org/examples/#webgl_buffergeometry
This is quite wasteful, since for most surfaces, points are shared between faces. So, another way to do it is to fill the positions array with the points needed, then fill another array (called the Index) with triplets of integer indices for each triangle face. Example: https://threejs.org/examples/#webgl_buffergeometry_indexed
Additionally, for any surface lighting to work, you will need to add normals for each vertex. This works similarly to creating the position attribute, but with a new buffer attribute called normal: geometry.setAttribute( 'normal', new THREE.Float32BufferAttribute( normals, 3 ) );
This means you need to know the algorithm that can compute the correct normal direction for your parametric geometry!
Fortunately, Three.js has a fallback to automatically compute these normals: you can just call geometry.computeVertexNormals()
Example:
https://codepen.io/grrrwaaa/pen/gOxzzPx?editors=0010
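As a small sketch of the indexed approach, here is a single quad built from two triangles that share four vertices, with normals computed automatically:
const geometry = new THREE.BufferGeometry();
// four corner vertices of a 1x1 quad in the XY plane:
const vertices = new Float32Array([
	-0.5, -0.5, 0,   // 0: bottom-left
	 0.5, -0.5, 0,   // 1: bottom-right
	 0.5,  0.5, 0,   // 2: top-right
	-0.5,  0.5, 0,   // 3: top-left
]);
geometry.setAttribute("position", new THREE.BufferAttribute(vertices, 3));
// two triangles, referring to the vertices above by index:
geometry.setIndex([0, 1, 2,  2, 3, 0]);
// let Three.js compute per-vertex normals so lighting works:
geometry.computeVertexNormals();
const quad = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({ side: THREE.DoubleSide }));
scene.add(quad);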
Drawing lots of different meshes in a scene can quite quickly become expensive on the GPU, but there's a fantastic method to speed this up when most of the objects have the same basic geometry, such as a field of trees, asteroid field, etc., using what's called "GPU instancing". Here, the GPU uses the same basic geometry and material for each "instance", but with a few small variations such as the world matrix transform (position, rotation, scale) or base material color.
In this case we can use InstancedMesh.
For example:
https://codepen.io/grrrwaaa/pen/Vwzxxgr?editors=0010
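Here's a minimal sketch of the idea (the count and random transforms are just illustrative): one geometry and material shared by many instances, each with its own matrix and optional color:
const COUNT = 200;
const geometry = new THREE.IcosahedronGeometry(0.1);
const material = new THREE.MeshStandardMaterial();
const instances = new THREE.InstancedMesh(geometry, material, COUNT);
scene.add(instances);
// a reusable dummy Object3D makes it easy to compose each instance's matrix:
const dummy = new THREE.Object3D();
const color = new THREE.Color();
for (let i = 0; i < COUNT; i++) {
	dummy.position.set(Math.random() * 4 - 2, Math.random() * 2, Math.random() * 4 - 2);
	dummy.rotation.y = Math.random() * Math.PI * 2;
	dummy.updateMatrix();
	instances.setMatrixAt(i, dummy.matrix);
	instances.setColorAt(i, color.setHSL(Math.random(), 0.7, 0.5));
}
// if you change matrices/colors after the first render, flag them for re-upload:
instances.instanceMatrix.needsUpdate = true;
if (instances.instanceColor) instances.instanceColor.needsUpdate = true;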
Agents
Instancing can be particularly useful for a multi-agent system, where several "agents" are moving around the space according to internal rules, and often look very similar to one another (e.g. NPCs).
In general, I recommend separating the simulation/control logic entirely from the rendering code. This is helpful when scaling a system up, e.g. for distributed applications. That means having one data structure representing the state of the population, and a completely different data structure (such as our InstancedMesh) representing how to draw them. Similarly, have one function that handles the simulation updates and makes no direct contact with the GPU, which we can call update() or simulate() for example.
Example:
https://codepen.io/grrrwaaa/pen/xxLjzqX?editors=0010
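Here's a rough sketch of that separation (agents, update(), and draw() are just placeholder names): plain JavaScript data for the simulation, copied into an InstancedMesh only when drawing:
// simulation state: plain data, no GPU objects
const agents = [];
for (let i = 0; i < 100; i++) {
	agents.push({
		position: new THREE.Vector3(Math.random() * 4 - 2, 1, Math.random() * 4 - 2),
		velocity: new THREE.Vector3().randomDirection().multiplyScalar(0.5),
	});
}
// simulation logic: no rendering code here
function update(dt) {
	for (const a of agents) {
		a.position.addScaledVector(a.velocity, dt);
	}
}
// rendering: copy the state into an InstancedMesh (created as above, with at least agents.length instances)
const dummy = new THREE.Object3D();
function draw(instancedMesh) {
	agents.forEach((a, i) => {
		dummy.position.copy(a.position);
		dummy.updateMatrix();
		instancedMesh.setMatrixAt(i, dummy.matrix);
	});
	instancedMesh.instanceMatrix.needsUpdate = true;
}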
For tweaking parameters from the browser, Three.js includes the lil-gui addon. Note this is a desktop-only GUI; it won't appear in VR.
import { GUI } from 'three/addons/libs/lil-gui.module.min.js';
const settings = {
// your parameters here, e.g.:
enable: true,
}
const gui = new GUI();
gui.add(settings, 'enable').onChange(function (value) {
// do something with `value` here
});
We can create a custom material for Three.js using GLSL shader programming via the ShaderMaterial class: https://threejs.org/docs/#api/en/materials/ShaderMaterial
Before diving in, it will be very helpful to understand the basic model that OpenGL and related graphics interfaces use, so that we can understand where shaders fit in.
GLSL is the standard default language in OpenGL for writing programs that run on the GPU. Typically we need two programs: a vertex shader, which computes where each vertex of the geometry ends up on screen, and a fragment shader, which computes the color of each pixel the geometry covers.
Here are declarations of about the most minimal vertex and fragment shaders:
let vertexcode = `
// the main job is to set the gl_Position, the XYZW position of the vertex within the viewable frustum
// XY encodes horizontal & vertical screen space
// Z encodes the depth of the vertex from near plane to far plane
// W encodes the perspective effect
void main() {
// the vertex shader takes the vertex position ("position") as a vec4(x, y, z, 1.0)
// and transforms this by the current model matrix (representing the Mesh's position, rotation, scale)
// to get the position of the vertex in world space
// (if drawing an InstancedMesh, you would also multiply by instanceMatrix here,
//  i.e. modelMatrix * instanceMatrix * vec4(position, 1.0))
vec4 worldpos = modelMatrix * vec4(position, 1.0);
// then transform by the current view matrix (representing the Camera's position & rotation)
// to get the vertex position relative to the camera:
vec4 viewpos = viewMatrix * worldpos;
// and then by the current projection matrix (representing the Camera's field of view, aspect ratio, and resolution)
// to get the perspective-distorted position in "clip space"
// and assigns this to "gl_Position" to tell the GPU where to position it
gl_Position = projectionMatrix * viewpos;
}`;
let fragcode = `
// the main job is to set the gl_FragColor, the RGBA color of the pixel
void main() {
// the fragment shader defines a color (RGBA)
// and assigns it to gl_FragColor to tell the GPU how to paint the pixel
gl_FragColor = vec4(1, 0.5, 0.7, 1.0);
}`;
And here's how we could use this to make a first custom material that we can apply to any Mesh:
let material = new THREE.ShaderMaterial({
vertexShader: vertexcode,
fragmentShader: fragcode,
});
We can also pass different kinds of parameters into shaders:
Built-in uniforms for the vertex shader:
// = object.matrixWorld
uniform mat4 modelMatrix;
// = camera.matrixWorldInverse
uniform mat4 viewMatrix;
// = camera.projectionMatrix
uniform mat4 projectionMatrix;
// = inverse transpose of modelViewMatrix
uniform mat3 normalMatrix;
// = camera position in world space
uniform vec3 cameraPosition;
Built-in uniforms in the fragment shader:
// the position and matrix transform of the camera
uniform vec3 cameraPosition;
uniform mat4 viewMatrix;
If, for any reason, you don't want these built-ins, you can use RawShaderMaterial instead.
We can also add custom uniforms. For example, here's adding a uniform called time to the fragment shader, to change color over time:
let fragcode = `
uniform float time;
void main() {
gl_FragColor = vec4(sin(time)*0.5+0.5, 0.5, cos(time)*0.5+0.5, 1.0);
}`;
let material = new THREE.ShaderMaterial({
uniforms: { time: { value: 1.0 } },
vertexShader: vertexcode,
fragmentShader: fragcode,
});
// inside the render function:
function animate() {
	material.uniforms.time.value = clock.getElapsedTime(); // or to test quickly: Math.random();
	// ... etc: update the rest of the scene and render as before
	renderer.render(scene, camera);
}
Built-in attributes in the vertex shader:
// default vertex attributes provided by BufferGeometry
attribute vec3 position; // the XYZ of the vertex relative to the geometry center
attribute vec3 normal; // the direction the surface faces toward at this vertex
attribute vec2 uv; // the texture coordinate at this vertex
// if using an InstancedMesh, you also have:
// = object.getMatrixAt(i)
attribute mat4 instanceMatrix; // encodes the position, quaternion rotation, and scale of the instance
We can for example use normal to visualize normals. Here's the vertex code:
varying vec3 vertexNormal;
void main() {
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
// rotate the normal according to the view & model rotation
// so that it is in view space
// this is typically preferred for lighting calculations in the fragment shader
vertexNormal = normalMatrix * normal;
}
And here's the fragment code:
uniform float time;
varying vec3 vertexNormal;
void main() {
vec3 norm = normalize(vertexNormal);
gl_FragColor = vec4(norm, 1);
}
For custom attributes see https://threejs.org/docs/#api/en/core/BufferAttribute and https://threejs.org/docs/#api/en/core/BufferGeometry
A simple example: https://codepen.io/grrrwaaa/pen/PovwWKZ?editors=0010
A slightly less simple example: https://codepen.io/grrrwaaa/pen/YzbPNBX?editors=0010
So far so good, but... these custom ShaderMaterials do not give you all the amazing features of lighting, shadowing, etc. that things like MeshStandardMaterial do. Doing that is much more difficult.
Currently I can see two generally applied suggestions. One is hacking into the code generated by built-in materials via the onBeforeCompile() method. The other is to use the newer NodeMaterial class, but this is very experimental and not fully documented yet: http://raw.githack.com/sunag/three.js/dev-nodes-doc/docs/index.html?q=node#manual/en/introduction/How-to-use-node-material
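As a rough sketch of the onBeforeCompile() approach (the uniform name and the shader chunk being patched are just illustrative; you would need to inspect the generated shader code to find the right place to modify):
const material = new THREE.MeshStandardMaterial({ color: 0x008ff0 });
material.onBeforeCompile = function (shader) {
	// add our own uniform to the generated shader program:
	shader.uniforms.time = { value: 0 };
	// patch the generated vertex shader: declare the uniform,
	// and displace vertices inside the standard "begin_vertex" chunk:
	shader.vertexShader = "uniform float time;\n" + shader.vertexShader.replace(
		"#include <begin_vertex>",
		`#include <begin_vertex>
		transformed.y += 0.1 * sin(time + position.x * 10.0);`
	);
	// keep a reference so we can update the uniform in the animation loop:
	material.userData.shader = shader;
};
// in animate():
// if (material.userData.shader) material.userData.shader.uniforms.time.value = clock.getElapsedTime();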
So, either you'll be using ShaderMaterial to make a rendering system that is mostly independent of the standard lighting system in Three.js, using onBeforeCompile() to make very minor adjustments to the standard Three.js lighting system, or using NodeMaterial to investigate a completely new way of doing lighting in Three.js.
If you want to explore some of the amazing things you can do with GLSL fragment shaders, you might enjoy exploring the Shadertoy website -- however be warned, ALL of the shaders on this website are 2D (even if they look 3D), and many of them will be difficult to translate to a true 3D environment as we need for XR.
For an object-centric viewpoint, look at Orbit controls, but see also Track ball controls
import { OrbitControls } from 'three/addons/controls/OrbitControls.js';
const controls = new OrbitControls(camera, renderer.domElement);
function animate() {
//...
controls.update(dt);
//...
}
For free movement in 3D space, look at Fly controls, but see also First person controls
import { FlyControls } from 'three/addons/controls/FlyControls.js';
const controls = new FlyControls(camera, renderer.domElement);
controls.movementSpeed = 1;
controls.rollSpeed = Math.PI / 3;
function animate() {
//...
controls.update(dt);
//...
}
For a WASD style interface, start from Pointer lock controls
import { PointerLockControls } from 'three/addons/controls/PointerLockControls.js';
const controls = new PointerLockControls( camera, document.body );
scene.add(controls.getObject());
// Pointer lock requires a user action to start, e.g. click on canvas to start pointerlock:
renderer.domElement.addEventListener( 'click', function () {
controls.lock();
});
// get callbacks when this happens:
// controls.addEventListener( 'lock', function () { /* e.g. hide "click to look" instructions */ })
// controls.addEventListener( 'unlock', function () { /* e.g. show "click to look" instructions */ })
// for WASD:
const move = {
forward: 0,
backward: 0,
right: 0,
left: 0,
dir: new THREE.Vector3(),
}
document.addEventListener( 'keydown', function (event) {
switch ( event.code ) {
case 'ArrowUp':
case 'KeyW':
move.forward = 1;
break;
case 'ArrowLeft':
case 'KeyA':
move.left = 1;
break;
case 'ArrowDown':
case 'KeyS':
move.backward = 1;
break;
case 'ArrowRight':
case 'KeyD':
move.right = 1;
break;
}
});
document.addEventListener( 'keyup', function (event) {
switch ( event.code ) {
case 'ArrowUp':
case 'KeyW':
move.forward = 0;
break;
case 'ArrowLeft':
case 'KeyA':
move.left = 0;
break;
case 'ArrowDown':
case 'KeyS':
move.backward = 0;
break;
case 'ArrowRight':
case 'KeyD':
move.right = 0;
break;
}
});
function animate() {
	const dt = clock.getDelta(); // time since last frame, using the THREE.Clock from earlier
	if (controls.isLocked === true) {
// use move properties and controls.moveRight() / controls.moveForward() to modify camera...
// use controls.getObject().position for navigation limits / collisions etc.
move.dir.z = move.forward - move.backward;
move.dir.x = move.right - move.left;
move.dir.normalize();
let spd = 3 * dt; // or 3/60
controls.moveRight(move.dir.x * spd);
controls.moveForward(move.dir.z * spd);
}
}
As an alternative, we can write code around OrbitControls to create a different kind of WASD controller. The code in this link adds jump and teleport, but also works in VR: https://stackblitz.com/edit/web-platform-phnvvr?file=index.html
Generally this means casting a ray into the scene from the mouse/touch point, and finding what objects are under this ray. The Three.js examples collection includes demos of doing this with different kinds of objects.
The central idea is always the same. We create a THREE.Raycaster and use it to intersect with the scene like this:
// in setup
let raycaster = new THREE.Raycaster();
// keep track of mouse position:
let pointer = new THREE.Vector2(); // x, y position of mouse
// update from the window when mouse moves:
document.addEventListener( 'pointermove', function onPointerMove(event) {
pointer.x = ( event.clientX / window.innerWidth ) * 2 - 1;
pointer.y = - ( event.clientY / window.innerHeight ) * 2 + 1;
})
// in animate:
// tell raycaster where to start from and where to direct ray into the scene:
raycaster.setFromCamera( pointer, camera );
// find the intersections in the scene:
// returns an array of what the ray intersected with, closest first
const intersects = raycaster.intersectObjects(scene.children);
// to loop over the intersections:
for (let intersection of intersects) {
// each intersection is an object of the form:
// { distance, point, object, instanceID, normal, uv }
// distance is the distance from the raycaster to the object it hit
// point is the worldspace location where it hit
// object is the Object3D (e.g. Mesh) that was hit
// instanceID is the instance number if the Mesh was an InstancedMesh
// normal & UV are the normal and texture coordinates of the object where it was hit
}
// if you only wanted the closest object
// you can save some CPU by setting:
raycaster.firstHitOnly = true;
// (note: this flag is honoured by accelerated raycast addons such as three-mesh-bvh;
// the built-in Raycaster always returns all hits, sorted nearest-first)
// and use the value of intersects[0]
// load the addon for building controller models:
import { XRControllerModelFactory } from 'three/addons/webxr/XRControllerModelFactory.js';
const controllerModelFactory = new XRControllerModelFactory();
// getting 2 controllers:
let controller = renderer.xr.getController( 0 );
scene.add( controller );
let controller2 = renderer.xr.getController( 1 );
scene.add( controller2 );
// for each controller (shown for index 0; repeat with index 1 for the second controller):
const controllerGrip = renderer.xr.getControllerGrip( 0 );
controllerGrip.add( controllerModelFactory.createControllerModel( controllerGrip ) );
scene.add( controllerGrip );
We can use the Raycaster in the same way, but set from an XRController:
raycaster.setFromXRController(controller);
// adding event handlers for the controllers:
controller.addEventListener( 'selectstart', function(event) {
const controller = event.target;
// do a ray intersection:
getIntersections(controller);
});
controller.addEventListener( 'selectend', function(event) {
const controller = event.target;
// etc.
});
// call this in the 'selectstart' event, but also call it in animate()
// so that it continuously updates while moving the controller around
function getIntersections( controller ) {
controller.updateMatrixWorld();
raycaster.setFromXRController( controller );
let intersections = raycaster.intersectObjects( scene.children );
// etc.
}
// events for getting/losing controllers:
controller.addEventListener( 'connected', function ( event ) {
	// event.data is the XRInputSource (handedness, gamepad, etc.)
	// this is a good place for adding controller models or ray visualizations
});
controller.addEventListener( 'disconnected', function () {
	// a good place to remove anything added in 'connected'
});
Controller input examples:
https://threejs.org/examples/?q=webxr#webxr_xr_controls_transform
https://threejs.org/examples/?q=webxr#webxr_xr_haptics
See also https://threejs.org/docs/#api/en/renderers/webxr/WebXRManager
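To read thumbstick and button state each frame, one approach (a sketch, using the standard WebXR Gamepad mapping; exact button/axis indices vary by device) is to keep the XRInputSource from the 'connected' event and poll its gamepad in the animation loop:
controller.addEventListener( 'connected', function ( event ) {
	// remember the input source so we can poll it later:
	controller.userData.inputSource = event.data;
});
function pollController(controller) {
	const source = controller.userData.inputSource;
	if (!source || !source.gamepad) return;
	const gp = source.gamepad;
	// on many controllers, axes[2] and axes[3] are the thumbstick X/Y:
	const stickX = gp.axes[2] || 0;
	const stickY = gp.axes[3] || 0;
	// buttons[0] is usually the trigger:
	const triggerPressed = gp.buttons[0] && gp.buttons[0].pressed;
	// ... use these to move, rotate, grab, etc.
}
// call pollController(controller) and pollController(controller2) inside animate()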
To use hand tracking, we first need to request it when creating the session button:
const sessionInit = {
requiredFeatures: [ 'hand-tracking' ]
};
document.body.appendChild( VRButton.createButton( renderer, sessionInit ) );
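Then we can get hand objects from the renderer and attach hand models, in a way similar to the controllers above (a minimal sketch using the XRHandModelFactory addon):
import { XRHandModelFactory } from 'three/addons/webxr/XRHandModelFactory.js';
const handModelFactory = new XRHandModelFactory();
// get the two hands (index 0 and 1, like the controllers):
const hand1 = renderer.xr.getHand( 0 );
hand1.add( handModelFactory.createHandModel( hand1, 'mesh' ) );
scene.add( hand1 );
const hand2 = renderer.xr.getHand( 1 );
hand2.add( handModelFactory.createHandModel( hand2, 'mesh' ) );
scene.add( hand2 );
// each hand exposes tracked joints, e.g. hand1.joints['index-finger-tip'].position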
For more examples of using VR controllers & Hands, see these examples:
The basic idea is to cast a ray to some distant object to find where to teleport to, and then update our position accordingly.
This is typically used in VR, but can also be used in a non-VR context. But let's look at the VR case here:
First, we need a regular scene with XR and XR controllers as above. We can cast a ray from the controller just as we did above.
Within the 'selectend' event, we can cause a teleportation by changing the renderer XRManager's reference space, like this:
// during setup, make sure we have an initial reference space:
let baseReferenceSpace = null
renderer.xr.addEventListener( 'sessionstart', () => baseReferenceSpace = renderer.xr.getReferenceSpace() );
function teleportTo(target) {
	// assuming that `target` is the world-space location of the point we are jumping to:
	const offsetPosition = { x: - target.x, y: - target.y, z: - target.z, w: 1 };
	const offsetRotation = new THREE.Quaternion();
	const transform = new XRRigidTransform( offsetPosition, offsetRotation );
	const teleportSpaceOffset = baseReferenceSpace.getOffsetReferenceSpace( transform );
	renderer.xr.setReferenceSpace( teleportSpaceOffset );
}
We probably also want to render a THREE.Line from the controller to whatever the first intersection is, to visualize the potential teleport target.
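One simple way to do this (a sketch; the geometry is just a two-point line in the controller's local space, scaled to the hit distance each frame):
// a line pointing along the controller's -Z axis (the direction it points):
const lineGeometry = new THREE.BufferGeometry().setFromPoints([
	new THREE.Vector3(0, 0, 0),
	new THREE.Vector3(0, 0, -1),
]);
const line = new THREE.Line(lineGeometry, new THREE.LineBasicMaterial({ color: 0xffffff }));
line.scale.z = 5; // default length when nothing is hit
controller.add(line);
// in animate(), after raycasting from the controller:
// const intersects = raycaster.intersectObjects(scene.children);
// line.scale.z = intersects.length ? intersects[0].distance : 5;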
Three.js teleport example: https://threejs.org/examples/?q=teleport#webxr_vr_teleport
A bezier-style teleport arc example: https://github.com/gkjohnson/webxr-sandbox/tree/main/teleport-controls
See https://threejs.org/examples/?q=webxr#webxr_xr_cubes
To request the depth-sensing feature (used for occlusion against the real environment in AR/passthrough sessions), add it to the session options:
document.body.appendChild( VRButton.createButton( renderer, { 'optionalFeatures': [ 'depth-sensing'] }) );