What is interesting in particular? You have to stand in a particular place and look in a particular direction for the ephemeral glitches to solidify into geometry that you can then walk on.
How this could adapt to our project’s requirements? It could be interesting to have many users using perspective to create new geometry in the same world.
What are the challenges to adapt to our project's requirements? There would have to be a way for geometry to decay/fragment again
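A minimal three.js sketch of the place-and-look rule described above. The anchor point, view direction, distance, and alignment thresholds are all invented for illustration; solidity is expressed as opacity plus a walkable flag that the movement code could consult.

```js
import * as THREE from 'three';

// Hypothetical rule: the glitch mesh "solidifies" only while the camera stands
// near an anchor point AND looks roughly along a required direction.
const anchor = new THREE.Vector3(0, 1.6, 0);      // where the visitor must stand (assumed)
const requiredDir = new THREE.Vector3(0, 0, -1);  // where they must look (assumed)
const lookDir = new THREE.Vector3();

function updateSolidity(camera, glitchMesh) {
  camera.getWorldDirection(lookDir);
  const closeEnough = camera.position.distanceTo(anchor) < 2.0;
  const aligned = lookDir.dot(requiredDir) > 0.95;   // 1 means looking exactly the required way
  const solid = closeEnough && aligned;

  // The material is assumed to have transparent = true so opacity changes are visible.
  glitchMesh.material.opacity = THREE.MathUtils.lerp(glitchMesh.material.opacity, solid ? 1.0 : 0.15, 0.1);
  glitchMesh.userData.walkable = solid;              // flag for the collision / movement code
}
```

Running the same check in reverse over time would be one way to get the decay/fragmentation mentioned above.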
Graham Wakefield
What is interesting in particular? You send out sound beams to perceive the geometry (like bat/dolphin echolocation)
How this could adapt to our project’s requirements? It could be interesting to see many visitors using this mechanism at the same time
What are the challenges to adapt to our project's requirements? With multiple users, should the visualization become more chaotic (hard to use) if everyone is making sounds at the same time? Should making the sounds also change the world somehow -- like each time we make a sound, it erodes the world a bit?
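A hedged sketch of how the echolocation reveal could work in three.js: each "ping" is an expanding spherical wavefront, and any mesh sitting near the wavefront is briefly brightened. The wave speed, band width, and fade rate are invented constants; the materials are assumed to be transparent.

```js
import * as THREE from 'three';

const pings = [];                                   // { origin: Vector3, age: seconds }

function emitPing(origin) {                         // call this when a visitor makes a sound
  pings.push({ origin: origin.clone(), age: 0 });
}

function updatePings(dt, revealables) {             // revealables: meshes with transparent materials
  for (const ping of pings) ping.age += dt;
  for (const mesh of revealables) {
    let reveal = 0;
    for (const ping of pings) {
      const radius = ping.age * 8;                  // assumed "speed of sound" in world units per second
      const d = mesh.position.distanceTo(ping.origin);
      if (Math.abs(d - radius) < 1.5) reveal = 1;   // the mesh sits on the expanding wavefront
    }
    // Fade toward the reveal value; 0.05 keeps everything faintly visible (an assumption).
    mesh.material.opacity = THREE.MathUtils.lerp(mesh.material.opacity, Math.max(reveal, 0.05), 0.1);
  }
  while (pings.length && pings[0].age > 10) pings.shift();   // drop pings that have expanded past the scene
}
```

With many visitors, every client would call emitPing() for every sound, which is exactly where the "does it become chaotic" question above would show up.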
Graham Wakefield
What is interesting in particular? You throw paint to discover the geometry: the mechanic is to reveal the world's geometry by throwing paint at it (but if you throw too much, it becomes hidden again).
How this could adapt to our project’s requirements? What if the paints were different chemicals, and they reacted with each other – so each visitor could be throwing a different chemical. Maybe some of them modify the world – melting it, or making something grow?
What are the challenges to adapt to our project's requirements?
Graham Wakefield
What is interesting in particular? The world is entirely made of point clouds -- which is simpler as geometry in a way, and you can always see through them. You reveal the geometry by "scanning" it.
How this could adapt to our project’s requirements? Multiple people scanning to reveal more
What are the challenges to adapt to our project's requirements? It seems too static. We would need a way to have the world change -- maybe scanning can also damage things?
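A small sketch of "scanning" a point cloud in three.js, assuming the Points object has a per-vertex color attribute that starts dark and a PointsMaterial with vertexColors enabled; here the scan ray is simply cast from the camera centre.

```js
import * as THREE from 'three';

const raycaster = new THREE.Raycaster();
raycaster.params.Points.threshold = 0.5;            // how close the ray must pass to a point

function scan(points, camera) {
  // Cast from the screen centre; a VR controller pose could set the ray instead.
  raycaster.setFromCamera(new THREE.Vector2(0, 0), camera);
  const hits = raycaster.intersectObject(points);
  const colors = points.geometry.getAttribute('color');
  for (const hit of hits) {
    colors.setXYZ(hit.index, 1, 1, 1);              // brighten the scanned point
  }
  if (hits.length) colors.needsUpdate = true;
}
```

Darkening already-revealed points over time, instead of only brightening them, would give the "scanning can also damage things" variant.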
Graham Wakefield
What is interesting in particular? Time freezes if you don't move. Every movement you make lets the rest of the world around you move too. In VR this is quite intense!
How this could adapt to our project’s requirements? Any kind of growth or decay in the world would be frozen until it is visited. Slowing down physics could be really interesting!
What are the challenges to adapt to our project's requirements? Multi-user: we may want time to move forward locally, i.e. only when a nearby visitor moves.
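A minimal sketch of the "time only moves when you move" rule: the world delta is the real frame delta scaled by how fast the local camera travelled. The 2 m/s full-speed threshold is an assumption; a multi-user variant could take the fastest nearby visitor instead.

```js
import * as THREE from 'three';

const clock = new THREE.Clock();
const prevCamPos = new THREE.Vector3();

function worldDelta(camera) {
  const dt = clock.getDelta();
  const moved = camera.position.distanceTo(prevCamPos);
  prevCamPos.copy(camera.position);
  // Full world speed at roughly 2 m/s of player movement (assumed); frozen when standing still.
  const timeScale = THREE.MathUtils.clamp(moved / (dt * 2 + 1e-6), 0, 1);
  return dt * timeScale;   // feed this, not dt, into physics / growth / decay updates
}
```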
Graham Wakefield
https://www.teamlab.art/w/graffiti_nature_reborn/
What is interesting in particular? You get to colour in your own animal, and after scanning the drawing, the animal moves around the floor for you to interact with. If you step on it, it scatters and dies but comes back somewhere else on the floor. There are also other visitors' animals around, and the animals can interact with each other.
How this could adapt to our project’s requirements? Drawing the animals can be done on all platforms and allows for multiplayer interactivity. Mobile users can look through their phones with AR and see their own or others' animals on the real-life floor, interacting with them by stepping on them or picking them up. Desktop users can look into the virtual world with all the animals and interact with them on screen. VR users can be smaller than the animals and try to escape from them. In-person viewers can have projections on the walls and ground, with another way to interact with the animals through sound or movement.
What are the challenges to adapt to our project's requirements? There are not many generative elements to it, so we'll have to add another part to the project to allow for generative growth in the virtual world: maybe adding children of the animals the users make, or other plants growing around the animals as time goes on.
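A hedged sketch of the drawing-to-creature step, assuming the visitor's coloured-in animal arrives as an HTML canvas: the canvas becomes a CanvasTexture on a sprite that wanders the floor. The wander behaviour is invented for illustration.

```js
import * as THREE from 'three';

function spawnDrawnAnimal(scene, drawingCanvas) {
  const texture = new THREE.CanvasTexture(drawingCanvas);     // the "scan" is just reading the canvas
  const sprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture, transparent: true }));
  sprite.position.set(0, 0.5, 0);
  sprite.userData.heading = Math.random() * Math.PI * 2;      // initial wander direction
  scene.add(sprite);
  return sprite;
}

function wander(sprite, dt) {
  sprite.userData.heading += (Math.random() - 0.5) * dt;      // drift the heading a little each frame
  sprite.position.x += Math.cos(sprite.userData.heading) * dt;
  sprite.position.z += Math.sin(sprite.userData.heading) * dt;
}
```

Spawning slightly varied copies of a sprite over time would be one cheap way to add the generative "children" suggested above.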
Erica Wellman
https://youtu.be/Su6OsTb1w9Q?si=frSieHFaOyllXcgz
What is interesting in particular? Lethal Company uses procedural generation to create its rooms every time you enter the building. The items to collect and the creatures are also placed randomly, except that each creature has its own algorithm specifying where and how it acts. These are interesting features that allow players to have endless new experiences in-game.
How this could adapt to our project’s requirements? I believe this could adapt to our project's generative, changing-world requirement and the worth-returning-to requirement (assuming the project goes along with the escape room idea as written in the Project Design Document, 2024/05/09). Instead of creating an escape-room-style game with a linear storyline, it would be more interesting to implement procedural generation where the rooms, corridors, and interiors are placed randomly each time, while the puzzles remain the same but appear in different places.
What are the challenges to adapt to our project's requirements? The main challenge in adapting procedural generation to our project is the short time we have to code an algorithm that works without problems by the deadline. Making sure the rules applied to each object (and the creatures, if added) work without bugs could take longer than is realistic within this time limit.
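A minimal sketch of the "same puzzles, different rooms" idea (not Lethal Company's actual algorithm): a small seeded PRNG places rooms on a grid and assigns a fixed puzzle list into random rooms, so one seed reproduces one layout. The grid size and room count are invented stand-ins.

```js
// Tiny deterministic PRNG (mulberry32) so a shared seed gives every client the same layout.
function mulberry32(seed) {
  return function () {
    seed |= 0; seed = (seed + 0x6D2B79F5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function generateLayout(seed, puzzles, roomCount = 8) {
  const rand = mulberry32(seed);
  const rooms = [];
  for (let i = 0; i < roomCount; i++) {
    rooms.push({ x: Math.floor(rand() * 10), z: Math.floor(rand() * 10), puzzle: null });
  }
  // The same fixed puzzles every run, shuffled into different rooms.
  for (const puzzle of puzzles) {
    rooms[Math.floor(rand() * rooms.length)].puzzle = puzzle;
  }
  return rooms;
}
```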
Sonomi Modica
https://www.anumberfromtheghost.com
What is interesting in particular? The world in the game "A Number from the Ghost" is vast, vibrant, and imaginative. It's a 3D walking-simulator game where players explore various rooms. The goal is to experience how light, sound, and the environment differ in each room. One interesting aspect I discovered while playing is that if you stay in one spot for a while, your vision blurs, and you're transported to a new room with a screen showing the same view as the original room. As you move around, the view on the screen changes accordingly. The room designs and lighting effects in this game are very cool, making exploration a truly enjoyable experience.
How this could adapt to our project’s requirements? Our project shares similarities with "A Number from the Ghost" in several ways. Like the game, our project is multi-platform and presented in first-person 3D. It also features audiovisual elements and offers a world that persists and changes with a responsive environment. Both projects include open-ended generative systems, creating a dynamic experience. Just like "A Number from the Ghost," our project aims to be a place worth visiting and revisiting. Considering these similarities, our project could also take the form of a walking-simulator game, offering endless possibilities for development.
What are the challenges to adapt to our project's requirements? As our project involves multiple users or clients, we must plan how to build the server and address any related issues. Additionally, we should brainstorm how our project functions, how to make it unique and engaging, and what messages or ideas we aim to convey through it.
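A hedged sketch of the server question raised above, assuming a Node.js relay built on the ws package: each client's update (for example a JSON position message) is broadcast to every other connected client. Persistence, rooms, and conflict handling are left out.

```js
// Minimal relay server sketch (assumed stack: Node.js + the "ws" package).
import { WebSocketServer, WebSocket } from 'ws';

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  socket.on('message', (data) => {
    // Relay the message (e.g. { id, position, rotation } as JSON) to everyone else.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(data.toString());
      }
    }
  });
});
```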
Yidan Zhang
https://store.steampowered.com/app/1435790/Escape_Simulator/
What is interesting in particular? I think this project simulates a 'real escape room' experience, with all the puzzles that need to be solved, and it allows multiple players to play at the same time. From my playing experience, I love the aesthetic and the cute mysteries. The game also provides a chance to experience different kinds of escape rooms that I might be too afraid to visit in reality.
How this could adapt to our project’s requirements? The puzzles are all interrelated and need to be solved in order. I think this game has a similar concept to our idea and can be a reference for style, dimension, and complexity.
What are the challenges to adapt to our project's requirements?
Nancy Jin
What is interesting in particular? The permanence in the way you need to build out a level as its space inverts on your return trip, so you need to consider creating a path that lets you go both ways. I believe this could result in an interesting tapestry should we give users tools like this to build out and reach goals. Emergent gameplay can provide interesting and unexpected consequences that stem from a player's actions, though this might be beyond the scope of this project.
How this could adapt to our project’s requirements? I envision giving users a randomly generated goal to reach in a 2D space: pick up an item and bring it back. While it is up to the user's discretion how they place building blocks, the structures can grow and expand in ways that aren't entirely predictable, the resulting creation being both user- and computer-generated. The different platforms can also lend themselves to unique ways users can view and interact.
What are the challenges to adapt to our project's requirements? Being able to properly scale and display what users have built while also enabling them to continue building out. Creating unique and interesting interactions, as well as the inversion of geometry, could lead to some complications; I anticipate this will need to be simplified and cut down, at least for an MVP. I'm also unsure how to enable interaction in the physical space; perhaps it can be left as observation only.
Connor Fitzmaurice
https://threejs.org/examples/?q=Periodic#css3d_periodictable
What is interesting in particular? Dynamic animation control with a cyberpunk graphic style.
How this could adapt to our project’s requirements? I think we could use the graphic style and smooth animation as a reference and apply it in the project.
What are the challenges to adapt to our project's requirements? The virtual animation effect would be a challenge for the developers.
HAOQIANGU
The complete edition is in this file: Related work & critique_ (Complete Edition), link: https://docs.google.com/document/d/1jLebG0bPNE6p3X3-b_Vgv7xzgea2v0voA0AE9KbNgks/edit?usp=sharing
What is interesting in particular?
01. marchingcubes: What I find most interesting are two aspects: (1) the rich and varied materials, which broaden the visual impact and can elicit a variety of emotions; (2) the flexibility of the graphics, by which I mean the highly adaptable relationships between shapes, including fusion, separation, and speed adjustments, among other settings. This flexibility helps artists leverage their foundational shapes to craft more intricate and adaptable graphic effects.
02. webgl_lensflares: The two things I find most interesting about this case are: (1) enabling free visual movement and exploration; (2) the halo effect, which enhances the exploratory, mysterious nature of the scene.
03. webgl_materials_cubemap_dynamic.html: I think the most interesting point in this case is the mirroring of some of the elements in the piece. This adds a wonderful connection and correlation between the elements.
04. webgl_geometry_terrain: The most interesting points are the fog effect implemented by FogExp2, which adds a sense of mystery and layering to the scene and makes the artwork more in line with the feeling of being in the sky among the clouds; the sense of freedom to explore brought by the first-person perspective controls; and the random terrain generation, which brings a sense of uncertainty and surprise to the artwork.
How this could adapt to our project’s requirements?
01. marchingcubes: The code's ability to generate dynamic shapes aligns well with the thematic significance of clouds in our artwork 'Coupling'. Marching Cubes could be used to create visuals of undersea creatures that light up as people approach and react to user interaction.
02. webgl_lensflares: 3D scene rendering; this code helps structure the 3D scene, including the layout and rendering of elements such as clouds. FlyControls enables free-flight camera control, letting users navigate and explore the scene with ease. The halo effect contributes to the sense of mystery and exploration, and this code helps us understand how to achieve it, supporting the artwork 'Coupling'.
03. webgl_materials_cubemap_dynamic.html: It could be set up so that 8-10 elements of the seabed have this effect, randomly distributed among the other jellyfish or small fish. That way some individuals of a species appear more closely related to the objects in their surroundings, which adds visual interest and space for participants to think (depending on which undersea creatures we choose to work with).
04. webgl_geometry_terrain: FogExp2 realises the fog effect, adding a sense of mystery and hierarchy to the scene and reinforcing the feeling of being in the sky among the clouds. Combined with the clouds and undersea creatures in my work, FogExp2 will create a visual effect of 'like fog but not fog, like water but not water', delivering visual impact while enriching the participants' experience. First-person control, provided by FirstPersonControls, enhances the sense of free exploration, allowing participants to delve deeper into the artwork. The generateHeight() and generateTexture() functions can be used to generate random terrain data and corresponding texture maps, providing more possibilities for visual effects (a minimal sketch follows below).
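A minimal sketch of the FogExp2 and random-terrain ideas from item 04, assuming a recent three.js. The colour and density match the webgl_geometry_terrain example; plain random heights stand in for its generateHeight() noise.

```js
import * as THREE from 'three';

const scene = new THREE.Scene();
scene.background = new THREE.Color(0xefd1b5);
scene.fog = new THREE.FogExp2(0xefd1b5, 0.0025);        // density-based fog, as in webgl_geometry_terrain
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1));

// Tiny stand-in for generateHeight(): random heights on a plane (the example layers smoothed noise instead).
function makeTerrain(segments = 64, heightScale = 5) {
  const geometry = new THREE.PlaneGeometry(200, 200, segments - 1, segments - 1);
  geometry.rotateX(-Math.PI / 2);                       // lay the plane flat, Y up
  const pos = geometry.getAttribute('position');
  for (let i = 0; i < pos.count; i++) {
    pos.setY(i, Math.random() * heightScale);
  }
  geometry.computeVertexNormals();
  return new THREE.Mesh(geometry, new THREE.MeshLambertMaterial({ color: 0xc7b08b }));
}
scene.add(makeTerrain());
```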
What are the challenges to adapt to our project's requirements?
02. webgl_lensflares: The support from the previous example makes the possibilities for presenting the artwork clearer, but the biggest challenge is also emerging: how to make sure the code keeps working as the visuals and interactions become richer and more complex. There may be compatibility challenges and issues.
03. webgl_materials_cubemap_dynamic.html: I believe a challenge lies in ensuring that these changes do not conflict with the technical aspects mentioned earlier.
04. webgl_geometry_terrain: Generating and rendering large-scale terrain data may require significant computational resources; how can we optimise this process to ensure a smooth user experience? Integration among elements and visual coherence are crucial, and when designing interactive sections it's essential to consider user behaviour and feedback to ensure intuitive operation and minimise confusion and frustration.
Wen Luo (Winnie)
https://threejs.org/examples/#webgl_postprocessing_pixel
What is interesting in particular? This work demonstrates how beautifully 3D models can be rendered to mimic a 2D pixel art style, significantly enhancing the artistic diversity of the project. Additionally, the project shows that lighting and polygons can be precisely represented in the pixel art style, ensuring that important details are not lost during the transformation process. This precision is crucial as it maintains the integrity of the original 3D design while presenting it in a stylized, 2D format.
How this could adapt to our project’s requirements? In my vision for our project, we can adapt this rendering technique to allow different players to experience the game world in unique visual styles. For instance, Player 1 could see the world in its original 3D form, Player 2 could view it with pixelated post-processing, and Player 3 could see another post-processing effect such as GTAO. This approach not only diversifies the visual experience but also introduces a layer of interactive puzzle-solving: each player's world could contain special items that only they can see and interact with, which are essential for collective decryption tasks. Players would need to cooperate to piece these clues together; for example, a player experiencing the pixel art style might spot details or clues only visible from specific angles (similar to the game The Witness). These pixelated clues could be crucial for solving puzzles that require input from all players. If we don't adopt the initial idea, pixel post-processing could still become an integral part of our project's artistic direction; for instance, we could create an effect where the game transitions from a detailed 3D view into a flat pixelated view and back again, similar to the visual transition when entering a battle in the Pokémon games.
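A hedged sketch of per-player post-processing, assuming a recent three.js with the addons import map: each client builds its own EffectComposer, so one player's view can go through RenderPixelatedPass while another renders the same scene untouched.

```js
import { EffectComposer } from 'three/addons/postprocessing/EffectComposer.js';
import { RenderPixelatedPass } from 'three/addons/postprocessing/RenderPixelatedPass.js';
import { OutputPass } from 'three/addons/postprocessing/OutputPass.js';

// Build a pixelated view of an existing scene/camera/renderer.
function makePixelView(renderer, scene, camera, pixelSize = 6) {
  const composer = new EffectComposer(renderer);
  composer.addPass(new RenderPixelatedPass(pixelSize, scene, camera));
  composer.addPass(new OutputPass());
  return composer;   // this client calls composer.render() instead of renderer.render()
}
```

Other clients would simply keep calling renderer.render(scene, camera), or swap in a different pass.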
What are the challenges to adapt to our project's requirements? Rendering different post-processing effects for multiple players simultaneously can be performance-intensive. On the other hand, my first idea places high demands on level design, and it can be challenging to maintain fairness among players. One way to mitigate this is by designing puzzles that are non-linear rather than linear, so players can choose which puzzle to solve first among multiple options (like The Witness). This flexibility helps prevent a situation where one player's inability to solve a puzzle quickly diminishes the experience for everyone else.
Zhihan Ru
https://store.steampowered.com/app/2963650/Echo_Defy_Death/
What is interesting in particular? This upcoming game shares similarities with some of the ideas discussed in our class, namely the idea of a blank world and using sound to navigate it. It does this in a 2D setting and uses the sound of your character's "echolocation", as well as the sounds of enemies, to reveal areas visually.
How this could adapt to our project’s requirements? While this game is not released yet, the promo content on the page showcases a well-done use of the echolocation mechanic. Having different reveal colours for different things, depending on how they can be interacted with, would add depth to the idea as well, similar to how this game reveals enemies as red but neutral objects as white.
What are the challenges to adapt to our project's requirements? Transferring the concept from 2D to 3D space and making it viewable across the different mediums, especially mobile.
Jacob Brintnell
What is interesting in particular? I find the idea of a third-person perspective interesting because it removes some of the constraints of movement within the physical world without losing the immersive component of the experience.
How this could adapt to our project’s requirements? I imagine we can have our VR space where multiple users each have a different perspective within the game: one perspective is the third-person view, another is first-person, and others viewing on a desktop or mobile device might have some sort of 2D perspective of the whole environment.
What are the challenges to adapt to our project's requirements? The challenge is inherent in deciding what the experience would be. I imagine we may face challenges in determining what kind of interaction the users would have with one another and with the space itself. Another challenge with a third-person perspective would be keeping the camera movement in sync with the crossfading of scenes without putting too much strain on the player's neck.
Boluwaji Adeyanju
What is interesting in particular? The Museum of Other Realities captivates with its immersive experience, inviting visitors into virtual realms where art transcends traditional boundaries. What sets "The Museum of Other Realities" apart is its embrace of interactivity, allowing audiences to engage directly with the artworks, fostering a dynamic and participatory environment.
How this could adapt to our project’s requirements? Museum of Other Realities allows multiple users to enter the virtual gallery at the same time and to interact and communicate with other users. The artworks on display are presented in three dimensions, creating an immersive environment that allows users to interact with the art from multiple perspectives. Through its audiovisual elements, Museum of Other Realities enhances the immersive experience by combining sound and visual effects that respond to user actions and interactions. Museum of Other Realities's dynamic nature ensures that the virtual world is persistent and constantly changing, with regular updates and new exhibitions encouraging users to return to discover new content and experiences. In addition, Museum of Other Realities's responsive environment adapts to user input and environmental factors, creating a more personalized and engaging experience for each visitor. With generative systems in place, artists can create dynamic and evolving artworks, adding layers of complexity and interactivity to the virtual environment. Thus, Museum of Other Realities makes it a place worth visiting and returning to for ongoing discovery and inspiration.
What are the challenges to adapt to our project's requirements? It will pose challenges in both technical compatibility and content creation. Ensuring seamless functionality across multiple platforms, including mobile devices, desktop computers, VR headsets, and physical spaces like ACW103, demands careful optimization and testing to maintain a consistent user experience. There's also the amount of time it takes to make compelling 3D artwork, audiovisual experiences, and dynamic content.
Andressa Zhu
What is interesting in particular? I think that, in general, Half-Life: Alyx works great in the VR environment, offering very interesting visuals and experiences. I would like to focus on the object interaction in the game, since I feel it was done exceptionally well. Despite its advanced hand mechanics and animation, I want to focus on the physics and the general concept of movable, interactable elements in different situations.

The first interesting mechanic was interaction with movable random elements leisurely standing on some surface. I was particularly amazed when the player had to move items by hand to reveal a hidden gun on a shelf. The level of physical interaction needed to reveal and grab the item is very interesting, because completing the task requires not only a grabbing action but also moving the items "with your own hands" to uncover what you need. That creates an interesting physical randomness that leads to interaction with the environment and modification of object positions based on physics similar to the real world.

What is even more interesting is that elements with a specific purpose can be interacted with dynamically and paired with other physical objects. A great example is the bucket, which is emptied to search for elements inside. The physical gesture required combines an interesting coupling of steps: (1) turning the bucket and (2) moving the tossed items around to find the desired object. It seems a very interesting mechanic in VR, not only because of the interaction but also because the task requires some searching ability. The player doesn't know what's in the bucket, so the mechanic boosts the cognitive side of the game and makes the reveal interesting.

Last but not least, it was interesting to watch a brief interaction between elements based on position. In the video, the user had to put an object in a specific place to enable some action. That presents an interesting spatial interaction where an object that is part of the "terrain" interacts with physical objects; this mechanic creates possible activities that, beyond interaction, serve a more extensive spatial purpose.

Items moving by hand - https://youtu.be/O2W0N3uKXmo?si=gKB43oJlBLbN3BJb&t=57
Bucket interaction - https://youtu.be/xRSF31dbLBU?si=xmtuw5BKVYFi-O-P&t=31
Placement of element on the surface - https://youtu.be/O2W0N3uKXmo?si=Hdivq-VPFIZ5aCS9&t=72
How this could adapt to our project’s requirements? I believe that Half-Life: Alyx offers great fidelity in terms of interaction. Despite its advanced mechanics, the concepts presented are very interesting and can certainly be adapted to three.js as simpler concepts. It would be interesting to interact with physics-enabled elements by finding them, moving, placing, colliding, and throwing them with some hand movement or gesture. That opens up possibilities to create generative living creatures that move around and interact with the world and each other, to reveal or erase the visibility of parts of the map, to change the map/environment/agents, and to create human-influenced constraints and variables that modify the whole ecosystem's behaviour. That would create a bond between the user and the ecosystem, as well as uncertainty about how the ecosystem will develop, which is a reason to come back to the "digital space" to see the changes.

It would be particularly interesting to give the player the ability to create dynamic agents by colliding elements like spheres, similar to the collision system seen in Half-Life: Alyx. Given some random spheres lying on the ground, the player could interact with them with chaotic hand movement to create a "generative" agent, or "grab and place" to create a desired shape. It would be particularly interesting if the agents had some generative species and performed actions based on those. It would also be interesting to grab an agent and be able to move it somewhere else, modify it, or examine its hidden parts by moving it around, similar to Half-Life: Alyx's bucket scene. That would create interesting interaction between the player and the environment, making the user more curious about the environment and more influential on the ecosystem.

I feel that the cognitive part of VR is also important; therefore, there should be some object to pick items/abilities from, as well as a way to search for desired objects in hidden places or stacks of items, similar to the bucket scene. The objects found could have extra abilities, for example accelerating or freezing time, or a partial map reveal based on microphone sounds. That would create persistence and change in the world based on variables from outside the ecosystem. As seen in the Half-Life: Alyx video, it would be interesting to create designated spaces where the user can place spheres to attract or repel agents, or to trigger some action like a food source. That would definitely add to the responsiveness of the environment. Assuming that agents can replicate, die, and have to eat in certain spaces, it would create an interesting user- and time-influenced ecosystem. Such interaction shapes the project as a place worth returning to, since the user's placement decisions will influence the ecosystem's changes, which continue without direct human intervention. Because of that, the user would come back to see the expanded ecosystem as well as dynamic real-time changes. The geometries seen in Half-Life can be introduced as simple instanced geometries to interact with, which you can throw around and place to perform some action.
I believe that a lot of generated interactive objects creating feedback and action would be good for functionality, and this can be achieved with the cannon.js physics library (https://schteppe.github.io/cannon.js/) and the three.js raycaster, which are not difficult to implement.
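A minimal sketch of the physics-plus-raycaster picking described above, assuming cannon-es (the maintained ES-module continuation of cannon.js) alongside three.js; the sphere size and impulse strength are invented.

```js
import * as THREE from 'three';
import * as CANNON from 'cannon-es';   // assumed: the ES-module continuation of cannon.js

const world = new CANNON.World();
world.gravity.set(0, -9.82, 0);

// Each sphere gets a three.js mesh for rendering and a cannon body for physics.
function addSphere(scene, x, y, z) {
  const mesh = new THREE.Mesh(new THREE.SphereGeometry(0.3), new THREE.MeshNormalMaterial());
  const body = new CANNON.Body({ mass: 1, shape: new CANNON.Sphere(0.3) });
  body.position.set(x, y, z);
  mesh.userData.body = body;
  scene.add(mesh);
  world.addBody(body);
  return mesh;
}

// Pick a sphere with a raycast and flick it away along the view ray.
const raycaster = new THREE.Raycaster();
function flick(camera, spheres, pointer) {          // pointer: normalised device coordinates
  raycaster.setFromCamera(pointer, camera);
  const hit = raycaster.intersectObjects(spheres)[0];
  if (hit) {
    const dir = raycaster.ray.direction;
    hit.object.userData.body.applyImpulse(new CANNON.Vec3(dir.x * 5, 3, dir.z * 5));
  }
}

// Step the physics world and copy body transforms back onto the meshes.
function stepPhysics(dt, spheres) {
  world.step(1 / 60, dt);
  for (const mesh of spheres) {
    mesh.position.copy(mesh.userData.body.position);
    mesh.quaternion.copy(mesh.userData.body.quaternion);
  }
}
```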
What are the challenges to adapt to our project's requirements? I believe that, in general, using a library for physics and a raycaster to pick objects is quite simple. On the other hand, physics and lots of elements require a lot of CPU power, so fidelity similar to Half-Life's interaction might be low, considering it should work on multiple devices. What is interesting and might be difficult to implement is examination of an agent: I'm not sure whether the controller makes it possible to rotate the agent around an axis other than X, so implementing movement/rotation on all three axes could be difficult. Another difficult thing is creating a rigid body from generative geometry based on other objects' positions. It might be interesting, but not too intuitive, to move objects around to create a desired shape, and extremely difficult to merge those objects as meshes and animate them. I feel there is a way to just copy some positions into a buffer geometry to create the general shape, but it's worth examining further. I also haven't personally created interactive spots to place objects on, but I feel that requires some radius and collision checks, which theoretically might not be too difficult.
Philip Michalowski
https://joonmoon.net/Augmented-Shadow
What is interesting in particular? I like how the project presents an intersection between the physical and virtual worlds in a playful way. It's interesting how the shadows react with other things around them.
How this could adapt to our project’s requirements? We could have an element of physical computing, or a physical component, which affects what appears in the virtual world and how it appears, which is then explored by users connected via VR/desktop/mobile platforms.
What are the challenges to adapt to our project's requirements? How can users interact with the built elements or each other, instead of just the physical elements, and how will this affect the virtual world? I also think adding a physical component and trying to connect it to everything else will take time.
Alice Chai
https://hackaday.io/project/2598-vr-tooth-brushing
What is interesting in particular? The interesting point of this project is its controller; this DIY specialty controller caught my eye.
How this could adapt to our project’s requirements? I think our project could also do with some external specialty controllers, like pull-sticks or other ways to interact.
What are the challenges to adapt to our project's requirements? I think the difficulty in adapting this idea to our project is that it requires some hardware-related knowledge, such as microcontrollers or circuits.
Yinqi Li
https://tweenjs.github.io/tween.js/examples/11_stop_all_chained_tweens.html
What is interesting in particular? The demo showcases how, by using Tween.js, objects smoothly transition between states, which makes the scene more dynamic and engaging.
How this could adapt to our project’s requirements? By incorporating smooth animations for UI elements in a 3D space (as shown in the demo), we can significantly enhance the user experience, making interfaces feel more fluid and responsive.
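A small sketch of chained Tween.js animations in the style of the linked demo, assuming the @tweenjs/tween.js package and its classic global TWEEN.update() loop; the panel object is an invented stand-in for a real UI element.

```js
import * as TWEEN from '@tweenjs/tween.js';

const panel = { y: 0, opacity: 0 };                 // stand-in for a 3D UI panel's animatable values

const fadeIn = new TWEEN.Tween(panel)
  .to({ opacity: 1 }, 500)
  .easing(TWEEN.Easing.Quadratic.Out);

const slideUp = new TWEEN.Tween(panel)
  .to({ y: 2 }, 800)
  .easing(TWEEN.Easing.Cubic.InOut);

fadeIn.chain(slideUp).start();                      // slideUp starts automatically when fadeIn finishes

function animate(time) {
  requestAnimationFrame(animate);
  TWEEN.update(time);                               // advance all active tweens
  // copy panel.y / panel.opacity onto the actual mesh or HTML element here
}
requestAnimationFrame(animate);
```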
What are the challenges to adapt to our project's requirements?
Haobang Deng
What is interesting in particular? I really like the idea of letting the user explore the world through their own decisions. I really like the spray of the ink, the mechanics, and the way the user doesn't know what to expect (at least visually, they only see things after the ink interactions).
How this could adapt to our project’s requirements? The ideas are very similar in that everything is refreshing (new, unexpected). This could also be done in a month, since much of it saves a lot of design time (colours and materials can be black and white) when creating generative art.
What are the challenges to adapt to our project's requirements? We have to be different in a way that combines the ideas of generative art and interaction.
Lau Wai Kwok
https://threejs.org/examples/#webgl_interactive_buffergeometry
What is interesting in particular? The object is formed from a BufferGeometry, and it rotates automatically. The colour changes smoothly.
How this could adapt to our project’s requirements? This geometry could be designed into an effect for our project, to make something like an energy spot.
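A hedged sketch of an "energy spot" built from a BufferGeometry with per-vertex colours, loosely following the interactive buffergeometry example; the triangle count and sizes are arbitrary.

```js
import * as THREE from 'three';

function makeEnergySpot(triangleCount = 200) {
  const positions = new Float32Array(triangleCount * 9);   // 3 vertices * xyz per triangle
  const colors = new Float32Array(triangleCount * 9);      // matching rgb per vertex
  for (let i = 0; i < positions.length; i += 3) {
    positions[i]     = (Math.random() - 0.5) * 4;
    positions[i + 1] = (Math.random() - 0.5) * 4;
    positions[i + 2] = (Math.random() - 0.5) * 4;
    colors[i]     = Math.random();
    colors[i + 1] = Math.random();
    colors[i + 2] = Math.random();
  }
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
  geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));
  const material = new THREE.MeshBasicMaterial({ vertexColors: true, side: THREE.DoubleSide });
  return new THREE.Mesh(geometry, material);
}

const spot = makeEnergySpot();
// In the render loop: spot.rotation.y += 0.003; gives the slow automatic rotation seen in the demo.
```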
What are the challenges to adapt to our project's requirements? The main challenge is programming skill; there is a lot for me to learn.
Yilin Ye
https://artsandculture.google.com/
What is interesting in particular? Google Arts & Culture's AR features allow users to view artworks and cultural artifacts in actual size at home, offering opportunities to walk through virtual galleries and closely engage with collections from museums worldwide. These technologies enhance the interactivity of art education and provide global audiences with barrier-free access to cultural treasures from around the world.
How this could adapt to our project’s requirements? To adapt the project for multi-user support across multiple platforms, including mobile, desktop, and VR headsets, the use of the WebXR API is recommended for compatibility. Utilizing Three.js will help create immersive three-dimensional visual content, while the WebAudio API enhances the audio experience. Additionally, integrating a user-friendly interface that allows students to upload and display their Three.js-based projects will enhance interactivity and sustained engagement.
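A minimal sketch of the cross-platform setup suggested above, assuming a recent three.js with the addons import map: one renderer serves flat screens, and VRButton enables the WebXR path only when a headset is available.

```js
import * as THREE from 'three';
import { VRButton } from 'three/addons/webxr/VRButton.js';

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.xr.enabled = true;                                    // opt the renderer into WebXR
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer));    // shows "Enter VR" only if supported

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.set(0, 1.6, 3);

renderer.setAnimationLoop(() => renderer.render(scene, camera));   // required for XR frame timing
```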
What are the challenges to adapt to our project's requirements? To adapt this project to its requirements, several key technological challenges must be addressed. Firstly, ensuring that the project runs smoothly across various devices and platforms—such as mobile, desktop, and VR headsets—necessitates resolving compatibility and performance optimization issues, especially for 3D visual and audio content to ensure a fluid user experience.
Jiaxiang Song