Experiential Design - Task 1 : Trending Experience
21/04/2025 (Week 1 - Week 4)
Ho Winnie / 0364866
Experiential Design / Bachelor of Design (Honours) in Creative Media
Task 1 : Trending Experience
1. Lectures
In addition to the module breakdown, Mr. Razif also took time to explain the key differences between Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR). He highlighted that Virtual Reality (VR) creates a fully immersive digital environment that completely replaces the real world, typically experienced through headsets like the Oculus Rift or HTC Vive. On the other hand, Augmented Reality (AR) overlays digital content onto the real world using devices like smartphones or AR glasses—for example, Pokémon GO or Instagram filters. Lastly, Mixed Reality (MR) blends both the physical and digital worlds, allowing real and virtual elements to interact in real-time. MR requires more advanced technology, such as Microsoft HoloLens, and is often used in complex simulations or interactive training environments.
To give us a better idea of what we’ll eventually be creating, Mr. Razif also shared a playlist of seniors’ past works. These examples served as inspiration and gave us insight into the level of creativity and technical execution expected in this module. It was both motivating and informative to see how previous students interpreted the brief and pushed the boundaries of immersive media.
As part of the lesson, we also explored user mapping and customer journey maps. Mr. Razif emphasized the importance of understanding the user’s perspective—what they go through before, during, and after interacting with a service or space. These tools help designers visualize the complete user experience, including touchpoints, emotions, and behaviors, enabling us to identify areas for improvement.
To put this theory into practice, we were divided into groups for an engaging activity. Each group was assigned or allowed to select a specific location—such as a café, museum, or train station—and tasked with creating a user journey map. We had to chart out the user’s steps from the moment they approach the space to the moment they leave, highlighting key pain points (challenges or frustrations), gain points (positive experiences), and propose solutions to enhance the overall experience.
A unique twist in the task was that some of our proposed solutions had to incorporate Augmented Reality (AR). Our group chose Tokyo Disneyland as our location.
2. Task 1 - Trending Experience
- Example: The GazePointAR system lets you say, “What’s that?” while pointing or looking at an object—and the AR assistant knows exactly what you mean. This is especially valuable in museums, where you might want to explore artifacts without reading long text panels.
- In retail, voice-driven AR mirrors are being tested, where customers can say, “Try this shirt in blue,” and instantly see it on themselves.
What I found most inspiring here is the accessibility. Users who might struggle with touch interfaces—like the elderly or people with disabilities—can now engage with AR in a more natural, inclusive way.
![]() |
| Fig 2.2 GazePoint AR |
Today’s leading AR platforms go even further, combining voice, eye tracking, and gesture control:
- Apple Vision Pro supports eye tracking and voice input simultaneously. You can look at an icon and say “Open Messages,” and it works—no touch needed.
- Meta’s Orion project introduces neural interfaces that detect finger movements from your wrist, allowing for micro-gestures. This is particularly promising for contexts like surgery or factory work where precision and cleanliness matter.
![]() |
| Fig 2.3 Meta's Orion Project |
This type of place-based storytelling blends historical facts with sensory engagement—helping travelers emotionally connect with the local culture and history, rather than just observing it from a distance.
![]() |
| Fig 2.4 Singapore Tourism AR |
Tourism-focused AR doesn’t just enhance famous landmarks—it also reveals hidden gems. A tucked-away alley might be enriched with an AR reenactment of a forgotten street opera performance, or a nondescript building might reveal its role in an anti-colonial resistance movement.
Moreover, some platforms use location-aware progression, encouraging users to “unlock” stories as they physically walk through different areas—creating a gamified sense of discovery akin to a digital treasure hunt.
It’s a reminder that AR is not just about visual overlays—it’s about building meaningful, interactive experiences that connect people to places, histories, and communities in a more intuitive and personalized way.
As I started building an AR pet experience, one of my biggest goals was to make interactions feel as natural as talking to a real pet. That meant integrating voice commands—so users could simply say “sit,” “come here,” or “play dead” and have their virtual pet respond in real-time. To do that, I explored several Unity-compatible tools that could handle speech recognition effectively.
I first looked into Microsoft Azure Speech SDK, which turned out to be one of the most reliable options. It supports real-time voice input, multiple languages, and works well with Unity. For an app where kids or adults might speak casually to their AR pet, this SDK gives me the flexibility to interpret both short commands and full phrases like “can you fetch the ball?”
Another option I considered was Google Cloud Speech-to-Text. Its accuracy and broad language support make it great for understanding different tones and speech patterns. This is especially useful if I want my AR pet to respond to users with different accents or even in non-English environments.
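To connect either of these speech services to the pet, the recognized transcript still has to be mapped to an in-game action. Below is a minimal sketch of that mapping step in Unity C#, under my own assumptions: the class name, the OnSpeechRecognized entry point, and the Animator trigger names are all placeholders, and the transcript string would come from whichever SDK (Azure or Google) does the actual recognition.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: maps a recognized speech transcript to a pet action.
// The speech SDK's result callback would pass its text into OnSpeechRecognized().
public class PetVoiceCommands : MonoBehaviour
{
    public Animator petAnimator; // assigned in the Inspector

    // Hypothetical trigger names; these must match the pet's Animator Controller.
    private static readonly Dictionary<string, string> commandTriggers =
        new Dictionary<string, string>
        {
            { "sit", "Sit" },
            { "come here", "Come" },
            { "play dead", "PlayDead" },
            { "fetch the ball", "Fetch" }
        };

    public void OnSpeechRecognized(string transcript)
    {
        string text = transcript.ToLowerInvariant();
        foreach (var pair in commandTriggers)
        {
            // Substring match so full phrases like
            // "can you fetch the ball?" still trigger the action.
            if (text.Contains(pair.Key))
            {
                petAnimator.SetTrigger(pair.Value);
                return;
            }
        }
    }
}
```

Keeping the mapping separate from the recognizer also means the same script works regardless of which speech service ends up being used.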
Step-by-Step Breakdown of the Experience:
1. Patient Identification & Scan Sync
- The system syncs with the patient's latest scan results (e.g., CT, MRI, X-ray).
- A 3D model of the relevant organ or body system is generated instantly (e.g., lungs, heart, spine).

2. Interactive AR Visualization
- The organ appears as a floating 3D hologram in front of the patient (on a tablet or via AR glasses).
- It rotates slowly to show different angles.
- Problem areas (e.g., tumor, blockage, inflammation) are highlighted in red or animated to show abnormal movement.

3. Layered Explanations
- The doctor can toggle between healthy vs. affected states (e.g., normal vs. damaged lung tissue) and step-by-step animations of disease progression or surgical procedures.

4. Touch & Voice Interactions
- Patients can ask, "What is this area?" and a label will pop up.
- They can tap on a highlighted spot for text-based info or to hear a simplified explanation.

5. Treatment Simulation
- The doctor shows how surgery or medicine would work: a virtual stent expands a blocked artery, a medication animation shows reduced inflammation, and post-surgery visuals depict expected recovery.

Furthermore, AR could also guide patients on how to use medical devices at home after surgery.
![]() |
| Fig 2.5 AR Mockups |
Marker-Based AR uses predefined visual cues—usually images or objects called markers—that the AR system can recognize through the camera. When the system detects the marker, it overlays digital content (such as 3D models, animations, or information) on top of it. This type of AR is known for its precision in positioning digital elements and is commonly used in books, posters, product packaging, and educational materials.
In contrast, Markerless AR does not rely on specific visual markers. Instead, it uses device sensors like GPS, accelerometer, gyroscope, and camera data to place digital content in the real world. Markerless AR allows for more flexible and interactive experiences, such as placing virtual furniture in a room (as seen in apps like IKEA Place) or overlaying digital directions on streets through AR navigation.
By starting with Marker-Based AR in Unity, we’re learning the foundational principles of AR development—such as camera setup, image recognition, and 3D object interaction—which will later support more complex Markerless AR projects.
To begin developing a Marker-Based AR experience, the first step is to download and install Unity, a popular game engine that supports AR development. After setting up Unity, we need to register for a Vuforia developer account—Vuforia is an AR SDK (Software Development Kit) that integrates with Unity and allows for the recognition and tracking of real-world images.
Once registered, we proceed to download the Vuforia Engine package and import it into our Unity project. Within the Vuforia Developer Portal, we then create a new database and upload our chosen image(s) to be used as image targets. These image targets act as the visual markers that the AR system will recognize through the device camera.
It is important to ensure that the image rating is at least 3 stars or higher. Vuforia assigns a rating based on the image’s visual features—such as contrast, complexity, and clarity. Images rated below 3 stars are generally more difficult for the AR system to recognize accurately, which can negatively impact the user experience.
![]() |
| Fig 2.6 Creating Target Manager Using My Image |
![]() |
| Fig 2.7 Creating License |
In Unity, I made sure to set the cube as a child of the Image Target in the hierarchy. This setup is important because it ensures that the cube only appears when the image target is scanned using the device camera (refer to Fig. 2.1). If the cube isn’t properly nested under the Image Target, it won’t be tied to the marker and won't show up during scanning.
![]() |
| Fig 2.8 Nesting Cube Above Image |
I also double-checked that I selected the correct image target and confirmed that my Vuforia license key was entered correctly. To do this, I clicked on the ARCamera in the scene, opened the Vuforia Configuration, and pasted the license key into the appropriate field. This step is crucial because the key enables Vuforia’s AR features in my project.
![]() |
| Fig 2.9 Inputting Correct License Key |
Once I had everything set up, I switched on the camera within the Unity play mode to test the AR experience. As soon as I pointed the camera toward the printed image target, the system successfully recognized it, and the cube immediately popped up on top of the image in real time. This confirmed that the cube was correctly linked as a child to the image target and that the marker detection was functioning as expected.
Seeing the cube appear precisely when the marker was detected gave me a clear sense of how Marker-Based AR works. It felt rewarding to see all the configurations—such as the image database, license key, and Vuforia settings—come together to produce a working AR effect.
Final Outcome Of Exercise :
We learned how to create and customize a Canvas in Unity, which acts as a container for all UI elements. Within this canvas, we added buttons that serve as user inputs to trigger specific actions. By assigning scripts to these buttons, we enabled basic interactivity such as making AR cubes appear or disappear with a single click.
Next, we added two buttons to the canvas by right-clicking on the canvas in the Hierarchy panel, navigating to UI → Button (TextMeshPro). We renamed the first button to BTN_HIDE and changed its label to “HIDE”, and did the same for the second button, renaming it to BTN_SHOW with the label “SHOW”. This was done by expanding each button object and editing the child Text (TMP) component from the Inspector panel.
After placing and arranging the buttons in the scene, we wrote simple C# scripts to connect these buttons to our AR cube. Using Unity’s EventSystem, we linked each button’s OnClick() function to a method that either hides or shows the cube. This allowed us to test and confirm that clicking “HIDE” made the cube disappear, while “SHOW” made it reappear—demonstrating how user interfaces can control 3D AR objects effectively.
![]() |
| Fig 3.1 Adding UI buttons |
To implement this, I modified each button's Color Tint settings in the Button component (Inspector panel). I adjusted the Pressed color for each button to reflect the desired visual feedback when the user taps on them during runtime.
Additionally, I connected the buttons to actual functionality using the OnClick() event section of the Button component. We assigned the GameObject containing the AR Cube to each button's OnClick() field, and then used Unity's built-in function GameObject.SetActive(bool):

- For the "HIDE" button, we set SetActive(false) to make the AR Cube disappear.
- For the "SHOW" button, we set SetActive(true) to make the AR Cube reappear.
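The same hide/show behaviour can alternatively be wrapped in a small script whose public methods are wired to the buttons' OnClick() events. A minimal sketch, with class and field names of my own choosing:

```csharp
using UnityEngine;

// Sketch: attach to any scene object and drag the AR Cube
// (the object nested under the Image Target) into the arCube field.
// Each button's OnClick() event then calls HideCube() or ShowCube().
public class CubeToggle : MonoBehaviour
{
    public GameObject arCube; // assigned in the Inspector

    public void HideCube()
    {
        arCube.SetActive(false); // cube disappears from the marker
    }

    public void ShowCube()
    {
        arCube.SetActive(true); // cube reappears on the marker
    }
}
```

Using a script rather than assigning SetActive directly in the Inspector makes it easier to extend the behaviour later, for example by adding sound or animation when the cube is toggled.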
![]() |
| Fig 3.2 Connecting Functions To Button |
We started by selecting the Cube and opening the Animation window, where we recorded a new animation clip. By adding keyframes on the timeline, we created a looping animation sequence that made the Cube rotate smoothly over time. This gave us a hands-on understanding of how keyframe-based animations work in Unity, and how motion can be defined frame-by-frame using position, rotation, or scale changes.
![]() |
| Fig 3.4 Animation Timeline |
To refine the interaction further, we introduced buttons to represent the animation’s state—whether it's playing or stopped. This was done by enabling or disabling the Animator component directly using the button’s OnClick() event.
Specifically, we used Unity’s Animator.enabled property:
- When the "PLAY" button is clicked, we ticked (enabled) the Animator component, allowing the cube's animation to play.
- When the "STOP" button is clicked, we unticked (disabled) the Animator component, which stopped the animation playback.
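A small script could expose these two actions as public methods for the buttons' OnClick() events instead of toggling the component by hand. A minimal sketch, assuming the cube's Animator is assigned in the Inspector (names are placeholders):

```csharp
using UnityEngine;

// Sketch: wired to the PLAY and STOP buttons' OnClick() events.
public class AnimationToggle : MonoBehaviour
{
    public Animator cubeAnimator; // the cube's Animator component

    public void PlayAnimation()
    {
        cubeAnimator.enabled = true;  // resumes the looping rotation clip
    }

    public void StopAnimation()
    {
        cubeAnimator.enabled = false; // freezes the cube in its current pose
    }
}
```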
![]() |
| Fig 3.5 Animator Component On Buttons |
Over the past few weeks, engaging with both conceptual and hands-on AR activities has significantly broadened my understanding of how Augmented Reality works—not just technically, but also from a user experience and design perspective.
In Week 3 Exercise 1, imagining a hospital consultation scenario helped me realize the powerful role AR can play in solving real-world problems. By visualizing how a 3D organ model can be projected in front of a patient, I understood how AR can simplify complex information, bridge communication gaps, and improve decision-making. It made me think more critically about empathy in design—how AR can empower users through visual clarity and interactivity, such as toggling between healthy and diseased tissue, or asking questions via voice commands. This exercise sparked my appreciation for AR not just as a cool technology, but as a meaningful tool for education, communication, and emotional support.
In Week 3 Exercise 2, learning about Marker-Based vs. Markerless AR gave me the foundational technical knowledge I needed to build functional AR applications. Setting up Vuforia in Unity, uploading image targets, and experiencing the real-time detection of an image marker helped me grasp the underlying logic of how AR systems track, detect, and anchor virtual content. I also learned the importance of image quality and hierarchy in Unity—ensuring the cube was a child of the image target taught me how spatial linkage between digital and physical elements works. This technical setup demystified how AR apps "know" where to place objects in the real world.
Moving into Week 4 Exercise 1, designing UI buttons in Unity gave me my first real taste of interactive AR. I understood how a basic interface could allow users to control 3D content with touch—such as hiding and showing an AR object on demand. Learning about Canvas, TextMeshPro, EventSystems, and OnClick functions made me realize how important user control and feedback are in AR experiences. The color feedback we added to buttons made the interaction more intuitive, teaching me that good UI design in AR needs to communicate clearly and respond visually to user actions.
Finally, Week 4 Exercise 2 allowed me to animate a 3D cube and control its motion through buttons. This activity helped me link animation with interactivity, a core element of immersive AR experiences. By working with Unity's animation timeline and the Animator component, I learned how AR experiences can become dynamic and engaging, rather than static and passive. The ability to start and stop an animation through UI control deepened my understanding of event-driven programming in Unity and how motion can be used to convey behavior or narrative in AR.
3. Feedback
4. Reflection












