DH Project: A (Virtual) Room of One’s Own

In her 1929 essay "A Room of One's Own," Virginia Woolf argues that women (and all people) need not only a figurative "space" for writing, but a very real, tangible space as well. This concept of personal space is something nearly everyone can relate to, whether for writing or for a wide variety of other reasons.

My DH project was inspired by this need for a personal space, a room of one's own. Recently I've watched a close family member struggle with depression and anxiety. When I was figuring out what to create with the DH tools, I was struck by the way she found peace within her own bedroom. Even if she wasn't alone in there, the space itself gave her comfort. The problem is that she isn't always in her room, and can't always be. Additionally, bringing herself to that state mentally doesn't always work; when she's flustered, her imagination doesn't always cooperate the way she'd like it to. The human mind is strong, but that strength works both ways. Sometimes it becomes an adversary too powerful for her to overcome.

Instead of forcing her to be physically present in the room, or to imagine herself in this safe space, what if the room could be visually recreated by combining Google Glass and the iPad Structure Sensor?

My idea is to use the iPad's camera and Structure Sensor to map out a space that makes a person feel comfortable, and then use the Google Glass to display that space to the wearer. First, the room would be designed by the person who will wear the Glass; they would create the ideal location to make them feel safe, calm, and comfortable. Then the room would be slowly and carefully mapped and recreated in a computer program as a 360-degree model of the space. That model would then be loaded onto the Google Glass. Using some sort of orientation tracking, the Glass would show different parts of the room depending on where the wearer is looking, shifting the view along with their range of vision. The wearer could also walk around in the real world while the Glass makes it appear as if they are walking through the virtual room. Together, this would make it feel as if they were truly in the room.
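To make the orientation idea a little more concrete, here is a minimal sketch (in Python, purely for illustration; nothing in this project actually works this way yet) of how a head-orientation reading, expressed as yaw and pitch angles, could pick out which part of a 360-degree panorama the display shows. The function name viewport_from_orientation and the fake panorama are my own placeholders, and the crop is a flat approximation rather than a true spherical reprojection.

    import numpy as np

    def viewport_from_orientation(panorama, yaw_deg, pitch_deg, fov_deg=60):
        """Crop an approximate viewport from an equirectangular panorama.

        panorama: H x W x 3 image covering 360 degrees horizontally and
        180 degrees vertically. yaw_deg/pitch_deg stand in for the head
        orientation a device like the Glass might report.
        """
        pano_h, pano_w = panorama.shape[:2]

        # Convert the gaze direction into pixel coordinates on the panorama.
        center_x = int((yaw_deg % 360) / 360.0 * pano_w)
        center_y = int((90 - pitch_deg) / 180.0 * pano_h)  # pitch 0 = horizon

        # Size of the crop corresponding to the requested field of view.
        crop_w = int(fov_deg / 360.0 * pano_w)
        crop_h = int(fov_deg / 180.0 * pano_h)

        # The horizontal axis wraps around: looking past 360 degrees
        # comes back to 0, so use modular indexing there.
        xs = np.arange(center_x - crop_w // 2, center_x + crop_w // 2) % pano_w
        ys = np.clip(np.arange(center_y - crop_h // 2, center_y + crop_h // 2),
                     0, pano_h - 1)
        return panorama[np.ix_(ys, xs)]

    # Example: a blank stand-in panorama, with the wearer looking 30 degrees
    # to the right and level with the horizon.
    fake_pano = np.zeros((2000, 4000, 3), dtype=np.uint8)
    view = viewport_from_orientation(fake_pano, yaw_deg=30, pitch_deg=0)
    print(view.shape)

In a real system the yaw and pitch would come from the headset's own sensors and the crop would be re-rendered continuously as the head moves, but the basic mapping from "where you're looking" to "which slice of the room you see" is the same.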

My first step was to attempt to create a "first-person experience" with the Google Glass in my bedroom. Replaying this recording through the Glass does simulate an experience, but it is merely a set of prerecorded movements.

In other words, this doesn't recreate the space in an immersive way: the user can't walk around in the real world and experience their "safe space" at the same time. It is simply a replay of a recorded scenario, and if it isn't immersive, it's essentially just a video.

My next step was to map the room I was using. As shown on the right, I was able to map and explore the room, but the scan had many problems, even after several minutes of scanning and despite the room being so small.

Finally, I made a video comparing the experience of the Glass recording with interaction with the 3D model.

The Google Glass video shown earlier is clearly cleaner and smoother, but the Structure Sensor video offers more flexibility. Even though I tried to follow the same movement pattern using the iPad controls, there is visibly more freedom of movement in the second video.

For this idea to work, the two must be combined. The user needs the flexibility to move and look around as if they were truly in the room, but the experience must also have the clarity and "realness" of a Google Glass video.

I came to the conclusion that both the Google Glass and the Structure Sensor need major improvements before they could be used in this way. The Glass needs to be able to store the 3D model and provide fully immersive visuals to create this safe "environment" for the user. It also needs to track the user's position and viewpoint in their physical location in order to replicate movement and gaze in the virtual room. The Structure Sensor needs to map with more accuracy and fluidity, and its models need to be compatible with the Glass. Furthermore, it would be helpful if the sensor could capture movement, such as a person waving in the room or a fan spinning on the ceiling.

While my idea was inspired by mental illness, the creation of a room, or a "home away from home," could be used for many different purposes:

  • Homesick students studying abroad
  • Military members serving across the world
  • Alzheimer's patients trying to recall an important location/person

Further purposes (aside from personal use) could include:

  • Filmmakers building sets based on real locations
  • Scientists capturing a location/object to study later
  • Crime investigators re-examining a crime scene more closely

Thanks for reading. I hope you enjoyed following my experiments with two DH tools, Google Glass and the iPad Structure Sensor, and the idea of using them to build an immersive, in-room virtual reality experience.
