A distant approach

Distant reading, I believe, is a way of quantitatively building an understanding of literature by statistically analyzing huge amounts of data. Sometimes close reading cannot speak to the true scope and nature of a work, because many contextual pieces surround it. It is important to know who the prominent voices are and when the conversation shifts. All of this becomes simpler with distant reading, as the tools parse whole passages, identify positive, negative, and neutral tones and voices, and provide readers with a visual illustration.
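The tone tagging described above can be sketched in a few lines of Python. This is only a toy: the word lists are invented stand-ins for the much larger sentiment lexicons a real tool would use.

```python
# A minimal sketch of the kind of tone tagging a distant-reading tool
# performs. The word lists here are tiny invented stand-ins.
POSITIVE = {"joy", "love", "bright", "hope", "delight"}
NEGATIVE = {"grief", "dark", "fear", "sorrow", "despair"}

def tone_of(sentence):
    """Label a sentence positive, negative, or neutral by word counts."""
    words = sentence.lower().split()
    pos = sum(w.strip(".,;!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,;!?") in NEGATIVE for w in words)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

passage = [
    "Hope and delight filled the bright hall.",
    "The merchant counted his coins.",
    "Grief and despair followed the dark news.",
]
print([tone_of(s) for s in passage])  # ['positive', 'neutral', 'negative']
```

Running this over every sentence of a novel and plotting the results is, in miniature, the "visual illustration" such tools produce.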

One of the tools I worked on recently was similar to Voyant in that we used probabilistic analysis to create a situation analyzer. The aim was to determine the type of situation in real time without being physically present. This involved identifying multiple points in the picture that tell us something about what is going on. The technology at play here is machine learning: we feed our program some pre-identified images, and it learns from them to identify similar objects when they are presented to it.
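At its simplest, "learning from pre-identified images" can mean giving a new image the label of the most similar known one. Here is a toy sketch of that idea, where each image is reduced to an invented average-colour feature vector; real systems learn far richer features than this.

```python
import math

# Toy 1-nearest-neighbour classifier: each "image" is an invented
# (R, G, B) average-colour triple with a human-assigned label.
training = [
    ((200, 30, 30), "fire"),     # reddish frames
    ((210, 40, 25), "fire"),
    ((30, 60, 200), "water"),    # bluish frames
    ((25, 70, 210), "water"),
]

def classify(features):
    """Return the label of the closest training example."""
    return min(training, key=lambda ex: math.dist(ex[0], features))[1]

print(classify((205, 35, 28)))  # fire
print(classify((28, 65, 205)))  # water
```

The principle is the same one the situation analyzer relies on, just scaled up enormously.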



The image above shows some of the different functionalities that illustrate what is going on in a real-time video. Such are the tools in digital humanities: they provide us with all the relevant information about a topic, and they can focus on the specific parts we’re looking for. It is tough to imagine the hundreds of thousands of different points and pieces of data about a sculpture that archaeologists are identifying, with its texture adding to its complexity. Tools like Voyant, PhotoScan, and SketchUp ease the job tremendously.


Illustration by Joon Mo Kang (Source: Stanford Literary Lab)


This image looks confusing at first sight, but it speaks in depth about a work of literature that would take hours to read through. By connecting different characters and different scenes, it provides an overview of the play as a whole: what is happening, where it is taking place, who the characters in each scene are, and how they are related. That a single small illustration can convey all this speaks to the power of distant reading and its usefulness. Though there is so much to dive into in this field, I would just conclude by saying that with the emergence of newer technologies, Digital Humanities lets us look back into the past and infer something from it.



A Stream of Consciousness


The past few weeks in this class have been very interesting for me. I’ve been connecting a lot of mental dots, from past classes and ideas I’ve had or seen recently. Our discussions of 3D modeling, mapping, and copyright have given me several ideas and are making me think differently about new technologies. My thoughts have centered around intellectual property/copyright, 3D modeling, and 3D printing.

Medical Fields and Databases

3D modeling is a powerful tool. As with many things, I like to look at the implications of this technology. It’s easy to do, as we can tell by the fact that we were able to get at least some result in our quick experiment in class. People who devote more time and use better equipment get great and accurate results, and these models can serve as powerful tools. Being able to create an almost exact 3D model of things in the real world is a way of preserving them forever. A picture doesn’t quite do a sculpture or a three-dimensional object justice in preserving its legacy or what it’s really like. This means that at the moment, within a reasonable scale, we can digitally document anything. Sculptures, paintings, objects, and even things that aren’t yet real can be modeled in 3D. Things that can be modeled in 3D can also be 3D printed, which I think is fascinating. The two of these concepts can be combined to do amazing things. Obviously, people can design their own 3D models from scratch to be printed. But if you want something 100% accurate to the real world, you can use a model from a real object. You could have a scaled replica of a Michelangelo statue, or maybe some day (when 3D printing is slightly more advanced) you could have an exact replica. The implications of this reach far beyond what I can comprehend at this moment, but I first think of the medical field. Being able to model body parts or organs after real human bodies will allow medical professionals to build a “human anatomy” database.




This article briefly discusses how a group of scientists was able to 3D print human liver tissue. 3D printing human cells is becoming a very real thing. This means that some day not too far from now we could be 3D printing kidneys, or skin, or brain tissue. Having this “anatomy database” of organs and limbs and other things, modeled off of real people, would make 3D printing them very easy. It takes a lot of the human error out of the picture. I know I’m not great at explaining this, but imagine a time in the future where someone needs a heart transplant, and instead of waiting for a donor (which is very unlikely to happen) doctors are able to print a preloaded model that matches the needs of the patient. My other thoughts on 3D modeling don’t relate to the medical field, but do relate to the idea of a database of sorts. Everything can be 3D modeled, but not everything can be modeled the way we did in class. Photo-based modeling is a more accurate method than trying to design something by hand, though. Even so, everything that can be modeled accurately using the method we used and discussed in class could be made available to everyone in the world. This is a far-fetched and largely conceptual idea, but it’s an implication I think of: a database categorized by keywords, names, and preset categories, all of 3D models of real things. For example, if I modeled the fire hydrant I took pictures of for class, it would be filed under “fire hydrants”, a folder where thousands of other different fire hydrants exist. What would be the point of this? I have no clue. I can only speculate about minor uses at this point in my imagination. Maybe this could help with different kinds of mapping, such as home design or city planning. This could even be useful in something like mapping Pompeii.
I know that at this point in time things like this sort of already exist, but I think it would be interesting to have a centralized place for all of them. A global center of all “things” to which anyone can contribute (if that makes any sense at all). A smaller-scale example would be a digital museum of sorts, which contains all sorts of art from around the world in a central place. Sure, it’s not the real thing, but it’s preserved and its beauty can be observed in a different way.
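The skeleton of such a "database of things" is simple enough to sketch. The categories, model names, and file names below are all invented for illustration; a real system would need storage, moderation, and much richer metadata.

```python
# A sketch of the "global database of things" idea: 3-D model records
# filed under preset categories and searchable by keyword.
catalog = {}

def add_model(category, name, mesh_file, tags):
    """File a model record under a category folder."""
    catalog.setdefault(category, []).append(
        {"name": name, "mesh": mesh_file, "tags": set(tags)}
    )

def search(keyword):
    """Return (category, name) pairs whose name or tags match the keyword."""
    keyword = keyword.lower()
    hits = []
    for category, models in catalog.items():
        for m in models:
            if keyword in m["name"].lower() or keyword in m["tags"]:
                hits.append((category, m["name"]))
    return hits

add_model("fire hydrants", "Amherst hydrant #3", "hydrant3.obj",
          ["red", "cast iron"])
add_model("statues", "David (scaled replica)", "david.obj",
          ["marble", "michelangelo"])

print(search("michelangelo"))  # [('statues', 'David (scaled replica)')]
```

Anyone contributing a scan would just call `add_model` with their own category, which is exactly the "anyone can contribute" part of the idea.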

Art, the Creator, and Ideas

That scattered thought piece leads me to my final and main thought about the past few classes we’ve had: copyright and intellectual property. In my senior-year English class in high school, we did a unit on “art and the artist”. Two ideas from that unit caught my attention: counterfeiting, and adding value to things based on perspective. The SparkNotes version of those two ideas is this: a perfect counterfeit of a famous painting is indistinguishable from the real thing, yet if we know it’s not by the famous artist, it loses most if not all of its value. Also, people will pay a lot more for art if they think it was created by a famous artist, even if it wasn’t created by them at all. This is a very deep topic, but we can see from just scratching the surface that human perspective plays a large part in how we value things and ideas. This got me thinking a lot about copyright and intellectual property. Where do we draw the line between something that can be copyrighted or trademarked and something that could be considered common knowledge? Why are we able to, in some instances, copyright an idea or thought that could be shared by many people? In some cases, people put a lot of work into theories or inventions or processes, and they deserve credit for the work they have done to advance someone else’s work. But at what point do we really have to give credit for the most basic of intellectual property? Does the drinking straw really need a patent? People are quick now to apply for a patent or trademark for their works or ideas, and I can’t blame them. We live in a time where people are desperate to put their name on their work so they can get the recognition they deserve. It sort of scares me, though, that more and more things are becoming private ideas used for money, credit, and exclusion, rather than the benefit of society.
Circling back to the art part, though: is creating an exact 3D model of an artist’s work (based off the real thing) part of the process of counterfeiting their work? Art in this context is deliberately vague. Art could mean a statue, or a sculpture, or a special type of copyrighted product, or even a Frisbee™. If I 3D print a Frisbee-brand flying disc based exactly on their product, am I technically stealing? I have no clue. I feel as though we are in a time of a lot of ambiguity regarding this subject because it is so new and is changing so fast. Regardless, thank you for trying to stick with me during this slightly scattered and at times confusing stream of consciousness.


Documentation at All Angles

Our most recent classes focusing on 3D modeling have inspired me to dig deeper into the kind of technology that is allowing these rapid advances in digital documentation. Not far into my search I stumbled upon a piece of technology revolutionizing the landscape for photography, videography, modeling, and virtual reality: the 360fly.


This state-of-the-art piece of technology separates itself from the competition with its one-lens design, which allows 360-degree video with no stitching required. These small mountable devices are shock-proof, dust-proof, and can be placed virtually anywhere. This endless power and capability is combined with unmatched simplicity, as footage and photos can be seamlessly edited and tied together. For those interested in how editing with the 360fly works, the video below shows how simple and easy it is for users to create and edit the photos and videos taken on their 360fly.

While I could go on all day about why this technology is so revolutionary and such a breakthrough for the advancement of technology and videography, I’d rather pose a “what if?” scenario with this device. Recently in class our primary focus has been on 3D modeling, with some talk mixed in on virtual reality. Now just think of what would be possible if the technology used in this device were incorporated on a massive scale for the purposes of 3D modeling and virtual reality.


While I’m sure many of you tech nerds just got the chills thinking of the endless possibilities of wide-scale use of the 360fly, given that you’re all on my blog post, I would like to take the time to talk about the gaming and virtual reality potential that comes with this device. While there have been many incredible advances in photography recently that have helped the progression of VR, the simplicity and capability of the 360fly are nearly unmatched. People who have this product have mainly been using it as a “super-GoPro”, documenting great views and athletic stunts. While this is awesome stuff for camera geeks and athletes alike, this device has yet to be utilized in a major fashion when it comes to virtual reality. My big idea for the use of this device would be a Grand Theft Auto-style game that captures every angle of a particular city. Players would be able to move around the city and interact however they please, VR-style, as these small devices placed all around the city make the experience as real as possible. While this all seems extremely far-fetched, that is not at all the case. The 360fly has the range, price ($300-$500), portability, and durability to make this idea into a project worth pursuing. Not to mention the security benefits that an arsenal of these devices would bring to whatever city is chosen for the project.


While personally I would be most interested to see these devices used on the gaming and virtual reality side of things, I wanted to reiterate how the 360fly can be used to document nearly anything. At first I was in shock at the 3D modeling example we looked at in class, but now that I realize this type of technology exists, it will be incredible to see the advances made in modeling technology over the next several years. I will be thoroughly shocked if the 360fly does not completely change the landscape for 3D modeling and virtual reality, and it is my hope it will be integrated into gaming technology as well.


Attempting SketchUp

As someone relatively unfamiliar with computer programs, I figured a good use of my time could be attempting to work with SketchUp and to become more familiar with its features.

My first impression was that it was very easy to download and get set up with, and it reminds me of a feature on the Jordan’s Furniture website where you can use the dimensions of your furniture to map out where you could put things and what your room could look like, in order to avoid moving furniture just to realize the bed does not perfectly fit between the desk and the bookshelf. Interestingly enough, while I was not expecting them to feel so similar, SketchUp actually runs a bit slower than the Jordan’s Furniture site; I imagine the processing and 3D axes are what make it more complicated.


My sister and I actually used to use this site a lot when we were younger and shared a room. Mostly it was used first, so we could play on the computer, second, to waste time measuring all the furniture in our room, and third, to unintentionally annoy our parents by making a mess and rearranging our furniture about every 5 weeks. To answer your questions, yes, we are quite the pair, and yes, I still entirely rearrange my room and all my things much more frequently than most.

So, besides the miracle that is the Jordan’s Furniture room-planning website, a website I am now itching to use again after all this reminiscing, I hit a few bumps with the more intricate SketchUp, which I found interesting and enjoyed learning about.

Some of the features I stumbled with at first were the building ones. I found it difficult to get the correct third dimension, because if I moved my mouse up on the screen, the program would assume I wanted to move the shape out instead of up, and I would have to shake the mouse around until it saw things my way.

Somehow, after a while of clicking, I ended up with that “structure” on the left. It is something, which could be considered better than nothing, but it is still rather unimpressive. I also found a feature that let me insert previously created 3D models, and I thought that “garden”, which is for some reason black, would be fairly cool to try to put in. I initially thought I would have to change the fill and put in my own colors and textures, but as I learned how to use the rotate and zoom features, I found that it was just black when you were zoomed out farther. As you can see below, I also tried to change the colors of the flowers and grass, because purple grass and orange flowers would be interesting, right? Wrong. The only color my furious clicking on the flower petals managed to change was the color of the ground in the garden, which is hard to see anyway, which was a bit disappointing. Although a garden on a water-colored texture is, I guess, enough for me.

Forging on, I created a triangle/prism structure and played around more with the different textures I could use to fill in the walls. I tried to familiarize myself with the tool that makes circles too. I thought it would be more straightforward, but it is almost better that it’s not; it allows for more variation. You do not just have to close the circle: you can make more creative choices and have another side of the circle come out, closing it with a straight line or a “V”. Being me, I closed the circle like a circle, as I was already fairly proud of my water-textured triangle hut. However, I did in fact fill it with a concrete texture. Thinking back, mistakes were made, because it clearly would have made more sense to use concrete on the hut and water on the circle, but to each their own. I am just familiarizing myself with the program, and reality is already distant when there is a land-water garden and a faceless man standing in the corner with only axis lines stretching as far as the eye can see. A feature of this program that I really appreciated, though, was that you can change the color style. In particular, they have a color style specifically to make the program more accessible to colorblind people, which, in itself, is incredible.

Ultimately, I had fun playing around in this program, as I am sure is evident, but I do think that this kind of software is something that can make real-life jobs like architectural planning more marketable. Additionally, what if it were to become a Google Drive-type program? People looking to build homes could work in real time with their architects; a chat box could be included at the bottom to facilitate that. Under the section where I found the “garden” in 3D models, vendors could advertise their products, and buyers would be able to insert them into their virtual home as they envision it being built, with the exact appliances. Paint companies could advertise their colors, and people would be able to essentially see a finished-product home before ever buying a thing (besides the program). At least for me, I have had one too many experiences of picking up clothes off the rack and thinking they could look so great, only to try them on and be entirely disappointed. What if you thought that a deep red kitchen would look better than off-white, only to find that upon painting, the kitchen looks so much smaller because of the darker paint? If this program could be expanded upon and taken in that direction, I am definitely someone who would buy into it if I were looking to build my own house. It could even be taken in the direction of interior design, including more vendors and having designers answer questions for you on the other side of the screen.

Modifying Archaeological Databases Towards Becoming Interactive

As I was sitting in my archaeology class I began to wonder why archaeological databases are designed more for those in academia than for the general public, since the sharing of information is so often necessary for research to be acknowledged and used. I also pondered how one might change the formatting of the information so that it would better appeal to the public. The answer I arrived at was that adding interactive features to archaeological databases could make the information about excavation sites and artifacts more accessible, provide the opportunity to add new information, and allow the viewer to easily understand the material. So in this blog post I will explore the design and reasoning of current archaeological databases, and my ideas on why an interactive archaeological database could help revolutionize the field of archaeology.

Online archaeological databases arose out of the need to share valuable information concerning artifacts, and they also helped create less of a messy paper trail. Integrated Archaeological Databases are built around the idea of cataloging, preserving, and sharing information on excavation sites and artifacts, yet they have not progressed or changed significantly over the years alongside technology. Because the databases have not adapted to changing technology, information may not be widely shared, which hinders the acknowledgment of the research done on archaeological sites. Some recent improvements to these databases are the inclusion of context recording sheets and of associated metadata.

Modern databases are often not accessible to the general public because of the lack of standardization, the complexity of their structure, the monotony of the presentation, and the fact that most online databases are not user friendly. These issues prevent the sharing of information about archaeological finds, because it is the general public who disperse information on a greater scale than those in academia would. Including interactive features in a spreadsheet database would make it easier for researchers, and those interested, to obtain various kinds of information about the artifacts excavated; provide opportunities to see the artifacts and the excavation site; allow users to interact with models of the artifacts and maps of the site; create a standard for future databases; and display the information in a way that is easy to understand and engaging.

The Day of Archaeology website had an interesting take on what could also be defined as a database. On their website they described in detail how they created their interactive maps and presented a collection of details on site excavation and archaeological work. It was a database not for the site, or the artifacts, but rather the process of the work. This type of database could also be included, perhaps, alongside the one containing the metadata and catalogues of information on an integrative online platform.

How would an interactive database be created? The first step would be to collect all the extensive information about the excavation site and artifacts and catalogue it in a digital spreadsheet. Because there is no standard set of categories or required information for each artifact, the information in the spreadsheet will vary. Photos of the excavation site and of various angles of the artifacts would be added to the spreadsheet or to an accompanying digital collection. The interactive features will depend on the skill set of the database’s creator. A website could then be created as a platform for all the information in the spreadsheet.
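The first cataloguing step above can be sketched with a short script. The field names and artifact records here are invented examples; the point is only that records with varying fields can still share one spreadsheet, with blanks where a site did not record a category.

```python
import csv
import io

# Invented artifact records: note that the fields vary between records,
# as described above (no standardization across sites).
records = [
    {"id": "A-001", "type": "amphora", "material": "ceramic", "trench": "T2"},
    {"id": "A-002", "type": "coin", "material": "bronze", "mint": "Rome"},
]

# Take the union of all fields so every record fits in one sheet.
fields = sorted({key for r in records for key in r})

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=fields, restval="")
writer.writeheader()
writer.writerows(records)
print(out.getvalue())
```

Writing to a file instead of `io.StringIO` would produce a CSV that any spreadsheet program, or the website platform described above, could load directly.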

Clickable links could be placed to lead the user through the categories. An interactive map and 3D models of the artifacts could be placed on the website so that the user can navigate to where each artifact was excavated and analyze it in closer detail. Programs such as Agisoft’s could be used to create 3D models easily by taking several pictures of a structure and loading them into the program. Once the 3D model is created, it can be transferred into a program similar to Sketchfab and then shared or converted to another platform.

CyArk is a great example of excavation sites and artifacts being put into interactive 3D models. The user can zoom in and explore the fine details of the structures and rotate them. Next to each model is a brief description of the structure. At the top of the page is the option to see an interactive map with realistic imagery and the option to explore the information about the object in more depth. There are also options to see the photo gallery of the excavation site. Adding the option to see the full work of the archaeologists on the site, and an option to see the Integrated Archaeological Database, could turn this website into an example of an interactive archaeological database, as it is already user friendly and offers ways to explore the site and artifacts that are easy, engaging, and shareable. It would also be interesting to be able to zoom into the maps and see reconstructions of ancient cities, with models of the people of those cities present. Animation could be used to make the models of the people move around. Videos could be added as well within the online platform hosting the interactive archaeological database.

Digital humanities has a public-spirited aspect to it: it concerns itself with how to make things better for humanity as a whole. Since archaeology is one of our only means of “traveling” to the past and gaining insight from it, it is important to constantly refine its methods and apply new forms of technology to it. Archaeological databases are one means by which information is stored and shared, so revising them or making them compatible with today’s technology could alter how humanity receives information about the past. Interactive databases could be the next step in that direction.


The Timeless Need for Preservation

Humanity’s most innate desire is to survive.  It is our biological and evolutionary need to repopulate and pass our knowledge down through generations, enabling others to survive after us.  But humans went a little farther than other animals in the sense that they developed a sense of sentimentality.  Their passing on of knowledge exceeded teaching offspring merely to survive – they passed down oral stories of culture and religion.  They then preserved literature and artifacts.  And eventually, they evolved to the point of preserving pretty much everything they could see… or could simply imagine.  We have reached a point where we have the technology to rebuild ancient structures, recreate cracked vases, and preserve all ancient findings for all future generations to experience.  This sense of sentimentality has led to the creation of an immense archive for the future: one where everyone can access the past, all thanks to the internet.  It is really quite incredible.

What’s even more incredible is how early we were able to find ways to document the past and present using technology.  I was shocked to learn that Albrecht Meydenbauer pioneered the use of photogrammetry as a way of documentation in 1858.  He believed that photographic images could store an object’s information in great detail.  He created graphical reconstructions of buildings through the use of his photography and geometry – a method that proved quite reliable.  In 1885 he succeeded in establishing the first photogrammetric institution in Berlin for cultural heritage objects.  The institution recorded roughly 2,600 objects using around 20,000 images.  It is important to keep in mind that a functional camera (as we know it) did not exist until around the 1830s!  This incredible dedication to the preservation of culture is an excellent exhibit of the lengths people go to in order to ensure future generations can experience the art, architecture and objects of the past.


Here are a couple examples of the type of projects Meydenbauer worked on.  By combining geometry with photography, he was able to create projections of 3D objects on paper.
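The geometry underlying these projections is perspective projection: every 3D point maps to a 2D image point, and with enough photographs from known positions the mapping can be inverted to recover the building’s shape. A minimal pinhole-camera sketch (the focal length and points are invented numbers, not Meydenbauer’s actual measurements):

```python
def project(point, focal_length=1.0):
    """Project a 3-D point (x, y, z) onto an image plane at z = focal_length."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

# Two corners of a hypothetical facade, 10 units from the camera.
print(project((2.0, 3.0, 10.0)))   # (0.2, 0.3)
print(project((-2.0, 3.0, 10.0)))  # (-0.2, 0.3)
```

Each photo fixes such a set of 2D points; intersecting the viewing rays from two or more photos pins down the original 3D positions, which is the core of photogrammetry then and now.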


Nowadays we apply the same techniques and mentality in a more advanced way.  The current leader in photogrammetry is Agisoft, whose software allows anyone to piece together an object in 3D virtual space using just a few simple photos!  After uploading a set of photos of an object (whether it be a small flower or a large building), the software takes over and creates a replication of the object on screen.  It can be spun 360 degrees, zoomed in, zoomed out, etc.  So unlike Meydenbauer’s, these photogrammetric projections can be accessed by anyone, anywhere, in little time!  In addition, as shown by our demonstration of Agisoft in class, the software is incredibly user friendly!  It takes just a few minutes and a basic iPhone camera to create a 3D projection of an object.  In an age of technology, the tools to preserve our world are accessible and easy.

(However, it is important to mention that there are downsides to this technology – as it is fairly new, it still has its problems!  As our experimentation in class also proved, the software occasionally has trouble recreating an object depending on its texture, background, shape, etc.  With the constant improvement of our technology, these bugs should be worked out soon enough!)

In conclusion, I believe that humans have an internal need to be remembered.  If they create something, achieve something, or discover something, they want people to know.  But not only do they want to be remembered for their accomplishments, they want to share the accomplishments of others who may have inspired them or created something worth remembering.  That is why techniques like photogrammetry exist.  Whether it be used to recreate one’s own work or that of another, its entire purpose is to spread art and knowledge.  And this human desire clearly has not changed much in the past couple hundred years!  I imagine that in the future, even more advanced techniques will exist to create objects in virtual spaces – whether that be through virtual reality or holograms!  Human accomplishments are being preserved in better and more permanent ways.

Blog Post #2: 3D Models – They’re Not As Easy As They Look

After our adventure last week gathering pictures of potential 3-D modeling subjects and having them not go so well, I decided to give it another go myself to see if I could create a model somewhat resembling the real-life subject of my choosing. I tried to keep all of Professor Poehler’s remarks in mind when choosing my subject; it would be best to be able to get a picture of the top, but also the bottom without moving the subject. He also mentioned that if the subject is too uniform, then the program will have a tough time differentiating the different sides (the same goes for the surroundings of the subject). These guidelines, along with my desire for the subject to be somewhat significant to myself (and the desire not to leave the comfort of my dorm room), led me to choose the plush Clefairy I keep sitting by my desk. For those of you unfamiliar with Clefairy, it is a first generation Pokémon (and my favorite Pokémon).

The setup I decided on involved balancing Clefairy on a clear water bottle on a step stool so that I would be able to get all the way around it, as well as see the underside. I figured that the fact that the background (my room) was a little busy would make it easier for PhotoScan to match the little pieces of Clefairy. I felt that I was extremely thorough in my photo-taking process; I took a total of 35 pictures of Clefairy. Then I set off for the Digital Humanities lab.

With the help of Professor Poehler, I loaded all of my photos onto the special computer in the Digital Humanities lab and created a chunk in PhotoScan to begin building my model. We aligned the photos on low to see what the program would come up with using the least amount of effort and… it was a large semi-pink cloud of no form in particular. Only about two thirds of the photos had actually been aligned, meaning that the software couldn’t make any matches in the remaining photos. We decided to align the photos again, but this time on medium to see how many matches we’d get with a little more computational effort. This time the cloud had a bit more form; the top of Clefairy’s head and ears were somewhat distinguishable and there was a shape that sort of resembled the stool and/or water bottle. However, upon attempting to build geometry fitting these points together, the result was complete nonsense. It was clear that these pictures were simply not going to cut it.

Some probable sources of error were that the background was too busy, that my clear water bottle confused the matching software, and that my photos were so zoomed in that the matcher saw only a mass of pink in most of them. Luckily, I had decided to bring Clefairy with me to the DH lab in my backpack, so all hope was not lost. With the instruction of Professor Poehler, I set Clefairy up in the middle of a table on top of a black box and took 24 photos: 4 sides and 4 corners for each of the 3 camera angles. I tried not to get as close to Clefairy this time, so that the photos would not be entirely pink and the program would have a better chance of making matches. When I tried aligning these photos on low, I got another pink blob of dots, but it was still better than with the other photos. On medium, only 12 of the 24 photos were aligned, and the dots seemed to represent only the table. Seriously discouraged, I made one last attempt at creating my Clefairy by aligning the photos on high, and I was rewarded with a cloud of dots that certainly resembled my plush. I could clearly see the tops of Clefairy's ears and the swirl on top of its head, which renewed my hope.

The next step was to build the geometry, and the result wasn't quite what I was hoping for. I had the shape of Clefairy sitting on the table, but it looked as though my plush had been crossed with a rotten apple, making a disturbing zombie Clefairy. Adding the texture did not help much; the model was still clearly Clefairy, but the color from the box leached up onto the plush and added to the rotting effect. I thought that restarting the process and cropping out the dots that made up the table after the alignment step might help with the color leaching and the rotted shape, but unfortunately that model was not much better. I exported all of the important files for both models and sent them to myself so I could upload the model to SketchFab from my own computer.

When I finally had the time to set up my model on SketchFab, I ran into another problem: I couldn't get the texture onto my geometry within SketchFab. I scoured the internet for a solution (and asked my computer-savvy dad for help) and ultimately decided the likely cause was that there was no file instructing SketchFab on how to map the texture image over the geometry. I used an editor to read the files I had exported from PhotoScan, and sure enough, there was no mention of the texture file. I may be wrong, and there might be a very simple solution to my problem, but I just don't understand the software well enough. That's okay, though; this doesn't have to be perfect, I just wanted to give it a shot. My model still looks like Clefairy, and it's much less disturbing without the texture anyway. Next time, however, I will definitely pay closer attention to what I need to do in order to load my model onto SketchFab.
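For anyone debugging the same symptom: if the model was exported as a Wavefront OBJ (one of the formats PhotoScan can write), the texture mapping normally lives in a small companion .mtl file that the OBJ must reference. This is only a sketch of the links that have to exist, and the filenames here are hypothetical, not the actual export:

```
# clefairy.obj (hypothetical filename) — the geometry file
mtllib clefairy.mtl        # points the viewer at the material file
usemtl textured            # applies the material named below
# ... v/vt/f lines (vertices, texture coordinates, faces) follow ...

# clefairy.mtl (hypothetical filename) — the companion material file
newmtl textured
map_Kd clefairy.jpg        # maps the texture image onto the geometry
```

If the OBJ is missing its mtllib line, or if the .mtl and the texture image aren't uploaded alongside the geometry, the viewer has no way to connect the image to the model, which matches the problem described above.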

Cyberwarfare: Our Inevitable Demise

“I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.” 

This famous quote by Albert Einstein was first published in Liberal Judaism 16 in 1949, little more than a decade after Alan Turing's "universal computing machine" (the ancestor of the modern computer) made its debut and turned the tide for the Allied nations during World War II. Neither Turing nor Einstein could have known just how influential the concept of stored programming and the subsequent development of the modern computer would be in both warfare and our day-to-day lives nearly 70 years later. If they had, they might have been equally awestruck and horrified. The computer and, more recently, the Internet have completely revolutionized politics, to the point that more and more countries are introducing new military units specifically designed for the prevention (and perhaps the launching) of cyber attacks – that is, hostile acts committed exclusively through or with the help of computer and network technology.

Each branch of the U.S. military now includes a subdivision dedicated to responding to cyber attacks, and the Department of Defense published its framework for cyber war, known as the "Five Pillars," in 2011. That same year, the White House published its first International Strategy for Cyberspace, stating in not so many words that, when warranted, the United States would not be afraid to respond to hostile acts in cyberspace with appropriate force, using any and all means necessary to protect national interests.

It took less than 100 years for computers and the World Wide Web to penetrate our infrastructure so completely that activity which damages our ability to connect is considered an act of aggression. With a few carefully crafted pieces of code, a country's entire banking system can be destroyed, national secrets can be leaked, and missiles can be launched. While digitization may be extremely efficient, it also leaves us vulnerable.

According to the official White House website, over the next four years President Trump intends to "make it a priority to develop defensive and offensive cyber capabilities at our U.S. Cyber Command." Under the Obama administration, cyber espionage was a frequently used military tactic, although direct attacks on other nations' cyber infrastructure were relatively limited. It remains to be seen whether the same discretion will be exercised by the new administration, though the White House's phrasing does seem to emphasize offensive measures in addition to defensive ones.

It is no longer a question of ability when it comes to how we use technology in our politics – it is a matter of morality. For some nations (cough Russia cough), this is no barrier to the development of more sophisticated cyberwarfare tactics. For others, it is more about maintaining a façade of morality than actually adhering to any tangible moral code. In short, the future of the Internet as a weapon is almost entirely dependent on a handful of world leaders (few of whom have proven themselves to possess the maturity and/or foresight to be making such influential decisions). Between Putin, Xi, Kim, and Trump, the morality factor seems to be coming into play less and less. And who can blame them? We're the ones who based our entire infrastructure on one extremely susceptible network. It's only a matter of time until one party completely shuts down another's network, and the other responds with a full-fledged missile launch. World War III could be started without deploying a single soldier, thanks to the World Wide Web.

Obviously, all this is not to say that we should divorce ourselves completely from the Internet. While that approach may have held merit twenty years ago, it's simply too late to untangle ourselves now. Instead, we must prepare ourselves for the very real possibility that we have already found Einstein's mystery weapon and allowed it to penetrate every dimension of developed society.

They say not to put all your eggs in one basket – we’ve squeezed in the whole farm.

Virtual Reality: A Primer

Over the course of the past five years, the tech industry has seen unprecedented and rapid growth in the area of virtual reality. While this is not the industry's first foray into virtual reality, it is by far the most successful attempt, for a number of reasons. Numerous HMDs (head-mounted displays) were developed in the 1990s for commercial use; the best-known example was gaming giant Nintendo's Virtual Boy, which is remembered as the biggest failure in the company's history and was even named in Time magazine's list of the fifty worst inventions of all time. Part of the reason for its failure was the technical limitations of the time: it could only project flat, two-dimensional images, and in only one color. This was nowhere near detailed enough to evoke the sense of presence required for an effective virtual reality experience.

In 2012, Oculus revived the VR industry with its landmark Kickstarter campaign. The goal of the Oculus Rift was not to be a commercial success but instead to be, as Oculus founder Palmer Luckey put it, the Model T of virtual reality: something affordable that people would actually use and that could sustain growth. Through its crowdfunding campaign, Oculus not only acquired funding but also demonstrated to other major tech companies that there was enough consumer interest in virtual reality to make designing their own products a good idea. The result has been an explosion of innovation at both the hardware and the software level, and significantly improved products for the consumer.

VR is an experience unlike any other, providing an undeniable realism that is revolutionary. It can only be properly explained by experiencing it firsthand; having owned one of these devices myself, I can confirm there are simply no words that accurately describe the sensation of being in the Rift. This experience is what is known as "presence" in the virtual reality community. People often confuse presence with immersion: immersion is the sense that you are surrounded by the virtual world, but presence is the feeling that you actually are somewhere else. Presence is critical to creating realistic and compelling experiences within virtual reality, and it can only be achieved through the manipulation of our low-level perceptual systems. By tricking our unconscious perceptual systems, the technology leads our brain to interpret the world we are perceiving as though it were reality.

Presence, however, is not an easy thing to produce. It requires many different technical aspects of the Rift to operate at optimal levels simultaneously. When not all aspects of the technology are working together properly, the user starts to feel disconnected from the virtual reality in a hard-to-define way. For example, if your body perceives itself to be running at a decent pace and you suddenly stop, your body experiences this unnatural occurrence as though it had actually been moving. Motion sickness is one common result of your body's perception of reality being broken. The technology as it currently exists is nowhere near perfect, but even with just the significant improvements of the past half decade, we have an incredibly powerful tool that can be used to create all kinds of unique experiences.

Blog Post 1: How Everyday Technology Can Save A Dying Language

Technology in today's world is advancing at a rate faster than ever before, and much of it is not only easily accessible but has been integrated into daily life so thoroughly that many have come to expect a standard of living that includes those very technologies. The advancement of household technologies that aid us in our daily endeavors raises the question of how these technologies can shape cultures, social structures, and ways of communicating, for better or for worse. Devices small enough to fit in the palm of your hand or around your wrist, or larger devices installed all around your home, can do incredible things that seemed impossible not many years ago. With the touch of a button on any of these devices, you can contact someone across the world instantaneously. If there is a language barrier, that is no longer an issue either: modern devices offer translation apps, and even wearable technology that will translate spoken speech into another language.

With all the strides toward making modern communication more convenient and easy, and toward making devices more user-friendly, one has to wonder how languages are influenced by this. Not only are new technology-related words being added to various languages, but those who create and distribute the technology are making their own languages more mainstream. This presents a real problem, as it is estimated that more than half of today's 7,000 languages could vanish within our lifetime (Saving Vanishing 'Tongues'). That estimate leads researchers to a question that has recently arisen within digital humanities research: if technology has the power to influence the survival of various languages, how might it be used to preserve a dying one?

Many researchers of nearly extinct languages use devices such as tape recorders, video cameras, or data charts to catalog the spoken language. Other methods they have used in the past include writing a dictionary or a novel in the dying language, or sharing the language on a variety of websites. Today, however, efforts include creating apps dedicated to the language, posting videos of native speakers on YouTube, posting comments on websites in the language, and using social media to raise the language's profile, usually from smartphones, laptops, tablets, or desktop computers.

These efforts are made in the hope that the language in its entirety will remain accessible and can therefore be revived at a later time. This is the goal my anthropology professor, Emiliana Cruz, has for the language of her people. Professor Cruz speaks Chatino, which is at risk of extinction because it is no longer commonly used, due to the influence of Spanish. To raise awareness of the language, she teaches its grammatical structure and history to her students and invites native speakers to present their knowledge to her classes. Her goal for my class, however, is to develop an online Chatino–Spanish dictionary, built first in a shared Google spreadsheet, with the future hope of making it easier for Spanish speakers to learn Chatino. We are also creating pedagogical grammar worksheets, videos, and books that can be used by anyone who wishes to learn the language at various ages, which we as a class will also post online. Emiliana Cruz, alongside other native speakers trying to revive the language, will also use our work to create materials so that Chatino may be taught in schools in the region, or even in the communities in which it is spoken. The work we are doing as a class has inspired me to find new ways of making endangered languages accessible through user-friendly technologies, such as an app or a website where all the information about a language, along with teaching techniques like those we have developed for Chatino, could be brought together in an interactive and enjoyable way.
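To sketch how the spreadsheet-to-dictionary idea might hang together technically: rows exported from a shared spreadsheet (Chatino headword, Spanish gloss) can be loaded into a simple two-way lookup. The code below is only an illustration of that idea, not part of the actual class project, and the "Chatino" entries in it are placeholders rather than real words.

```python
import csv
import io

# Hypothetical rows, as they might be exported from a shared spreadsheet.
# The Chatino-column values are placeholders, not real Chatino entries.
SHEET_CSV = """chatino,spanish
ejemplo1,agua
ejemplo2,fuego
"""

def build_dictionary(csv_text):
    """Build Chatino->Spanish and Spanish->Chatino lookups from CSV text."""
    cha_to_spa = {}
    spa_to_cha = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        cha_to_spa[row["chatino"]] = row["spanish"]
        spa_to_cha[row["spanish"]] = row["chatino"]
    return cha_to_spa, spa_to_cha

cha_to_spa, spa_to_cha = build_dictionary(SHEET_CSV)
print(cha_to_spa["ejemplo1"])  # agua
```

A real version would read the exported CSV file directly and handle multiple glosses per headword, but the core structure, paired lookups built from spreadsheet rows, would stay the same.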

Language helps create identity for many people in various cultures. Without their traditional spoken language, many feel as though they have lost a significant part of their identity or history. By making languages accessible through technology, researchers grant the opportunity for others to revive the dying or extinct language and claim their cultural identity. How we choose to communicate, and what we choose to communicate with, shapes the world in seemingly small yet vastly significant ways. Therefore, it’s profoundly important to be aware of how the technologies we use affect those aspects of our everyday life.