For a long time, the traditional structure of training went unquestioned. Whether in an academic or an athletic setting, instruction typically took place in a classroom or at practice. This meant that professional instruction was accessible only for a short period of the day, and anyone who wanted to train outside that window could not draw on a teacher's or coach's guidance. Through remarkable leaps in technology, however, this model has been challenged. Most college courses now include an online or video component, which has been shown to help students understand the subject material. My hope was to bring this online/video approach to athletics, specifically to the sport of rugby. The plan was to make an instructional video teaching the spin pass in rugby, using video and audio from a traditional camera, Google Glass, and images captured with the structure sensor. Although instructional videos for sports are hardly a new concept, my searches of the internet turned up no extensive rugby instructional videos. The goal of this paper, therefore, is to evaluate the experience of making the instructional video using the DH tools (the structure sensor and Google Glass) and to assess how effective they were in creating an adequate training video.
I wanted to make a training video for rugby because it is a rapidly growing sport. It is a sport I love, and since graduating I have worked back home to further develop my high school's program. My target audience was young, developing rugby players and coaches of youth rugby. My hope is that this instructional video will help players acquire the basic skills of rugby and strengthen the sport nationwide.
To set a foundation for this paper, I will first explain the role each tool played in the creation of the video. First, I used the structure sensor to scan a 3D image of a person in each of the three major steps of the spin pass: cocking the ball back, bringing it across the body, and releasing it. Once I had completed the 3D scans with the “Room Capture” app, I moved on to the video phase of my plan. I wrote an outline of the material I wished to cover and prepared dialogue for the seven scenes to be shot with a traditional video camera. After shooting those clips, I moved on to the Google Glass phase. The aim of using Google Glass was to capture video of the pass from a first-person perspective; where many instructional videos seem to fail is that the audience watches the instruction from an outside perspective only. I therefore shot five clips of the pass: two from the position of the passer and three from the position of the receiver. Once all of my shots were satisfactory, I edited the video and images together into the instructional video.
Before creating this video, I had a very specific idea of how the structure sensor and Google Glass would come into play. I had hoped for somewhat better results from the structure sensor. After a fair amount of troubleshooting to create an effective 3D model of Jason Girouard, my rugby player model, I emailed the 3D models to myself. I believed that when I opened the file on my computer it would be a 3D image; to my dismay, it turned out the app had simply sent a 2D image to my email. That 2D image was blurry and essentially useless. Had I been able to open a 3D image, I would have used a screen-capture tool to record a narration over the model, discussing its various elements. The Google Glass was somewhat more successful in accomplishing what I had hoped it would. I wanted to record passes from the first-person perspective, and for the most part I was able to. One problem I ran into was that it is difficult to show passing form from the first person when you are the passer, because the first-person view does not let you see much of the pass you are actually completing: when you pass the ball, you are supposed to be looking at your target, not at your arms, hands, or feet. However, the footage did succeed in showing that you should look at the person you are passing to. That is an often-overlooked technical point, and one that can cause a lot of bad rugby habits if not addressed early in a player's development.
Although I was eventually able to create a product with both DH tools, they were quite difficult and frustrating to use. The structure sensor did not come with clear instructions, either in the app or on the web, to guide the creation of a model. It left you to guess your way through its use and did not allow you to save work directly to the app or the iPad. Several times it timed out after creating a model, and when I reopened the app all of the data I had just captured was lost. The Google Glass, although it produced better results, was far more frustrating to use. It constantly timed out, and the screen would go blank in the middle of a task for no apparent reason. After recording a video, it is difficult to play it back, and the voice control often mistook the phrase “play this video” for “delete this video,” leading to some obvious frustration. In addition, the device was hard to line up with your eye, and once it was lined up, looking at the lens caused discomfort and headaches. After much troubleshooting I was able to shoot several useful clips, but even then the device records only 10-second clips unless you extend the clip, which also proved difficult. I understand that the DH tools were given to us with the expectation that figuring them out on our own would itself be an educational experience. However, the sign-out period is so short that some additional in-class training would have served me well in preventing the limitations caused by unfamiliarity with the devices.
Beyond additional training, I do not believe this project needs much else to satisfy my original objective of creating an instructional video on how to pass a rugby ball. The purpose of including traditional video was to compare its ability to meet this objective against that of the structure sensor and Google Glass. The conclusion I have reached is that the structure sensor is essentially useless for creating a tutorial video teaching the rugby pass, and that Google Glass was mildly helpful, but not significantly so. Even if I had screen-captured myself rotating a 3D model of each stance, the same result could have been achieved simply by walking around a person in that stance with a camera, with far less time and frustration. Google Glass did capture a good first-person view of the rugby pass, but it was not until I actually captured that footage that I realized how little it helped in teaching the pass. Google Glass could, however, be useful in future, more advanced instructional videos: a first-person clip of open-field rugby play and decision-making when defenders are incorporated would be genuinely helpful. The structure sensor may not be an effective tool for athletic tutorial videos, but perhaps Google Glass could be, in more demanding circumstances.