After last week's adventure gathering pictures of potential 3-D modeling subjects didn't go so well, I decided to give it another go myself to see if I could create a model that somewhat resembled the real-life subject of my choosing. I tried to keep all of Professor Poehler's remarks in mind when choosing my subject: it would be best to be able to photograph the top, but also the bottom, without moving the subject. He also mentioned that if the subject is too uniform, the program will have a tough time distinguishing its different sides (the same goes for the subject's surroundings). These guidelines, along with my desire for the subject to be somewhat significant to me (and the desire not to leave the comfort of my dorm room), led me to choose the plush Clefairy I keep sitting by my desk. For those of you unfamiliar with Clefairy, it is a first-generation Pokémon (and my favorite Pokémon).
The setup I decided on involved balancing Clefairy on a clear water bottle on a step stool so that I could get all the way around it and also see the underside. I figured that the slightly busy background (my room) would make it easier for PhotoScan to match the little pieces of Clefairy between photos. I felt that I was extremely thorough in my photo-taking process; I took a total of 35 pictures of Clefairy. Then I set off for the Digital Humanities lab.
With the help of Professor Poehler, I loaded all of my photos onto the special computer in the Digital Humanities lab and created a chunk in PhotoScan to begin building my model. We aligned the photos on low to see what the program would come up with using the least computational effort and… it was a large, semi-pink cloud of no form in particular. Only about two-thirds of the photos had actually been aligned, meaning the software couldn't find matches in the remaining third. We decided to align the photos again, this time on medium, to see how many matches we'd get with a little more computational effort. This time the cloud had a bit more form: the top of Clefairy's head and ears were somewhat distinguishable, and there was a shape that sort of resembled the stool and/or water bottle. However, when we attempted to build geometry fitting these points together, the result was complete nonsense. It was clear that these pictures were simply not going to cut it.
Some probable sources of error were that the background was too busy, that my clear water bottle confused the matching software, and that my photos were so zoomed in that the matcher saw only a mass of pink in most of them. Luckily, I had decided to bring Clefairy with me to the DH lab in my backpack, so all hope was not lost. With Professor Poehler's instruction, I set Clefairy up in the middle of a table on top of a black box and took 24 photos: 4 sides and 4 corners at each of the 3 camera angles I used. I tried not to get as close to Clefairy this time, so that the photos would not be entirely pink and the program would have a better chance of making matches. When I aligned these photos on low, I got another pink blob of dots, but it was still better than with the first set. On medium, only 12 of the 24 photos were aligned, and the dots seemed to represent only the table. Seriously discouraged, I made one last attempt at creating my Clefairy by aligning the photos on high, and I was rewarded with a cloud of dots that certainly resembled my plush. I could clearly see the tops of Clefairy's ears and the swirl on top of its head, which renewed my hope.
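The capture pattern above — 4 sides and 4 corners at each of 3 camera heights — is just evenly spaced viewpoints around the subject, one photo every 45 degrees at three elevations. A minimal sketch of that geometry (the elevation angles here are my own assumption, not values from the session):

```python
from itertools import product

# 8 positions around the subject: 4 sides + 4 corners = every 45 degrees
azimuths = [i * 45 for i in range(8)]

# 3 camera heights, expressed as rough elevation angles (assumed values)
elevations = [15, 35, 55]

# Each (azimuth, elevation) pair is one photo to take
viewpoints = list(product(azimuths, elevations))

print(len(viewpoints))  # 24 photos, matching the 24 taken in the lab
```

The point of the even spacing is overlap: neighboring photos share enough of the subject that the matcher can tie them together.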
The next step was to build the geometry, and the result wasn't quite what I was hoping for. I had the shape of Clefairy sitting on the table, but it looked like my plush had been mixed with a rotten apple, making a disturbing zombie Clefairy. Adding the texture did not help much; it was still clearly Clefairy, but the color from the box leached up onto the plush and added to the rotting effect. I thought that restarting the process and cropping out the points that made up the table after alignment might help with the color leaching and the rotted shape, but unfortunately that model was not much better. I exported all of the important files for both models and sent them to myself so I could upload the model to SketchFab from my own computer.
When I finally had time to set up my model on SketchFab, I ran into another problem: I couldn't get the texture onto my geometry within SketchFab. I scoured the internet for a solution (and asked my computer-savvy dad for help) and ultimately decided the likely cause was that there was no file telling SketchFab how to map the texture image onto the geometry. I opened the files I had exported from PhotoScan in a text editor and, sure enough, there was no mention of the texture file. I may be wrong, and there might be a very simple solution to my problem, but I just don't understand the software well enough. That's okay, though; this doesn't have to be perfect, I just wanted to give it a shot. My model still looks like Clefairy, and it's much less disturbing without the texture anyway. Next time, however, I will definitely pay closer attention to what I need to do to load my model onto SketchFab.
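For anyone hitting the same texture problem: a Wavefront OBJ export normally points at its material file with an `mtllib` line, and that `.mtl` file in turn points at the texture image with a `map_Kd` line — if the OBJ never mentions a material file, the viewer has no way to map the texture. A quick sketch of a check you could run on an export (the file name below is hypothetical, not my actual export):

```python
def references_material(obj_path):
    """Return the .mtl filename an OBJ file references, or None if absent."""
    with open(obj_path) as f:
        for line in f:
            # An OBJ declares its material library with a line like:
            #   mtllib clefairy.mtl
            if line.startswith("mtllib"):
                return line.split(maxsplit=1)[1].strip()
    return None

# Usage on a hypothetical PhotoScan export:
#   references_material("clefairy.obj")  # a .mtl name, or None if missing
```

If this returns None, the geometry and texture were exported without the link between them, which would explain exactly the symptom I saw.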