I’ve always liked playing around with the latest and greatest in the world of photogrammetric apps. At CAA2012 I sat in on a Field and Lab Recording session on the last day that dealt with photogrammetry, just to see if anything dramatic had happened in the year since I had attended 3DARCH in Trento. The papers all presented very interesting case studies of the use of photogrammetry and laser scanning in the field, but they were light on practical details. As Rachel Opitz noted during her presentation with Katie Simon, photogrammetric research has moved beyond the point where the method is an end in itself. Research that uses photogrammetric tools can now be treated as normal science in the Kuhnian sense. This shows how far photogrammetry has come in terms of accessibility within archaeology, and so I made a mental note to do a bit of online research into the current state of play when I had some free time once I got home.
During the Opitz-Simon paper, someone asked which tool they preferred to use, and there was general agreement voiced in the room when the ‘Russians’ were mentioned. This was where I began my search. Typing in ‘Russian photogrammetry tool’ eventually leads you to the Racurs company website. I downloaded the free version of their PHOTOMOD app and, after suffering through a number of barely audible video tutorials and a UI that looked like it had been designed sometime in the 1990s, I decided that PHOTOMOD was not the easy click-and-build solution I had hoped it might be. Its primary focus seems to be on creating DEMs or TINs from aerial photography, so the case study I had in mind, using a series of photos of a park bench taken last year when I was living in Cambridge, didn’t seem to fit their typical use-case scenario.
Some more research then led me on to Autodesk’s 123D series of products. Now, I’ll admit that I’m a great fan of Autodesk, especially since they decided to make their products available for free to the educational sector. I’ve used 3ds Max for a few years now, and while I wouldn’t call myself an expert (I think that title probably requires a lifetime of dedication), I can at least find my way around the product reasonably well. I hadn’t heard anything beforehand about these 123D products and was intrigued. The general idea seems to be that they are a series of 3D content and animation apps that utilise the power of cloud processing, but they’re still in beta, so they haven’t been fully released onto the market just yet.
The 123D Catch product is specifically targeted at producing 3D models from a series of photo inputs. I downloaded the app and within a few minutes had selected and submitted all 31 of my hi-res (4752×3168 pixels) photos of the Cambridge bench. The photos were then uploaded and, having entered an email address to be notified once the processing was complete, I went off to make dinner. The results when I returned were pretty impressive. I had run the same test with Microsoft’s Photosynth app a year ago, and while the output was acceptable, I still needed to run the resulting point cloud through MeshLab in order to generate a mesh.
The 123D Catch app did it all automatically. It even textured the object. I was able to export to OBJ or FBX and there is a nice feature that renders a movie. I used the default fly-through path for the example that is shown below.
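If you do export to OBJ, a quick sanity check of the mesh is easy to script before pulling it into MeshLab or 3ds Max. Here is a minimal sketch, assuming a plain-text Wavefront OBJ like the one 123D Catch exports (`v` vertex lines, `f` face lines); the inline sample here is just a hypothetical single triangle:

```python
# Minimal Wavefront OBJ sanity check: count vertices and faces.
# Assumes a standard plain-text OBJ ('v' = vertex, 'f' = face);
# in practice you'd read the lines from your exported file.

def obj_stats(lines):
    """Return (vertex_count, face_count) for an iterable of OBJ lines."""
    vertices = faces = 0
    for line in lines:
        # First whitespace-separated token identifies the record type.
        token = line.split(None, 1)[0] if line.strip() else ""
        if token == "v":
            vertices += 1
        elif token == "f":
            faces += 1
    return vertices, faces

# Tiny inline example: one textured triangle.
sample = """
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.0 1.0
f 1/1 2/2 3/3
""".splitlines()

print(obj_stats(sample))  # -> (3, 1)
```

Pointing this at the real export (e.g. `obj_stats(open("bench.obj"))`, filename hypothetical) is a cheap way to confirm the reconstruction isn’t empty before opening it in a heavier tool.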
So in summary then, it looks like the pain of creating 3D content might be that bit closer to becoming a distant unpleasant memory. I will be watching the progress of 123D closely in the coming months.