The competition asked entrants to capture and modify an object that they use for their ‘favourite hobby’. We considered adapting a piece of our photography kit used for photogrammetry but opted instead for a more playful approach and hacked a scan of Ramesses II, one of the largest sculptures in the British Museum:
Next we were required to customise it to best suit our needs. It may seem surprising, but we have quite a few 3D prints hanging around our Bloomsbury HQ yet few cool places to store them. Cue light-bulb moment: why not make a giant Ramesses and use him to store a bunch of smaller prints!
We identified six scans that we could place within niches inside the big Ramesses, including a smaller Ramesses bust (Ramception), and then got to work in Fusion 360 modifying the original scan.
First we had to reduce the polycount in order to open and edit the sculpture in Fusion, which was then swiftly sliced in half. A hinge was created by extruding a circle into a cylinder and splitting it into five parts, which were then alternately combined with the front and back bodies. We also modelled a simple pin to lock the two halves together, completing the hinge that would enable the secret stash of models to be opened and closed.
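We did all of this by hand in Fusion 360, but for the curious, the polycount-reduction step can be sketched in code. Below is a minimal vertex-clustering decimator in Python, an illustrative sketch of the general technique (snap vertices to a coarse grid and merge them), not the algorithm Fusion actually uses:

```python
from collections import defaultdict

def decimate_by_clustering(vertices, faces, cell_size):
    """Reduce a mesh's polycount by snapping vertices to a coarse grid
    (vertex clustering). vertices: list of (x, y, z) tuples; faces: list
    of (i, j, k) index triples. Returns (new_vertices, new_faces)."""
    # Work out which grid cell each vertex falls into.
    cell_of = [tuple(int(c // cell_size) for c in v) for v in vertices]

    # Collect the vertices in each occupied cell.
    members = defaultdict(list)
    for idx, cell in enumerate(cell_of):
        members[cell].append(idx)

    # One representative vertex per cell: the average of its members.
    new_index = {}
    new_vertices = []
    for cell, idxs in members.items():
        n = len(idxs)
        avg = tuple(sum(vertices[i][axis] for i in idxs) / n
                    for axis in range(3))
        new_index[cell] = len(new_vertices)
        new_vertices.append(avg)

    # Re-index the faces; drop any that collapsed (two or more corners
    # landed in the same cell) and any duplicates.
    new_faces = []
    seen = set()
    for i, j, k in faces:
        tri = (new_index[cell_of[i]],
               new_index[cell_of[j]],
               new_index[cell_of[k]])
        if len(set(tri)) == 3 and frozenset(tri) not in seen:
            seen.add(frozenset(tri))
            new_faces.append(tri)
    return new_vertices, new_faces
```

A coarser `cell_size` merges more vertices and so drops more triangles; on a real scan you would tune it until the mesh is light enough to edit while still holding its shape.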
The final steps involved scaling down and reducing the polycount of the six smaller models and positioning them where they fitted best. Then all that remained was to trace a rough outline of each onto the flat plane, cut away each niche and insert the models.
Unfortunately we didn’t win the competition, otherwise we would almost certainly have our heads buried in VR right now. Nevertheless, we’re very happy with the outcome and the awesome job MyMiniFactory did of printing it!
The conference focused on the long-term durability and accessibility of 3D models and scan data for future uses, uses which, as we discovered in some of the talks, may not be immediately obvious. I thought I’d take a moment to reflect on a few thoughts and favourite take-aways.
Having a good understanding of photogrammetry (primarily from probing Tom for tips and tricks), I opted to skip the workshops and stick with the talks for the whole day. It was intense but informative, and an eye-opener into a community that I didn’t even know existed! So… a few highlights:
Stuart Jeffrey from the Glasgow School of Art (GSA) discussed a use-case where an old 3D model of the GSA Mackintosh Building, which suffered severe fire damage in 2014, provided evidence that a substantial lean on the west gable wall was historic and had not come about as a result of the fire. Members of the GSA Digital Design Studio produced a second model the day after the fire to compare the lean, saving a large portion of the building from demolition: an impressive feat, and one which illustrated the importance of making good data accessible in the long term.
Anthony Corns from the Discovery Programme talked about his experiences of archiving and reusing 3D data, as well as the steps and software involved in the creation of a model. One slide showed a standard software stack consisting of about 12 programmes, which was somewhat surprising. Working with Tom to process various models, I am slowly but surely becoming familiar with the wide range of tools out there.
Anthony also spoke about using scan data to assess pressure on different sites, his example being Skellig Michael, which has witnessed a surge in tourist numbers since Luke Skywalker decided to hang out there in Star Wars: The Force Awakens. This also demonstrated when and where it may be appropriate to sell 3D data, such as to film/production crews.
Chris Moran, who heads the Wellcome Trust legal team, gave an insightful talk on Intellectual Property Rights (IPR), an area where people often become a little tangled. I was listening from a design perspective, so it was interesting to see examples where cases had been argued and won based on the potentially loose definitions of what constitutes an ‘original creation’ or even a database, his example being a newspaper’s website. Star Wars references were also deployed here, in the form of the IP rights to a Stormtrooper’s helmet… I sense a pattern developing.
Vincent Rossi and Jon Blundell of the Smithsonian appeared via Skype to discuss their digitisation work and show off their amazing work on the Apollo 11 command module ‘Columbia’; check it out here.
I had the opportunity to ask our speakers from across the pond a question, which was kinda cool!
Finally, perhaps the most insightful moment was the closing ‘Round-up chat’. Here, following a panel discussion, the audience were invited to reflect on: what is to be done, and how do we address the gaps in our knowledge?
It was clear there was a desire for good collaborative practice, and several rousing speeches were made; there was a great deal to get off the chest! A key agreement was that, in order to work with better tools and formats, instead of trying to create new ones, complaining about a lack of essential features and living in fear of formats going extinct, we should establish a line of communication with the developers and those behind the existing platforms. The software-stack slide that Anthony showed sprang to mind, and it became apparent there was a need for openness and better communication between all parties involved in 3D work: not just in the short term, and not just for individuals and independent organisations, but for the community as a whole.
Last Friday, Tom and I had the best intentions in the world of presenting to the attendees of Virtual Heritage Network Ireland in Cork. We were all set to talk about Letting Objects Speak for Themselves, and show folks some working boxes. It was also going to be my first visit to Ireland! But, our journey ended at Stansted, after a remarkable slurry of travel woes I shan’t bore you with. Suffice it to say that everything that could have gone wrong, did.
Stansted security folks curious about the 3D prints
With our tails between our legs, Tom and I knew we wanted to send something in our place, so we headed back to HQ to see if we could make a video version of what we’d planned to talk about. Fate stepped in again: neither of us had brought our office keys! (And Charlie was off at the Wellcome’s 3d4ever meeting!) Gah!
Luckily, the little office next door was open, so we were able to suck the office wifi, and we put together a version of what we would have said on stage to send over. Phew!
One thing I love about making a 3D scan of an object is that you can do multiple things with the resulting digital data. You can post it online for people to examine in their web browser; you can beam it across the globe (or into space) to someone with a 3D printer and they can effectively replicate it; you can put it in a video game or VR scene.
I wrote a couple of days ago about the 3D scanning that we conducted for the Cuming Museum – a museum with no building (it burned down) but with enthusiastic staff who help people connect with the surviving collection through events, outreach, the web and social media.
In the video above, you can see the 3D models we made popping up from postcards through the clever tech of an augmented reality (AR) app called Augment.
We first started having fun with this tech at a residency at Somerset House way back in March 2015, as part of The Small Museum. We used the tool to reveal the true colours and (maybe more significantly) the true scale of a Colossal Foot from the British Museum (of which, it turns out, there are many).
The steps you need to go through to work this magic are fairly straightforward: upload your 3D model and indicate its size; upload your image and indicate its size; associate the two and you’re done. Fire up the Augment app (Android / iOS), point it at your image and – boom! – you’ve got some very cool AR happening in front of your eyes!
You can also have some fun with how the image that triggers the AR (or “tracker”, as Augment calls it) relates to the 3D model that pops up. We simply used a couple of collection images as triggers in our experiments: an image of the poor giraffe statuette in pieces after the fire triggers the 3D of the lovely complete version after careful conservation, and the 3D scan of a poor malnourished tiger’s skull from the long-defunct Surrey Zoological Gardens is triggered by an illustration showing Queen Victoria and Prince Albert visiting the gardens in 1848 – complete with a wholly unsafe Jack Russell terrier in the cage!
By playing with the combination of image and associated 3D, you can help tell an artefact’s story without any words. Of course, if you add words and sounds you’ll be hitting all kinds of learning styles. Plenty to explore here….
Try it yourself: print off the images below at A5 size and scan them with the Augment app!