The competition asked entrants to capture and modify an object that they use for their ‘favourite hobby’. We considered adapting a piece of our photography kit used for photogrammetry but opted instead for a more playful approach and hacked a scan of Ramesses II, one of the largest sculptures in the British Museum:
Next we were required to customise it to best suit our needs. It may seem surprising, but we have quite a few 3D prints hanging around our Bloomsbury HQ yet few cool places to store them. Cue light-bulb moment: why not make a giant Ramesses and use him to store a bunch of smaller prints!
We identified six scans that we could place within niches inside the big Ramesses, including a smaller Ramesses bust (Ramception), and then got to work in Fusion 360 modifying the original scan.
First we had to reduce the polycount in order to open and edit the sculpture in Fusion, which we then swiftly sliced in half. We created a hinge by extruding a circle into a cylinder and splitting it into five parts, which we alternately combined with the front and back bodies. We also modelled a simple pin to lock the two halves together, completing the hinge that would let the secret stash of models be opened and closed.
The final steps involved scaling down and reducing the polycount of the six smaller models and positioning each where it fit best. Then all that remained was to trace a rough outline of each onto the flat plane, cut away the niches and insert the models.
Unfortunately we didn’t win the competition (otherwise we would almost certainly have our heads buried in VR right now), but nevertheless we’re very happy with the outcome and the awesome job MyMiniFactory did of printing it!
One thing I love about making a 3D scan of an object is that you can do multiple things with the resulting digital data. You can post it online for people to examine in their web browser; you can beam it across the globe (or into space) to someone with a 3D printer and they can effectively replicate it; you can put it in a video game or a VR scene.
I wrote a couple of days ago about the 3D scanning that we conducted for the Cuming Museum – a museum with no building (it burned down) but with enthusiastic staff who help people connect with the surviving collection through events, outreach, the web and social media.
In the video above, you can see the 3D models we made popping up from postcards through the clever tech of an augmented reality (AR) app called Augment.
We first started having fun with this tech at a residency at Somerset House way back in March 2015 as part of The Small Museum. We used the tool to reveal the true colours and (maybe more significantly) the true scale of a Colossal Foot from the British Museum (of which, it turns out, there are many).
The steps you need to go through to work this magic are fairly straightforward – upload your 3D model, indicate its size, upload your image, indicate its size, associate the two and you’re done. Fire up the Augment app (Android / iOS), point it at your image and – boom! – you’ve got some very cool AR happening in front of your eyes!
You can also have some fun with how the image that triggers the AR (or “tracker”, as Augment calls it) relates to the 3D model that pops up. We simply used a couple of collection images as triggers. In our experiments, an image of the poor giraffe statuette in pieces after the fire triggers the 3D of the lovely complete version after careful conservation. The 3D scan of a poor malnourished tiger’s skull from the long-defunct Surrey Zoological Gardens is triggered by an illustration showing Queen Victoria and Prince Albert visiting the gardens in 1848 – complete with a wholly unsafe Jack Russell terrier in the cage!
By playing with the combination of image and associated 3D, you can help tell an artefact’s story without any words. Of course, if you add words and sounds you’ll be hitting all kinds of learning styles. Plenty to explore here….
Try it yourself: print off the images below at A5 size and scan them with the Augment app!