Shelley Mannion on how museums have tackled the challenge of creating AR applications that deliver value for museum visitors.
On 20 January 2010 a single tweet sparked a lively debate on Twitter. “Is augmented reality really useful for a museum/gallery or is it over hyped?” was the question posed by @MuseumNext. Augmented Reality (AR) is a hot topic in the museum community: a frequently used buzzword in conferences and meeting rooms, widely discussed but not well understood. Since that original debate, several museums have tackled the challenge of creating AR applications that deliver value for museum visitors.
Among the forerunners are the Stedelijk Museum in Amsterdam, which used AR to install artworks in a local park (ARTours), and the San Francisco Exploratorium, which turned an evening event into a surreal AR playground (Get Surreal). In 2011, the British Museum’s digital learning team embarked on a plan to explore AR’s potential in museum education. We ran a series of experimental projects that allowed us to push the boundaries of the technology and evaluate its benefits in learning programmes. Our experience confirmed that AR – although technically still immature – both engages visitors in unique ways and delivers quantifiable learning outcomes. It is a useful addition to our arsenal of interpretive tools and techniques.
What is AR?
Augmented Reality is the ability to see (or hear) contextually relevant information superimposed on your view of the world. Usually this is the view you see through the camera lens of your phone or mobile device, but it might also be a video feed from a webcam on a laptop or large screen. Mediated by the camera, you see virtual content suspended in space as if the device has magically uncovered it. A classic example of AR is the fighter pilot’s heads-up display. Data such as altitude, speed and fuel level appear on a transparent visor worn by the pilot. With critical information easily accessible in their visual field, the pilot does not need to look down at a control panel and can focus on the potentially deadly sky ahead.
Our museum data may not be crucial to survival, but it is essential to the visitor’s experience. The right information delivered at the right moment increases engagement and enjoyment; it makes the difference between an ordinary visit and a lasting memory. Although the definition of AR does not specifically address interaction style, the range of AR applications available demonstrate its potential for delivering content in ways that genuinely delight visitors.
How are museums using AR?
About a year into our AR experiments at the British Museum, we attempted to classify the different types of interactions it was being used for. Initially it seemed they could be grouped into four categories: 1) Outdoor guides and explorers; 2) Interpretive mediation; 3) New media art and sculpture; 4) Virtual exhibitions.
Category one includes the majority of early AR applications, which functioned outside museum walls. Around Sydney (2009) from the Powerhouse Museum and Street Museum (2010) from the Museum of London exemplify these location-based AR applications. Both draw on archival photographs from the museums’ collections to reveal what areas of the cities looked like in earlier eras. The photos are plotted on a map, which displays them as “points of interest” when the user is in the vicinity. These applications rely on GPS positioning and so cannot work indoors.
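Under the hood, location-based applications like these compare the user’s GPS position against each point of interest and surface those within a set radius. A minimal sketch of that proximity check (the titles, coordinates and radius below are invented for illustration, not taken from either app):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def nearby_pois(user_lat, user_lon, pois, radius_m=200):
    """Return points of interest within radius_m of the user, nearest first."""
    hits = [(haversine_m(user_lat, user_lon, p["lat"], p["lon"]), p) for p in pois]
    return [p for d, p in sorted(hits, key=lambda h: h[0]) if d <= radius_m]

# Hypothetical archival-photo points of interest around the City of London
pois = [
    {"title": "Cheapside, 1910", "lat": 51.5142, "lon": -0.0931},
    {"title": "St Paul's, 1890", "lat": 51.5138, "lon": -0.0984},
]
print([p["title"] for p in nearby_pois(51.5140, -0.0935, pois)])
```

The same check explains the indoor limitation: without a GPS fix there is no user position to compare against, so these apps simply have nothing to anchor their content to.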
The second category, Interpretive Mediation, contains some of the most creative uses of AR as well as some of the earliest. In 2005, media artist Hugo Barroso’s installation Pret-a-Porte at the National Centre for the Arts in Mexico City introduced wearable AR in the museum. Children stood in front of a digitally augmented mirror wearing simple garments and headgear decorated with AR markers. Depending on which markers they wore, different costumes appeared in the mirror superimposed over the children’s own clothes. Four years later the V&A worked with openFrameworks developers Hellicar & Lewis to create Mirror, Mirror (2009). The screen-based application used a webcam that was triggered by facial recognition. A mask composed of baroque patterns from the museum’s collection was drawn over the user’s face. The algorithm was generative and each mask was unique.
These and similar apps paved the way for our first project at the British Museum, which launched in November 2011. Passport to the Afterlife is a family trail where children use mobile phones provided by the museum to scan markers that display 3D models of ancient Egyptian objects. An important distinction between this style of AR and location-based applications is that virtual content is triggered by the markers rather than the user’s location. Marker-based AR is ideally suited to museums where the user’s location cannot be determined by GPS, wifi triangulation or other means.
AR markers and QR codes
These markers should not be confused with QR codes, which are scanned by barcode applications and send the user to a web page. AR markers can lead to a website, but are more powerful. They can be used to trigger or display rich media content such as images, videos, 3D objects or environments and animations. This is far more compelling than the QR code model. Another key point is that markers do not have to be black and white squares. Almost any 2D image can be used as a marker. This includes contextual images such as maps or photographs that appear on interpretive panels in existing displays. These are less intrusive than traditional markers and less likely to draw criticism from curators or designers who are sensitive to the aesthetic of the displays.
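The practical difference can be expressed as a lookup: a QR code resolves to a single URL, whereas a recognised AR marker can key into any rich-media payload plus instructions for rendering it over the camera view. A hypothetical content registry sketching this idea (the marker names, asset paths and fields are invented for illustration):

```python
# Hypothetical registry mapping recognised marker images to rich media.
# A QR code can only carry a URL; an AR marker keys into arbitrary
# content plus instructions for how to overlay it on the camera view.
MARKER_CONTENT = {
    "gallery4_panel_map": {       # a contextual image already on a panel
        "type": "3d_model",
        "asset": "models/shabti.obj",
        "scale": 1.5,
        "rotation_deg": (0, 90, 0),
    },
    "mummy_case_photo": {
        "type": "video",
        "asset": "video/conservation.mp4",
        "autoplay": True,
    },
}

def content_for(marker_id):
    """Return the overlay description for a recognised marker, or None."""
    return MARKER_CONTENT.get(marker_id)

overlay = content_for("gallery4_panel_map")
print(overlay["type"])  # -> 3d_model
```

Because the trigger is just a recognisable 2D image, the “marker” can be a map or photograph already printed on an interpretive panel, which is what makes this approach palatable in sensitively designed galleries.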
Media artists and guerrilla exhibitions
Innovation in the third and fourth categories of new media art and exhibitions has come from artists, who continue to push the boundaries of AR with guerrilla interventions in museum galleries. Artists were the first to recognise AR’s potential to challenge the curatorial hegemony over galleries. By installing their own artworks virtually and telling the public where to find them, artists like Sander Veenhof of the Manifest.AR collective succeeded in exhibiting their work in some of the most famous venues in the world without an invitation. On 9 October 2010, Veenhof and Mark Skwarek staged an ‘invasion’ of MoMA New York by creating an AR application to display virtual artworks inside galleries. Veenhof and Skwarek showed visitors how to load the application to view the exhibition on their mobile devices. The pair’s next major intervention with Tamiko Thiel and other Manifest.AR artists was at the Venice Biennale in 2011, where they placed their own works alongside the official selections in the Giardini.
Inspired by the work of Manifest.AR, the British Museum’s digital learning team decided to create its own ‘guerrilla’ exhibition. In collaboration with science fiction author and game designer Adrian Hon, we ran a workshop for young people based on the clocks and watches collection. Participants toured the gallery, identified their favourite timepieces and then invented their own futuristic timekeeping device. They made images of their inventions in Photoshop, which were then installed in the gallery through augmented reality. The result was the equivalent of a virtual exhibition that could be viewed by members of the public. This example illustrates one of the strengths of AR for museums: its ability to provide multiple layers of invisible interpretation in galleries. The technology allows us to create any number of virtual layers in the gallery, each displaying different content.
We had already explored the idea of layered content in an earlier project, Talking Objects. In two days, teenagers from a local college designed and built their own AR multimedia trails on topics that interested them. They selected a theme, chose objects and placed AR markers in galleries throughout the museum. When markers were scanned, they triggered student-made videos, slideshows or images. More than any other project, this demonstrated the speed with which new AR trails could be produced. The trails ran inside the free, cross-platform Junaio AR browser which made it possible to quickly create new channels using its developer API.
Technical platforms for AR
Our projects used Junaio because of its low barrier to entry. We investigated Layar, another cross-platform browser, but rejected it because it did not support marker-based AR when we began our work in 2010. Other platforms we considered were Second Site, which runs on Sony PSPs, and ARToolKit, an open-source programming library. There are a number of other low-cost platforms for museums including Aurasma, Vuforia and doPanic AR. Initially, our overriding goal was to keep costs down, so we did development in-house with Junaio and spent what little budget we had on creating 3D models and other content.
As we move ahead with AR developments, user experience is becoming increasingly important and we will likely move away from the channel model of Junaio and Layar. These browsers require users to navigate their own interfaces before our applications can even be launched. Usability issues often arise as users struggle to find our applications or to access them again if they exit the browser unintentionally. If these apps are to run smoothly on users’ own devices without facilitation by staff, then the experience must be as intuitive and efficient as a native app.
Blurring the categories
The four categories of AR applications are useful, but the more projects we did, the less clear these categories became. AR is one of the few technologies that uses all of the functions available on a mobile device. As a result, the potential interactions it offers are incredibly varied. Categories have given way to a set of questions focused on both technical issues (Is the application location-based or marker-based? Does it deliver 2D or 3D content?) and user experience paradigms (Does it involve physical interaction? How is content delivered?). To determine how to go forward with AR in museums, it makes sense to look at compelling examples and ask: How can I use this with my own collection? Here are a few to get excited about:
• Virtual reconstruction. AR has the potential to show things at scale, such as a building, room or massive objects like ships. Using 3D models, it is possible to reconstruct these large-scale contexts around objects, and the reconstructions respond to users’ movements. As they rotate their device, for example, new elements of the model are exposed and can be explored by zooming or tapping the screen for more information.
• Multiple views on the same gallery space or narrative. Museum interpretation is becoming increasingly sensitive to the needs of different audiences. It is impractical to cater to a variety of needs with printed panels and labels because of space limitations. AR allows invisible content suited to different users to be embedded in galleries and accessed by users on demand.
• Bringing creatures back to life. Using animated 3D models to show what an extinct animal or plant would have looked like is another ideal use of AR. Holding your device over a skeleton or fossil to reveal an animated model answers an age-old interpretive challenge. The Natural History Museum in London uses this technique to populate a multimedia theatre with early humans, dinosaurs, fish and other animals in the interactive film Who do you think you really are? This is an expensive bespoke implementation with custom hardware, but these types of applications are increasingly easier and cheaper to realise.
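The “responds to users’ movements” behaviour in the virtual reconstruction example comes down to mapping the device’s orientation onto a virtual camera orbiting the model. A minimal sketch under the assumption that the device reports a compass heading and a pitch angle (the function and parameter names are illustrative, not from any particular AR platform):

```python
import math

def camera_pose(heading_deg, pitch_deg, distance_m=10.0):
    """Place a virtual camera on a sphere around a reconstructed model.

    heading_deg: compass heading from the device (0 = facing north).
    pitch_deg:   device tilt above/below the horizon.
    Returns an (x, y, z) position orbiting the model at the origin,
    so turning the phone reveals a different side of the reconstruction.
    """
    h = math.radians(heading_deg)
    p = math.radians(pitch_deg)
    x = distance_m * math.cos(p) * math.sin(h)
    y = distance_m * math.sin(p)
    z = distance_m * math.cos(p) * math.cos(h)
    return (x, y, z)

# Facing north and level: the camera sits 10 m out along the z-axis.
print(camera_pose(0, 0))  # -> (0.0, 0.0, 10.0)
```

Zooming would simply vary `distance_m`, and a tap would ray-cast from the camera position into the model to pick the element to annotate; the point is that all of these interactions derive from sensor data the phone already provides.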
As a technology platform and interaction style, AR is still in its infancy. Many applications are mere proofs of concept rather than robust solutions integrated into museums’ existing programmes and interpretive strategies. But this does not diminish its potential for creating engaging and meaningful experiences for museum visitors. AR may have been overhyped in the beginning, but we are now entering a more serious phase during which its usefulness will become evident.
Shelley Mannion – Digital Learning Programmes Manager, The British Museum