Spatial Narratives

The Spatial Narrative is where the “action” takes place.

Multiple subjects provide their own points of view on the story/subject/plot of the AR Movie. To do this, they produce content designed to be placed in space using some form of AR Content Management System (CMS).

This scenario opens up an incredible amount of opportunities for investigation and discussion. Among these are:

  • the idea of creating a narrative through a multiplicity of points of view
  • the idea of creating a narrative which is not intended for sequential, linear viewing, and which is enacted each time in a different way according to how the viewer traverses space
  • the idea of creating a narrative of a new kind, whose elements are meant to be experienced in a specific place or, even, at a specific time
  • the idea of creating a narrative involving a massive number of points of view (a movie with 100 million actors! it could happen any time now)
  • the idea of creating a narrative which is emergent (if I add another contribution to the AR movie today, the narrative changes accordingly)
  • the idea of not creating a narrative, but a new form of expression involving spatially disseminated elements of expression, knowledge, information and interactivity (we could call it emergent environmental narrative, for example)

All these issues were confronted during the workshop, and different participants used very different approaches, even suggesting advanced scenarios involving peculiar approaches to time (narrative elements in the same space, accessible simultaneously, but referring to different instants of time across past, present and future) and to subjectivity and identity.

One issue that clearly emerged was the engagement of the territory. This kind of project suggests multiple ways in which positive effects could be created for the local inhabitants of the territory in which the AR movie takes place. The AR Movie has been experienced as a complex mixture of entertainment, tool for awareness, innovative tourism, instrument for local development, knowledge sharing framework, atypical marketing and more.

Workshop participants engaged local people in acting and expressing themselves, and an interesting experiment was also performed by filming parts of the movie in a local traditional bakery, where the narration was interwoven with the beautiful experience of the traditional production of Babà desserts.

To create the Spatial Narrative, the NeoReality WordPress plugin has been used. NeoReality is a WordPress plugin that can be used to easily produce spatialized content: images, sounds, text and 3D objects can be placed in space (coordinates, height, orientation) using a visual editor; the plugin includes a ready-to-customize and compile iPhone/iPad application that implements an AR browser like the ones offered by Layar and Junaio. NeoReality is offered to the community as Open Source software distributed under a GPL3 license, and is accessible on Art is Open Source and on the GitHub project page.

Spatial Narratives represent an incredible opportunity for innovation to multiple disciplines.

The creation of an “environmental” narrative, a story which unfolds through space, by traversing it, by walking across it in our own peculiar way, is a process which is relevant for architecture, design, engineering, sociology, anthropology, communication science and the cognitive sciences.

In its non-technological form it is not even something new: it is what we constantly do during our lives. Each thing we do in our cities, in our social environments, contributes to the creation of an emergent story: the story of our life and of how we interpret and live the world. By choosing how to dress, how to move, when to move, how to interact with people and things, we describe a story. Each of us has a different story: some stories have more in common with each other than with others, creating the illusion that we can define “social classes” and the like, but each personal story is different from every other. As soon as we open our eyes we see something different from anybody else, and then we go to a different bathroom, have a different breakfast, walk along a different trajectory, see through a different pair of eyes, hear through a different pair of ears, touch things with our own hands, look at things which capture our own personal curiosity, speak with a certain voice, make gestures which people who know us well have no problem recognizing among thousands of others. And so on.

We are in a constant state of creation of an “environmental narrative”.

Each “environmental narrative” is a point of view.

The idea behind creating a spatial narrative is to populate a certain space (if not all of it) with multiple points of view and to make them accessible by “being” in the space, by traversing it, by looking at it, by touching it, by hearing it.

This process is really exciting from the point of view of architecture, for example. If you think about it, architecture is a pretty powerful thing to do: under what authority can I design what you see out of your window, or what you see when you walk out of your home?

Architecture is, in fact, an act of authority from this point of view: as an architect you actually enforce your design onto the vision and onto the daily lives of people. The shape of that building, the form of the office, the type of that facade are all expressions of a single “voice”, of a single perspective and view of the world: the architect’s.

Of course this is a rather simplified vision of the whole process, as architects work in collaboration with multiple other roles, from professional to institutional ones, but it makes one idea clear: the architecture of your city is decided behind closed doors.

Architects have, over time, developed methodologies which allow them to become more sensitive to the desires and expectations of the people who will live in those buildings and streets. And they are not bad people, in general: they tend to be very sensitive to important issues such as wellness, environment, beauty, comfort, usability, accessibility. But this doesn’t change the point: when they draw a building or a space, they are alone, their pencil (or mouse) moves according to their single will and strategy, and your point of view on the world is not represented in the process, except in some highly mediated form.

This discourse can be applied to multiple other disciplines, such as design and, even more so, engineering.

Several methodologies are currently being developed to provide different scenarios. We are starting to hear ideas about peer-to-peer architecture, p2p design, or even entire p2p cities.

The discourse about creating a spatial narrative is very similar to these, as it is about the creation of frameworks which allow multiple points of view to be expressed onto the same space, and about the possibility of making all these points of view accessible and usable.

The creation of a Spatial Narrative, thus, should engage with the question of how multiple people can add content and information to a certain space, and of how this content and information can be made available directly from the space.

If we think about how we live in the world, some suggestions come to mind. If we research the ways in which we “see” the world, we find out that the process of vision is actually a multi-stage set of smaller processes, some of which are related to one another, and some of which are not.

When we “see” we are actually performing a multitude of different tasks: from the simple ones, such as the geometric interpretation of the things in our field of sight, identifying objects, spaces, colors and the like; to the more intimate ones, by which “objects” and “sets of objects” are put in relation with the symbols stored in our memory, recalling emotions and other neural stimulation patterns that make us actually re-live our memories and enact a series of mechanisms based on them.

In this complex procedure, we gather information about the world mostly through our senses, and among them vision bears a specific significance and power, as it establishes the general (and detailed) context within which all the other ones work.

This is a very powerful mechanism, and through vision we interpret most of the things we see from aesthetic, cultural, symbolic and sensorial points of view.

Sight is also the way in which we interpret other people’s “environmental stories”: the way in which we perceive what their way of dressing, of moving, of touching things and so on means to us.

But this is our interpretation of their story.

What would it be like if we could “wear” their story and experience it first hand?

This is a very intriguing question, with resonance in multiple scientific disciplines, starting with Anthropology. One of the most powerful concepts of Anthropology is the idea of self-representation, and the idea that an Anthropologist can uncover how people interpret the world, how they live, collaborate, learn, express themselves.

And this is also a theme which is deeply rooted in our imagination: movies like Strange Days and Avatar have expressed these ideas in powerful and suggestive ways.

But this idea is also a paradox: how can we perceive the world from someone else’s eyes? (without even trying to imagine how we could perceive through several such pairs of eyes at once).

We are obviously very far from achieving a true answer to this question, and we also need to realize that it is not only a question of technology, but a really powerful philosophical one (even if we were able to see something that someone else sees, we would still see it with our own eyes and mind, and so we would still be interpreting it in our own way, thus maintaining a closer but still external point of view; there are some beautiful descriptions of this in the book “The Mind’s I”).

But we are now experiencing a wide accessibility of technologies and methodologies which allow us to confront a simpler (yet still incredibly complex and important) version of the question: “how can we represent a certain point of view and perception of the world, and make it accessible to other people?”

This question is truly an important one, and it has profound implications for arts, sciences, business, economy and culture.

Its possible answers instantly foster new business models as well as entirely new forms of arts and expression.

During the RWR workshop we faced this question, and multiple interesting discussions emerged, from the most philosophical ones to the most business-oriented ones.

This idea is also closely connected with the emergence of digital cultures in the last few decades: the structure of the Internet itself seems to be an enormous effort towards defining the possibility for people to design, enact and make accessible their presence and point of view in the world. And, if we step back for a second from the dangerous schemes for control which are currently being enacted by the larger social network operators of our times, we can see that the social networking model is a further effort in this direction, and advanced experiments are currently being performed to create empowering, free forms of expression (the Diaspora, Thimbl and n-1 systems are examples of this).

We now can move to the next step, and try to understand how we can bring this form of expression, communication and information outside of monitors and into the physical world.

A Spatial Narrative

By using location based technologies and augmented reality we are able to add digital information and content to any place or object.

Location based technologies basically associate content to geographical coordinates. When a person traverses a space carrying a device which is able to determine his/her current coordinates (such as a GPS receiver), a piece of software can search for nearby content and make it accessible to that person.
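
As a rough illustration of what such a lookup involves (a minimal sketch in C, with invented function names and an invented search radius, not code taken from NeoReality), the core of the operation is computing the distance between the viewer’s coordinates and those of each piece of content, and keeping only the items that fall within a chosen radius:

#include <math.h>
#include <stdbool.h>

#define EARTH_RADIUS_M 6371000.0  /* mean Earth radius, in meters */

/* Great-circle (haversine) distance between two lat/lon points, in meters. */
static double distance_m(double lat1, double lon1, double lat2, double lon2)
{
    double to_rad = M_PI / 180.0;
    double dlat = (lat2 - lat1) * to_rad;
    double dlon = (lon2 - lon1) * to_rad;
    double a = sin(dlat / 2) * sin(dlat / 2) +
               cos(lat1 * to_rad) * cos(lat2 * to_rad) *
               sin(dlon / 2) * sin(dlon / 2);
    return 2.0 * EARTH_RADIUS_M * atan2(sqrt(a), sqrt(1.0 - a));
}

/* Keep a content item only if it lies within radius_m meters of the viewer. */
static bool is_nearby(double viewer_lat, double viewer_lon,
                      double content_lat, double content_lon, double radius_m)
{
    return distance_m(viewer_lat, viewer_lon,
                      content_lat, content_lon) <= radius_m;
}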

Augmented Reality works in a multiplicity of ways. The main objective of augmented reality is to populate our field of vision with a certain number of digital contents which are positioned coherently with the objects and visual elements in our field of vision: if I attach a video to a chair, the video should remain attached to that chair even if I move or turn, and if I come closer to the chair it should become bigger just as the chair does, and so on.
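
The “it gets bigger as I get closer” behaviour is simply the ordinary perspective relation: the apparent size of something on screen is proportional to its real size and inversely proportional to its distance from the viewer. Here is a minimal sketch of that relation (the focal-length parameter is an assumption for illustration, not a value taken from any specific AR browser):

/* Apparent on-screen size of an object, using the pinhole-camera relation:
 * screen_size = real_size * focal_length / distance.
 * Halving the distance to the chair doubles the apparent size of both the
 * chair and the video attached to it, which keeps them visually "glued". */
static double apparent_size(double real_size_m, double distance_m,
                            double focal_length_px)
{
    if (distance_m <= 0.0)
        return 0.0; /* at or behind the camera: nothing sensible to draw */
    return real_size_m * focal_length_px / distance_m;
}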

Add digital content to physical objects and places.

AR can be enacted in a series of ways.

Location based AR is enacted using geographical coordinates: by considering the difference between my current position and the position of a certain digital content (which is stored along with the content itself), I can determine the content’s position relative to my own. If my device also has a compass (to determine direction) and an accelerometer (to determine how I am holding and moving the device) I have everything I need to determine where I should draw the digital content on screen: the coordinates determine distance and general relative position, the compass determines whether that relative position should be adjusted by drawing the content in front of, to the left of, to the right of or behind the viewport, and evaluating the accelerometer allows me to determine further adjustments in 3D (up, down, left, right, forward, backward).
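
A minimal sketch of that reasoning, in C (an illustration under a flat-Earth approximation that is adequate for content a few hundred meters away, not the code of any particular AR browser): compute the bearing from the viewer to the content, subtract the compass heading, and the resulting angle tells you whether the content should be drawn in front, to the left, to the right or behind; tilt readings from the accelerometer would then adjust the vertical placement in the same way.

#include <math.h>

/* Bearing from the viewer to a piece of content, in degrees clockwise from
 * North, using a simple equirectangular (flat-Earth) approximation that is
 * accurate enough at the short distances typical of AR content. */
static double bearing_to_content(double viewer_lat, double viewer_lon,
                                 double content_lat, double content_lon)
{
    double to_rad = M_PI / 180.0;
    double dx = (content_lon - viewer_lon) * cos(viewer_lat * to_rad); /* east  */
    double dy = (content_lat - viewer_lat);                            /* north */
    double bearing = atan2(dx, dy) / to_rad;          /* -180 .. +180 */
    return bearing < 0.0 ? bearing + 360.0 : bearing; /* 0 .. 360     */
}

/* Angle between where the device is pointing (compass heading, in degrees)
 * and where the content is: ~0 means "draw it in front", ~90 "to the right",
 * ~-90 "to the left", ~180 "behind the viewer". */
static double relative_angle(double bearing_deg, double compass_heading_deg)
{
    double rel = bearing_deg - compass_heading_deg;
    while (rel > 180.0)  rel -= 360.0;
    while (rel < -180.0) rel += 360.0;
    return rel;
}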

Marker based AR uses special images (named Markers) created with specific characteristics so that they are easily recognizable by a simple, fast computer vision system. Markers can be of various types: the most common ones are the amoeba-shaped ones (such as those used in the default configuration of the reacTIVision project) or the square-based ones (such as those used in FLARToolkit). All the strategies used to create the markers ensure that they can be easily recognized by a computer vision system, including their position and orientation.

Computer Vision based AR is the most advanced of the three strategies: it uses CV techniques to try to understand the 3D configuration of the space around us, possibly identifying specific 3D forms onto which digital content can be attached. This is a very complex task, and it requires complex algorithms.

Each strategy has its pros and cons:

  • location based AR is very easy to implement, it doesn’t require much processing power and is perfectly suitable for information which is relevant to relatively wide spatial areas, such as tourism or, as in our case, the AR Movie; on the other hand, it does not work indoors (because GPS receivers cannot receive the signals from the satellites they use to compute their position)
  • marker based AR is fairly easy to implement and it can work anywhere (for example by attaching the markers to objects or even bodies in the form of stickers, stencils or even tattoos); the main drawback is the presence of the marker itself, which is not always suitable for the aesthetics of the “thing” you are attaching it to, or for those things to which you cannot attach it at all;
  • computer vision based AR is really complex to implement and requires lots of computing power to ensure a good user experience, but, if implemented correctly, it achieves the “complete” effect of augmenting reality, as the digital contents can be positioned perfectly onto the 3D model of the world.

Depending on the strategy, different techniques will be used to augment reality, and they will have an impact on the ways in which you create and disseminate your content: from the simple ones, only involving the click of a button on a map in the case of location based AR; to the intermediate ones, in which you have to print a marker and attach it to objects and locations for marker based AR; to the necessity of capturing and re-synthesizing entire 3D models of parts of the world in the case of CV based AR.

The Spatial Narrative in our AR Movie

What we wanted to create was a way to publish multiple points of view on a storyline in space.
To do this we used the NeoReality WordPress plugin to transform WordPress into a Location Based AR Content Management System.
The NeoReality plugin can be installed as a regular WordPress plugin.
We will take for granted that you already have a WordPress CMS installed, as shown in the short tutorial found in the previous sections, and dive right into installing NeoReality.
To install it:
  • unzip the NeoReality ZIP archive found in the RWR Software toolkit
  • upload the “neoreality” directory to your WordPress plugins directory (it is inside the WordPress root directory, at “wp-content/plugins”) using FTP
  • make sure that in the plugins directory there is a single “neoreality” directory with a bunch of files in it (among which is the “neoreality.php” file), and not a “neoreality” directory containing another “neoreality” directory which in turn contains the files (depending on how you unzip the ZIP archive, this might happen)
  • go to your WordPress dashboard, to the Plugins section halfway down the left side menu
  • in the installed plugin list you will see the “NeoReality” plugin
  • click on “Activate” to turn it on
When you activate the NeoReality plugin, three new sections titled “AR Content” will appear in the left side menu. They are used, respectively, to add the following types of AR content to your system:
  • Images
  • Videos
  • Sounds
It means that by using the functions in each section you will be able to position images, sounds and videos in 3D space.
The way you add them is fairly similar for all three.
Let’s click on the “Add New” command under the ARVideos section.
You will be presented with an interface which is really similar to the ones used to edit WordPress posts and pages.
You can add a title at the top and, right below it, you will find a map and the controls used to specify the location of your content. You can specify a position:
  • by directly entering coordinates and height
  • by clicking/panning/zooming on a map
  • by entering an address and clicking on the search button below it
  • by dragging the pointer, once you have done the first positioning, to fine-tune the position of your content
You can now add a description and copy and paste the embed code of the video you wish to position in space (get one from YouTube, so that you know it will work on most devices).
At the very bottom you will find an UPLOAD button that allows you to add an image to be used as a spatial icon for your video: when you are far from the video it will be used as a placeholder, so that when you click/touch it, the video is launched fullscreen.
This image can actually be of any size, but using an image that weighs more than roughly 50-80KB might result in a bad user experience, due to the devices’ limited processing power.
When you’re done, click on the “Publish” button on the right side.
You’re done! You’ve positioned your first video in 3D space.
Make sure you experiment a bit, and load at least one video in a place close to you, as you will be experimenting with it shortly.
Placing content in 3D space might require some tuning: different devices, different environmental conditions and different GPSs might produce slightly different coordinates for the same place, and when it comes to coordinates even a small fractional difference might result in hundreds of meters of displacement. So don’t panic and get ready to tune the position of your content multiple times.
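To get a feeling for how sensitive coordinates are (a back-of-the-envelope helper, not part of the plugin): one degree of latitude corresponds to roughly 111 km on the ground, so an error of just 0.001 degrees already displaces your content by about 111 meters.

#include <math.h>

/* Rough conversion of a latitude/longitude offset (in degrees) to meters.
 * One degree of latitude is ~111,320 m; one degree of longitude shrinks
 * with the cosine of the latitude. */
static void degrees_to_meters(double lat_deg_offset, double lon_deg_offset,
                              double at_latitude_deg,
                              double *north_m, double *east_m)
{
    const double meters_per_degree = 111320.0;
    *north_m = lat_deg_offset * meters_per_degree;
    *east_m  = lon_deg_offset * meters_per_degree *
               cos(at_latitude_deg * M_PI / 180.0);
}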

Seeing the AR Movie

The NeoReality plugin contains a ZIP archive named “NewReality-XCODE.ZIP”. It contains a ready-to-compile XCODE project that will become your mobile AR application.
So unzip this file and open its content in XCODE.
There is one thing to configure to make it work with your WordPress installation.
On the left side of the XCODE interface, in the Project tree, inside the “NeoReality/Supporting Files” folder, is a file named “Prefix.pch”.
Click on it to open it on the right side of the screen.
You will see a standard C preprocessor define that reads:
#define kHOSTBaseName @"http://rwr.artisopensource.net/wp"

This line of code connects the app to your CMS. You have to replace the address with that of your own installation of WordPress (unless you just want to test it out with the RWR website, which is perfectly fine: just remember, if you do so, that most AR content there is positioned in the geographical area around Cava de’ Tirreni, near Salerno, in the center-south of Italy, and so, to see any of the contents, you would actually have to go there).

To connect NeoReality to your WordPress, replace the internet address with your own, making sure that your domain name and the WordPress root directory (the one which contains the “wp-config.php” file) are correct.
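For example, assuming your WordPress installation lived at the purely hypothetical address http://www.example.com/wordpress, the line would become:
#define kHOSTBaseName @"http://www.example.com/wordpress"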
Before compiling an XCODE project you will need the provisioning certificates provided by Apple at the developer portal.
To obtain them:
  • connect to http://developer.apple.com/
  • click on “iOS Development center”
  • click on “iOS provisioning center” (you will be asked to authenticate with your Apple developer account)
  • perform the provisioning procedure
The provisioning procedure is quite complex, and you can follow this tutorial to execute it with no problems at all. There are quite a few steps to perform: don’t worry! None of them are too complicated: just stick exactly to the directions provided by the tutorial and the procedure will be completed in a short time.
You will probably be asked to choose a new name for the app during the process: don’t worry, and be creative!
When you have successfully installed the app on your device, test it out. Remember to place at least one piece of content near you in geographical space, because otherwise you won’t be able to see anything but an empty app.
When you’re done, give yourself a pat on the back. :)
You’ve gone a long way since we began.