Folkestonomy


Spider Map


Having met up with Andreas and Kathrin, we've further solidified how the mapping language will work, and I can see it coming together in my head as a technical and software implementation.

Kathrin has produced a layered spider map showing the various parts of the mapping process and how they all fit together physically. This will also be used to work out the breadth of the language and the various parts that will need to be made.

Mapping Areas
The mapping will be broken down into three basic areas: a 'Portrait', which provides some background about the person being mapped; their links to the Folkestone Triennial, which may reflect any involvement with the Triennial; and finally their reasons for being interested in the Triennial. On the right is the collector device we are going to use to trigger the collection.


This shows how the language will be split up into groups of 'storage signs' and 'collector signs'. The storage signs will be used to hold numerous 'clip-on' signs that are used to represent parts of our mapping language. Each storage sign will cover a context, for instance location, or artworks at the Triennial.


Here we see the collector signs in place.

Storage Areas
How items are grouped in the storage areas. The storage area provides an implied context for each clip-on.


A representation of the clip-ons. These will be small versions of the storage signs, but each will have its own icon representing its purpose in the mapping. There are going to be a lot of these.
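
Just to get the shape of this clear in my own head, here's a minimal sketch (in Python, with made-up chip IDs, icons and contexts) of how reading a clip-on's serial number could be turned back into its icon and the storage-sign context it belongs to:

# Made-up chip IDs, icons and contexts, purely to show the shape of the lookup.
CATALOGUE = {
    "01.4B8D2A000000": {"icon": "harbour",         "context": "location"},
    "01.9C11F3000000": {"icon": "artwork visited", "context": "triennial artworks"},
}

def describe(chip_id):
    """Turn a raw chip ID read off the network into a human-readable sign."""
    clip = CATALOGUE.get(chip_id)
    if clip is None:
        return "unknown sign " + chip_id
    return "%s (%s)" % (clip["icon"], clip["context"])

print(describe("01.4B8D2A000000"))  # -> harbour (location)

The real catalogue will obviously be far bigger, but the lookup itself stays this simple.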


First Proof of Concept


I've managed to build a first, quite lo-tech, prototype to make sure that my idea of using 1-wire networks is sturdy enough. My initial look at iButtons proved them to be too fiddly for the kind of use we need, and so my thoughts turned to something more robust: 1/4" jack (TRS) plugs, as used in telephone exchanges in the days of old.

I connected up a number of sockets in a small circuit that had one Silicon Serial Number chip attached to it, and then hooked that back to the 1-wire reader using a guitar lead, and could read one or more chips on the network without a problem. I built a second box and could daisy-chain them together using 10-meter cables. This makes me confident that this will work in a real environment.

The low cost of TRS plugs and Silicon Serial Number chips, and the simplicity of the network, really appeal to me in making this practical. Earlier I'd imagined networks with lots of intelligent collectors, each feeding back to a computer, which would have been expensive and more prone to errors; this one is much simpler.

The only real issue is one of context. I'm unable to detect which chip is plugged into which box when they are chained together, and so this is pushing me towards having multiple 1-wire readers, though I'm not sure how well the host computer will handle that.

It's looking much more likely that we'll use a microcontroller somewhere as well, so it may be possible to read the multiple networks from that. The use of a microcontroller comes from the desire to have our computer hidden away, so the focus is on the mapping, not a computer screen.
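
As a rough sketch of how that might look, assuming something like the One Wire File System mentioned below exposes each reader as its own bus (the mount point, bus paths and context names here are all assumptions, not the real setup):

import os
import re

# OWFS names each device directory as family.serial, e.g. 01.4B8D2A000000
DEVICE = re.compile(r"[0-9A-Fa-f]{2}\.[0-9A-Fa-f]{12}")

# One reader (bus) per storage area, so an ID implies its context.
SEGMENTS = {
    "/mnt/1wire/bus.0": "portrait",
    "/mnt/1wire/bus.1": "links to the Triennial",
    "/mnt/1wire/bus.2": "reasons for interest",
}

def scan_segments():
    """Return {context: [chip IDs currently plugged into that segment]}."""
    found = {}
    for path, context in SEGMENTS.items():
        if os.path.isdir(path):
            found[context] = [d for d in os.listdir(path) if DEVICE.fullmatch(d)]
        else:
            found[context] = []
    return found

for context, ids in scan_segments().items():
    print(context, "->", ", ".join(ids) or "nothing plugged in")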

We've realised we want to capture other input, such as photographs or video, and need an easy way of doing this. I've suggested a 'start/stop' mechanism, where a button gets pressed to start the mapping, then pressed again to stop it. This will give us a pair of timestamps, so when we copy images from a camera we'll be able to tell which mapping they belong to by the time they were taken, making it easier to reconcile maps with the collected data.

The start/stop will also allow us to work out how long it takes to do a mapping, and to capture other data such as GPS, or weather, or anything else we decide on.
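
A minimal sketch of the start/stop idea, with invented session names and times, just to show how photos would be reconciled to mappings and how the duration falls out of the same data:

from datetime import datetime

# Invented mapping sessions: (name, start, stop) as recorded by the button.
mappings = [
    ("mapping-001", datetime(2008, 6, 14, 11, 2),  datetime(2008, 6, 14, 11, 17)),
    ("mapping-002", datetime(2008, 6, 14, 11, 25), datetime(2008, 6, 14, 11, 40)),
]

def mapping_for(photo_time):
    """Find the mapping whose start/stop interval contains the photo's timestamp."""
    for name, start, stop in mappings:
        if start <= photo_time <= stop:
            return name
    return None

for name, start, stop in mappings:
    print(name, "took", stop - start)

print(mapping_for(datetime(2008, 6, 14, 11, 30)))  # -> mapping-002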


Starting a prototype

I can read one ID, therefore I can read many.

Some of the 1-wire samples I've ordered from Maxim have turned up. I ordered a 1-wire Evaluation Kit, but it appears to be on back order, which could hold things up. The fact that it has to ship from the US, via the ever painfully slow UK customs, is making me nervous. In the meantime I ordered some parts from homechip to allow me to get on with testing the network ideas.

First step is wiring up a 1-wire reader and a 1-wire Silicon Serial Number ID chip, and seeing if I can get them to show up on my MacBook. The hardware itself is simple enough, but the software is more challenging.

Some googling throws up a Mac OS X-compatible version of the OneWireViewer. After some playing around, and reading all the comments, I manage to get it working, and can swap serial numbers in and out on the board nicely.

The software isn't really going to be much use in the final thing, alas. There are a few options here though: I can use the basic libraries in it and extend them to get the data out and into whatever database I want, or I can use other available software, such as the One Wire File System (OWFS), to read what's on the network. The problem seems to be that the mapping we need would require the network to be cut into segments, whereas the default 1-wire network is just one long string of IDs.
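
As a sketch of the "into whatever database I want" part, assuming something else (the viewer's libraries, OWFS, or a microcontroller) has already produced the list of serial numbers, and with made-up table and column names:

import sqlite3
from datetime import datetime

def log_reading(db_path, chip_ids):
    """Store one snapshot of the IDs currently visible on the network."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS readings (taken_at TEXT, chip_id TEXT)")
    now = datetime.now().isoformat()
    conn.executemany("INSERT INTO readings VALUES (?, ?)",
                     [(now, chip_id) for chip_id in chip_ids])
    conn.commit()
    conn.close()

# Made-up serial numbers, just to exercise the function.
log_reading("folkestonomy.db", ["01.4B8D2A000000", "01.9C11F3000000"])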

I've been thinking about using an Arduino or Wiring microcontroller to talk to the 1-wire network; there are some libraries which appear to allow talking to 1-wire nodes directly. The catch here may be one of timing and accuracy: each network we have may be quite long, and so the timing issues on the 1-wire network may become a problem.

Again there are some options here using 1-wire hubs. It doesn't fit with my feeling of keeping it simple, however.

More work to be done...


Handling the input

Having worked out the scope of the language we may be dealing with (hundreds of icons in many combined groupings), the initial thoughts of RFID have gone out of the window: readers are too costly to implement multiple times, and multi-tag readers are expensive. RFID was an interesting possibility initially, as we thought we might be able to locate the tags within a 3D space, but that's also more expensive, and less accurate, than initially imagined.

Increasingly, however, the language seems to have developed into one of placing items together to make the map, and I've been thinking about these as some form of network. While searching around for ideas similar to RFID, I was reminded of iButtons. I'm going to look at how iButtons and 1-Wire Networks might be used to identify each of the components of the language.


Storytelling by Signs, Language as a Network

The primary challenge for me is trying to work out how to turn a physical interaction by visitors into a set of data that can be used to produce maps of the cultural spaces in Folkestone.

From my earliest meetings regarding this I've always felt that the visitors are telling their stories. As the project has proceeded and the idea has become refined, the story has always been a core idea, with the refinement being the language used to describe the story, and the subject of the story we are asking to be told.

The initial brief has a strong concept for a visual language that would be used to tell the stories, based upon street signs and road-works. This would be a very physical process - not someone inputting data into a computer - and we wanted to keep the need for active digitisation to a minimum; the person who's looking after visitors should be focusing on the people, not a computer.

Early on this was imagined as a map of Folkestone that players told their stories on by placing markers, the markers representing actions, connections and objects. This would allow the story to be laid out across Folkestone, creating each person's map within that space.

Looking into how this could be digitised I started looking at ways of tracking the objects. Initial thoughts pointed at RFID, but whilst it's possible to track RFID tags in space, it wouldn't have been accurate enough to track the placement of objects on the map.

More recent discussions, including the Mapping Workshop at the Stanley Pickering Gallery, have moved us away from using the town map as the scope of the mapping tool, and towards a more abstracted storytelling. This may, or may not, be easier to build an input system for.

Breaking the story down into its components - essentially verbs and nouns, or items and contexts - has allowed me to view the stories as a simple set of star networks: core actions that are related to places and objects, which have another structure - time, which is (generally) a line.
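
As a quick illustration of that shape (the content here is entirely invented), a story could be held as a list of core actions, each linked to its places and objects, with the timeline just being the order of the actions:

# Each core action is the centre of a small star, linked to the places and
# objects involved; 'order' puts the actions on the timeline.
story = [
    {"action": "arrived in Folkestone", "links": ["the harbour", "the train"], "order": 1},
    {"action": "visited an artwork",    "links": ["artwork A", "the beach"],   "order": 2},
    {"action": "talked to a steward",   "links": ["the old high street"],      "order": 3},
]

for node in sorted(story, key=lambda n: n["order"]):
    print(node["action"], "->", ", ".join(node["links"]))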

I can imagine ways of mapping this physically, but the challenge is now how to get that to merge with the overall ideas of the graphical language and possible interactions. It's going to be a challenge to make that work with the visual ideas, and then produce meaningful and interesting maps from the data collected.


In on the idea

Having met with Andreas and Kathrin about this project, this is what I wrote to them:


Subject: Folkestone Triennial

To: Andreas Lang (public works), Kathrin Böhm (public works)

...

Folkestone Triennial is a more challenging project, but I like a challenge. It seems you want to create an iconic pattern language for describing social relations, over time amassing data that can be retrieved for presentation in a number of ways.

The language, and the structures it creates, will need to be extendable, allowing for changes which may not have been perceived at the planning and development stage, so that new icons and relations can be added. This will need to work effectively with all presentation systems.

To this end it seems you need to find a way of describing the relations that may happen, and the icons and their properties, storing them in a database and retrieving them. The input and retrieval will mostly be abstracted from the data, and the gathering of this will come about in a variety of ways: for instance via physical interaction, or by second hand narration.

I would approach this with you by first of all trying to describe as many icons+parameters and relations+parameters as possible to try and see what consistencies and inconsistencies exist, as well as ironing out the meaning of the relationships. This would allow for a database to be built for storing the gathered information.

I'd then create a simple input mechanism, one which required a lot of knowledge of how the system worked, but allowed access to all parameters and options. This would be used to input some initial sets of data, allowing us to see how the parameters were used and which were important, and allowing us to revise that.

The third step would be to create some simple output mechanisms for the data, so that it could be presented in one or two forms. This would allow us to see how the data could be visualised, and what parameters were the most interesting to represent.

The next step would be to create an abstracted input method, such as the RFID based system we discussed, or a screen based or keyboard based input system for accessing the iconic language and parameter set to build the stories.

Finally a rationalised output interface would be built to allow the envisaging of the data gathered.

Each of these steps could be very open ended, but it appears you have already started upon the steps of defining the language and parameters. I managed to gauge some of the intent here with the language you had created, but have still to unlock how you tie all this together building the narratives, then translating that into a map... a further conversation I'm sure.

I can see this being quite an involved process, each of the above steps taking between 5 and 7 days' work, so my initial thought of 30 days' work seems about right - though obviously visualisations could be produced for a long time, I think we'd get a lot of what you were trying to achieve honed in the first step.

I'm happy to sit down with you and try and work out some more specific details in the future if you want to.

