Film: Individual Instability [4K, iPad, CRT]
by Beauchamp Art
In Individual Instability I hyperbolised the camera stabilisation of the faces of those around me by scrambling a series of images depicting people looking directly into the camera with the camera shaking, taking 25 fragmenting stills from each short video sequence, which were conjoined into one another to create a multi-layered, multi-faceted film. The first frame of the total sequence was also placed at the end, so the slowing down and optical flow frame blending resulted in the final image feeding back to the start, allowing it to loop fluidly; forming an endless cycle of 8 blurred faces.
The transitions between the different figures were interjected with the word ‘Share’ on a black background in Helvetica Neue, the typeface Facebook uses on portable Apple devices, such as the iPad on which the video was displayed at an event, in an interactive format, being passed between members of the audience. This haptic interface may typify the consumption machine, whilst the object provides emotional and thought-provoking companionship, [Turkle, 2007: 5] becoming subverted in daily activity and used to interact socially.
In Individual Instability I wanted to produce a piece to follow on from Unfamiliar, Instability, but rather than hyperbolising camera stabilisation using my own face, I turned the camera on those around me, using the faces of the people whom I encounter regularly, particularly in the studio. Ideally I would have been able to collect a greater number of images; however, due to time constraints, I did not have long to gather the footage and process it, especially after some of the initial problems with the piece.
Although some of the people shown were also featured in the exhibition the piece was intended for, a number of them were not, and making it specific to that group was not my goal; the confusion of the two gave some of the faces greater familiarity in the exhibition display of the work.
Individual Instability [4K]
The process of creating this work sounds somewhat complicated when described in technical terms, but it followed a similar pattern of working to the one I used to multi-layer other films made from non-linear image sequences, in this and previous projects, such as the Cultivator, Colchester, and Epitaph videos. This primarily involved scrambling the images into a series of interconnected, multi-layered versions of the same film playing simultaneously.
I began by going around the studio and filming people’s faces for short intervals whilst deliberately shaking the camera. Each person acted differently whilst being filmed, and due to the slightly imposing oddness of having a camera shaken in their faces, there was inevitably some mild amusement on their faces, mixed in with the faces’ natural movements as part of engaging in a conversation. Some clips seemed to work better than others, each having its own unique qualities but a unified aesthetic of a wobbling face on a white background. The two most contrasting ‘portraits’ were probably Jack’s and Kelly’s: Jack stayed very still whilst being filmed, so most of the movement was that of the camera, whereas Kelly did the opposite, moving and interacting as if in regular conversation.
After gathering enough footage, I loaded this into Final Cut and selected areas of each person’s clip to play in sequence, primarily those with the greatest potential for movement distortion but the least activity on the part of the filmed subject, to draw more attention to the medium and the process of digital distortion by hyperbolising the pixel and its generated artefacts. This was offset by the banality of the familiar face, which would be looking inexpressively into the camera’s lens.
The original plan involved having the camera shake steadily, then suddenly more sharply to transition to the next person. However, after applying the image stabilisation, the figure, once central to the frame, would not remain in one place, as part of the algorithm’s compensation for the movement. I had not anticipated this making such a significant difference to the film, to the point at which producing a fluid film of one person blending into the next would not be possible.
I resolved this by taking 25 photographs of each section of footage, which I then transformed into a new image sequence. I also placed the first frame at the end of the video, so that once it was slowed down and the optical flow frame blending was in place, the final image would feed back to the start, allowing it to loop fluidly; forming an endless cycle of 8 blurred faces.
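The looping construction can be sketched in a few lines. This is a minimal illustration only, assuming 8 subjects of 25 stills each; the frame names are hypothetical stand-ins for the actual Final Cut timeline.

```python
# A minimal sketch of the looping trick described above: 8 subjects of
# 25 stills each (hypothetical frame names standing in for the real files),
# with the sequence's first frame repeated at the end so that slowed-down,
# frame-blended playback eases the video back to its start.

def build_loop(frames):
    """Append the first frame to the end so blended playback loops seamlessly."""
    return frames + frames[:1]

def blend_pairs(frames):
    """Stand-in for optical-flow frame blending: pair each frame with the next."""
    return list(zip(frames, frames[1:]))

subjects = [[f"subject{s}_frame{i:02d}" for i in range(25)] for s in range(8)]
sequence = [frame for subject in subjects for frame in subject]  # 200 stills
looped = build_loop(sequence)
pairs = blend_pairs(looped)

# The final blended pair joins the last still back to the first,
# so the cycle of 8 faces has no visible seam.
```

The essential point is the last pair: because the first frame is duplicated at the tail, the blend between the final two frames is also the blend back into the start of the cycle.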
This was then duplicated, and each individual’s sequence played at numerous speeds and varying scales with a range of opacity settings, so that the multi-faceted imagery would blend together imperfectly within the frame, jittering between images. A faint version of the whole sequence was also played throughout, running through the 200-image cycle, creating unique imagery, with every frame dissimilar to the last, but equally ambiguous.
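The duplication can be sketched as follows; the speeds and opacities here are invented for illustration, as the actual values were set by eye in the edit.

```python
# A rough sketch of the layering described above: the same 200-still cycle
# duplicated at several playback speeds and opacities (values invented for
# illustration), so each output frame composites a different combination
# of source frames.

CYCLE = 200  # stills in the full sequence

def layer_frame(t, speed):
    """Source frame shown by a layer playing at `speed` at output frame t."""
    return int(t * speed) % CYCLE

layers = [
    {"speed": 1.0, "opacity": 0.15},  # faint full-sequence pass
    {"speed": 0.5, "opacity": 0.40},  # half-speed duplicate
    {"speed": 2.0, "opacity": 0.30},  # double-speed duplicate
]

def composite(t):
    """(source frame, opacity) pairs blended together at output frame t."""
    return [(layer_frame(t, l["speed"]), l["opacity"]) for l in layers]
```

Because the layers run at unequal speeds, adjacent output frames draw on different combinations of source stills, which is what keeps every composited frame dissimilar to the last.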
Reworking the footage from digital film to photographs and back to film again also meant that the video was upscaled to 4K resolution, so high-quality still images could be taken from the film, savouring the unique aesthetic qualities of particular arrangements between the frames. This also became the master copy from which to produce the two subsequent versions for Part-I, to be shown on a CRT monitor and a tablet computer. Both would be accompanied by audio drones. The audio was described by one member of the audience, James Snelling (an active first-year Fine Art student), as a “teasingly ambiguous ambient soundtrack, pulling off the admirable feat of both assaulting and soothing the ears at the same time.” [Snelling, 2014]
The primary body of sound came from distorting and looping a section of Mad World, the pop song covered by Gary Jules, focusing on the leitmotif of the verses: the ascending, descending arpeggio setting the words ‘All around me are familiar faces, worn out places, worn out faces’, which has a particularly distinct melancholic tone that remains faintly recognisable even after being looped, stretched, distorted, and thoroughly reprocessed. In this video I used the actual song’s audio (taken from the single which accompanied the release of the film Donnie Darko), whereas in this video’s predecessor, Unfamiliar, Instability, I used a sequenced MIDI version of the same song in full. Here, though, I wanted more of a connection to the obscured lyrical content.
Along with the looping first version, I also used the full song discreetly in the background of the video, which only had to be shortened slightly to fit the length of the video, along with very elongated versions of the verse used to create the especially low, rumbling sound qualities. I wanted to use a piece of popular music as the basis of the piece in a similar way to how I had used the faces: because it would seem somewhat familiar, but unidentifiable.
(Admittedly, the video elements were considerably more distinguishable from one another than the sections of the sound, especially when the work was displayed in the exhibition on speakers beneath a plinth, becoming an even more muffled rumble, while the tinny mess of the iPad’s built-in speakers rendered the audio nearly redundant; this was somewhat frustrating given the time I had put into it. On the other hand, the uncanny synectic process of “making the familiar strange, and the strange familiar” [Bloom, 2014] is the cornerstone of much of this and other works. It becomes especially disconcerting when the subject matter being lost into obscurity is a human one, as one can empathise with a human figure and feel the self warped by this distortion, and because individuals’ internalised anthropocentrism is inescapable until it is examined plainly or deconstructed.)
Alongside this reprocessed soundscape was a quiet recording of me saying the names of each person repeatedly to accompany each figure, which was equally distorted and woven into the fabric of the soundscape, furthering the sense of the instability of the individual’s identity through representational media, and reflecting the relationship between the artist, subject, and audience. However, when combined with the rest of the audio, it was not particularly decipherable, becoming as lost in the fray as the melody and the clear image. In the iPad version of the video, subtitled Share, I increased the volume of this speech to make it more distinguishable from the background, though it can only really be heard when listened to through reasonable-quality headphones or speakers; compression means a loss of information.
Rather than synchronising the tremolo effect with every small modification of the video’s stability, I used a number of effects together to produce a randomised drone from the source material, adding a stronger cutting of the sound during the transitions between the character clips to emulate the natural visual rippling between the sections; invoking the signature of the glitch, the desirable, designed failure, like the “planned obsolescence” [Shenk, 1997: 81] of all consumer goods: a constructed conclusion to the continuum and perpetual renewal of technological developments designed to encourage greater consumption of the same-but-different devices and materials presented to the individual.
Individual Instability [iPad]
Alongside the video displayed on a TV with speakers, I wanted there to be an interactive, participatory aspect to the video, beyond the inclusion of the figures. I decided to use a tablet computer, an iPad, to display the video, as this haptic interface seemed an interesting and provocative format for the contemporary era: tablets exemplify consumption machines, and objects of companionship, especially so when they are used to interact socially.
Objects provide emotional and thought-provoking companionship, [Turkle, 2007: 5] becoming subverted in daily activity; for example, 30% of UK adults now use tablet computers [Ofcom, 2014: 4]. Not simply a synthetic window, the multi-faceted haptic screen contains and presents optic objects. Unlike the laptop, it is productively limited, but may be more convenient for consuming: [Pogue, 2010] a specialised consumption machine for information glut. Tablets actively encourage benign interactions with media content by specialising in facilitating this process.
Within the consumerist overload of online media, “the new challenge is to share this information with one another, to manage it thoughtfully, and to transform it into knowledge inside billions of individual brains. This is not so much fact hunting as data gardening” [Shenk, 1997: 168]. Information can be shared easily, but knowledge has to be worked at, and involves focussing attention onto a subject to make it functional. Richard Lanham describes how “the kitchen that cooks raw data into useful information is human attention” [Lanham, 2006: 7], though this knowledge is always socialised by the individual’s environment. Without discourse, understanding cannot prosper.
To convey this necessity to engage with the social, I created a slide between each of the 8 sections, in black, displaying the word ‘Share’ in white text, using the same typeface (Helvetica Neue) used by Facebook when accessed through iPads (as they use four slightly different fonts depending on the platform used). I also superimposed a low-opacity version of the original looping 200-frame animation over these sections; though this was not particularly visible when viewed on the iPad screen, it gave a discreet sense of dynamism to the otherwise flat darkness. This was planned into the early stages of the piece, when I was still planning on using the footage, as a way of resolving the transitional problem between the figures.
However, I believe it was a highly effective part of the final design, as it meant that when each section of the video concluded, there was a short pause in which one person holding the tablet could pass it on to the next, and the work could be experienced more easily as short visual bullet points rather than an ongoing video drone. This also meant I had to produce a modified version of the soundtrack, where the audio would cut away when the ‘Share’ screen appeared. Rather than simply have the sound stop, I exported each section of audio individually, and included the ‘Audio Trail’: the sound of the delay and reverberation effects lingering on after the main sound has stopped. This was then realigned with the video, so both would stop and start whilst flowing into one another.
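The timing of this realignment might be sketched like so; all durations here are invented, and the point is only that each section’s reverb tail must decay inside the ‘Share’ pause before the next section starts.

```python
# A sketch of the 'Audio Trail' timing described above (all durations
# invented): each section is exported with its reverb tail intact, then
# placed so the tail decays inside the silent 'Share' pause that follows,
# before the next section begins.

def place_sections(section_len, tail_len, pause_len, count):
    """Start time of each audio section; the tail must fit within the pause."""
    assert tail_len <= pause_len, "tail would overlap the next section"
    return [i * (section_len + pause_len) for i in range(count)]

starts = place_sections(section_len=30.0, tail_len=4.0, pause_len=5.0, count=8)
```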
Moreover, the final ‘Share’ panel at the end of the video also allowed me, or another member of the audience, to rewind the video, as I was unable to find a way to get it to loop automatically without installing a third-party application, or creating a longer version of the file containing multiple copies of the video playing sequentially (this was not possible due to the small hard drives of such tablet devices, which are designed to store temporary internet files, cookies, and basic programs, and to stream low-bandwidth images, not to store high-quality video).
I did try streaming the video; however, this caused problems: it drained the battery faster, was dependent on the wi-fi signal, compressed the video to 720p, a fraction of its full detailed scale, and did not rotate properly (the whole video player would rotate, not just the main image). Essentially, playing it through an online video player was not effective, though it would have meant that users could instantly share the video from the device (however, they would have to be signed in to social media, defeating the communality of the piece). Due to a small oversight in the setting of the aspect ratio of the film, it had to be scaled up in the video player, though this also meant that when the tablet was held portrait the face would fill the screen.
Furthermore, the solitary viewing experience of the tablet seemed to contrast with that of the television or cinema, as for each person to watch the video they had to physically share the object, not just the space in which to gaze. To view media socially on tablets therefore requires a reliance on online social interactions over mutually embodied physical space. In other words, mobile devices encourage people to buy multiple media, rather than genuinely share in one; they commercialise the new forms of online social engagement even more prolifically than the cinema experience, as rather than each person buying one ticket to see something, everyone must buy both the ticket and the screen (much as online multiplayer games are replacing split-screen games, as they force users to buy multiple consoles to engage socially).
Many of the ideas that came about here fed from and directly into my dissertation draft, as a reciprocal means of simultaneously thinking through making and making through thought.
Individual Instability [CRT]
Although the version of the video designed for display on a TV would seem fairly straightforward, when downscaling from 16:9 4K to HD (to be compatible with the media player the film would run off), I had to read through a fair amount of information online on scale and aspect ratios, as there is no one fixed 4:3 ratio for HD/1K videos. After some investigation, and a revision of my secondary-school maths knowledge, I managed to resolve that issue. The still images from this film were therefore also less detailed than those of the 4K video, so when I was producing some small GIFs based on the video, I reverted to the higher-quality film. Besides the rescaling of the film, no other particular alterations were made for the CRT monitor display.
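The kind of arithmetic involved can be illustrated with a short sketch; the exact resolutions I worked between are not recorded here, so these figures are only representative of the calculation, using the common UHD and CRT sizes.

```python
# An illustrative sketch of the scaling arithmetic (the exact resolutions
# used are not recorded, so these figures are representative only):
# UHD "4K" is 3840x2160, a 16:9 frame, while a 4:3 CRT raster might be
# 1024x768.
from math import gcd

def aspect(w, h):
    """Reduce a resolution to its simplest aspect ratio."""
    d = gcd(w, h)
    return (w // d, h // d)

def fit_width(src_w, src_h, target_w):
    """Scale to a target width while preserving the source aspect ratio."""
    return target_w, round(target_w * src_h / src_w)

# e.g. a 16:9 source fitted to a 1280-pixel-wide frame lands at 1280x720,
# whereas forcing it into a 4:3 raster would distort or crop the image.
```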
However, as the TV was to go on a plinth, the speakers had to be placed out of sight whilst remaining proximate to the screen, due to the limited length of the cable. I therefore resolved to place the speakers underneath the plinth in the Part-I exhibition (using a similar set-up to that in the workshops with Mark Wilsher last year). By hiding all of the equipment under the plinth, the display would appear tidy and more professional, although it made modifications to the set-up difficult. For example, as the media player had to be operated by a remote aimed directly at the sensor on the device, I had to lift up the plinth each day, manually reset the play-through, and set the video to repeat.
When I experienced some technical difficulties due to loose cables, I had to ask for assistance to hold the plinth at an angle whilst I adjusted the wiring, imprecisely fiddling with audio-jack and power leads until the image would stutter onto the screen. The compatibility between fairly new devices, like the media player, and the ageing television is questionable. This comparison of new and old technology was, though, reflected in the juxtaposition of the CRT and iPad displays: the CRT is the pre-‘social’ new-media, arguably more social in physical space, as more people can view it at once than the tablet; yet the digital file can be shared instantly around the world, so a post-humanist analysis could indicate that the online version is more transmittable, and therefore more social.
However, prioritising face-to-face interaction over mediated communication has been a source of concern for as long as there has been media; Plato feared that writing would mean people would no longer communicate, and that their understanding would be a false one, in that only true knowledge can come through direct discourse. Nevertheless, hybridised, remediated interactions offer new potential for communication beyond the limits of the body or previous broadcast media, allowing for direct feedback with the audience. The CRT display could be seen and discussed during its display and afterwards, possibly through email and online forums, whereas the iPad display could take on even more of a direct, performative role, as people could leave comments instantly on the video and share it ubiquitously, whilst simultaneously engaging in physical interaction; or at the very least, moving between mediated and un-mediated interactions rapidly (such ceaseless hyper-speed is the post-media, post-human, post-Fordist condition; such is the atomic explosion of information).
Although the different versions of the video could function separately, it was in the interrelationship between the multiple facets of the piece that it truly came to fruition: being passed person to person, in my protection, under the eyes of the telescreen, the tablet’s webcam; switched off, but the mere knowledge of its presence can alter behaviour, inducing a self-regulatory internal panopticon, much like “the ‘normal’ work environment is the panoptic work environment” [Galloway, 2012: 108]; a Big Brother-like paranoia.
- Bloom, Jaygo (2014) Chaos Neatly Defined. [Fine Art Guest Lecture]. Norwich University of the Arts, UK. 14 November.
- Galloway, A. R. (2012) The Interface Effect (Paperback). Polity Press. Cambridge, UK.
- Lanham, Richard A. (2006) The Economics of Attention. Hardback. University of Chicago Press Ltd. London.
- Ofcom (2014) Adults’ Media Use and Attitudes Report 2014. Ofcom [Online] – http://stakeholders.ofcom.org.uk/binaries/research/media-literacy/adults-2014/2014_Adults_report.pdf – Accessed 23.9.2014
- Pogue, David (2010) First iPad Reviews Are In. The New York Times. [Online] – http://gizmodo.com/5506824/first-ipad-reviews-are-in – Accessed 3.4.2010. Cited in Manovich, Lev (2013) Software Takes Command. INT Edition. Bloomsbury Academic. London, UK.
- Shenk, David (1997) Data Smog. Harper Collins, Abacus. London, UK.
- Snelling, James (2014) Review of Individual Instability; Part-I. Interview.
- Turkle, Sherry (2007) Introduction: The Things That Matter. In Evocative Objects: The Things We Think With. MIT Press. Cambridge, MA, USA. Cited in Farman, Jason (2011) Mobile Interface Theory: Embodied Space and Locative Media. 1st Edition. Routledge. London, UK: 2