Ubiquitous Computing & Hauntology

22 May

It’s interesting that while reading about ‘ubiquitous computing’ and ‘hauntology’ (the word for the week) and thinking about ‘the future’, I continuously tried to make sense of things through the science fiction films I had seen. Ubiquitous computing involves the integration of processing technologies into inanimate and non-human objects. In a world where ubiquitous computing is prominent, users would most likely be unaware of their engagement with such technologies. The useful summary on Wikipedia offers a hypothetical example in which personal biometric monitors woven into clothing could control light and heat sources in a room. This immediately reminded me of the Star Trek film I had watched the previous night (in anticipation of the latest film about to be released), where the Starfleet crew could monitor the location, heart rate, oxygen levels and so on of members who were outside the ship and weren’t wearing any obvious monitors. While watching this I actually wondered how it would be possible; ubiquitous computing must have been the answer.
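The Wikipedia example is easier to picture as a few lines of code. What follows is purely my own hypothetical sketch (the readings, thresholds and setting names are all invented), showing how a garment’s biometric readings might silently drive the room around its wearer:

```python
# Hypothetical sketch of the Wikipedia example: biometric readings
# woven into clothing adjust the room with no user interaction.
# All names and thresholds here are invented for illustration.

def adjust_room(body_temp_c: float, heart_rate_bpm: int) -> dict:
    """Map a wearer's vitals to room settings, no user input needed."""
    settings = {"heating": "off", "lighting": "normal"}
    if body_temp_c < 36.0:       # wearer is cold: warm the room
        settings["heating"] = "on"
    if heart_rate_bpm < 55:      # wearer is resting: dim the lights
        settings["lighting"] = "dim"
    return settings

print(adjust_room(35.5, 50))   # → {'heating': 'on', 'lighting': 'dim'}
```

The point of the sketch is the absence of any explicit command: the ‘user’ never touches an interface, which is exactly why engagement with such technologies would go unnoticed.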

‘Hauntology’ is a term initially coined by Derrida in reference to Marxism continuing to haunt history despite it often being declared defeated by liberal democracy. What has been taken from this is the idea that the present exists only in relation to the past, and that when what he refers to as “the end of history” is reached, society will turn towards the ideas and aesthetics that are thought of as vintage, curious or ‘old-timey’ (Derrida 1993). This brought to mind the most obvious science fiction example, Ridley Scott’s Blade Runner, which employs a ‘retro-futuristic’ or ‘tech-noir’ film style. Something similar occurs in Andrew Niccol’s Gattaca where, if not for the advanced technologies depicted, the setting could be mistaken for 1950s Los Angeles.

When researching hauntology I found Charles Beckett’s ideas about how architecture and design construct and help define the ‘futuristic’ extremely interesting (and perhaps this relates to Easterling’s thoughts on the agency of architecture, I’m not sure). Beckett highlights that, unlike technology, architecture and design from the 1920s to the 1960s that was described as futuristic still holds a powerful place in our imaginations as ‘futuristic’. An example of this is Frank Lloyd Wright’s 1956 design for ‘The Illinois’ skyscraper. What Beckett suggests is that when we refer to the futuristic as an aesthetic, we are really referring to concepts of “simplicity, efficiency, geometrical cleanliness and abstraction”, and that these are essentially connected to the imagery of a classical era. He suggests that it is these features, evident in the design of the Egyptian pyramids, that are central to the mythological links made between Ancient Egypt and alien civilizations. Here, we see hauntology occurring (Beckett 2012).

So what will the future be like? Hopefully, it’s like a sci-fi/cyberpunk/retro-futuristic film, where everyone and everything looks like it’s from some period between 1920 and 1960, and ‘ubiquitous computing’ means that our houses, cars and workplaces will talk to us, answer our questions and follow our commands. Like Tony Stark’s house in Iron Man. (I’ve realised now that I watch too many movies.)


Anon. (n.d.) ‘Ubiquitous Computing’, Wikipedia, <http://en.wikipedia.org/wiki/Ubiquitous_computing>

Anon. (n.d.) ‘Hauntology’, Wikipedia, <http://en.wikipedia.org/wiki/Hauntology>, accessed 19/05/2013

Beckett, C. (2012) ‘Hauntology’, How to Think About the Future, <http://www.howtothinkaboutthefuture.com/?p=75>, accessed 19/05/2013

Derrida, J. (1993) Specters of Marx: The State of the Debt, the Work of Mourning and the New International, trans. Peggy Kamuf, Routledge: New York

Easterling, K. (2011) ‘An Internet of Things’, e-flux journal, <http://www.e-flux.com/journal/an-internet-of-things/>


Open Science

14 May

I have to admit that until this week I had assumed that publishing in the world of science had undergone many of the same changes and shifts in established structures that other areas of publishing were experiencing in the twenty-first century. In fact, I had never really thought about it. After engaging with the readings for this week, I’ve learnt that this is not the case. Science, which is primarily responsible for the digital, the ‘information age’, Web 2.0, P2P networks and pretty much everything else that has led to the interconnected nature of modern life and society, has not yet fully embraced the nature of its own creation.

Both Wilbanks (2011) and Pisani (2011) emphasize that the scientific publishing industry has not altered its systems or structures to make its content widely accessible. The industry has maintained print as its main form of distribution, and this has raised issues around the limitation of public access and the ‘dissemination of vital knowledge’ that has become increasingly expected in the digital age. This differs markedly from music and news publishing, which have changed significantly in the last 20 years because of scientific and technological advancements.

It is important to note that, as argued in Seed’s (2011) article on science transfer included in this week’s readings, it is in fact the conservative nature of science that has allowed it to endure for so long. Science has always required an idea to be considered commendable before it is widely communicated, and publishing scientific papers provides credibility and funding for scientists. However, by not partaking in the shared nature of the digital age, science is preventing itself from reaching its full potential for advancement. An open scientific publishing system could allow science to progress faster in the next 10 years than it has in the last 50. A key element of the digital age is collaboration, and it has transformed so many other areas of society; it must occur in scientific publishing too. Science needs to include itself within the network culture that exists today, so that scientific knowledge is not limited to the confines of a scientific ‘paper’. Both Wilbanks and Pisani emphasize that in a networked culture scientific publishing will have no confines: scientists from different parts of the world will continuously refine and expand on research, and this is ultimately beneficial.


Pisani, Elizabeth (2011) ‘Medical science will benefit from the research of crowds’, The Guardian, January 11, <http://www.guardian.co.uk/commentisfree/2011/jan/11/medical-research-data-sharing > (Accessed 14/5/2013)

Seed (2011) ‘On Science Transfer’, Seed < http://seedmagazine.com/content/print/on_science_transfer > (Accessed 14/5/2013)

Wilbanks, John (2011) ‘On Science Publishing’, Seed, < http://seedmagazine.com/content/article/on_science_publishing > (Accessed 14/5/2013)

Government 2.0

1 May

The word for the blog this week is ‘transversally’. I found this week’s readings highly interesting, particularly where the idea of Government 2.0 was addressed, a concept I had not come across before. Wikipedia always provides quite reliable definitions for the topics that we discuss in ARTS3091 lectures, and this is also the case for Government 2.0. Wikipedia states that:

“Gov 2.0 refers to a government that utilizes collaborative technologies to create an open sourced, computing platform in which government, citizens and innovative companies can improve the transparency & efficiency of government, thus improving daily lives of the people. This movement incorporates Web 2.0 fundamentals with e-government, making problem solving and innovation a collaborative effort between both the public and private sectors” (Wikipedia, 2013).

I do agree with Lessig’s concerns regarding the problems with transparency, which go beyond an invasion of privacy (in some cases members of Congress would have to make their calendars publicly available). A major concern for Lessig is that a Government 2.0 structure, or more specifically a ‘transparency’ policy, would most likely mislead the public. In his example of financial contributions made by big business to parliamentary campaigns being made transparent, he highlights that unsubstantiated conclusions will be drawn about policy making. Such financial contributions are an accepted norm in American politics, and he emphasises that ‘naked transparency’ would lead to hypocrisy and finger pointing: people would assume corruption has occurred, and opponents and the media would use transparent data, often taken out of context, to undermine political decisions (often with biased motives). He argues that for transparency to be successful, financial donations by big business should not be allowed in American political campaigns (Lessig, 2009).

I agree with Lessig’s concerns; however, I believe that aspects of Government 2.0 are highly relevant and should be considered in all democratic systems. Government 2.0 involves establishing infrastructure that allows for direct collaboration between governments and the public in problem solving and policy making. I believe that in a nation like Australia, where approximately 75% of the population has access to the Internet, the public should be able to contribute to this type of decision-making (International Telecommunication Union, 2011). A Government 2.0 in Australia would mean that issues and policies would be decided specifically in regard to public opinion. I understand the need for government intervention in some areas of Australian society (such as media ownership), but in a Government 2.0 the disagreements between political parties that prevent good policy from being made, and which occur purely for the sake of conflict, would not be an issue.

An excellent area where a Government 2.0 would be useful is the current discourse surrounding gay marriage in Australia. While a majority of the population is in favour of legalising gay marriage, the major political parties will not change legislation purely because certain influential publics are against it. The failure of the political parties to act on this issue is primarily due to a fear of losing popularity. If a Government 2.0 were in place in Australia, the data would speak for itself, and no layperson, or politician, would be able to argue otherwise.

Styles highlights that a starting point for a Government 2.0 in Australia would be organising the information held in national archives. The objective of this organisation of data would be to “describe every organisation and agency, keep track of which agency does what, maintain a set of functions common to many agencies, develop sets of agency-specific functions and host the functions thesaurus” (Styles 2009).
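To make sense of what such a registry of agencies and functions might look like, here is a tiny illustrative sketch of my own; the agency and function names are invented, and this is not Styles’s actual system:

```python
# Illustrative sketch of a functions registry: which agency does what,
# plus the functions common to more than one agency. All agency and
# function names here are invented for the example.

agencies = {
    "Department of Health": ["policy advice", "grants administration"],
    "Bureau of Meteorology": ["data collection", "policy advice"],
}

def common_functions(registry: dict) -> set:
    """Return the functions shared by more than one agency."""
    seen, shared = set(), set()
    for functions in registry.values():
        for f in functions:
            if f in seen:
                shared.add(f)
            seen.add(f)
    return shared

print(common_functions(agencies))   # → {'policy advice'}
```

Even this toy version shows the appeal of the idea: once the functions are made visible in a structured form, questions like “which agencies overlap?” become a simple query rather than an archival research project.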


International Telecommunication Union (2011) ‘Internet Users’, Key ICT Indicators for the ITU/BDT Regions, Geneva, November 16, accessed 30/4/2013

Lessig, L. (2009) ‘Against Transparency: The perils of openness in government’, New Republic, October 9, <http://www.newrepublic.com/article/books-and-arts/againsttransparency?page=0,0>, accessed 30/4/2013

Styles, C. (2009) ‘A Government 2.0 idea – first, make all the functions visible’, Making Manifest, June 28, <http://catherinestyles.com/2009/06/28/a-government-2-0-idea/>

Wikipedia (2013) ‘Government 2.0’, <http://en.wikipedia.org/wiki/Gov_2.0#cite_note-1>, accessed 30/4/2013

Personal Data Tracking

25 Apr

Central to the readings and the lecture this week were assemblages of data and media (the word for the week was data). Both Quilty-Harper and Wolf make continual reference to the personal as the final frontier of data collection and interpretation, covering everyday activities such as diet, exercise, sleep, mood, location, alertness and productivity. I found this highly interesting, as it brought to mind my use of the smartphone application and personal data-processing platform ‘Map My Run’ and its significance in altering my approach to fitness improvement.

The Map My Run website describes the app as “a fitness tracking application that enables you to use the built-in GPS of your mobile device to track all of your fitness activities. Record your workout details, including duration, distance, pace, speed, elevation, calories burned, and route traveled on an interactive map. You can even effortlessly save and upload your workout data to MapMyRun where you can view your route workout data, and comprehensive workout history” (here is a link to the website: http://mapmyrun.com). The app also allows you to share your personal data with other users so that they can measure their fitness data against yours. Wolf highlights that, in terms of the development of personal data tracking, “first, electronic sensors got smaller and better. Second, people started carrying powerful computing devices, typically disguised as mobile phones. Third, social media made it seem normal to share everything”, and all of these factors are evident in the development of the ‘Map My Run’ application.
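Out of curiosity, here is my own rough guess at the mechanics behind an app like this (this is not Map My Run’s actual code): converting a stream of GPS fixes into total distance and pace using the standard haversine formula for great-circle distance.

```python
# Sketch of how a run tracker might turn GPS fixes into distance and
# pace. My own guess at the mechanics, not Map My Run's actual code.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def run_summary(fixes, minutes):
    """Total distance (km) and pace (min/km) from (lat, lon) fixes."""
    km = sum(haversine_km(*fixes[i], *fixes[i + 1])
             for i in range(len(fixes) - 1))
    return km, (minutes / km if km else 0.0)
```

The “interactive map” part is then just plotting the same fixes, which is a nice example of one raw data stream feeding several different presentations.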

Through using this application I have been able to progressively improve my fitness in ways that are only possible because of my access to specific data, which only a few years ago would have been either extremely difficult or highly costly to retrieve. Wolf highlights that personal data trackers take up tracking with a ‘specific goal in mind’ and states that “they continue because they believe their numbers hold information that they can’t afford to ignore”, which is significant in considering my own data tracking. Essentially, with access to this data I am no longer just ‘going for a run’. Instead I am exercising efficiently, making sure that I am reaching my full potential in terms of my age, height, weight and so on.

It’s fair to note that some people might consider access to such personal data hazardous, as it may lead to significant amounts of unnecessary concern or obsessiveness. An example of this can be seen in recently developed newborn breathing monitors. These devices allow parents to record the breathing patterns of newborn babies with the intention of ensuring that they are healthy. However, doctors have encouraged parents to stay away from these devices, because lapses in the breathing of newborn babies are quite normal and would go unnoticed if this data collection were not occurring. Nevertheless, in terms of achieving personal goals of any kind, I feel that personal data collection and interpretation is extremely effective.



Quilty-Harper, Conrad (2010) ’10 ways data is changing how we live’, The Telegraph, August 25, < http://www.telegraph.co.uk/technology/7963311/10-ways-data-is-changing-how-we-live.html >

Wolf, Gary (2010) ‘The Data-Driven Life’, The New York Times, <http://www.nytimes.com/2010/05/02/magazine/02self-measurement-t.html>

Reality Check

15 Apr

The lecture this week was about ‘Reality—actual, potential and virtual’. After having some difficulty getting my head around the topic, I found that my understanding of the actual, the potential and the virtual was significantly improved after we had small debates in the week’s tutorial. I know that the word for this week was ‘augmented’, but in this blog I will discuss the subject that my group had to debate: whether ‘the binary of the real and the virtual was unhelpful in understanding our mediated experiences’. My team argued that the binary was helpful, and I will outline the arguments that we made in this post. We agreed with Andrew’s idea that there are two sides to reality, the virtual and the actual, and that both are equally real.

However, we believed that regardless of this there needed to be an acceptance of the actual before there could be an understanding of, or an immersion in, the virtual. We argued that the virtual alone was not sufficient for physical existence: humans need food and water for survival, and total immersion in the virtual would not allow this. We drew on several examples, one being the instance where an avid StarCraft player in Korea believed he could physically fly off a building following hours of intense gaming. This not only brings into play several arguments regarding media effects theory; it was also a one-off incident, and several other factors may have contributed to the occurrence. Still, in terms of our understanding of the actual and the virtual, we could state that the gamer’s understanding of these interrelated paradigms had been blurred. This was how we separated the real and the virtual.

We also argued that the binary of the real and the virtual could in fact enhance our experience of the virtual, because the limitations that we experience in the actual are removed in the virtual. In the case of StarCraft this obviously includes gravity. In video game series such as Grand Theft Auto it can include things like societal laws and values. Because we acknowledge that the experience is virtual, we can enjoy the deconstruction of the limitations of the actual, since there are often limited consequences.

When we bring things like military drones into the argument, things obviously get quite perplexing. Controlling a drone from an office essentially imitates a virtual experience; however, it is in fact real. Unlike StarCraft or Grand Theft Auto, there are actual, physical consequences, and this is somewhat unsettling.


Murphie A (2013) ‘Is The Virtual Real?’ in Advanced Media Issues Course Outline, University of New South Wales, http://www.andrewmurphie.org/3091/course-outline-and-readings/#virtuality

Photo: Strickland J, ‘How virtual reality works’ on How Stuff Works.com; http://electronics.howstuffworks.com/gadgets/other-gadgets/virtual-reality7.htm


4 Apr

The topic for this week’s blog is mnemotechnics and, quite suitably, the word that needs to be included is ‘experience’. I was fascinated by the amount of thinking that surrounds the subject of memory. Although at first I was confused about perception and the ‘present being the past’ or the ‘present anticipating the future’, I feel I have been able to wrap my head around it thanks to the allocated readings.

I’m not certain if this is absolutely correct, but in terms of this type of thinking, what I’m doing at any given moment is in fact making sense of what has just happened. What I think is the present is actually what Andrew referred to as ‘the very recent past’. Although this way of thinking about the present leaves me feeling tangled and confused, I agree with it somewhat. To further complicate things, while I’m living this whole ‘past/present’ type of experience, I’m simultaneously anticipating what’s about to happen. So while I’m writing this sentence, I’m making sense of what I’ve written (not even seconds prior) while also, almost simultaneously, proceeding towards its completion. My present is actually spent (very rapidly) reflecting and anticipating. Does my present really exist? Hmmm.

I also found the notion of what Stiegler called ‘tertiary memory’ extremely interesting. What was fascinating was the idea that technical supports can trigger ‘natural memories’: that the nervous system and the physical environment, the world, work together and prompt neurological processes. I was reminded of an odd experience I had with a particular cologne I once owned. I had taken it on an overseas trip to Germany when I was in high school and wore it daily while there. When I returned home I forgot about the bottle lying at the bottom of my suitcase. Several years later, when preparing for another overseas trip, I found the bottle and thought little of it, until I used it later that day and experienced an unusual sensation of being back in Germany. It was quite an amazing experience, and it was all the work of the subconscious. I suppose it’s similar to someone feeling ill after tasting, or even smelling, an alcoholic beverage that has made them sick in the past (unfortunately I will never enjoy Jägermeister again).


Murphie A (2013) ‘Some Notes on Memory, Media, Time and Perception’ in Advanced Media Issues Course Outline, University of New South Wales; http://www.andrewmurphie.org/3091/course-outline-and-readings/#memory


25 Mar

It’s week 3, and the word for this blog is ‘metacommunication’. We had to discover for ourselves what this term, or perhaps concept, meant and how it related to media ecology. By simply typing the word into Google I found a site that discussed metacommunication and how it affects people’s love lives. It also noted that the term’s origins lie with Gregory Bateson, who used it to describe the underlying messages conveyed through non-verbal elements of face-to-face communication, like body language and facial expression. These elements can obviously enhance or undermine what we say in words.


After spending quite a bit of time trying to find not only a definition for the word but also a way in which it was relevant to media, I came across an article titled ‘Meta-media and meta-communication’ by Klaus Bruhn Jensen (2011), and it helped me somewhat in getting my head around it (I think). He speaks of ‘three degrees’ of media: ‘bodies and tools’ (human beings and the writing utensils and instruments that are extensions of the body), ‘technologies’ (which he describes as mass media) and ‘meta-technologies’ (“digital technologies which reproduce and recombine all previous media of representation and interaction on a single material platform of hardware and software”). Maybe in some ways this is similar to McLuhan’s description of four ‘epochs’ categorised by different communication media: the tribal, literate, print and electronic eras. What Jensen suggests is that in the ‘meta-technological’ degree we have come full circle and are seeing a return of the ‘multimodal forms of interchange that characterize face-to-face settings’, through technologies such as mobile phones (primarily because of SMS), online computer games, and ‘the sense of being virtually present in some literally absent world’ that can be attributed to the internet (for example, being able to tweet and share comments on TV shows such as The X Factor or My Kitchen Rules, as well as programs that allow face-to-face communication, such as Skype). Hence the original thinking associated with metacommunication, regarding the non-verbal elements of communication, is re-applicable to communication in the ‘meta-technological’ degree.


In terms of media ecology, with these developments, technologically mediated actions such as speech and reading have become prominent components of everyday life. Young people often use ‘text speak’ in everyday vocabulary, with abbreviations such as LOL (laugh out loud) and YOLO (you only live once) being quite popular. Similarly, the way we now read and interpret information and data has been altered by our constant interaction with new media. The video ‘The Machine is Us/ing Us’ conveys this message quite well.



Jensen, K.B. (2011) ‘Meta-media and meta-communication – revisiting the concept of genre in the digital media environment’, Journal of Media and Communication Research, Society of Media Research in Denmark: Copenhagen