Enactive-performative perspectives on cognition and the arts

AI & SOCIETY. Jan 2018. Springer-Verlag London Ltd.

Abstract
The practices of the arts – plastic and performing – deal in direct sensorial engagement with the body, with materiality, with artifacts and tools, with spaces, and with other people. The arts are centrally concerned with intelligent doing. Conventional explanations of the cognitive dimensions of arts practices have been unsatisfying because internalist paradigms provide few useful tools for discussing embodied dimensions of cognition.
Conventional internalist conceptions of cognition can say little which is useful about the kinds of sensorimotor integration which are fundamental to action in the world, and practices of the arts epitomize and refine these sensorimotor intelligences to a high degree. In doing so, arts practices implicitly refute the paradigmatic separation of matter and information, of mind and body. Thus, internalist paradigms only confuse attempts to discuss creative intelligent practice. This explanatory crisis has hobbled useful discussion of cognition and the arts for much of the last century.
Happily, concepts arising from the post-cognitivist paradigms which have emerged over the past 30 years provide leverage on the qualities of intelligent action in the world – which is what artists do. Here I will explore how we might deploy concepts arising in Situated, Enactive, Embodied and Distributed paradigms (SEED) and explain how these fields can provide the basis for a new discourse on arts practices which, in the words of Maxine Sheets-Johnstone, gives the body its due. Or rather, a discourse which begins by refuting mind-body dualism and acknowledges the performative, the processual and the relational dimensions of practice.

View Paper [doi.org]


A review of Daniel Dennett's From Bacteria to Bach and Back. 2017

AI & SOCIETY. 2018. Springer-Verlag London Ltd.

Introduction. While Daniel Dennett's writing always addresses big questions, the reach of From Bacteria to Bach is audacious. His goal – to explain the emergence of mind as a biological (and postbiological) phenomenon, beginning from first principles, i.e. protolife. In this work, Dennett is to be commended for combining broad scholarly reach with readability. Dennett has no need to awe and obfuscate with neologisms or obscure terminology. A thoughtful high school student could get most of this. While humanists have sneered at his scientism, from this reviewer's point of view, he negotiates the hoary old two cultures problem with generosity and finesse. Throughout, Dennett maintains a (qualified) posthumanist stance. He argues, quite reasonably, for human exceptionalism in terms of our mental capabilities, but, human-exceptionalist as he is, he is emphatically biologically materialist on the matter of mind and maintains that he is also non-dualist (more on this below).
Part one is plain sailing (Dennett likes his maritime metaphors and so do I) – an easy argument about evolution where he frames up the book, anchoring (heh heh) his argument in the idea of evolution as 'mindless' R&D, searching the design-space of possibilities for local optima. A guiding notion throughout the book is that this evolutionary process creates competence without comprehension (refuting creationists along the way). This 'strange inversion of reasoning', found in both Darwin and Turing, is a theme to which he returns regularly. Competence without comprehension depends in turn on another key concept: 'free-floating rationales'. Dennett expresses some regret at the naming of this idea, and I stumbled on the term 'free-floating rationales' every time, but I get the concept, and I think it is useful. A 'free-floating rationale' is a 'reason', in the logic of evolutionary design, that determines a quality or capability of an organism without the organism knowing it.

View Paper [doi.org]


What Robots Still Can't Do (With Apologies to Hubert Dreyfus), Or: Deconstructing the Technocultural Imaginary

Keynote for Robophilosophy 2018 conference, Feb 15, in Vienna, Austria.

Abstract.
Art has always exploited the most sophisticated available technologies. The current generation of AI and anthropomorphic robotics is just the most recent iteration of this trend. The idea that inanimate matter can come to life is, it seems, as old as humanity itself: Pygmalion and Galatea, The Golem, Pinocchio, Frankenstein. Characters like Star Trek's Data are our versions of an old, old preoccupation. Why are we so driven to make machines that look like us, or some idealized version of us? Anthropomorphic robotics is not just a technology, it is a technological vehicle for our myths. In 1995, I wrote a short article for the 150th anniversary issue of Scientific American subtitled "why do we want our machine to seem alive?" (main title "Living Machines", September 1995). It seemed a relevant question then. It seems more relevant now.

Download Paper [pdf]


Two Decades of Interactive Art: Digital Technologies and Human Experience

in Practicable: From Participation to Interaction in Contemporary Art
Edited by Samuel Bianchini and Erik Verhagen. MIT Press, 2016

Introduction
As I write this in 2011, it is sobering to reflect on the fact that after a couple of decades of explosive development in new media art – or "digital multimedia" as it used to be called – in screen-based as well as "embodied" and gesture-based interaction, there seems not to have been much advance in the aesthetics of interaction. At the same time, interaction schemes and dynamics once only known in obscure corners of the world of media art research/creation have found their way into commodities, from 3-D TV and game devices (Wii, Kinect) to smartphones (iPhone, Android). While increasingly sophisticated theoretical analyses (from Lev Manovich to Wendy Hui Kyong Chun to Mark Hansen, and more recently Nathaniel Stern, among others) have brought diverse perspectives to bear, I am troubled by the fact that we appear to have advanced little in our ability to qualitatively discuss the characteristics of aesthetically rich interaction and interactivity – not to mention the complexities of designing interaction as artistic practice – in ways that can function as a guide to production as well as a theoretical discourse. This essay is an attempt at such a conversation.

Download Paper [pdf]


Robotics and Art, Computationalism and Embodiment

in Robots and Art: Exploring an Unlikely Symbiosis.
Eds. Herath, Kroos, Stelarc. Springer-Verlag, 2016

Abstract. Robotic Art and related practices provide a context in which real-time computational technologies and techniques are deployed for cultural purposes. This practice brings embodied experientiality (so central to art) hard up against the tacit commitment to abstract disembodiment inherent in the computational technologies. In this essay I explore the relevance of post-cognitivist thought to robotics in general and, in particular, questions of materiality and embodiment with respect to robotic art practice – addressing philosophical, aesthetic-theoretical and technical issues.

Download Paper [pdf]


The Elephant in the (Server) Room: Sustainability and Surveillance in the era of Big Data.

In Ekman, Ulrik, Jay David Bolter, Lily Diaz, Maria Engberg, Morten Søndergaard, eds. Ubiquitous Computing, Complexity, and Culture. New York: Routledge, 2015.

I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
   (right now please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
   (it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.




Introduction
Brautigan's poem might be the root document of the San Francisco hippy techno-utopian movement which spawned the Whole Earth Catalog and Apple Computer, which Theodore Roszak described in his essay From Satori to Silicon Valley (Roszak), and which was later dubbed the 'Californian Ideology' by Richard Barbrook and Andy Cameron. But what Brautigan et al. probably could not conceive of was the wholesale reconfiguration of society and economy which would necessarily attend the infiltration of computing into diverse aspects of life. Typical of that context – the era of the 'giant brains' – is his portrayal of computers as coexisting but separate.
      There are, as I have noted previously (Penny 2012) and as others have concurred (Ekman, this volume), various conceptions of 'ubicomp' (ubiquitous computing) which seem different enough to make the umbrella term of dubious usefulness. These include what we call Social Media, the Internet of Things and Mobile Computing. Discourse around ubiquity (in the HCI community) has (understandably) tended to focus on immediate human experience with devices. The faceless aspect of ubiquity, the world of embedded microcontrollers, sometimes referred to as 'the internet of things', has largely evaded scrutiny in popular media and press, precisely because of its invisibility (though it has attracted the attention of critical media-art interventionists for over two decades).[ii]
      The seemingly inexorable trend to ubiquity, we were told, would result in a calm technology that recedes from awareness, and abides in the background, seamlessly lubricating our interactions with the troublesome physical world, or at least the contemporary techno-social context (Weiser). And this, somehow, would be better than the giant brains of the 60s, the corporate mainframes of the 70s, the PCs that chained us to the desk in the 80s, or the internet of the 90s. But as John Thackara has noted:
      'Trillions of smart tags, sensors, smart materials, connected appliances, wearable computing, and body implants are now being unleashed upon the world all at once. It is by no means clear to what question they are an answer – or who is going to look after them, and how'. (Thackara 2006, 198)
      Pragmatic as Thackara is, his observation prompts us to reflect upon ideas of 'progress', and the covert presence of a Victorian techno-utopianism in technological agendas, including ubicomp. A pithy summation of this syndrome is found in "An Interview with Satan" in which Satan explains:
      Technology is all about painstaking simplification, driven often by a desire for order and predictability, which produces complex – and unpredictable – effects. It's a kind of mania for short-cuts which leads to enormous and irreversible detours. Now this is my business in a nutshell … Imagine a world where every desire can be instantly frustrated, indeed where every desire can be guaranteed to arise precisely customised to the means for its dissatisfaction, where every expectation will be immediately, and yet unexpectedly thwarted … Technology cannot fail to bring about this world, since this would be a universe brought fully under control, consistent with the very nature of technology. (Dexter 1996)
      The entirety of the phenomenon we call ubicomp is underpinned by network infrastructure, server farms and Big Data. Like the cinema and the automobile, computing and digital communication have created entirely new industries and professional contexts. In the case of the automobile, the most obvious novelty was the emergence of automobile mass production itself. Further thought brings to mind the manufacturers of special parts – brake parts, engine parts, and the like. But beyond this horizon the automobile economy ramifies in all directions – mineral extraction and materials production, the oil industry, the rubber industry. The modern automobile has evolved symbiotically with the development of civil engineering and roadmaking, and this in turn has had a huge effect on the shapes of our cities and towns. In the case of the cinema, the convergence in the second half of the 19th century of several emerging technologies – photography, precision machining, chemical engineering, mass-produced optics, and electricity, to name the most obvious – led to the emergence of entirely new socio-economic phenomena, the most obvious being film studios and production facilities, with new career paths from cinematographer to producer to stuntman to 'special effects'. So it is with ubiquitous computing and its complement, the internet.
      It is in the spirit of such holistic overviews that I here address ubiquitous computing and related matters. My goal in this paper is to draw attention to a range of 'big picture' issues pertaining to infrastructure, energy and resource use and socio-economic integration – centering on questions of sustainability and civil rights, touching upon some theoretical and historical issues where relevant.

Download Paper [pdf]


Emergence, Agency and Interaction – notes from the field.

Artificial Life. Vol. 21 No. 3. Special issue on Artificial Life Art. MIT Press. 2015

Abstract.
This paper describes several interactive installations and robotic artworks developed through the 1990s, and the technological, theoretical and discursive context in which those works arose. The main works discussed in this paper are Petit Mal (1989-95), Sympathetic Sentience (1996-7), Fugitive I (1996-7), Traces (1998-9), and Fugitive II (2001-4) (full documentation at www.simonpenny.net/works). These works were motivated by a critical analysis of cognitivist computer science, which contrasted with notions of embodied experience arising from the arts. The works address questions of agency and interaction, informed by Cybernetics and Artificial Life.

Download Paper [pdf]


Artificial Life

The Johns Hopkins Encyclopedia of Digital Textuality. 2014

Introduction.
The term "Artificial Life" arose in the late 1980s as a descriptor of a range of (mostly) computer based research practices which sought alternatives to conventional *Artificial Intelligence (henceforth AI) methods as a source of (quasi-) intelligent behavior in technological systems and artifacts. These practices included reactive and bottom-up robotics, computational systems which simulated evolutionary, genetic processes and animal behavior, and a range of other research programs informed by biology and complexity theory. A general goal was to capture, harness or simulate the generative and "emergent" qualities of "nature" – of evolution, co-evolution and adaptation. *"Emergence" was a keyword in the discourse. Techniques developed in Artificial Life research are applied in diverse fields, from electronic circuit design to animated creatures and crowds in movies and computer games. Historically, Artificial Life is a reaction to the failures of the cognitivist program of AI and a return to the biologically informed style of Cybernetics. Artificial life was also informed by various areas of research in the second half of the twentieth century– as discussed below.



The Aesthetics of Embodied Interaction.

Oxford Encyclopedia of Aesthetics, 2nd edition, 2014. ISBN: 9780199747108

Introduction
Computer-articulated interactive art is diversely multimedia, involving technologies of sound and image as well as, commonly, electro-mechanical, robotic, and lighting systems: an interactive Gesamtkunstwerk. Designing interaction is a qualitatively new aesthetic practice, dependent on real-time computing technologies that have only been available for about a generation. Embodied interaction is a specialized aspect of interactive art practice which acknowledges the primacy of physical embodiment and sensorial experience in computer-based interaction. Embodied interaction refers specifically to sensor-driven, computationally articulated artworks which behave in response to the behavior of human participants. In order to elaborate the notion of an aesthetics of embodied interaction, it is necessary to provide some theoretical and historical context.



Improvisation and Interaction, Canons and Rules, Emergence and Play.

The Oxford Handbook of Critical Improvisation Studies, Volume 2
Edited by Benjamin Piekut and George E. Lewis. Online Publication Date: Dec 2013

Abstract.
Over the last two decades, the availability of real-time computational technologies (hardware, software and peripherals) has permitted the development of categorically new kinds of cultural practices in which the machine system is constituted as a quasi-organism which responds to changes or perturbations in its 'umwelt' according to behavioral rules (most often) contrived by the artist/author. Such systems are found in 'new media' forms such as online interactive worlds, augmented and mixed reality work, locative media and fully physically embodied interactive installation and performance – in single and multiple participant, discrete and distributed modalities. They conform to, or derive from, musical, literary, theatrical and plastic arts genres, but the fundamental creative/technical practices of designing behaviors and implementing machine perception are largely without precedent in such arts traditions. This paper proposes that a source for relevant aesthetic theory might be found in the improvisational forms which often exist as essential but informal dimensions of traditional arts practices and their knowledge bases. Within computational discourses and practices around the formal capabilities of computational systems there is a long and relevant history of discussion of questions of creativity, novelty and emergence. Computer-based interactive art practices and traditions of improvisation thus provide as heterogeneous an interdisciplinary polyglot as one could wish for. This paper explores that territory.



Art After Computing

in Evolution Haute Couture: Art and Science in the Postbiological Age vol II, Dmitry Bulatov, editor. Kaliningrad, Russia. 2013

Abstract
The computing revolution has had multiple impacts on the arts. This paper follows three related strands of techno-cultural development relating to the interactions between computing and arts practices over the last quarter century. The first strand, aesthetico-theoretical in nature, is recognition of the radically new kinds of cultural practices made possible by real-time computing (especially interactive practices) and the complementary recognition that new modes of performative and relational aesthetics are called for. The second strand concerns the tacit or covert incursion of ideologies of computing into arts practices and the possibility that such ideologies may have had the pernicious effect of devaluing or disenfranchising, or simply rendering invisible or irrelevant, traditional practices and values which themselves may have been inadequately or poorly explicated. The third strand tracks the collapse of the cognitivist and simplistically Cartesian worldview around which 'good old fashioned artificial intelligence' (GOFAI) and cognitive science were framed, and the emergence of situated, embodied, enactive and distributed paradigms of cognition.
The argument of this paper is that these strands can now be woven together in the contemporary possibility that 'post-cognitivist cognitive science' (PCCS) might offer a new way of speaking about, and validating, the embodied and situated intelligences of the arts – one which might both correct the relegation of the (plastic and performing) arts to second-rank intellectual status and provide ways of considering the performative/embodied nature of the arts of real-time computing, leading to a more satisfactory theoretico-aesthetic corpus.

Download Paper [pdf]


What do we mean by interdisciplinarity and why do we care?

RESEARCH ARTS, September 2013.

Introduction.
Interdisciplinarity is a theme which dances around pedagogy and research, often in vogue and lauded as a wellspring of innovation. Regrettably, just as often, the term is leveraged in initiatives which employ it in limited or even counterproductive ways. The first question that must be asked of any such enterprise is why the term is being deployed and to what ends. The first observation that must be made is that interdisciplinarity is a symptom of disciplines. The idea that disciplines and disciplinary boundaries are somehow pure or stable and divide up the pie of human knowledge in ordered and rational ways for all time is of course nonsense. Disciplines are historically contingent; they rise and fall and change and adapt. Disciplines embrace fractious factions within them, and are defined in heterogeneous and incompatible ways: by subject matter, by methodology, by philosophical orientation. Disciplines overlap, they share content, they share methods. The dotted lines that ostensibly separate disciplines are blurred. New disciplines come into existence, usually through the process of interdisciplinary formations, and usually due to a principled recognition that existing formations are not adequate to the task at hand. The emergence of fields such as computer science and women's studies provides examples from the recent past. Media Arts and sustainability are more contemporary cases.
My experience in this field is largely in the realm of media arts. By 2001 I had spent over a decade involved in digital media art as a practitioner, a theorist, a teacher and an administrator. Over that period it had become clear to me that the field was so profoundly hybrid that neither an education based in the arts, nor one based in computer science and engineering, nor one based in media studies and critical theory, was sufficient to prepare a young practitioner to make well informed and well formed work. In 2001, I was offered an interdisciplinary position at the University of California, Irvine (UCI), jointly in Electrical and Computer Engineering and in Studio Art. I proposed to establish an interdisciplinary graduate program in media art theory and production. After two years of planning, proposals and approvals, the development of premises and the hiring of faculty and staff, the program opened as a masters program in fall of 2003. The program was called ACE, representing the three schools which supported it – Arts, Computation and Engineering. In what follows, I will recount some of the lessons I learned in attempting to realize a radical interdisciplinary pedagogy.

View Paper [research-arts.net]


Trying to be Calm: Ubiquity, Cognitivism and Embodiment

Throughout – Art and Culture Emerging with Ubiquitous Computing (anthology), ed. Ulrik Ekman. MIT Press, 2012

Introduction – After Virtuality
I have argued elsewhere that the discourses of technological virtuality during the 1990s were in part the result of an incomplete technology. The transition from the period of virtuality to the period of ubiquity was a result of the maturation of interface technologies missing from the technological palette of the 90s. In the interim, a variety of technologies linking the dataworld with the lived physical world have emerged. Small and large scale sensing and tracking technologies such as MEMS accelerometers, machine vision, laser scanners, GPS, RFID, and mobile communications technologies have been developed and deployed. This has had the effect of nesting the 'virtual' back into the lived physical world. This belated integration of data with the world has caused 'the Virtual' to evaporate. The Virtual has become doubly virtual, revealed as a panic around an explosive and messy period of technological transition.
Over the same period, as recognition of the shortcomings of the cognitivist paradigm became more widespread, new modes of inquiry in cognitive science, AI and robotics emerged, all loosely related to post-AI 'artificial life' approaches. Human interaction with the world and with technology was addressed more intensively – as evidenced by the rapid expansion of HCI, CSCW and related areas of research. Cognitive Science and HCI became increasingly interdisciplinary as psychologists, anthropologists and sociologists became involved. New modes of cognitive science emerged to grapple with the embodied, situated and social dimensions of cognition: the enactive cognition of Varela, Thompson and Rosch; the situated cognition of Lucy Suchman; and the distributed cognition of Edwin Hutchins. Advances in neuroscientific research revealed new dimensions of the mind-body relation which gave rise to new work in philosophy of mind. Lakoff and Johnson's 1999 volume Philosophy in the Flesh is perhaps the best known of these.
This movement met media artists coming the other way, as it were – exploring the application of computational technologies to embodied, material and situated cultural practices. The crafting of embodied, sensorial experience is the fundamental expertise of the arts, an expertise which is as old as human culture itself. It is a telling and persistent failure of interdisciplinarity – directly pertinent to the development of ubiquitous computing – that while media artists were at the forefront of such research, the two communities had limited connection. The transition from VR to more nuanced augmented and mixed reality modes deploying VR's stock-in-trade tracking and simulation techniques indicates that ubiquitous computing on the ground is less the kind of antithesis of VR which Weiser envisaged, and more of a continuity. Likewise, various topics of critical discourse which had been lumped in with discussion of the virtual have persisted, and in particular, it has become clear that many of the aesthetic projects of 'media artists' are inherently concerned with the central issues of ubiquity.

Download Paper [pdf]


Art and Artificial Life, performativity and process: an intellectual genealogy of a heterogeneous field.

In VIDA 13 – Telefonica Foundation, Spain, 2012

Introduction.
In order to adequately position an intellectual history of a hybrid such as Artificial Life Art, it is necessary to draw lines through and between various histories in unorthodox ways. Discussions regarding the historical and theoretical origins of Artificial Life Art usually refer to the blossoming of the interdisciplinary scientific field of Artificial Life in the early 1990s. Here, I look back to the mid-century period to find forces which helped to form Artificial Life Art practices. In what follows, I will discuss post-war art practices and post-war technological practices together and in parallel. It is not possible to build a full sense of the intellectual patrimony of Artificial Life Art without such a double approach.
The decades immediately after WWII were technologically, culturally and geopolitically a watershed period. The rapid development of computational technologies, spurred by equally explosive developments in microelectronics, the transistor and the integrated circuit, were paralleled by the growth of social democracies in Europe and civil rights in the United States. Around the world, independent nations emerged from colonialism. In the arts, radical gestures were made and new practices were formed.
My concern in this essay is to discuss the cultural and intellectual antecedents of Artificial Life Art and to dwell upon themes of materiality and abstraction, process and emergence, as they arise in scientific and artistic discourses. This essay looks in parallel at developments in computational sciences and in the arts since the mid-20th century. It is not my intention here to analyse in depth the transitions that have occurred over two decades of Artificial Life and Artificial Life Art, or to describe the range of contemporary Artificial Life Art practices.



What is Artful Cognition?

in Art as a Way of Knowing. Exploratorium, San Francisco, 2012

Excerpt – My argument here today is that the intelligences of the arts are largely or primarily bodily intelligences whose essence remains incommensurable with text and spoken language. Historically, the upshot of this has been bad for the Arts, at least in the modern period. As western science and western culture have increasingly dealt in the currency of mathematical and linguistic symbolic abstraction (numbers and letters), so, increasingly, the conception of intelligence has been framed in these terms. By this token, the arts have been marginalized. One can be 'stupid like a painter' but not 'stupid like a mathematician'. This trend has been reinforced by a vein of stubborn anti-intellectualism in the arts.

Download Paper [pdf]


Art and robotics: sixty years of situated machines

25th anniversary edition of the journal AI and Society. Vol. 28, 2011. Springer.

Abstract.
This paper pursues the intertwined tracks of robotics and art since the mid 20th century, taking a loose chronological approach that considers both the devices themselves and their discursive contexts. Relevant research has occurred in a variety of cultural locations, often outside of or prior to formalized robotics contexts. Some of this research was conducted under the aegis of art or cultural practices, where robotics has been pursued for other-than-instrumental purposes. In hindsight, some of that work seems remarkably prescient of contemporary trends. The context of cultural robotics is a highly charged interdisciplinary test-environment in which the theory and pragmatics of technical research confronts the phenomenological realities of physical and social being-in-the-world, and the performative and processual practices of the arts. In this context, issues of embodiment, material instantiation, structural coupling, and machine-sensing have provoked the reconsideration of notions of (machine) intelligence and cognitivist paradigms. The paradoxical condition of robotics vis-à-vis artificial intelligence is reflected upon. This paper discusses the possibility of a new embodied ontology of robotics that draws upon both cybernetics and post-cognitive approaches.

Download Paper [pdf]


Towards a performative aesthetics of interactivity.

Fibreculture 19, December 2011. Ubiquity. Ed. Ulrik Ekman.

Abstract.
This paper places contemporary modalities of digital interaction in an historical context of sixty years of intersections between technological development and artistic experimentation. Specific technological developments are identified as context-defining historical markers and specific works are discussed as exemplars of significant milestones in the engineering and the aesthetics of interaction. The shortage of theorisation of non-instrumental interaction is lamented. The process of naturalisation to increasingly sophisticated digital tools and appliances in the current period of ubiquitous computing is noted. A number of theoretical issues are drawn out and discussed in terms of cognitive and sensorimotor dynamics. Woven through the discussion is the proposal that a synthesis of performance theory and neuro-cognitive studies might provide a basis for a performative ontology around which an aesthetics of interaction might be constructed. As the paper progresses a theoretical framework for an ontologically performative aesthetics of interaction and ubiquity is formulated.

View Paper [fiberculturejournal.org]


Twenty Years of Artificial Life.

Digital Creativity, Routledge, vol. 21, no. 3, September 2010

Abstract
This essay provides an overview of the history and forms of Artificial Life Art as it has developed over two decades, elaborating on the way that such experimental practices have increasingly become standard components of digital cultural practices and commodities. The paper offers some background in the ideas of the Artificial Life movement of the late 1980s and 1990s which informed such practices. The essay takes four relatively recent semi-autonomous behaving artworks – Propagaciones by Leo Nuñez; Sniff by Karolina Sobecka and Jim George; Universal Whistling Machine by Marc Boehlen; and Performative Ecologies by Ruari Glynn – as representative of the current status of major themes and preoccupations in Artificial Life Art.



Artificial Life Art – a primer.

Catalog essay for Emergence. Beall Center for Art and Technology, 2010.

Abstract
It was not until the late 1980s that the term 'Artificial Life' arose as a descriptor of a range of (mostly) computer-based research practices which sought alternatives to conventional Artificial Intelligence methods as a source of (quasi-) intelligent behavior in technological systems and artifacts. These practices included reactive and bottom-up robotics, computational systems which simulated evolutionary and genetic processes, and a range of other activities informed by biology and complexity theory. A general desire was to capture, harness or simulate the generative and 'emergent' qualities of 'nature' – of evolution, co-evolution and adaptation. 'Emergence' was a keyword in the discourse. Two decades later, the discourses of Artificial Life continue to have intellectual force, mystique and generative quality within the 'computers and art' community. This essay is an attempt to contextualise Artificial Life Art by providing an historical overview, and by providing background in the ideas which helped to form the Artificial Life movement in the late 1980s and early 1990s. It is prompted by the exhibition Emergence – Art and Artificial Life (Beall Center for Art and Technology, UCI, December 2009), which is a testament to the enduring and inspirational intellectual significance of ideas associated with Artificial Life.

View Paper [escholarship.org]


Techno-utopianism, Embodied Interaction and the Aesthetics of Behavior – an interview with Simon Penny

Leonardo Electronic Almanac DAC09 special edition
Simon Penny, Editor

This text is an edited version of an interview conducted by Jihoon Felix Kim at the International Symposium on Art and Technology, Korea National University of the Arts, Seoul, Korea, November 2008. Transcribed by Kristen Galvin; edited by Kristen Galvin and Simon Penny, 2008-2010.

Download Paper [pdf]


Desire for Virtual Space: the Technological Imaginary in 1990s Media Art


Space and Desire Anthology, 2011
Editor Thea Brejzek
ZHDK Zurich

This article discusses technological and discursive developments in the interdisciplinary historical trajectory of theory and practice of digital cultures in the 1990s, specifically around the notion of virtuality and the transition to the paradigm of ubiquitous computing. 2010.

Download Paper [pdf]