
Alexander Hogan, Kevin Hogan and Christian Tilt. Learning the History of Milton Keynes

Abstract: State-of-the-art AI isn’t about building elaborate systems, each aiming to be incrementally more human-like than the last; it’s about pure manipulation. That something artificial, from a urinal to a Twitter account, might be presented openly to an audience and still have influence is, in fact, the most accurate marker of achievement.

The best AIs might remember knowledge for self-improvement, but surely the more characteristically human metric of development has nothing to do with ‘self’ and everything to do with perception: the ability to reflect patterns back towards the viewer so as to promote a sense of understanding and legitimacy. That is where algorithms find an opportunity to surpass even our own capacity, and where an existing AI finds room to develop both into the tropes of science fiction and into harmony with its surroundings and the people with whom it shares space.

Steering a bot towards the rewards of a positive relationship with its environment is nothing new. The influence of Horse_ebooks’ pithy one-liners and Handy-by-Design’s terrible phone cases on both national news stations and clickbait sidebars showed the appetite for interaction between an audience and a reportedly artificial producer.

But what happens when the goal of an AI is not merely visibility for the sake of generating revenue for its ‘master’, or connecting people within the same social bubbles, but the promotion of the UK’s most futuristic New Town?

Is developing a new ‘shared history’ around everyday objects the way machine learning can make the most resonant piece of provincial public art that the ‘City of Dreams’ of Milton Keynes has ever seen?

Rosemary Lee and Daniel Cermak-Sassenrath. Playful Practices for Living with Algorithms

Abstract: Algorithms are increasingly integrated into ordinary tasks, necessitating that users adjust their behaviour in order to progress toward their aims. In this paper, we discuss individual and collective strategies and practices by which people approach, investigate, test, share and communicate in response to algorithms; common forms these encounters take, such as avoidance, adjustment, appropriation and exploitation; and how this phenomenon is negotiated in the cultural sphere. Beyond everyday subversion of algorithms, artists and activists have generated relevant methods of digital resistance, which will be explored. We limit the discussion to algorithmic system behaviour in everyday situations, such as artistic, social, political and economic contexts, and exclude areas such as chaotic mathematics. Building a conceptual framework drawing on the discourses of embodiment, phenomenology, Situated Action, appropriation and Critical Play, our initial focus is on immediate and direct encounters with algorithms, e.g. when users train autocorrect to give better suggestions, Facebook prompts people to remember birthdays, or letters get sorted by computers. We will also examine how algorithmically influenced behaviour contributes to social and artistic practices.

Bio (Rosemary):
Rosemary Lee is an artist and media theorist whose work investigates interrelations between machines, living things and the environments which they inhabit. Her research brings together hybrid influences from conceptual art, philosophy of media, science, technology and literature, addressing themes including media geology, hybrid ecology and posthumanism. Rosemary is currently a PhD fellow at the IT University of Copenhagen in the Department of Digital Design. Her artwork and research have been shown in international exhibitions including: machines will watch us die (The Holden Gallery, UK, 2018), A New We (Kunsthall Trondheim, NO, 2017), Hybrid Matters (Nikolaj Kunsthal, DK, 2016), The Spring Exhibition (Kunsthal Charlottenborg, DK, 2015) and the transmediale (Haus der Kulturen der Welt, DE, 2014).

Bio (Daniel):
Daniel is Associate Professor at the ITU, Copenhagen, and a member of the Center for Computer Games Research (game.itu.dk) and the Pervasive Interaction Technology Lab (PitLab, pitlab.itu.dk). Daniel writes, composes, codes, builds, performs and plays. He is interested in artistic, analytic, explorative, critical and subversive approaches to and practices of play. Discourses he is specifically interested in are play and materiality, play and learning, and critical play. He aims to integrate and contrast methods and practices of art, design, media studies, engineering and education. He runs the University’s monthly workshop series on electronics, mechanics, alchemy, interface devices and dangerous things. In his own practice, he makes interactive works which are shown at art exhibitions, academic conferences and popular events. (More info at www.dace.de)

Tincuta Heinzel. Patented Patterns – on the art and science of patterns. A philosophical inquiry.

Abstract:
The present paper proposes a critical reflection on the relationship between patterns, algorithms and their patentable status. Based on a series of legal actions related to the use of patterns (Robert Lang against Sarah Morris, Mexican indigenous communities against Isabel Marant, Apple’s patent on certain gestures, etc.), I will analyse the existing legal interpretations of what a pattern is and discuss in which way these cases can establish a precedent for today’s digitalised environments. Defined both as forms of stylistic and cultural expression and as logical forms, patterns are becoming elements of high importance for present digital modelling technologies (see, for example, pattern recognition algorithms). Therefore, the legal status of a pattern is becoming a field of political battle. Notions like author and collective author, cultural tradition and logical form, creative commons and intellectual property are at stake in this context. The implications are of social, political and economic importance, and we will sketch some of their shortcomings when it comes to their use and application, their implicit ideologies, as well as arts’ and sciences’ disciplinary divisions.

Bio:
Tincuta Heinzel is an artist, designer, and curator interested in the relationship between arts and technosciences. Following visual arts, design, and cultural anthropology studies in Cluj (RO), she completed her PhD in Aesthetics and Art Sciences in 2012 at Paris 1 University (FR) with a thesis on the foundations of interactive textiles’ aesthetics. She initiated, curated and/or coordinated several projects, such as “Artists in Industry” (Bucharest, RO, 2011–2013) and “Haptosonics” (Oslo, NO, 2013). Currently, under what she labels an “aesthetics of imperceptibility,” she is investigating the aesthetic issues of nano-materiality. She was a fellow of the French Government (2002–2003), a DAAD research fellow at ZKM Karlsruhe (2005) and a Fulbright Fellow at Cornell University in 2017. Heinzel is Senior Lecturer at Loughborough University (UK) and Visiting Professor at the “Ion Mincu” University of Architecture and Urbanism Bucharest (RO).

Hassan Choubassi. The Burst of the Latent, Politics of Mobile Connectivity

Abstract:
In virtual reality the body is doomed to immobility: it is connected to the wonderful world of the screen, but only through a static connectivity, an imprisonment of the physical body behind the screen. The advent of mobile connectivity marked a revolution in communication technology. With the new technologies of smartphones, the image has taken on a new dimension, and space has changed from an enclosed, cocooned space to the actual space of the real. The image that was latent in the virtual, trapped within the boundaries of the catatonic, stationary desktop computer, is now loose at large. The flux is so fast and omnipresent that the boundaries between the two entities of the virtual and the actual are blurry and inconsistent, to the extent that the individual user of media augmentation cannot distinguish the difference. The latent image of the virtual has exploded into the actual, augmenting it to become an actual and physical space of hyperactivity and virtual flux. Manipulation of the image is becoming manipulation of life itself, and thus the creation of an augmented space in the actual becomes a prerequisite.

Bio:
Hassan Choubassi is a visual artist born in Beirut in 1970. He holds a PhD in Communication Media from the European Graduate School, Switzerland (2014), with a thesis on the politics of mobile connectivity, “The masses: from the implosion of fantasies to the explosion of the political”. He also holds a Master’s degree in Scenic Design from DasArts in Amsterdam (2005) and a BA in Fine Arts from the Lebanese American University (1996). He is currently the Chairperson of the Fine Arts & Design department at the Lebanese International University (LIU) and the founding director of the Institute of Visual Communication (IVC). He has conducted several research studies in the field of communication media, digital and information technology in the arts, augmented reality, new media in urban contexts, anthropological and intercultural city mapping, mobile media, alternative education, and new modes of perception for university students.

Barbara Rauch and Michelle Gay. What is it like to draw?

Abstract:

This paper introduces a practice-led project that uses the Google Quickdraw dataset to articulate and explore the potential differences between algorithmic ‘machine’, or digitally constructed, drawings and fictional associative ‘hand’ drawings. The authors use both digital 20-second sketching (the rule set for the Quickdraw project) and more elaborate drawings and collages to then analyze and speculate about the results of these visualizations. At this stage it seems obvious to assign the machine drawing to the reductive realm and the hand drawn to the more complex and associative one. Artificial intelligence and machine learning are producing a wealth of projects, and we will pick a couple of case studies to speak to this particular visual material that derives from algorithmic processing. For instance, the (IBM AI) Watson-composed film trailer for Morgan will be dissected for its apparent allure and glitch-free appearance; in speaking of ‘glitch-free’ we also indicate that we are left with a movie trailer that is almost too perfect, a little too obvious. We miss the surprises and mistakes that come naturally in handmade materials – exploring, then, what it means to draw and to work within classification systems in an algorithm-leaning world.

It will be interesting to speak to the pleasure this project provides us with. While this work feels related to modernist work like that of Sol LeWitt or other conceptual artists using rule sets and instructions, we are in fact aiming to place imagination back into the digital landscape of Google lists and the particular, often hidden, choices that are made for us. We are indeed hacking the list of words that Google researchers find worth drawing. The database of training words is an odd choice of words to begin with: general, non-specific, non-inclusive, averaged-out nouns. Is a new taxonomy being formed? We are left with a seemingly arbitrary choice of words by researchers, shaped by their social and cultural backgrounds. Yet even while clearly trying to be inclusive – by living on the global Google platform – something is missing.

Our research paper will compare the GoogleDraw project with a printed library of symbols, in particular The Book of Symbols: Reflections on Archetypal Images. This archive for research in archetypal symbolism has successfully compiled an impossible list of images, broken down into five sections: creation and cosmos, plant world, animal world, human world and spirit world. The complexity of this undertaking, and its choice of depiction, resulted in an inclusive, yet distinct and personal, accumulation of images. In the editors’ words, ‘It is an evocation of the image as a threshold leading to new dimensions of meaning. Symbolic images are more than data; they are vital seeds, living carriers of possibility.’ And Paul Klee said it succinctly as well: “Art doesn’t reproduce the visible. It renders visible.” (Ronnberg 2010, p. 6)

We hope that this drawing project will add complexity back into the reductive training set of the GoogleDraw project – after all, this project is set up to teach a computer to draw and recognize drawn symbols – but how to draw is at the same time simple and complex.

Returning to the list of Google words, we noted that they were mainly nouns. Our first approach to this list was to deny the nouns and render them into active verbs, to depict action rather than stillness. In sum, we discovered some crucial gaps in the AI drawing system: its point is to reduce the gap in understanding what a small drawing means (car, camouflage, cup), while we hope to introduce gaps BACK into the readings/understandings of the images. And this is where humans may find pleasure (we imagine our brain synapses firing in excitement), whereas when we train the AI algorithm this space of poetry (the gap) is calcified, or lost.

When we read the words as single entities, there is no ‘poetic’ resonance. It is our act of creative classification, our simple groupings, and then making hand drawings that allows and welcomes these gaps – and the potential for poetry.

Ivan Yamshchikov and Alexey Tikhonov. Post-imitational intelligence.

Abstract: In his groundbreaking philosophical essay “Computing Machinery and Intelligence”, Alan Turing developed the concept of an Imitation Game, which became a cornerstone of how computer scientists, as well as broader audiences, think about AI. However, the “fake it till you make it” approach, while extremely useful in the context of a specific application, is far from optimal in a more general context (say, SETI or general AI).

The biggest challenge of the Turing test as an umbrella approach to the understanding of intelligence is that it only allows one to estimate a kind of proximity between the agents involved in the test, and assesses this proximity through a by-design anthropocentric perspective. This anthropocentric approach towards general intelligence is extremely instrumental but has one predominant limitation: if some other form of intelligence is indeed possible, we would acknowledge it as such only if it were developed enough to conceptualize our anthropocentric approach and therefore play along with the imitation guidelines.

We suggest looking at the imitation paradigm from a non-anthropocentric context and propose a possible approach that might allow us to assess intelligence in a broader, multi-scale perspective. We discuss the limitations of the Imitation Game as a paradigm for general AI, and possible paradigm shifts that would allow intelligence to be estimated in a broader and more diverse context. An iterative game-like procedure between two different types of intelligent agents would allow a mutual and reciprocal understanding of intelligence to be established. We further suggest discussing the possible limitations and advantages of such a new approach, especially as a key concept for general AI.

We illustrate the discussed ideas with our own research in the areas of poetry and music generation with artificial neural networks, backing up the theoretical ideas about the understanding of the concept of intelligence with actual empirical results in the area of non-general creative AI.

Alexey Tikhonov bio: Alexey has over 17 years’ experience in web development and has worked at Yandex as a data analyst since 2010. He created his first natural-language dialogue bot in 2001 and has been interested in interactive text generation since the mid-1990s.

Kasperi Mäki-Reinikka. Cave paintings for the AI – Art in the age of Singularity

Abstract:
In my presentation I discuss the possible futures of art and aesthetic experience in the age of inhuman agency, machine learning and artificial intelligence. I approach my subject from the point of view of artistic research, asking how the dawning Singularity will experience art, and how artists could take this into account while making art today. I am not focusing on art made by the machine but rather on art made for the machine. I discuss the transhuman condition through the writings of David Roden, the prophecies of Ray Kurzweil and installations by the Brains on Art collective. The artistic component of the presentation arises from the work made by my interdisciplinary art collective, Brains on Art. Our new installations suggest a way to approach the question at hand and illuminate the sensory experience, dullness, and the (un-)consciousness of a machine spectator. The transhuman condition is taken here as a speculative thought experiment, igniting an artistic research process in order to answer questions on the ability (or inability) of a machine to have an aesthetic experience. In this presentation such contradictions are savored as artistic possibilities in order to understand the ontology of the machine.

Bio:
Kasperi Mäki-Reinikka is a new media artist and a doctoral candidate at Aalto University School of Arts, Design and Architecture, Helsinki. Interested in bridging the gap between art and science, he founded the interdisciplinary Brains on Art collective in 2010. The collective includes practitioners of art, cognitive science, and bionics. Brains on Art appropriates methods used in science and technology and applies them in artistic practice. The project strives for dialogue between fields in order to find new ways of collaboration. The background of the artworks is formed by shared artistic reflection. Mäki-Reinikka’s doctoral research delves into the intersections of art, science and technology in interdisciplinary artistic practice and in transdisciplinary higher education. The goal of the research is to formulate and put to the test methodological approaches to interdisciplinary collaboration between art and science. Mäki-Reinikka is also an artist member of the advisory group for the Finnish Center for Artificial Intelligence.
www.brainsonart.com

Patricia De Vries. Bringing the Dark Side of Algorithmic Finance to Light

Abstract: This paper starts off with the observation that there is anxiety (Kierkegaard) about the alleged ubiquity of high-frequency trading (HFT) algorithms, which has captured the attention and imagination of artists and critics alike.

Remember the trading floor of the New York and other major stock exchanges of the 1990s? Allegedly, these floors now look little like the days of yore. Gone are the noise and smell of traders working on behalf of large investors. Gone is the image of Wall Street teeming with market makers: mainly rowdy men dressed in suits, with the occasional color-coded overcoat, milling around stock booths, tensely looking at competitors and at screens with graphs and numbers on them, while shouting into telephones, gesticulating and making hand signs. Robots took their jobs. Or rather, today, high-frequency trading algorithms did. Bots automated the market and sped up the warfare of ordering and closing deals from minutes, to seconds, to milliseconds, down to microseconds. Scott Patterson observes, “[w]ith electronic trading, a placeless, faceless, postmodern cybermarket in which computers communicated at warp-speeds, that physical sense of the market’s flow had vanished. The market gained new eyes—electronic eyes” (Patterson 2012, p. 7).

“What becomes of economic sociology when markets and most participants in them are computer algorithms?”, Donald MacKenzie has asked (MacKenzie 2014, p. 2). What to make of trading algorithms when they operate at a speed that makes them unobservable to humans? How are these invisible and unobservable bots visualized and critiqued? And what makes this type of algorithm the addressee of anxiety?

Some artists seek to materialize the alleged immateriality of HFT algorithms and their infrastructures by way of visualizations. Rich in imagery, such works invoke unthinkable amounts of money floating in the form of bits, numbers and graphics through a virtual, intergalactic space. Some mock the trust in algorithmic computing; others soak it up. Still others foreground the mystical, magical, unknowable qualities of financial technologies, giving way to the assumption that algorithmic trading exists outside of human control.

An algorithm is plural, one could say in the spirit of Deleuze. Tacitly but poignantly, artists produce subversive inflections on what triggers anxiety about high-frequency trading bots: through intervention and mimicry, through tinkering and reverse engineering, through objectification and visualization. The works under consideration each constitute an artistic approach to the undercurrents of algorithmic trading on the one hand and provide an alternative understanding of algorithmic trading on the other, which in turn results in different reconceptualizations of our entanglement with algorithms.

Underlying aspects of HFT are brought to the surface in these artworks in a trickster-like, playful yet political, and visceral manner. In these works a critical perception of algorithmic trading emerges that opens up space for the at times weird histories — experiments with dart-throwing chimpanzees — and little-acknowledged aspects of algorithmic culture to come to light, thereby interrogating and moving beyond dominant and oftentimes dualist understandings of algorithmic culture.

Konrad Wojnowski. Machine dreams and the probabilistic regime of art

Abstract: Neural networks – created and used mainly for big data analysis – are currently taking the Internet by storm. They are used for probabilistic computation: by insurers for client profiling, by tech companies for image recognition, or in translation software. But as artificial neural networks (ANNs) become more and more refined, programmers and artists are finding new applications for them: not only as tools for pattern recognition, but also for creating new content. The latest experimental software based on neural algorithms (like Google’s Deep Dream) can analyze images and, by reversing this process, automatically generate works of art.
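The ‘reversal’ mentioned above can be caricatured in a few lines: rather than adjusting a network’s weights so it recognises an image, one adjusts the image itself so it maximally excites a learned feature detector. The following is a minimal sketch of that idea only, using a toy random filter in place of a trained network; all names, sizes and values are illustrative assumptions, not Deep Dream’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
filt = rng.standard_normal((8, 8))   # stand-in for one learned feature detector
image = np.zeros((8, 8))             # start from a blank canvas

def activation(img):
    """Toy 'neuron': how strongly the image excites the filter."""
    return float((img * filt).sum())

# Gradient ascent on the *image*, not the weights. For this linear
# activation, the gradient with respect to the image is the filter itself.
step = 0.1
for _ in range(100):
    image += step * filt

# The optimised image now excites the filter far more than the blank canvas did.
```

In a real system the same loop runs through a deep network with backpropagation computing the image gradient, which is what produces Deep Dream’s characteristic hallucinated textures.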

The possibility of creating images beyond human control and programming poses a serious challenge for our aesthetic categories. In my presentation I would like to reflect on this challenge by critically re-reading Jacques Rancière’s distinction of “regimes of art” from the point of view of “probabilistic computation” in art.

Konrad Wojnowski works as an assistant at the Jagiellonian University (Performativity Studies Department). He has written two books (in Polish): “Aesthetics of Disturbance” and “Productive Catastrophes”. The first is devoted to different strategies of disturbance in the cinema of Michael Haneke; the second deals with the concept of catastrophe in the context of contemporary technoculture. His research interests span theories of performativity, philosophy of communication, and various intersections between culture, science, and technology. Currently he is leading a research grant on the impact of probability theory on avant-garde art and science-fiction literature in the 20th and 21st centuries.

Renée Ridgway. Data Visualisations as Transcription of the Machinic

Abstract: In an era of ‘big data’, companies (and governments) exert control on society by amassing large amounts of data about users, yet when analysing this data, visualisations are often necessary in order to correlate it. Drawings, in the sense of making diagrams, have nowadays been replaced by ‘mindmapping’ software, network analysis, infographics and screenshots – though these do not directly address causality nor fully express the data. In his text Are Some Things Unrepresentable?, Alexander Galloway questions ‘information aesthetics’ in regard to the dilemma surrounding the notion of ‘unrepresentability’.1 According to Galloway, information aesthetics has not been able to represent all that needs to be imaged and, referencing Gilles Deleuze, “adequate visualizations of control society have not happened. Representation has not happened. At least not yet” (Galloway 2011: 95).

My PhD, entitled Re:search – the Personalised Subject vs. the Anonymous User, looks at the technological and conceptual implications of ‘search’. It addresses how the scope of knowledge is limited by the ‘filter bubble’ of Google’s corporate ‘personalisation’ compared to diverse approaches of querying with Tor, whether one can be anonymous online, and what kind of divergent search results are returned to the user. In order to test this hypothesis I designed a speculative artistic research method in which I explored the hidden workings of the black box and this machinic ‘unrepresentability’. I conducted a series of empirical ‘qualitative interviews with algorithms’ by compiling ‘small’ sets of data and producing a series of data visualisations. Notes from my fieldwork – what used to be transcription in conventional qualitative analysis – instead become data visualisations in the era of digital research methods.

Specifically, this experiment attempted to ‘decloak’ some of the mystery enshrouding Google’s proprietary ranking algorithm (PageRank), which has now become ‘machine learning’: RankBrain disrupts the human ontologies and taxonomies of ‘keywords’ that have previously structured search results. Yet the techno-ecologies of algorithms are not oracles; they have bias, their assumed autonomy requires interpretation, and they raise questions regarding the politics of human interaction and agency. Whether my data visualisations are adequate representations, or whether they answer Galloway’s call for ‘a poetics as such for this mysterious new machinic space’, remains to be seen.
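For readers unfamiliar with the pre-RankBrain baseline being ‘decloaked’ here: PageRank in its classic, published form is a damped link-counting iteration over the web graph. The sketch below shows that textbook form only, on an invented four-page toy web, and says nothing about Google’s production system.

```python
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}  # page -> pages it links to
n, d = 4, 0.85                               # page count, damping factor

rank = np.full(n, 1.0 / n)                   # start with uniform rank
for _ in range(100):
    new = np.full(n, (1 - d) / n)            # baseline: random jump to any page
    for page, outs in links.items():
        for target in outs:                  # each page shares its rank equally
            new[target] += d * rank[page] / len(outs)
    rank = new

# Page 2, linked to by three of the four pages, ends up ranked highest.
```

The ranks always sum to one (they form a probability distribution over pages), which is one reason the algorithm is often described as modelling a ‘random surfer’.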

My PowerPoint presentation for the EVA conference will show the process and results of the (search) returns of the machine. Re:search – Terms of Art was exhibited at Hacking Habitat in the former Wolvenplein Prison, Utrecht, NL, in 2016. The data visualisations and interactive touch screen show the value (ranking and unique returns) of keywords in contemporary art, measured through the lens of personalised (left column) and anonymised (right column) search results (URLs).2 The research was conducted on two computers: one using Google Search in a Firefox browser on a completely ‘personalised’ Apple; the other a hacker-approved ‘clean’ Lenovo PC with a Debian operating system running the Tor (The Onion Router) browser.

1. “The point of unrepresentability is the point of power. And the point of power today is not the image. The point of power today resides in networks, computers, information, and data” (Galloway 2011: 95).
2. Accelerationism, Aesthetic Turn, Anthropocene, Artistic Research, Contemporaneity, Creative Industries, Cultural Entrepreneurship, New Aesthetic, Object Oriented Ontology, Performativity, Post Digital, Post Humanism, Post Internet, Post Media, Transmedia

Kathrine Elizabeth L. Johansson. Discrepancies between Art Theory and Art Practice: Approaching interactive art through third order cybersemiotics

Abstract: Some newer interactive, computer-based artworks offer more than the possibility of immediate phenomenological experience followed by various degrees of individual user contemplation. It is my thesis that artworks can function as genuine research in an exemplary manner. However, to understand art as research, it is necessary to alter one’s idea of what knowledge is. This paper presents a hermeneutical-semiotic approach to two neuroscientifically inspired sound artworks, Closer (in progress) and Ghost (premiered at ISEA, Istanbul, September 2011) by Jane Grant, Matt Wade, and John Matthias. This theoretical approach will allow a wider fulfillment of the communicational potential and range of the works, and of their potential place in current knowledge cultures.

Closer presents the use of mobile phones and iPhone applications, based on artificial intelligence software, as representations of “neurons” and neuronal behaviour. As performers and game players move closer, the mobile phones generate “fields” of proximity, which allow “synapses” between phones to form. Information is sent from one mobile phone to another like neuronal “spikes”, based on the concept of chemical action potentials at the level of single cells, known from empirical neuroscience. As we see, the artists have extrapolated the meaning of scientific terms into artistic, functional and relational practice, based, however, on macro-level social media rather than the natural communications of the micro-level (wet) human brain. From there, Closer and similar works break boundaries and suggest new narratives of both neurons and human-technology interactions. In that sense, it is fair to say that these works form new ontologies in their very making (artist concept, artwork and user form a continuity).

From the point of view of the Cybersemiotic Star, derived from Søren Brier’s third order, cybersemiotic perspective (Brier, 2008), I will present the potential of hermeneutical-semiotic interpretation as an approach to the current art scene. Although it is at this very level that discrepancies of interest between art philosophy and art practice could occur, I find that art and art theory (the third order observer) will, in the long run, be able to mutually inspire and inform each other at new (third order) levels at a fast pace, which is what makes the influence more visible, evident and therefore useful in knowledge generation today.

Ivan Yamshchikov and Alexey Tikhonov. I feel you. What makes algorithmic experience personal?

Abstract: Communication is a very diverse form of human-to-human interaction. Communication does not have to be verbal; it might be tactile or visual, and in fact one can see communication between the author and her audience in any form of art. Yet whenever we talk about human-to-machine interactions, we rarely regard them as communicational experiences. The fact that in the context of human-to-machine interactions one talks in terms of user experience or interface rather than communication has several historical reasons. A primary one is that predictability in a narrow context was for many years (and still is) the focus of human-machine interfaces, whereas communication by design incorporates discovery and context shifts as integral parts of the whole process.

In this submission we present case studies of two of our projects on the verge of art and computer science: Neurona – a mini-album with lyrics generated by a neural network in the style of Kurt Cobain (the project took part in the NIPS workshop on the creative potential of AI, see nips4creativity.com/music/) – and a Skryabin-stylized AI-generated piece performed live in Moscow in summer 2017 (see https://youtu.be/5bfI3bhiRa4?t=1m13s).

Since artificial neural networks excel at the stylization of visual, textual and acoustic objects, we propose to discuss the potential of such technologies in the context of human-to-machine interactions in general and art specifically.


Helena Barranha and Marco Gomes. Pictured Machines, Digitised Artworks, Algorithmic Narratives

Abstract: Digital Art History has recently emerged as a new research field. Although its recognition as an autonomous discipline is controversial, it seems undeniable that the increasing digitisation of cultural heritage has paved the way for innovative curatorial practices. In fact, traditional methods for the description and classification of artworks are gradually being replaced by computer-aided processes, pointing to a new paradigm: Artificial Intelligence.
The continuous growth of computer processing power and the development of Convolutional Neural Networks have led to breakthroughs in computer vision and object recognition. These technologies have “contaminated” every vision-related subject, including digital and digitised art, posing questions that go far beyond the practical demands of conservation and cataloguing. By enabling unconventional search criteria as an alternative to the usual categories (author, period, medium, style), art information systems inspire new relations and interpretations.
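As a hedged illustration of what such "unconventional search criteria" might look like, the sketch below filters a toy collection by vision-derived tags rather than by author or period. The records, fields and tags are invented for the example and do not reflect Europeana's or Google Arts and Culture's actual schemas:

```python
# Hypothetical catalogue records; the "tags" field stands in for labels
# that an image classifier might attach to each digitised work.
artworks = [
    {"title": "Composition VIII", "author": "Kandinsky", "year": 1923,
     "tags": {"abstraction", "geometry"}},
    {"title": "Dynamism of a Dog on a Leash", "author": "Balla", "year": 1912,
     "tags": {"motion", "machine"}},
    {"title": "The Large Glass", "author": "Duchamp", "year": 1923,
     "tags": {"machine", "glass"}},
]

def search_by_tag(records, tag):
    """Select works by a vision-derived tag instead of author/period/medium/style."""
    return [r["title"] for r in records if tag in r["tags"]]

print(search_by_tag(artworks, "machine"))
```

The "algorithmic selection of artworks associated with the keyword 'machine'" discussed in the paper would be a query of this shape, run over tags produced by a trained network rather than hand-entered metadata.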
Focusing on Europeana and Google Arts and Culture, our paper will discuss the relevance of the machine as a subject-matter in Art History, addressing the following questions: Is it possible to build up a consistent narrative based on an algorithmic selection of artworks associated with the keyword “machine”? And what do these digital images reveal about the artists’ utopian or dystopian visions?

BIOS:
Helena Barranha graduated in Architecture, holds a Master’s Degree in Management of Cultural Heritage and a PhD in Architecture (Universidade do Porto, 2008). She is an Assistant Professor at Instituto Superior Técnico – Universidade de Lisboa, and Researcher at IHA – Institute of Art History, Faculty of Social Sciences and Humanities, Universidade NOVA de Lisboa.
She was Director of the National Museum of Contemporary Art – Museu do Chiado, in Lisbon (2009-2012), and coordinator of the unplace project – A museum without a place (http://unplace.org/), between 2014 and 2015.
Her professional and research activities focus on cultural heritage, museums and digital art, and she has published widely on these topics both in Portugal and abroad.

Marco Gomes works for the Portuguese Government as a Data Architect in GEE – Gabinete de Estratégia e Estudos [Department of Strategy and Economic Research] at the Ministry of Economy. Before this role he was a Lead Developer at the Ministry of Health (2000-2005) and Project Manager at the Ministry of Finance (2005-2010). Marco graduated in Statistics (NOVA Information Management School, 2005) and holds a Master’s Degree in Information Management with specialisation in Business Intelligence (NOVA Information Management School, 2010). Later, in 2012, he finished post-graduate studies in Cryptography at the Faculty of Sciences and Technology (FCT NOVA). Currently, and as an independent researcher, he explores several interests in the field of Artificial Intelligence.

Catherine Bernard. Flesh Machine, Vision Machine, War Machine: Transcending the physical?

Abstract: The obsolescence of the physical experience is one theme in current artistic practices that underscores issues of body and machine interface, examines the boundaries limiting organic bodies and exposes the consequences of the new technological paradigm.
This paper proposes to look at the work of a few artists whose work is located beyond human and non-human agency and who look critically at the human/machine interface.

Eduardo Kac uses genetic materials to create hybrid organisms, and his work has brought attention to the possible alterations of the physical and the enhancement of the capacities of the organic through technological transmutation. Neil Harbisson explores cyborg identity by fusing his body with a machine that allows him to transform sensorial perception through the translation of sounds and colors. His body thus becomes the site of an experimental flesh machine. These examples point to the body as a site of experimentation with links to a techno-ontology fusing the organic and the inorganic.

Twenty years ago, The Flesh Machine by the Critical Art Ensemble looked at the relation between technology and manipulations of the body and exposed the manifestation of a new eugenic consciousness. The predictions of the activist group have been exceeded, and the traditional line between engineering and medical science erased. This field of experimentation is also very much tied to corporate biotech and the normalization of the body to maintain it at maximum social functionality. Feminist theorist Faith Wilding has written a scathing analysis of the transformations of the female body in particular, colonized as a laboratory for a lucrative medical/pharmaceutical industry.

In parallel to the flesh machine, the monitoring of the physical environment by the vision machine has become the ubiquitous presence that accompanies our quotidian gestures. Computer keyboards, video cameras and other devices relay signals to large monitoring entities that probe into every recess of human activity. This surveillance extends its reach into outer space, as suggested by a recent acknowledgment by the Pentagon of decades of secret intelligence and information gathering related to extraterrestrial life.
The work of Trevor Paglen reveals the mise en scène and mise en boîte of information in systems of surveillance and monitoring entrenched in the social body. His work looks at the architecture of state surveillance and its strategy of invisibility.

In his recent film Shadow World, Johan Grimonprez highlights the growth of distance wars. In the film, investigative journalist Jeremy Scahill underlines the growing use of robotics in the military. Drones are operated from control centers thousands of miles away from their targets by soldiers who drive home after their shift. Recent studies have shown that distance killing creates higher levels of combat stress and PTSD for these operators than for military personnel physically deployed on the ground.

Transcending the physical is thus not without significant consequences, and the use of new technologies implies perhaps more than can be fathomed. These implications need to be critically appraised, for our expanded connectivity creates forms of disembodiment that are worth assessing as we approach, in the words of Baudrillard, “the vanishing point of communication”.

Dew Harrison. Duchamp and Dialogue

Abstract: As an artist and researcher, the core of my thinking and practice concerns the parallels apparent between Conceptual Art and Hypermedia technology, where both enable the semantic associations between thoughts and ideas to interlink into a holistic complex concept or statement. My work takes the form of digital multimedia explorations which have largely focused on unraveling the purposefully obtuse work of Marcel Duchamp, the instigator of Conceptual Art. Recent pieces have involved animating digitised images of Duchampian objects by attaching ‘flocking’ behaviors (algorithms) to them as data items and allowing them to herd into appropriate families of similar personalities – the families of his Large Glass being Bride, Bachelor, and Horizon.

The data items currently consist of images of his works, notes, objects, photos, prints, musical notations, etc., but his texts and pieces have also generated a galaxy of interpretations and art theories which have not yet been included in the ‘flocking’ experiments. With a respectful nod to the mappings of the dialogues of the Art & Language group, I wonder whether, by endowing text strands taken from varied Duchampian understandings with flocking behaviors, we might begin to witness the shifts and shaping of his thinking evolving before our eyes. Further familial text strands could then be added from aspects of contemporary art theory to enrich and augment our understanding of current art works.
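The herding of data items into families can be sketched with just the cohesion rule of a boids-style flocking algorithm. This minimal stand-in is not the behaviour set used in the actual pieces; only the family names are borrowed from the Large Glass, and positions and parameters are invented for the example:

```python
import math
import random

class Item:
    """A data item (e.g. a digitised Duchampian object) tagged with a family."""
    def __init__(self, family, rng):
        self.family = family
        self.x, self.y = rng.uniform(0, 100), rng.uniform(0, 100)

def step(items, pull=0.1):
    """Cohesion rule: move each item a fraction of the way toward its family's centroid."""
    for fam in {it.family for it in items}:
        members = [it for it in items if it.family == fam]
        cx = sum(it.x for it in members) / len(members)
        cy = sum(it.y for it in members) / len(members)
        for it in members:
            it.x += pull * (cx - it.x)
            it.y += pull * (cy - it.y)

def spread(items, family):
    """Mean distance of a family's members from their centroid."""
    members = [it for it in items if it.family == family]
    cx = sum(it.x for it in members) / len(members)
    cy = sum(it.y for it in members) / len(members)
    return sum(math.hypot(it.x - cx, it.y - cy) for it in members) / len(members)

rng = random.Random(42)
items = [Item(f, rng) for f in ("Bride", "Bachelor", "Horizon") for _ in range(10)]
before = spread(items, "Bride")
for _ in range(30):
    step(items)
after = spread(items, "Bride")
# each family contracts toward its own centroid, i.e. the items "herd"
```

A full boids model adds separation and alignment rules, which keep the herds moving and loosely packed rather than collapsing to a point; cohesion alone is enough to show how similar items find their families.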