Singular Value Decomposition – by The Arts at CERN-IARI Collaboration & Augmentation and Amplification – by Janet Biggs, with Mary Esther Carter


Date: November 12, 2021


Works Presented

1) Singular Value Decomposition by The Arts at CERN-IARI Collaboration, 2021

2) Janet Biggs, with Mary Esther Carter, Augmentation and Amplification, 2020, performance

 

Presentation Text

Singular Value Decomposition

In February of 2020, Joey Orr came to visit my studio. Joey, the Andrew W. Mellon Curator for Research at the Spencer Museum of Art at the University of Kansas, directs the Museum’s Integrated Arts Research Initiative (IARI). He asked if I would be interested in collaborating on a research project with IARI and CERN, the European Organization for Nuclear Research. The project would include a research/residency trip to CERN and a working dialogue with high-energy nuclear physicist and KU professor of physics Daniel Tapia Takaki and KU professor of mathematics Agnieszka Miedlar. The project would culminate in a solo exhibition for the museum in 2021-2022.

 

On March 13, 2020, the Covid-19 pandemic was declared a national emergency in the United States of America. All travel was halted and we were asked to shelter in our homes. On the first of May, I was introduced to the project’s collaborators over Zoom. The trip to CERN was cancelled, as was a trip to the University of Kansas, but Daniel, Agnieszka, Joey, and I continued to meet virtually for two hours every week for the next year. Our team grew to include two research fellows from the University of Kansas: philosophy graduate student Clint Hurshman and mathematics and dance student Olivia Johnson. These weekly meetings challenged, inspired, and sustained us. We became friends. The collaborators became a collective, determined that our work together be generative for each of us and have substance within our own fields. While unable to meet in person, we have co-written papers, presented talks about our work-in-progress, and presented a live-streamed performance titled “Singular Value Decomposition.”

 

We ran the live-streamed performance as two experiments, both using the Singular Value Decomposition (SVD), a mathematical technique broadly used in the sciences and particularly in quantum mechanics. It decomposes a matrix into a sum of rank-one terms, allowing the original data to be compressed into a lower-dimensional representation. This technique can expose useful and interesting properties of the original data.
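As a minimal illustration of the technique itself (a sketch, not the code used in the performance), NumPy’s `svd` can decompose a data matrix and truncate it to a low-rank approximation:

```python
import numpy as np

# A small stand-in data matrix (6x4, random for illustration)
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))

# Full SVD: A = U @ diag(s) @ Vt, with singular values s in descending order
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rank-k approximation: keep only the k largest singular values,
# compressing the data into a lower-dimensional representation
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# By the Eckart-Young theorem this truncation is optimal: the
# approximation error (spectral norm) equals the first discarded
# singular value
err = np.linalg.norm(A - A_k, ord=2)
print(np.isclose(err, s[k]))  # True
```

Keeping only the dominant singular values in this way is the standard route to exposing a dataset’s strongest underlying patterns.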

 

Augmentation and Amplification

Operating within NYC’s shelter-in-place guidelines, only one person – the performer – is present. Equipped with multiple microphones and cameras (mixed remotely), the space itself becomes a generative agent for the performer.

“Augmentation and Amplification” was presented as a live-streamed project performed during the global pandemic, on July 30, 2020. The performance continued my ongoing investigation into creative collaborations between humans and technology, fusing neurodiversity, inclusive creative expression, and adaptability within isolation and confinement.

Vocalist/dancer Mary Esther Carter was alone in the space. All other technology, including multiple microphones and cameras, was run and mixed remotely, allowing the performance to operate in accordance with NYC’s shelter-in-place guidelines. Her vocal partner, an autonomous A.I. entity, existed only online. The live-streamed performance interwove live action in the gallery, translucent video overlays, and opaque video imagery with a combination of live, pre-recorded, and tech-generated audio.

The work opened with images of dramatic volcanic landscapes as a narrator described global seismic shifts. The video then dissolved to reveal Mary, in real time, pacing a small, empty space. Mary dropped to the ground and performed an anxiety-producing series of movements. The percussive soundtrack for Mary’s dance sequence was generated by electroencephalogram (EEG) sonification, the process of turning data into sound. Composer and music technologist Richard Savery used his own brainwaves to control a drum set, producing brainwave beats.
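At its simplest, sonification maps a stream of data onto audible parameters. The toy sketch below (purely illustrative; it is not Savery’s EEG-to-percussion system) maps each sample of a signal to the pitch of a short sine tone:

```python
import numpy as np

def sonify(signal, f_lo=220.0, f_hi=880.0, sr=8000, note_dur=0.25):
    """Toy sonification: one short sine tone per data sample.

    Values are normalized to [0, 1] and mapped linearly onto a
    frequency range, so the shape of the data becomes a melody.
    """
    x = np.asarray(signal, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # normalize to [0, 1]
    freqs = f_lo + x * (f_hi - f_lo)                 # data value -> pitch
    t = np.linspace(0, note_dur, int(sr * note_dur), endpoint=False)
    tones = [np.sin(2 * np.pi * f * t) for f in freqs]
    return np.concatenate(tones)

# A mock "brainwave" trace produces a rising-and-falling sequence of tones
audio = sonify(np.sin(np.linspace(0, 2 * np.pi, 16)))
```

Any time-varying measurement, EEG amplitudes included, can be fed through such a mapping; richer systems trigger percussion or synthesis events instead of raw tones.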

Gasping for breath and exhausted from the Sisyphean dance movement, Mary heard a disembodied voice in the space. Answering the voice, Mary and “A.I. Anne” began an improvised duet of vocalizations.

A.I. Anne is a machine-learning entity created by Richard Savery. Trained on Mary’s voice, A.I. Anne is named and patterned after my aunt, who was severely autistic and nonverbal due to apraxia. The “human” Anne could emotively hum but was never able to speak. The virtual A.I. Anne has the ability to vocalize, but not to create language. Using deep learning combined with semantic knowledge, A.I. Anne can generate audible emotions and respond to emotions.

The performance ends with video footage of my aunt Anne rocking in a chair. The narrator describes her end of life, as she was taken off a ventilator, and speaks of the profound effect Anne had on the world and those who knew her.

 

Copyright & Courtesy

© Janet Biggs

Courtesy of the artist and Analix Forever
