BRIAN ENO'S THE SHIP A GENERATIVE FILM

Title: BRIAN ENO'S THE SHIP A GENERATIVE FILM
Brand: BEATINK INC / OPAL LIMITED
Product / Service: THE SHIP
Category: E03. Innovative Use of Technology
Entrant: DENTSU INC. Tokyo, JAPAN
Idea Creation: DENTSU INC. Tokyo, JAPAN
Production: QOSMO Tokyo, JAPAN
Additional Company: BACKSPACETOKYO Tokyo, JAPAN
Additional Company 2: MOUNT POSITION INC. Tokyo, JAPAN
Additional Company 3: INS STUDIO Tokyo, JAPAN
Additional Company 4: SUPERPOSITION INC. Tokyo, JAPAN
Additional Company 5: DENTSU CREATIVE X INC. Tokyo, JAPAN

Credits

Name Company Position
Kaoru Sugano (Dentsu Lab Tokyo) DENTSU INC. Creative Director / Creative Technologist
Togo Kida (Dentsu Lab Tokyo) DENTSU INC. Creative Technologist
Yuri Uenishi DENTSU INC. Art Director
Nao Tokui Qosmo, inc. Machine Learning / Technical Direction
Satoru Higa backspacetokyo Programming / Technical Direction
Hikaru Ikeuchi (Dentsu Lab Tokyo) DENTSU INC. Producer
Kohei Ai (Dentsu Lab Tokyo) DENTSU INC. Producer
Akiyo Ogawa (Dentsu Lab Tokyo) DENTSU INC. Producer
Jun Kato (Dentsu Lab Tokyo) DENTSU INC. Producer
Hajime Sasaki Mount Position inc. Server Side
Koji Otsuka Mount Position inc. Server Side
Shunsuke Shiino Mount Position inc. Server Side
Baku Hashimoto INS Studio Motion Designer
Junya Kojima Superposition, inc. Front-End Engineer

The Campaign

Brian Eno’s music questions what humankind is. In response, we explored whether AI can be creative the way humans are. The project collected images of historical moments to build an AI with memories reaching back to the 20th century, and the AI also reads images from today’s news. The film changes constantly based on input from the world of today. By replicating the process of generation, the project expresses the theme of the music visually, and stands as an experimental work whose interpretation is left to the viewer.

Creative Execution

To support the concept that the collected photos represent the photographic memory of humankind, a vast number of photos under Creative Commons licenses were gathered, covering both historical and natural events. To let an AI interpret these images, several convolutional neural networks were used, drawing on pre-trained image recognition models: a VGG model, which classifies images into the 1,000 ImageNet classes, and PlaceNet, which identifies specific locations. Input images were passed to these models, and the 4096-dimensional vectors taken from the layer one below the top classification layer were used as feature vectors. Combining the results of multiple neural networks makes different observations comparable, and allows the overall system to present intriguing results that read as misinterpretations of an image, or as linking it to contexts in ways a human would not. The project was launched in September 2016 and has spread mainly through Brian Eno's social media channels.
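The feature-extraction step described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the production code: the Keras VGG16 model and its `fc2` layer name stand in for the models the team actually used, and cosine similarity is one plausible way to compare the resulting feature vectors.

```python
import numpy as np

def cosine_similarity(a, b):
    """Compare two feature vectors; 1.0 means identical direction."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_feature_extractor():
    """Return a model mapping a 224x224 RGB image to a 4096-d vector.

    Uses the penultimate fully connected layer ("fc2") of VGG16,
    one layer below the 1000-class softmax output, matching the
    4096-dimensional features described in the text.
    Requires TensorFlow; ImageNet weights download on first use.
    """
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.models import Model
    base = VGG16(weights="imagenet")  # trained on ImageNet's 1000 classes
    return Model(inputs=base.input, outputs=base.get_layer("fc2").output)
```

Features from several such models (for example, a places-recognition network alongside an object classifier) could then be concatenated or compared pairwise with `cosine_similarity`, which is one way the "misreadings" and unexpected context links described above could surface.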

Links

Website URL