Product / Service: THE SHIP
Category: A07. Use of Music Technology or Innovation
Idea Creation: DENTSU INC., Tokyo, JAPAN
Production: QOSMO, Tokyo, JAPAN
Additional Company: BACKSPACETOKYO, Tokyo, JAPAN
Additional Company 2: MOUNT POSITION INC., Tokyo, JAPAN
Additional Company 3: INS STUDIO, Tokyo, JAPAN
Additional Company 4: SUPERPOSITION INC., Tokyo, JAPAN
Additional Company 5: DENTSU CREATIVE X INC., Tokyo, JAPAN


Name Company Position
Kaoru Sugano (Dentsu Lab Tokyo) DENTSU INC. Creative Director / Creative Technologist
Togo Kida (Dentsu Lab Tokyo) DENTSU INC. Creative Technologist
Yuri Uenishi DENTSU INC. Art Director
Nao Tokui Qosmo, inc. Machine Learning / Technical Direction
Satoru Higa backspacetokyo Programming / Technical Direction
Hikaru Ikeuchi (Dentsu Lab Tokyo) DENTSU INC. Producer
Kohei Ai (Dentsu Lab Tokyo) DENTSU INC. Producer
Akiyo Ogawa (Dentsu Lab Tokyo) DENTSU INC. Producer
Jun Kato (Dentsu Lab Tokyo) DENTSU INC. Producer
Hajime Sasaki Mount Position inc. Server Side
Koji Otsuka Mount Position inc. Server Side
Shunsuke Shiino Mount Position inc. Server Side
Baku Hashimoto INS Studio Motion Designer
Junya Kojima Superposition, inc. Front-End Engineer

The Campaign

Brian Eno’s music questions what humankind is. In response, we asked whether AI can achieve creativity like ours. The project collected images of historical moments and built an AI with memories spanning the 20th century onward. It also reads images from today’s news, so the film changes constantly based on what is happening in the world. By replicating a process of generation, the project expresses the theme of the music visually, and it stands as an experiment whose meaning is left for the viewer to interpret.

Creative Execution

The project was launched in September 2016 and spread mainly through Brian Eno’s various social media channels. It has been covered by nine major outlets focused on creativity and innovation, including WIRED and The Drum, and it has successfully sparked discussion among readers about creativity through artificial intelligence. On Brian Eno’s official social media, a video introducing the project garnered 450+ shares and 1,600+ reactions. Instead of merely viewing a “music video” passively, fans were able to engage actively with the concept and theme of the score, discuss the relationship between technology and humankind, and tackle big questions such as whether machine intelligence is capable of attaining human-like creativity in today’s society.

Since the launch of the project, not just Brian Eno’s followers but the general public have responded by posting comments on social media, initiating an active discussion. Given that hardly a day passes without artificial intelligence appearing in the news, the project has successfully shed light on the use and meaning of the technology, which is exactly what Brian Eno had in mind when composing "The Ship."

This project is relevant to Music Spikes because, through a collaboration with the artist, it pushes the definition of what a music video is, both in format and in structure. It is one of the first music videos to leverage neural networks, and its ever-changing generative nature makes it particularly innovative. The generated video invites viewers to question what the neural network behind it is trying to do, a question strongly related to the concept behind the score itself.

To support the concept that the collected photos represent the photographic memory of humankind, a vast number of photos under Creative Commons licenses were gathered, covering both historical and natural events. To let an AI interpret these images, several convolutional neural networks were used, drawing on pretrained image-recognition models: a VGG model, which classifies the 1,000 ImageNet classes, and PlaceNet, which identifies specific locations. Each input image was passed through these models, and the 4096-dimensional activation vector from the layer just below the top classification layer was used as its feature representation. Comparing images by combining the results of multiple neural networks lets the overall system present intriguing results that read as a misreading of an image, or as linking it to other contexts in ways a human would not.
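The comparison step described above can be sketched in a few lines: normalize each network's penultimate-layer feature vector, concatenate them, and retrieve the archive images closest to a new input by cosine similarity. The sketch below is illustrative only — the names, dimensions, and random stand-in features are assumptions, since the production pipeline's code is not public, and real features would come from the pretrained CNNs themselves.

```python
import numpy as np

# Illustrative sketch (not the production code): combine 4096-dim
# penultimate-layer features from two hypothetical pretrained CNNs
# (an object-recognition model and a place-recognition model) and
# find the archive images most similar to a new "news" image.

rng = np.random.default_rng(0)

N_ARCHIVE = 1000   # images in the "photographic memory" archive
DIM = 4096         # penultimate-layer feature dimensionality per model

# Random stand-ins for precomputed archive features (one row per image).
archive_object = rng.normal(size=(N_ARCHIVE, DIM))
archive_scene = rng.normal(size=(N_ARCHIVE, DIM))

def l2_normalize(x, axis=-1):
    # Scale vectors to unit length so dot products become cosine similarity.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def combine(obj_feat, scene_feat):
    # Concatenate both networks' normalized "readings" of an image so
    # each contributes equally to the comparison.
    return np.concatenate(
        [l2_normalize(obj_feat), l2_normalize(scene_feat)], axis=-1
    )

archive = l2_normalize(combine(archive_object, archive_scene))

def nearest_archive_images(query_object, query_scene, k=3):
    # Cosine similarity between the query and every archive image,
    # returning the indices of the k closest matches.
    q = l2_normalize(combine(query_object, query_scene))
    sims = archive @ q
    return np.argsort(-sims)[:k]

# A new input image arrives; look up the historical images it evokes.
query_obj = rng.normal(size=DIM)
query_scn = rng.normal(size=DIM)
matches = nearest_archive_images(query_obj, query_scn)
print(matches)
```

With random features the matches are of course meaningless; the point is the mechanism — because the combined vector mixes object and scene features, an image can be retrieved for either reason, which is one source of the "misreadings" the text describes.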


Website URL