
Envisage 

 

Envisage is software that uses a chain of machine learning models to turn descriptive text into video and sound. The first prototype focuses on fiction books: uploading a photo of the blurb on the back of a book produces a short trailer for that book. This way a reader can get a sneak peek into the story.
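The chain described above can be sketched roughly as: read the back-cover text, split it into scene prompts, and feed each prompt to generative video and sound models. The function names below are hypothetical placeholders for illustration, not Envisage's actual implementation, and the generative steps are stubbed out.

```python
# Sketch of a text-to-trailer model chain; all names are hypothetical.

def extract_scenes(blurb: str) -> list[str]:
    """Split a back-cover blurb into short visual prompts (placeholder logic).
    A real pipeline would first run OCR on the photographed cover text."""
    return [s.strip() for s in blurb.split(".") if s.strip()]

def generate_trailer(blurb: str) -> dict:
    """Chain: blurb -> per-scene prompts -> (stubbed) video clips and sound."""
    prompts = extract_scenes(blurb)
    # In a real chain, each prompt would condition a text-to-video model,
    # and the overall mood of the blurb would drive a sound-generation model.
    video_clips = [f"video({p})" for p in prompts]
    soundtrack = f"sound(mood derived from {len(prompts)} scenes)"
    return {"clips": video_clips, "audio": soundtrack}

trailer = generate_trailer("A hidden cave. An amethyst glows. A town awakens.")
print(len(trailer["clips"]))  # 3 scene clips
```

The point of the sketch is the chaining itself: each model's output becomes the next model's input, so the trailer stays grounded in the book's own descriptive text.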

 

This project started with the challenge: "How can artificial intelligence enrich the library experience of the future?" After initial interviews, the research turned towards fiction books, guided by the question "How do people choose their next book to read?" They say "don't judge a book by its cover", but it turns out that for 90% of readers the cover is a really important factor, and for most books it is the only visual you get.

Movies based on books, however, are often not in line with what readers imagine while reading. This can be frustrating for some readers. So how do you visualize a book without spoiling too much and without taking away from the reader's imagination? What someone imagines while reading is of course different for every person. This is why the project was constantly approached from a human-centered perspective and developed in collaboration with many participants. Co-creation sessions and even a temporary book club were set up to start conversations that led to very interesting results.

 

Beyond book trailers, visualising descriptive text as video with AI has much broader potential. In the future it could visualise our dreams or journals, support people who cannot read, or help us better understand complicated medical texts, for example.

The final exhibition will showcase a selection of AI-generated book trailers in an interactive installation.