Active projects and challenges as of 23.11.2024 14:19.
Back to the Greek Universe
Back to the Greek Universe is a web application that lets users explore the ancient Greek model of the universe in virtual reality and realize what detailed knowledge the Greeks had of the movements of the celestial bodies observable from the Earth's surface. The model is based on Claudius Ptolemy's work, which is characterized by its geocentric view of the universe. The movements of the celestial bodies as they appear to Earthlings are expressed as a series of superposed circular movements of varying radius and speed. The tabular values that serve as inputs to the model have been extracted from the literature.
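To make the scheme concrete, here is a minimal sketch of a position computed from two superposed circular motions, a deferent carrying an epicycle around the Earth; the function name, radii, and angular speeds are illustrative assumptions, not Ptolemy's tabular values.

```python
import math

def ptolemaic_position(t, deferent_r, deferent_w, epicycle_r, epicycle_w):
    """Apparent position at time t: a point on a small circle (epicycle)
    whose centre rides along a large circle (deferent) centred on Earth."""
    # centre of the epicycle, moving along the deferent
    cx = deferent_r * math.cos(deferent_w * t)
    cy = deferent_r * math.sin(deferent_w * t)
    # planet position on the epicycle, superposed on that centre
    x = cx + epicycle_r * math.cos(epicycle_w * t)
    y = cy + epicycle_r * math.sin(epicycle_w * t)
    return x, y

# illustrative values only: a slow deferent with a faster epicycle
print(ptolemaic_position(t=10.0, deferent_r=5.0, deferent_w=0.1,
                         epicycle_r=1.5, epicycle_w=0.9))
```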
The ancient Greeks believed in a geocentric system of the universe. The Earth at the center, the planets going round in perfect circles: this is an enormous simplification. Ancient Greek astronomy devised a complex model which is astonishingly close to what we know today.
Claudius Ptolemy (ca. 100-160 AD) was a Greek scientist working at the Library of Alexandria. One of his most important works, the «Almagest», sums up the geographic, mathematical, and astronomical knowledge of the time. It is the first outline of a coherent system of the universe in the history of mankind.
Back to the Greek Universe is a VR model that rebuilds Ptolemy's system of the universe at a scale of 1:1 billion. The planets are 100 times larger, and the Earth rotates 100 times more slowly. The planets' orbital periods run 1 million times faster than they would according to Ptolemy's calculations.
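Expressed as a back-of-the-envelope computation, the stated factors combine as follows; the helper names are ours, and the Moon figures used as inputs are real-world values chosen purely for illustration.

```python
# Hedged sketch of the stated scaling rules, not the project's actual code.
SCALE = 1e-9  # 1 : 1 billion

def model_distance_m(real_m):
    return real_m * SCALE                 # plain 1:1 billion scale

def model_diameter_m(real_m):
    return real_m * SCALE * 100           # planets rendered 100x larger

def model_orbit_period_s(real_s):
    return real_s / 1_000_000             # orbits 1 million times faster

# Moon: ~384,400 km away, ~3,474 km across, ~27.3-day orbit
print(model_distance_m(3.844e8))              # ~0.38 m from the virtual Earth
print(model_diameter_m(3.474e6))              # ~0.35 m across
print(model_orbit_period_s(27.3 * 86400))     # one orbit in ~2.4 s
```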
Back to the Greek Universe was coded and presented at the Swiss Open Cultural Data Hackathon/mix'n'hack 2019, held in Sion, Switzerland, on September 6-8, 2019, by Thomas Weibel, Cédric Sievi, Pia Viviani and Beat Estermann.
Data
- Simon Grynaeus: Kl. Ptolemaiou Megalēs syntaxeōs bibl. 13, public domain
- Peter Liechtenstein: Almagestum CL. Ptolemei Pheludiensis Alexandrini astronomorum principis opus ingens ac nobile omnes celoru motus continens, public domain
- Richard Fitzpatrick: A Modern Almagest, An Updated Version of Ptolemy’s Model of the Solar System
- John Cramer: The Ptolemaic System, A Detailed Synopsis
- Earth map, video, public domain
- Moon map, image, public domain
- Mercury map, image, public domain
- Venus map, image, public domain
- Sun map, image, public domain
- Mars map, image, public domain
- Jupiter map, image, public domain
- Saturn map, image, public domain
- Stars map, image, Creative Commons Attribution 4.0 International
Media
- Back to the Greek Universe Video (mp4), public domain
Team
- Thomas Weibel (weibelth)
- Cédric Sievi
- Pia Viviani (pia)
- Beat Estermann (beat_estermann)
Tags: concept, dev, design, glam
CoViMAS
The demo video of Project CoViMAS #glamhack #opendata #mixnhack #museomix @museomixCH pic.twitter.com/pGotKnQwmN
— DBIS Research Group, University of Basel (@dbisUnibas) September 8, 2019
Collaborative Virtual Museum for All Senses (CoViMAS) is an extended virtual museum that engages all the senses of its visitors. It is a substantial upgrade and expansion of our award-winning GLAMhack 2018 project “Walking around the Globe” (http://make.opendata.ch/wiki/project:virtual_3d_exhibition), in which the DBIS Group from the University of Basel teamed up with the ETH Library to introduce a prototype of an exhibition in virtual reality.
CoViMAS aims to provide a collaborative environment for multiple visitors in the virtual museum. This feature allows them to have a shared experience through different virtual reality devices.
Additionally, CoViMAS enriches the user experience by providing physical objects that can be manipulated in virtual space. Thanks to the mix'n'hack organizers and the FabLab (https://fablab-sion.ch/), users are able to touch postcards, view them closely, and feel their texture.
To add a modern touch to the older pictures in the provided data, we display colorized images alongside the existing ones, offering a more lively look into the past through the pictures in the virtual museum.
Project Timeline
Day One
CoViMAS joins forces across disciplines: the team includes a maker, a content provider, developers, a communicator, a designer, and a user experience expert. These different backgrounds and areas of expertise were a great opportunity to explore different ideas and broaden the horizons of the project.
Two vital components of this project are the virtual reality headsets and the datasets to be used. Our HTC Vive Pro VR headsets were converted to wireless mode after our last experiment, which proved that freedom of movement without wires attached to the user improves both the user experience and the feasibility of use.
Our team's content provider and designer spent a considerable amount of time searching for representative postcards and audio that could be integrated into the virtual space and had the potential to improve the virtual reality experience by adding extra senses. This included selecting postcards that can be touched and seen both in and outside of virtual reality. Additionally, the idea came up of playing a sound related to the picture being viewed. This audio should correlate with the picture and recreate the sensation of the picture's environment for the user in the virtual world.
To integrate modern methods of image manipulation through artificial intelligence, we used a deep learning method to colorize the grayscale images of the “Fotografien aus dem Wallis von Charles Rieder” dataset. The colorized images allow visitors to get a more tangible feeling for the pictures they are viewing. The initial implementation of the algorithm showed the challenges we face: faded or scratched parts of the pictures, for example, could not be colorized very well.
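For illustration, here is a minimal sketch of such a colorization step using the pretrained network of Zhang et al. (2016) through OpenCV's dnn module; this is an assumption about the kind of pipeline involved, not the team's actual model, and the file names are placeholders.

```python
import cv2
import numpy as np

# pretrained Zhang et al. colorization model (files assumed downloaded)
net = cv2.dnn.readNetFromCaffe("colorization_deploy_v2.prototxt",
                               "colorization_release_v2.caffemodel")
pts = np.load("pts_in_hull.npy").transpose().reshape(2, 313, 1, 1)
net.getLayer(net.getLayerId("class8_ab")).blobs = [pts.astype(np.float32)]
net.getLayer(net.getLayerId("conv8_313_rh")).blobs = [np.full((1, 313), 2.606, np.float32)]

img = cv2.imread("rieder_photo.jpg")                       # grayscale scan, read as BGR
lab = cv2.cvtColor(img.astype(np.float32) / 255, cv2.COLOR_BGR2LAB)
L = cv2.resize(lab[:, :, 0], (224, 224)) - 50              # mean-centred lightness input
net.setInput(cv2.dnn.blobFromImage(L))
ab = net.forward()[0].transpose(1, 2, 0)                   # predicted colour channels
ab = cv2.resize(ab, (img.shape[1], img.shape[0]))
out = cv2.cvtColor(np.dstack([lab[:, :, 0], ab]), cv2.COLOR_LAB2BGR)
cv2.imwrite("rieder_photo_color.jpg",
            (np.clip(out, 0, 1) * 255).astype(np.uint8))
```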
Day Two
Although the VR exhibition is taken from our previous participation in GLAMhack 2018, it needed to be adjusted to the new content. We designed the rooms to showcase the dataset “Postkarten aus dem Wallis (1890-1950)”. At this point, the postcards selected to be enriched with additional senses were sent to the FabLab to create a haptic card as well as a feather palette to be used alongside a postcard depicting a goose.
The fabricated elements of our exhibition are attached to a tracker that is visible through the VR glasses, which allows users to be aware of the object's location and to feel it.
The colorization improved throughout the day, thanks to alterations to the training setup and the parameters used to tune the images. The results at this stage are relatively good.
The VR exhibition hall was also adjusted to automatically load the postcard images, along with the colorized images next to their originals.
And late at night, while finalizing the work for the next day, most of our stickers changed status from the “Implementation” phase to the “Done” phase!
Day Three
CoViMAS reached its final stage on the last day. The room design is finished, and the locations of the images on the walls are determined. The tracker location is updated in the VR environment to reflect the real location of the object. With this improvement, a postcard can be touched and viewed simultaneously.
Data
- Fotografien aus dem Wallis von Charles Rieder https://opendata.swiss/dataset/photographs-of-valais-by-charles-rieder
- Postkarten aus dem Wallis (1890-1950) https://opendata.swiss/dataset/postcards-from-valais-1890-1950
Team
- Mahnaz Amiri Parian, PhD Student @ Databases and Information Systems Group
- Silvan Heller, PhD Student @ Databases and Information Systems Group
- Florian Spiess, MSc @ Computer Science, University of Basel
- Fabian
- Stef
- Florence
Tags: concept, dev, design, data, expert, glam
Opera Forever
Opera Forever is an online collaboration platform and social networking site to collectively explore large amounts of opera recordings.
The platform allows users to tag audio sequences with various types of semantics, such as personal preference, emotional reaction, specific musical features, technical issues, etc. Through the analysis of personal preference and/or emotional reaction to specific audio sequences, a characterization of personal listening tastes will be possible, and people with similar (or very dissimilar) tastes can be matched. The platform will also contain a recommendation system based on preference information and/or keyword search.
Background: The Bern University of the Arts has inherited a large collection of about 15'000 hours of bootleg live opera recordings. Most of these recordings are unique, and many individual recordings are rather long (up to 3-4 hours); hence the idea of segmenting the recordings so as to allow for the creation of semantic links between segments and to enhance the possibilities of collectively exploring the collection.
Core Idea: Users engaging in “active” listening leave semantic traces behind that can be used as a resource to guide further exploration of the collection, both by themselves and by third parties. The approach can be used for an entire spectrum of users, ranging from occasional opera listeners, through opera amateurs, to interpretation researchers. The tool can be used as a collaborative tagging platform among research teams or within citizen science settings. By putting the focus on the listeners and their personal reaction to the audio segments, the perspective of analysis can be switched to the user, e.g. by creating typologies or clusterings of listening tastes or by using the approach for match-making in social settings.
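A minimal sketch of the data model this implies, recordings segmented and users attaching typed tags to segments; all class and field names are illustrative assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    recording_id: str
    start_s: float            # segment start within the recording, seconds
    end_s: float
    tags: list = field(default_factory=list)   # (user, tag_type, value) triples

seg = Segment("ehrenreich-1973-tosca", start_s=125.0, end_s=410.5)
seg.tags.append(("user42", "emotion", "thrilling"))      # emotional reaction
seg.tags.append(("user42", "preference", 5))             # 1-5 star rating
seg.tags.append(("user17", "technical", "tape hiss"))    # technical issue
```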
Demo Video
Proof of Concept
Opera Forever (demo application)
A first proof of concept contains the following features:
- The user can browse through and listen to the recordings of different performances of the same opera.
- The individual recordings are segmented into their different parts.
- By using simple swiping gestures, the user can navigate between the individual segments of the same recording (swiping left or right) or between different recordings (swiping up or down). The swiping is not yet implemented, but you can click on the respective arrows.
- For each segment, the user can indicate to what extent they like that particular segment (1 to 5 stars). (not yet implemented)
- Based on this information, individual preference lists and collective hit parades are generated. (not yet implemented)
- It will also be possible to cluster users according to their musical taste, which opens up the possibility of matching users by taste or building recommendation systems; see the sketch below. (not yet implemented)
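The taste-based matching could work along these lines; a minimal sketch assuming a user-by-segment star-rating matrix, with all numbers purely illustrative.

```python
import numpy as np

# rows = users, columns = segments; 0 = unrated, 1-5 = star rating
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def taste_similarity(u, v):
    """Cosine similarity over segments both users have rated."""
    both = (u > 0) & (v > 0)
    if not both.any():
        return 0.0
    return float(u[both] @ v[both] /
                 (np.linalg.norm(u[both]) * np.linalg.norm(v[both])))

def recommend(user, k=2):
    """Suggest the unrated segment the k most similar users like best."""
    sims = sorted(((taste_similarity(ratings[user], ratings[v]), v)
                   for v in range(len(ratings)) if v != user), reverse=True)
    neighbours = [v for _, v in sims[:k]]
    unseen = ratings[user] == 0
    return int((ratings[neighbours].mean(axis=0) * unseen).argmax())

print(recommend(0))  # index of the segment recommended for user 0
```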
Data
- Metadata: Ehrenreich Collection Database
- Audio Files: Digitized audio recordings from the Ehrenreich Collection (currently not available online; many of them raise copyright issues)
Documentation
Team
- Birk Weiberg (birk)
- Dominik Sievi (dsievi)
- Beat Estermann (beat_estermann)
- Pia Viviani (pia)
- Oleg Lavrovsky (loleg)
- Kenny Floria (paulkc)
Tags: concept, dev, design, glam
TimeGazer
Welcome to TimeGazer: A time-traveling photo booth enabling you to send greetings from historical postcards.
Based on the wonderful “Postcards from Valais (1890-1950)” dataset, consisting of nearly 4,000 historic postcards of Valais, we created a prototype of a mixed-reality photo booth.
Choose a historic postcard as a background, and a person will be virtually style-transferred onto the postcard.
Photobomb a historical postcard
A photo booth for time traveling: send greetings from the past by virtually entering a historical postcard.
Potentially, VR-tracked props could be used to add selectable objects virtually to the scene.
Technology
This project is roughly based on a project from last year, which resulted in an active research project at the Databases and Information Systems group of the University of Basel: VIRTUE. Hence, we use a similar setup.
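The setup itself is not documented here, so the following is only a minimal sketch of the core photo-booth step, compositing a visitor filmed in front of the blue screen onto a postcard via chroma keying with OpenCV; the thresholds and file names are assumptions.

```python
import cv2
import numpy as np

frame = cv2.imread("visitor_frame.jpg")          # camera frame, blue-screen background
postcard = cv2.imread("postcard.jpg")
postcard = cv2.resize(postcard, (frame.shape[1], frame.shape[0]))

# mask the blue screen (hue roughly 100-130 on OpenCV's 0-179 scale)
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (100, 80, 80), (130, 255, 255))
mask = cv2.medianBlur(mask, 5)                   # smooth ragged mask edges

# wherever the mask says "blue screen", show the postcard instead
composite = np.where(mask[..., None] > 0, postcard, frame)
cv2.imwrite("greeting_card.jpg", composite)
```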
Results
- Website (password: Valais)
- Video
- Instagram account with the pictures taken
Project
Blue screen
Printer box
Standard box from MakerCase, modified for the input of paper and the output of the postcard.
Data
Quote from the data introduction page:
A collection of 3900 postcards from Valais. Some highlights are churches, cable cars, landscapes and traditional costumes. Source: Musées cantonaux du Valais – Musée d’histoire
Team
- Dr. Ivan Giangreco
- Dr. Johann Roduit
- Lionel Walter
- Loris Sauter
- Luca Palli
- Ralph Gasser
Tags: concept, glam, tourism