All our hack are belong to us.

Active projects and challenges as of 24.04.2024 23:07.



Artify

display data of museum objects in an innovative and explorative way


~ PITCH ~

ocdh2018

ARTIFY

The goal of this project is to display different data of museum objects in an innovative way.

Source

The data source of the project is open data from the Landesmuseum Zürich: https://opendata.swiss/en/organization/schweizerisches-nationalmuseum-snm

Built during the 4th Swiss Open Cultural Data Hackathon (http://make.opendata.ch/wiki/event:2018-10).


Art on Paper Gallery


~ PITCH ~

We are developing a gallery app for browsing works of art on paper. For the prototype we use a sample dataset from the Collection Online of the Graphische Sammlung ETH Zurich. In our app the user finds digital images of the prints and drawings, along with metadata about the different techniques and other details. The app invites the user to browse from one artwork to the next, following different paths such as the same technique, the same artist, the same subject, and so on.

Challenge

To use an online collection properly, the user needs prior knowledge. Many people simply love art and are interested, but are not experts.

User

It is precisely this group of people that we invite to explore our large collection on an interactive journey.

Goals

  • The Art on Paper Gallery App enables the user to jump from one artwork to another in an associative way (sketched below). It offers suggestions following different categories, such as the artist, technique, etc.
  • It allows social interaction with the possibility to like, share and comment on an artwork.
  • Artworks can be arranged according to relevance, number of clicks, etc.
  • This in turn allows collections or museums to evaluate user interests and trends.
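
The actual app code is linked below; as a minimal sketch of the associative jump (the field names "artist", "technique" and "subject" are assumptions based on typical collection metadata, not the app's real data model), the suggestion logic could look like this in Python:

def suggestions(current, collection, keys=("artist", "technique", "subject")):
    """For each category, collect other artworks sharing that attribute."""
    related = {}
    for key in keys:
        value = current.get(key)
        related[key] = [art for art in collection
                        if art is not current and value is not None
                        and art.get(key) == value]
    return related

# Example: two prints by the same artist suggest each other via "artist".
a = {"title": "Landscape", "artist": "Rembrandt", "technique": "etching"}
b = {"title": "Self-Portrait", "artist": "Rembrandt", "technique": "drawing"}
print(suggestions(a, [a, b])["artist"])  # -> [b]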

Code

The code is available at the following link: https://github.com/DominikStefancik/Art-on-Paper-Gallery-App.

Example of a possible Design

Data

  • Graphische Sammlung ETH Zurich, Collection Online, sample dataset with focus on different techniques of printmaking and drawing

Team

  • Dominik Štefančik, Software Engineer
  • Graphische Sammlung ETH Zurich, Susanne Pollack, Ann-Kathrin Seyffer

Ask the Artist


~ PITCH ~

The project idea is to create a voice assistant with the identity of an artist. In our case, we created a demo based on the famous Swiss painter Ferdinand Hodler. That is to say, the voice assistant is neither Siri nor Alexa. Instead, it is an avatar of Ferdinand Hodler who can answer your questions about his art and his life.

You can interact with the program directly by talking, just as you would normally in your daily life. You can ask it all kinds of questions about Ferdinand Hodler, e.g.:

  • When did you start painting?
  • Who taught you painting?
  • Can you show me some of your paintings?
  • Where can I find an exhibition with your artworks?

By talking to the digital image of the artist directly, we aim to bring art closer to people's daily lives, in a direct, intuitive and hopefully interesting way.

As you know, museum audiences need to keep quiet, which is not so friendly to children. Also, for people with special needs, like the visually impaired, and for people without professional knowledge about art, it is not easy to enjoy a museum visit. A voice assistant can help remove those barriers and make art accessible to more people.

If you ask how our product differs from Amazon's Alexa or Apple's Siri, there are two major points:

  1. The user can interact with the artist in a direct way: talking to each other. With other applications, the communication happens through Alexa or Siri, which delivers the message as a third-party channel. In our case, users get an immersive and better user experience and feel as if they were talking to an artist friend, not to an application.
  2. The other difference is that the answers to the questions are preset (a minimal sketch follows below). In essence, Alexa and Siri search the user's question online and read the returned search results out loud, so there is no guarantee that the answer is correct and/or suitable. In our case, however, all the answers come from reliable datasets of museums and other research institutions, and have been verified and proofread by art experts. Thus, we can proudly say that our answers are reliable and correct. People can use the assistant as a tool to educate children or as a visiting aid in the exhibition.
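
For illustration, here is a minimal Python sketch of the preset matching (the demo itself runs on a voice platform; the questions and answers below are illustrative placeholders, not the verified museum data):

import difflib

# Curated question/answer pairs, verified by art experts in the real system.
ANSWERS = {
    "when did you start painting": "I began painting in the 1870s.",
    "who taught you painting": "I studied under Barthélemy Menn in Geneva.",
}

def answer(question):
    """Return the curated answer whose preset question is closest to the input."""
    match = difflib.get_close_matches(question.lower().strip("?!. "),
                                      list(ANSWERS), n=1, cutoff=0.5)
    return ANSWERS[match[0]] if match else "I have no answer to that yet."

print(answer("When did you start painting?"))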

Video demo:

Data

  • Kunsthaus Zürich

⭐️ List of all Exhibitions at Kunsthaus Zürich

  • SIK-ISEA

⭐️ Artist data from the SIKART Lexicon on art in Switzerland

  • Swiss National Museum

⭐️ Representative sample from the Paintings & Sculptures Collection (images and metadata)

  • Wikimedia Switzerland

Team

  • Angelica
  • Barbara
  • Anlin (lianganlin@foxmail.com)

Dog Name Creativity

Dog Name Creativity Survey of New York City


~ PITCH ~

We started this project to see if art and cultural institutions in the surroundings have an impact on the creativity of dog names. This was not possible with the data from Zurich because the name dataset does not contain information about location, and the dataset about the owners does not include the dog names. We chose to stick with our idea but used a different dataset: the NYC Dog Licensing Dataset.

The creativity of a name is based on the frequency of each letter in the English language, with points added or subtracted according to the number of dogs sharing the same name. The data on the cultural environment comes from Wikidata.

After some data cleaning with OpenRefine and failed attempts with OpenCalc, we arrived at the following code:

import string
import pandas as pd

# Letter scores: the more frequent a letter is in English, the lower its score.
numbers_ = {"e": 1, "t": 2, "a": 3, "o": 4, "n": 5, "i": 6, "s": 7, "h": 8,
            "r": 9, "l": 10, "d": 11, "u": 12, "c": 13, "m": 14, "w": 15,
            "y": 16, "f": 17, "g": 18, "p": 19, "b": 20, "v": 21, "k": 22,
            "j": 23, "x": 24, "q": 25, "z": 26}

def KreaWert(name_):
    """Creativity score: rare letters score high; common names are penalised."""
    name_ = str(name_)
    wert_ = 0
    for letter in name_.lower():
        if letter in string.ascii_lowercase:
            wert_ += numbers_[letter]
    # Weight by frequency: the more dogs share the name, the lower the score.
    if name_ in H_:
        wert_ = wert_ * ((Hmax - H_[name_]) / (Hmax - 1) * 5 + 0.2)
    return round(wert_, 1)

df = pd.read_csv("Vds3.csv", sep=";")
df["AnimalName"] = df["AnimalName"].str.strip()
H_ = df["AnimalName"].value_counts()  # occurrences of each name
Hmax = max(H_)

df["KreaWert"] = df["AnimalName"].map(KreaWert)
df.to_csv("namen2.csv")

# One row per distinct name with its creativity score.
dftemp = df[["AnimalName", "KreaWert"]].drop_duplicates().set_index("AnimalName")
dftemp.to_csv("dftemp.csv")

# Name frequencies joined with the scores.
df3 = pd.DataFrame()
df3["amount"] = H_
df3 = df3.join(dftemp, how="outer")
df3.to_csv("data3.csv")

# Average creativity per borough, and per borough and gender.
df1 = round(df.groupby("Borough").mean(numeric_only=True), 2)
df1.to_csv("data1.csv")

df2 = round(df.groupby(["Borough", "AnimalGender"]).mean(numeric_only=True), 2)
df2.to_csv("data2.csv")

Visualisations were made with D3: https://d3js.org/

Data

Hundedaten der Stadt Zürich (dog data of the City of Zurich):

NYC Dog Licensing Dataset:

Team

  • Birk Weiberg
  • Dominik Sievi

Find Me an Exhibit

Files and notes on the GLAM Hackathon 2018 @Landesmuseum, October 26 - 28


~ PITCH ~

Are you ready to take up the challenge? Film categories of objects in the exhibition "History of Switzerland" while racing against the clock.

The app displays one of several categories of exhibits that can be found in the exhibition (like "clothes", "paintings" or "clocks"). Your job is to find a matching exhibit as quickly as possible. You don't have much time, so hurry up!

Best played on portable devices. ;-)

The frontend of the app is based on the game "Emoji Scavenger Hunt"; the model is built with TensorFlow.js and fed with a lot of images kindly provided by the National Museum Zurich. The app is in a pre-alpha stage.

Demo

For a demo, see https://game.annotat.net

Set up your own environment

Requirements: git, Docker and yarn (all three are used in the steps below). Then:

  1. Clone the repository: git clone https://github.com/google/emoji-scavenger-hunt
  2. Go to the folder containing the Dockerfile: cd emoji-scavenger-hunt/training/
  3. Build the Docker image: docker build . -t model-builder
  4. Create a custom directory with a mandatory subdirectory images
  5. Copy images into subdirectories of the images directory, each category of images in a dedicated folder (e.g. for pictures of armours: /path/to/custom/dir/images/armours)
  6. Run the Docker container to build the model: docker run -v /path/to/custom/directory:/data -it model-builder
  7. Copy the created models to the dist/model folder of the git repository: cp /path/to/data/saved_model_web/* dist/model
  8. Adapt the scavenger_class.ts in /path/to/custom/dir to your image categories (the name field of the objects must match the names of your category folders)
  9. Copy the changed scavenger_class.ts file to src/js: cp /path/to/data/scavenger_class.ts src/js
  10. Install the needed dependencies: yarn prep
  11. Build the application: yarn build
  12. Load dist/index.html in your preferred web browser

Letterjongg


~ PITCH ~

In 1981 Brodie Lockard, a Stanford University student, developed a computer game just two years after a serious gymnastics accident had almost taken his life and left him paralyzed from the neck down. Unable to use his hands to type on a keyboard, Lockard made a special request during his long recovery in the hospital: He asked for access to a PLATO terminal. PLATO (Programmed Logic for Automatic Teaching Operations) was the first generalized computer-assisted instruction system designed and built by the University of Illinois.

The computer game Lockard started coding on his PLATO terminal was a puzzle game displaying tiles from Mah-Jongg, the Chinese strategy game that had become increasingly popular in the U.S. Lockard accordingly called his game «Mah-Jongg solitaire». In 1986 Activision, one of the early game companies, released the game under the name «Shanghai» (see screenshot), and when Microsoft decided to add the game to their Windows Entertainment Pack for Windows 3.x in 1990, this time under the name «Taipei», Mah-Jongg solitaire became one of the world's most popular computer games.

Typography

The project «Letterjongg» aims at translating the Far-East Mah-Jongg imagery into late medieval typography. 570 years ago, the invention of modern printing technology by Johannes Gutenberg in Germany (and, two decades later, by William Caxton in England) was massively disruptive. Books, carefully bound manuscripts written and copied by scribes in weeks, if not months, could all of a sudden be mass-produced in a breeze. The invention of movable type, along with other basic book printing technologies, had a huge impact on science and society.

Yet, 15th century typographers were not only businessmen, they were artists as well. Early printing fonts reflect their artistic past. The design of 15th/16th century fonts is still influenced by their handwritten predecessors. A new book, although produced by means of a new technology, was meant to be what books had been for centuries: a precious document, often decorated with magnificent illustrations. (Incunables – books printed before 1500 – often have a blank space in the upper left corner of a page so that illustrators could manually add artful initials after the printing process.)

Letterjongg comes with 144 typographic tiles (i.e. 36 distinct tile faces, four copies of each). The letters have been taken and isolated from a high-resolution scan (2,576 × 4,840 pixels, file size: 35.69 MB, MIME type: image/tiff) of Aldus Pius Manutius, Horatius Flaccus, Opera (font design by Francesco Griffo, Venice, 1501). «Letterjongg» has been slightly simplified. Nevertheless it is not easy to play, as the games are set up at random (in fact, not every game can be finished) and the player's visual orientation is constrained by the sheer number and resemblance of the tiles.

Letterjongg, Screenshot

Rules

Starting from the sides, or from the central tile at the top of the pile, remove tiles by clicking on two matching letters. If the tiles are identical, they disappear and your score rises. Removable tiles must always be free on their left or right side. If a tile sits between two tiles on the same level, it cannot be selected.
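
A minimal Python sketch of this removability rule (assuming a simplified layout where tiles are stored as (column, level) keys in a dict; the real pile geometry is more complex):

def is_free(board, col, level):
    """A tile is selectable if nothing covers it and a horizontal neighbour is empty."""
    covered = (col, level + 1) in board
    left_free = (col - 1, level) not in board
    right_free = (col + 1, level) not in board
    return not covered and (left_free or right_free)

def is_match(board, a, b):
    """Two distinct free tiles match if they show the same letter."""
    return a != b and is_free(board, *a) and is_free(board, *b) and board[a] == board[b]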

Updates

  • 2018/10/26 v0.1: Basic game engine, prototype
  • 2018/10/27 v0.11: Moves counter, rules section
  • 2018/10/28 v0.12: Minor bugfixing
  • 2018/10/30 v0.2: Matches counter, code cleanup

Data

Team


Multilingual library search

Based on Wikidata


~ PITCH ~

In Switzerland, each linguistic region works with different authority files for authors and organizations, a situation which makes searching difficult for the end user.

Goal of the hackathon: work on an innovative solution, as the landscape of library search platforms will change in the coming years. Possible solution: a Multilingual Entity File which links to the GND, BnF and ICCU authority files and to Wikidata, bringing the end user information about authors in the language of their choice.
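
As a proof of concept, Wikidata already links these authority files: an author item can carry a GND ID (property P227), a BnF ID (P268) and labels in multiple languages. A minimal Python sketch (the GND identifier below is a placeholder, not a curated example):

import requests

GND_ID = "118540238"  # placeholder GND identifier

QUERY = """
SELECT ?author ?label WHERE {
  ?author wdt:P227 "%s" .
  ?author rdfs:label ?label .
  FILTER(LANG(?label) IN ("de", "fr", "it", "en"))
}
""" % GND_ID

r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": QUERY, "format": "json"},
                 headers={"User-Agent": "multilingual-search-demo"})
for row in r.json()["results"]["bindings"]:
    print(row["label"]["xml:lang"], row["label"]["value"])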

Visit make.opendata.ch/wiki for details.


Sex and Crime

and Kneipenschlägereien (pub brawls) in Early Modern Zurich


~ PITCH ~

#glamhack2018

Goal

Make the data (the "Stillstandsprotokolle" of the 17th century) better searchable, and georeference it for visualization.

Team

Data sources:

Steps taken

Lemmatization/Normalisation

  • Done: Wordlist and Frequencies

  • ToDo: POS tagging

Named Entities

  • Names of persons: done A-D

  • Names of places: done A-K

Visualization

Word-Cluster

Visualization (using fastText):

https://github.com/mmznr/Staatsarchiv-GLAMhack/tree/master/Visualisierungen/clusters.png https://github.com/mmznr/Staatsarchiv-GLAMhack/tree/master/Visualisierungen/clusters2.png
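
A possible sketch of this step in Python (assumptions: a plain-text file of normalised transcriptions, one sentence per line; gensim and scikit-learn installed; the file name is hypothetical):

from gensim.models import FastText
from sklearn.cluster import KMeans

# Train fastText embeddings on the normalised protocol text.
sentences = [line.split() for line in open("protokolle_normalised.txt", encoding="utf-8")]
model = FastText(sentences, vector_size=100, window=5, min_count=5)

# Group the vocabulary into clusters of similar words.
words = list(model.wv.index_to_key)
labels = KMeans(n_clusters=20, n_init=10, random_state=0).fit_predict(model.wv[words])

for word, cluster in sorted(zip(words, labels), key=lambda x: x[1]):
    print(cluster, word)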

Frequency list of Word-Cluster

https://docs.google.com/spreadsheets/d/1rFo7p9YsQRwJufMuWGw2677acOsWevcmm-lN5RVBJv4/edit?usp=sharing

GIS Visualization

https://beta.observablehq.com/@mmznrstat/sex-and-crime-und-kneipenschlagereien-in-der-fruhen-neuzei

  • Done: Borders from swisstopo via Linked Data, Matching of the settlements of the canton of Zurich

  • ToDo: Get a list of the old names of these settlements, match them, and show all related documents of a settlement (or municipality)


SPARQLfish

sparql query effort for glam hackathon 2018


~ PITCH ~

GLAMhack 2018 Project

Try it here: https://sparqlfish.github.io/sparqlfish/

Our current list of issues: https://github.com/sparqlfish/sparqlfish/projects/1

A typical SPARQL endpoint is not friendly to the average user. Typical users of cultural databases include researchers in the humanities, museum professionals and the general public. Few of these people have any coding experience, and few would feel comfortable translating their questions into a SPARQL query.

Moreover, the majority of users expect searches of online collections to take something like the form of a regular Google search (names, a few words, or at the top end, Boolean operators). This approach does not make use of the full potential of the graph-type databases that typically sit behind SPARQL endpoints. It simply does not occur to an average user to ask the database a query of the type "show me all book authors whose children or grandchildren were artists" (see the sketch below).
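
For illustration, here is what that question looks like as an actual SPARQL query against Wikidata (P106 = occupation, P40 = child, Q36180 = writer, Q483501 = artist), wrapped in Python:

import requests

# "Show me all book authors whose children or grandchildren were artists."
QUERY = """
SELECT DISTINCT ?author ?authorLabel WHERE {
  ?author wdt:P106 wd:Q36180 .
  ?author wdt:P40/wdt:P40? ?descendant .   # child, or child of a child
  ?descendant wdt:P106 wd:Q483501 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 20
"""

r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": QUERY, "format": "json"},
                 headers={"User-Agent": "sparqlfish-demo"})
for row in r.json()["results"]["bindings"]:
    print(row["authorLabel"]["value"])

Few average users could be expected to write this by hand, which is exactly the gap SPARQLfish targets.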

The extensive possibilities that graph databases offer researchers in the humanities go unexplored because of a lack of awareness of their capabilities and a shortage of information about how to exploit them. Even for those academics who understand the potential of these resources and have some experience in using them, it is often difficult to get an overview of the semantics of complex datasets.

We therefore set out to develop a tool that:

  • simplifies the entry point of a SPARQL query into a form that is accessible to any user
  • opens ways to increase the awareness of users about the possibilities for querying graph databases
  • moves away from purely text-based searches to interfaces that are more visual
  • gives an overview to a user of what kinds of nodes and relations are available in a database
  • makes it possible to explore the data in a graphical way
  • makes it possible to formulate fundamentally new questions
  • makes it possible to work with the data in new ways
  • can eventually be applied to any SPARQL endpoint

Swiss Art Stories on Twitter


~ PITCH ~

In the project “Swiss Art Stories on Twitter”, we created the Twitter bot “larthippie”. The idea of the project is to automatically tweet information about Swiss art, artists and exhibitions.

Originally, different storylines for the tweets were discussed and programmed, such as:

- Tweeting information about upcoming exhibitions at Kunsthaus Zürich, with reminders as deadlines approach

- Tweets with specific information about artists, taken from the artists database SIK-ISEA

- Tweeting the exhibition history of Kunsthaus Zürich

- Comparing the images of artworks, created in the same year, held at the same location or showing the same subject

The prototype, however, now has a different focus: it tweets the ownership history (provenance) of artworks. As this information is scientifically researched, larthippie provides tweets for art professionals. The Twitter bot is therefore more than a usual social media account; it might become a tool for provenance research. Interested audiences have to follow larthippie in order to be updated on new provenance information. As an additional feature, larthippie likes and follows accounts that share content about Swiss artists.

Followers can message the bot and ask for information about a painting by any artist. In the prototype, it is only possible to query the provenance of artworks by Ferdinand Hodler. In the future, the Twitter bot might also tweet about newly acquired works in art museums.
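
A minimal sketch of the tweeting loop in Python (assuming the tweepy library and API credentials; the CSV file and its columns are hypothetical stand-ins for the researched provenance data):

import tweepy
import pandas as pd

client = tweepy.Client(consumer_key="...", consumer_secret="...",
                       access_token="...", access_token_secret="...")

provenance = pd.read_csv("hodler_provenance.csv")  # hypothetical export
for _, row in provenance.iterrows():
    text = (f"Provenance of {row['title']} ({row['year']}): "
            f"owned by {row['owner']}, {row['from']}-{row['to']}")
    client.create_tweet(text=text[:280])  # respect the character limit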

You can check the account at the following link: https://twitter.com/larthippie

Data

  • Swiss Institute for Art Research (SIK-ISEA)
  • Kunsthaus Zürich
  • Swiss National Museum

Team

  • Tugrulcan Elmas, tugrulcanelmas@gmail.com, @tugrulcanelmas

Walking Around the Globe

a VR Picture Expedition


~ PITCH ~

Watch our Video:

With our Hackathon prototype for a virtual 3D exhibition in VR we tackle several challenges.

• The challenge of exhibition space: many collections, especially small ones – like the Collection of Astronomical Instruments of ETH Zurich – have only a small physical space, or none at all, to exhibit their objects to the public

• The challenge of exhibiting light-sensitive artworks: some artworks – especially art on paper – are very sensitive to light and are in danger of serious damage when they are permanently exposed. That’s why the Graphische Sammlung ETH Zurich can’t present its treasures to the public in a permanent exhibition

• The challenge of involving the public: nowadays visitors do not want to be reduced to passive consumers; they want and appreciate active involvement

• The challenge of scale: in the usual 2D digital presentations the user gets no information about the real scale of the artworks and develops wrong ideas about their dimensions

• The challenge of showing 3D objects in digital space: many museum databases show only one or two digital images of their 3D objects, so the user gets only a very limited impression

Our Hackathon prototype for a virtual 3D exhibition in VR

• offers unlimited exhibition space in the virtual reality

• makes it possible to exhibit light-sensitive artworks permanently, using their digital reproductions

• involves the public by inviting them to slip into the role of the curator

• shows the artwork in the correct scale

• and gives the users the opportunity to walk around the 3D objects in the virtual space

A representative screenshot:

We unveil a window into a future where you can create, curate and experience virtual 3D expositions in VR. We showcase a first exposition with a 3D model of a globe from the Collection of Astronomical Instruments of ETH Zurich as its centerpiece, surrounded by works of art from the Graphische Sammlung ETH Zurich. Users can experience our curated exposition using a state-of-the-art VR headset, the HTC Vive.

Our vision has massive value for practitioners, educators and students and also opens up the experience of curation to a broader audience. It enables art to truly transcend borders, cultures and economic boundaries.

Project Presentation

You can download the presentation slides: 20181028_glamhack_presentation.pdf.

Project Impressions

On the very first day, we created our data model on paper, making sure everybody got a chance to present their use cases, stories and needs.

On Friday evening, we had a first prototype of our VR Environment:

On Saturday, we created our interim presentation, improved our prototype, curated our exposition, and tried and ditched many ideas.

By Saturday evening our prototype was almost finished! Below is a screenshot of two rooms, one curated by experts and the other containing artificially generated art.

Our final project status involves a polished prototype with an example exhibition consisting of two rooms:

Walking Around The Globe: A room curated by art experts, exhibiting a selection of old masterpieces (15th century to today).

Style transfer: A room designed by laymen showing famous paintings and derivatives generated by an artificial intelligence (AI) via a technique called style transfer.

In the end we even created a mockup of a possible backend UI; the following images give some impressions of it:

Technical Information

Our code is on GitHub, both the frontend and the backend.

We are using Unity with the SteamVR plugin to deploy on the HTC Vive. This combination meant a mix of C# scripting (we recommend the excellent Rider editor), design work in the Unity Editor, custom modifications to the SteamVR plugin, 3D model imports using FastObjImporter, and other fun stuff.

Our backend is written in Java and uses MongoDB.

For the style-transfer images, we used open-source Python code which is available on GitHub.
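
For reference, the same effect can be sketched with TensorFlow Hub's published arbitrary-image-stylization model (this is not the exact script we used, just an equivalent, minimal Python version; the image file names are placeholders):

import tensorflow as tf
import tensorflow_hub as hub

def load(path):
    """Read an image file into a float32 batch of shape (1, h, w, 3)."""
    img = tf.image.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]

model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
stylized = model(load("content.jpg"), load("style.jpg"))[0]
tf.keras.utils.save_img("stylized.jpg", stylized[0].numpy())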

Acknowledgments

The Databases and Information Systems Group at the University of Basel is home to the majority of our project members. Our hardware was borrowed for the weekend from Prof. Vetter's Gravis group.

Data

  • Dataset with digital images (jpg) and metadata (xml) from the Collection of Astronomical Instruments of ETH Zurich
  • Graphische Sammlung ETH Zurich, Collection Online, four sample datasets with focus on bodies in the air, portraits, an artist (Rembrandt) and different techniques (printmaking and drawing)

Team


we-art-o-nauts

A better way to experience art


~ PITCH ~

Build a working, easy-to-follow example that integrates open and curated cultural data with VR devices in a museum exhibition to provide a modern, fun and richer visitor experience. We focus on one art piece, organize the data along a timeline, and build a concept and process, so that anyone who wants to use the above technologies can easily follow the steps for any art object in any museum.

With a VR device placed next to the painting, we integrate interesting facts about the painting into a 360° timeline view with a voice-over. Visitors can simply put the device on to use it.

Try the live demo in your browser - works particularly well on mobile phones, and supports Google Cardboard:

DEMO

Visit the project homepage for more information, and to learn more about the technology and data used.

See also: hackathon presentation (PPTX) | source code (GitHub)

Data

  1. “Allianzteppich”, part of the permanent collection of the Landesmuseum
  2. Curated data from Dominik Sievi at the Landesmuseum
  3. Open data sources (Wikidata)

Team

  • Kamontat Chantrachirathumrong (Developer)
  • Oleg Lavrovsky (Developer)
  • Marina Pardini (UX designer)
  • Birk Weiberg (Art Historian)
  • Xia Willuhn (Developer)

Zurich historical photo tours


~ PITCH ~

We would like to enable users to discover historical pictures of Zurich and go to the places where they were taken. They can take the perspective of a photographer from around 100 years ago, see how the places have changed, and share their own photographs with the community.

We have planned two thematic tours, one with historical photographs by Adolphe Braun and one with photographs connected to the subject of silk fabrication. The tours are enhanced with some historical information. In the collections of the ETH, the Baugeschichtliches Archiv, and the Graphische Sammlung of the Zentralbibliothek Zürich, we found pictures to match the topics above and to set up a nice tour for the users. In a second step we went to the actual spots to verify that the pictures could be taken and to determine the exact geodata. Meanwhile our programmers placed the photographers' stops on a map.

As soon as the user reaches the proximity of a spot, their phone starts vibrating. At this point the historical photo shows up, and the task is to find the right angle from which the historical photograph was taken and to take one's own picture. The app allows the user to blend the historical picture over the current one so a comparison can be made. The user is provided with additional information, such as the name of the photographer of the historical picture, links to the collection the picture comes from, the building itself, its connection to the silk industry, etc.

Here is the link to our tour: https://glamhistorytour.github.io/HistoryTourApp/
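
A minimal Python sketch of the proximity trigger described above (the stop list and radius are illustrative; the real app works with the geodata we collected on site):

from math import radians, sin, cos, asin, sqrt

STOPS = [("Example stop", 47.3699, 8.5410)]  # hypothetical (name, lat, lon)
TRIGGER_RADIUS_M = 50  # vibrate when the user is within 50 metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def stops_in_range(lat, lon):
    """Stops close enough to show the historical photo and vibrate."""
    return [name for name, slat, slon in STOPS
            if haversine_m(lat, lon, slat, slon) <= TRIGGER_RADIUS_M]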

Data

Team

  • Maya Beer
  • Rafael Arizcorreta
  • Tina Tomovic
  • Annabelle Wiegart
  • Lothar Schmitt
  • Thomas Bochet
  • Marina Petrova
  • Kenny Floria


Challenges