At the Competence Centre for Non-Textual Materials (KNM), an interdisciplinary team comprising experts in IT development, multimedia retrieval and ontologies, as well as media documentalists, information scientists and legal experts, is engaged in fundamentally improving the conditions for accessing and using media types such as audiovisual media, 3D objects and research data. Non-textual materials are systematically collected and preserved as cultural heritage.
KNM focuses on developing innovative solutions to problems in collecting, indexing, providing and digitally archiving non-textual materials. In future, it should be possible to publish, locate, cite and permanently access such material as easily as textual documents. To achieve this, infrastructures, tools and services are being developed that actively support users in the scientific work process. Besides addressing specific users' needs and further object types, adaptation to new knowledge domains is also taken into account. To ensure that research approaches are transferred to digital library practice quickly and successfully, development is systematically accompanied by user-centred software design, ensuring the optimum usability of the portals and tools.
The Competence Centre for Non-Textual Materials:
- has been funded by the Leibniz Association since 2011.
- undertakes interdisciplinary research and development projects with local, European and international partners.
- supports other knowledge institutions and knowledge providers in matters concerning non-textual materials, and provides services, tools and infrastructures as required.
- helps researchers, teaching staff and students to use, publish and gain access to non-textual materials.
- Multimedia retrieval
- Automatic indexing
- Semantic search
- Visual search
- Linked data engineering and semantic applications
Range of services
- Indexing according to international standards
- Development of innovative media-specific portals featuring automatic content-based indexing via text, speech and image recognition
- Licensing, preferably under Open Access conditions
- Registration of digital object identifiers (DOI) and media fragment identifiers (MFID)
- Linked Open Data Service
- Digital preservation
- Provision of support to producers in publishing their media, including advice on technology, rights, metadata, digital preservation and DOI registration.
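The DOI and media fragment identifiers mentioned among the services above make it possible to cite not just a video but an exact segment of it. As a rough sketch (the DOI prefix, suffix and time offsets below are invented placeholders, not real TIB records), a temporal fragment following the W3C Media Fragments URI syntax can be appended to the DOI resolver URL as `#t=start,end` in seconds:

```python
# Sketch: building a second-precise citation URL for a video segment.
# The DOI used below is a made-up placeholder for illustration only.

def media_fragment_url(doi: str, start_s: int, end_s: int) -> str:
    """Resolve a DOI and append a W3C temporal media fragment (#t=start,end)."""
    if not 0 <= start_s < end_s:
        raise ValueError("need 0 <= start_s < end_s")
    return f"https://doi.org/{doi}#t={start_s},{end_s}"

# Cite seconds 65-90 of a (hypothetical) film:
print(media_fragment_url("10.0000/example-film", 65, 90))
```

A reader following such a link is taken directly to the cited segment, provided the resolved player supports temporal media fragments.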
The TIB AV Portal
By developing the TIB AV Portal, TIB has created a customer- and demand-oriented platform for non-textual materials. The portal offers unrestricted access to high-quality scientific videos: computer visualisations, simulations, experiments and interviews, as well as films and recordings of lectures and conferences from the fields of science and technology. TIB developed the portal in cooperation with the Semantic Web research group at the Hasso-Plattner-Institut (HPI).

A key feature of the TIB AV Portal is that it combines current state-of-the-art multimedia retrieval methods with semantic analysis. The portal's automatic video analysis comprises not only structural analysis (scene recognition) but also text, audio and image analysis. Automatic indexing describes the videos at the segment level, enabling pinpoint searches within videos.

Videos in the TIB AV Portal are automatically indexed with terms from the Integrated Authority File (GND), which stand in semantic relationships (synonyms, hypernyms, related terms, and so on) to one another. Based on these terms and their relationships, the portal offers a semantic search that enhances traditional keyword-based search, expanding the result set while making it more precise. English-language videos are tagged with English identifiers, determined by automatically mapping the GND terms onto other authority data (including DBpedia).

Each film is allocated a digital object identifier (DOI), so it can be referenced unambiguously. Individual film segments are allocated media fragment identifiers (MFID), which enable a video to be dereferenced, and cited, to the second.
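The semantic search described above, which widens a keyword query with synonyms, hypernyms and related terms before matching, can be sketched roughly as follows. The tiny thesaurus and documents here are invented for illustration; the portal itself draws such relations from the GND:

```python
# Minimal sketch of thesaurus-based query expansion. The data is illustrative
# only; the real portal derives these relations from the Integrated Authority
# File (GND) rather than a hand-written dictionary.

THESAURUS = {
    "car": {"synonyms": {"automobile"}, "hypernyms": {"vehicle"}},
    "laser": {"synonyms": {"optical maser"}, "hypernyms": {"light source"}},
}

def expand_query(term: str) -> set[str]:
    """Return the query term together with its semantically related terms."""
    entry = THESAURUS.get(term, {})
    return {term} | entry.get("synonyms", set()) | entry.get("hypernyms", set())

def search(query: str, documents: dict[str, set[str]]) -> list[str]:
    """Return ids of documents whose index terms overlap the expanded query."""
    terms = expand_query(query)
    return [doc_id for doc_id, tags in documents.items() if tags & terms]

docs = {"v1": {"vehicle", "engine"}, "v2": {"laser", "optics"}}
print(search("car", docs))  # a plain keyword match on "car" would miss v1
```

Because the query is expanded before matching, a search for "car" also retrieves segments indexed only under the broader term "vehicle", which a literal keyword search would miss.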