Comparative performance analysis of active learning strategies for the entity recognition task
(2024)
Supervised learning requires large amounts of annotated data, which makes the annotation process time-consuming and expensive. Active Learning (AL) offers a promising solution by reducing the amount of labeled data needed while maintaining model performance. This work focuses on the application of supervised learning and AL for (named) entity recognition, a subdiscipline of Natural Language Processing (NLP). Despite the potential of AL in this area, there is still a limited understanding of how different approaches perform. We address this gap by conducting a comparative performance analysis with diverse, carefully selected corpora and AL strategies. Thereby, we establish a standardized evaluation setting to ensure reproducibility and consistency across experiments. With our analysis, we discover scenarios where AL provides performance improvements and others where its benefits are limited. In particular, we find that strategies incorporating historical information from the learning process and maximizing entity information yield the most significant improvements. Our findings can guide researchers and practitioners in optimizing their annotation efforts.
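The query loop behind active learning can be illustrated with a minimal, self-contained sketch. This uses pool-based uncertainty sampling, one representative AL strategy, on synthetic classification data via scikit-learn; the paper itself targets token-level entity recognition and compares several strategies, so treat this only as an illustration of the basic train–query–label cycle, not of the paper's experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Pool-based active learning with uncertainty sampling:
# repeatedly train on the labeled set, then query the pool
# example whose top predicted class probability is lowest.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=10, replace=False))  # small seed set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):  # 20 query rounds, one new label per round
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])
    # least-confident sampling: lowest maximum class probability
    query = pool[int(np.argmin(probs.max(axis=1)))]
    labeled.append(query)   # in practice: ask an annotator for y[query]
    pool.remove(query)

model.fit(X[labeled], y[labeled])  # final model on 30 labels total
```

In a real annotation workflow the `y[query]` lookup would be replaced by a human labeling step, which is where AL saves effort: only the queried examples need annotation.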
Research collaborations provide opportunities for both practitioners and researchers: practitioners need solutions for difficult business challenges, and researchers are looking for hard problems to solve and publish. Nevertheless, research collaborations carry the risk that practitioners focus too much on quick solutions and that researchers tackle purely theoretical problems, resulting in products which do not fulfill the project requirements.
In this paper we introduce an approach extending the ideas of agile and lean software development. It helps practitioners and researchers keep track of their common research collaboration goal: a scientifically enriched software product which fulfills the needs of the practitioner's business model.
This approach gives first-class status to application-oriented metrics that continuously measure the progress and success of a research collaboration. These metrics are derived from the collaboration requirements and help to focus on a commonly defined goal.
An appropriate tool set evaluates and visualizes these metrics with minimal effort, prompting all participants to focus on their tasks with appropriate effort. Thus project status, challenges, and progress are transparent to all research collaboration members at any time.
Software Stories Guide
(2017)
Often, research results from collaboration projects are not transferred into productive environments, even though the approaches are proven to work in demonstration prototypes. These demonstration prototypes are usually too fragile and error-prone to be transferred easily into productive environments; a lot of additional work is required.
Inspired by the idea of an incremental delivery process, we introduce an architecture pattern which combines the approach of Metrics Driven Research Collaboration with microservices for ease of integration. It enables keeping track of project goals over the course of the collaboration while every party may focus on their expert skills: researchers may focus on complex algorithms, practitioners may focus on their business goals.
Through the simplified integration, (intermediate) research results can be introduced into a productive environment, which enables early user feedback and allows for the early evaluation of different approaches. The practitioners' business model benefits throughout the full project duration.
This paper presents NLP Lean Programming framework (NLPf), a new framework for creating custom natural language processing (NLP) models and pipelines by utilizing common software development build systems. This approach allows developers to train and integrate domain-specific NLP pipelines into their applications seamlessly. Additionally, NLPf provides an annotation tool which significantly improves the annotation process through a well-designed GUI and a sophisticated way of using input devices. Due to NLPf's properties, developers and domain experts are able to build domain-specific NLP applications more efficiently. NLPf is open-source software and available at https://gitlab.com/schrieveslaach/NLPf.