Conference Proceeding
Keywords
- CAD (11)
- civil engineering (11)
- Bauingenieurwesen [civil engineering] (10)
- Natural language processing (4)
- Clustering (2)
- Information extraction (2)
- Active learning (1)
- CAD (1)
- Deep learning (1)
- Human Factors (1)
- Information Extraction (1)
- Information Integration Tools (1)
- Knowledge Management (1)
- Machine learning (1)
- Natural Language Processing (1)
- Natural language understanding (1)
- Ontologie <Wissensverarbeitung> [ontology (knowledge processing)] (1)
- Ontology Engineering (1)
- Process model (1)
- Profile Extraction (1)
- Profile extraction (1)
- Query learning (1)
- Relation classification (1)
- Reproducible research (1)
- Text Mining (1)
- Text mining (1)
- Trustworthy artificial intelligence (1)
- UML (1)
- Unified Modeling Language (1)
The integration of frequently changing, volatile product data from different manufacturers into a single catalog is a significant challenge for small and medium-sized e-commerce companies. They rely on integrating product data in a timely manner to present it in aggregated form in an online shop, without knowing the manufacturers' format specifications, conceptual understanding, or data quality. Furthermore, format, concepts, and data quality may change at any time. Consequently, integrating product catalogs into a single standardized catalog is often a laborious manual task. Current strategies to streamline or automate catalog integration use techniques based on machine learning, word vectorization, or semantic similarity. However, most approaches struggle with low-quality or real-world data. We propose Attribute Label Ranking (ALR), a recommendation engine that simplifies the integration of previously unknown, proprietary tabular formats into a standardized catalog for practitioners. We evaluate ALR by focusing on the impact of different neural network architectures, language features, and semantic similarity. Additionally, we consider metrics relevant for industrial application and discuss the impact of ALR in production and its limitations.
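To illustrate the general idea of ranking standardized catalog attributes for an unknown source column, the sketch below scores each target label against a source column header and a few sample cell values using a simple TF-IDF character n-gram similarity. The target labels, function names, and the similarity measure are illustrative assumptions and not the model described in the abstract.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical standardized catalog attributes (assumed for this sketch).
TARGET_LABELS = ["article_number", "color", "net_weight_kg", "list_price_eur"]

def rank_labels(source_header, sample_values, top_k=3):
    """Rank standardized labels for one source column by textual similarity."""
    query = " ".join([source_header, *sample_values])
    corpus = TARGET_LABELS + [query]
    vectors = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(corpus)
    scores = cosine_similarity(vectors[len(TARGET_LABELS)], vectors[:len(TARGET_LABELS)]).ravel()
    ranked = sorted(zip(TARGET_LABELS, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

# Example: a German column header with two sample values from a manufacturer table.
print(rank_labels("Artikelnr.", ["10-4711", "10-4712"]))
```

In practice such a recommendation engine would replace the hand-crafted similarity with learned language features, but the ranking output (a short list of candidate labels per column) is the part a practitioner reviews during integration.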
Supervised machine learning and deep learning require large amounts of labeled data, which data scientists obtain through a manual, time-consuming annotation process. To mitigate this challenge, Active Learning (AL) proposes promising data points for annotators to label next, instead of a sequential or random sample. This method aims to save annotation effort while maintaining model performance.
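A minimal sketch of this pool-based loop is shown below, using uncertainty sampling as one example query strategy; the dataset, the logistic-regression model, and the helper names are assumptions made for illustration, not part of the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(model, pool_X, batch_size=10):
    """Pick the pool items the current model is least confident about."""
    confidence = model.predict_proba(pool_X).max(axis=1)
    return np.argsort(confidence)[:batch_size]

def active_learning_loop(X, y_oracle, seed_size=20, batch_size=10, budget=100):
    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X), size=seed_size, replace=False))
    pool = [i for i in range(len(X)) if i not in labeled]

    while len(labeled) < seed_size + budget and pool:
        model = LogisticRegression(max_iter=1000).fit(X[labeled], y_oracle[labeled])
        picks = uncertainty_sampling(model, X[pool], batch_size)
        chosen = [pool[i] for i in picks]
        labeled.extend(chosen)          # in a real setting, a human labels these items
        pool = [i for i in pool if i not in chosen]
    return model

# Synthetic stand-in data; y_oracle plays the role of the human annotator.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
final_model = active_learning_loop(X, y)
```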
However, practitioners face many AL strategies for different tasks and need an empirical basis for choosing between them. Surveys categorize AL strategies into taxonomies without indicating their performance, and papers presenting novel AL strategies compare their performance only to a small subset of existing strategies. Our contribution provides this empirical basis by introducing a reproducible active learning evaluation (ALE) framework for the comparative evaluation of AL strategies in NLP.
The framework allows AL strategies to be implemented with low effort and compared fairly in a data-driven way by defining and tracking experiment parameters (e.g., initial dataset size, number of data points per query step, and the annotation budget). ALE helps practitioners make more informed decisions, while researchers can focus on developing new, effective AL strategies and deriving best practices for specific use cases. With such best practices, practitioners can lower their annotation costs. We present a case study to illustrate how to use the framework.
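The sketch below shows one plausible way to pin down and persist the experiment parameters named above so that different strategy runs remain comparable; the class and field names are hypothetical and do not represent ALE's actual API.

```python
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class ALExperimentConfig:
    # All names and defaults are illustrative assumptions.
    dataset: str = "conll2003"        # corpus used for the simulated annotation
    strategy: str = "uncertainty"     # AL strategy under evaluation
    initial_dataset_size: int = 100   # size of the seed set
    query_step_size: int = 50         # data points proposed per query step
    budget: int = 2000                # total number of annotations allowed
    seed: int = 42                    # fixed seed for reproducible runs

def track(config: ALExperimentConfig, path: str = "experiment.json") -> None:
    """Persist the configuration next to the results so runs stay comparable."""
    with open(path, "w", encoding="utf-8") as fh:
        json.dump(asdict(config), fh, indent=2)

# Two runs that differ only in the query strategy can then be compared fairly.
track(ALExperimentConfig(strategy="random"), path="random_baseline.json")
```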