Publications
2024
Brand, Florian; Malburg, Lukas; Bergmann, Ralph
Large Language Models as Knowledge Engineers Proceedings Article
In: Malburg, Lukas (Ed.): Proceedings of the Workshops at the 32nd International Conference on Case-Based Reasoning (ICCBR-WS 2024) co-located with the 32nd International Conference on Case-Based Reasoning (ICCBR 2024), Mérida, Mexico, July 1, 2024, CEUR Workshop Proceedings, vol. 3708, pp. 3–18, CEUR-WS.org, 2024.
Abstract | Links | BibTeX | Tags: Case-Based Reasoning, Knowledge Acquisition Bottleneck, Knowledge Engineering, Large Language Models, Prompting
@inproceedings{Brand.2024_LLMKnowledgeEngineer,
title = {Large Language Models as Knowledge Engineers},
author = {Florian Brand and Lukas Malburg and Ralph Bergmann},
editor = {Lukas Malburg},
url = {https://www.wi2.uni-trier.de/shared/publications/2024_ICCBR-WS_LLMInCBR_BrandEtAl.pdf},
year = {2024},
date = {2024-01-01},
booktitle = {Proceedings of the Workshops at the 32nd International Conference
on Case-Based Reasoning (ICCBR-WS 2024) co-located with the 32nd
International Conference on Case-Based Reasoning (ICCBR 2024), Mérida,
Mexico, July 1, 2024},
volume = {3708},
pages = {3–18},
publisher = {CEUR-WS.org},
series = {CEUR Workshop Proceedings},
abstract = {Many Artificial Intelligence (AI) systems require human-engineered knowledge at their core to reason about new problems based on this knowledge, with Case-Based Reasoning (CBR) being no exception. However, the acquisition of this knowledge is a time-consuming and laborious task for the domain experts who provide the needed knowledge. We propose an approach that supports the creation of this knowledge by leveraging Large Language Models (LLMs) in conjunction with existing knowledge to create the vocabulary and case base for a complex real-world domain. We find that LLMs are capable of generating knowledge, with results improving when natural language and instructions are used. Furthermore, permissively licensed models like CodeLlama and Mixtral perform similarly to or better than closed state-of-the-art models like GPT-3.5 Turbo and GPT-4 Turbo.},
keywords = {Case-Based Reasoning, Knowledge Acquisition Bottleneck, Knowledge Engineering, Large Language Models, Prompting},
pubstate = {published},
tppubtype = {inproceedings}
}