wrong bibliography -.-

Nicole Dresselhaus
2025-05-09 22:42:09 +02:00
parent afdd11718c
commit be80d335ff
7 changed files with 77 additions and 192 deletions

dist/search.json

@@ -225,7 +225,7 @@
"href": "Writing/ner4all-case-study.html#conclusion",
"title": "Case Study: Local LLM-Based NER with n8n and Ollama",
"section": "Conclusion",
"text": "Conclusion\nBy following this guide, we implemented the NER4All papers methodology with a local, reproducible setup. We used n8n to handle automation and prompt assembly, and a local LLM (via Ollama) to perform the heavy-duty language understanding. The result is a flexible NER pipeline that requires no training data or API access just a well-crafted prompt and a powerful pretrained model. We demonstrated how a user can specify custom entity types and get their text annotated in one click or API call. The approach leverages the strengths of LLMs (vast knowledge and language proficiency) to adapt to historical or niche texts, aligning with the papers finding that a bit of context and expert prompt design can unlock high NER performance.\nImportantly, this setup is easy to reproduce: all components are either open-source or freely available (n8n, Ollama, and the models). A research engineer or historian can run it on a single machine with sufficient resources, and it can be shared as a workflow file for others to import. By removing the need for extensive data preparation or model training, this lowers the barrier to extracting structured information from large text archives.\nMoving forward, users can extend this case study in various ways: adding more entity types (just update the definitions input), switching to other LLMs as they become available (perhaps a future 20B model with even better understanding), or integrating the output with databases or search indexes for further analysis. With the rapid advancements in local AI models, we anticipate that such pipelines will become even more accurate and faster over time, continually democratizing access to advanced NLP for all domains.\nSources: This implementation draws on insights from Ahmed et al. (2025) for the prompt-based NER method, and uses tools like n8n and Ollama as documented in their official guides. The chosen models (DeepSeek-R1 and Cogito) are described in their respective releases. 
All software and models are utilized in accordance with their licenses for a fully local deployment.",
"text": "Conclusion\nBy following this guide, we implemented the NER4All papers methodology with a local, reproducible setup. We used n8n to handle automation and prompt assembly, and a local LLM (via Ollama) to perform the heavy-duty language understanding. The result is a flexible NER pipeline that requires no training data or API access just a well-crafted prompt and a powerful pretrained model. We demonstrated how a user can specify custom entity types and get their text annotated in one click or API call. The approach leverages the strengths of LLMs (vast knowledge and language proficiency) to adapt to historical or niche texts, aligning with the papers finding that a bit of context and expert prompt design can unlock high NER performance.\nImportantly, this setup is easy to reproduce: all components are either open-source or freely available (n8n, Ollama, and the models). A research engineer or historian can run it on a single machine with sufficient resources, and it can be shared as a workflow file for others to import. By removing the need for extensive data preparation or model training, this lowers the barrier to extracting structured information from large text archives.\nMoving forward, users can extend this case study in various ways: adding more entity types (just update the definitions input), switching to other LLMs as they become available (perhaps a future 20B model with even better understanding), or integrating the output with databases or search indexes for further analysis. With the rapid advancements in local AI models, we anticipate that such pipelines will become even more accurate and faster over time, continually democratizing access to advanced NLP for all domains.\nSources: This implementation draws on insights from [1] for the prompt-based NER method, and uses tools like n8n and Ollama as documented in their official guides. The chosen models (DeepSeek-R1[2] and Cogito[3]) are described in their respective releases. 
All software and models are utilized in accordance with their licenses for a fully local deployment.",
"crumbs": [
"Home",
"Serious",