<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Blog | Heye's Blog</title><link>https://heye.dev/posts/</link><atom:link href="https://heye.dev/posts/index.xml" rel="self" type="application/rss+xml"/><description>Blog</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><image><url>https://heye.dev/media/sharing.png</url><title>Blog</title><link>https://heye.dev/posts/</link></image><item><title>Semantic Public Learning</title><link>https://heye.dev/posts/introducing-semantic-public-learning--74jjna98d/</link><pubDate>Tue, 12 Aug 2025 00:00:00 +0000</pubDate><guid>https://heye.dev/posts/introducing-semantic-public-learning--74jjna98d/</guid><description>
&lt;div class="border border-gray-200 dark:border-gray-700 rounded-lg bg-gradient-to-r from-blue-50 to-indigo-50 dark:from-gray-800 dark:to-gray-700 border-l-4 border-blue-500 dark:border-blue-400 rounded-lg p-4 mb-6 shadow-sm">
&lt;div class="flex items-center justify-between flex-wrap gap-3">
&lt;div class="flex items-center space-x-3">
&lt;div class="flex-shrink-0 text-primary-800 dark:text-primary-200">
&lt;svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24">
&lt;path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M19 11H5m14 0a2 2 0 012 2v6a2 2 0 01-2 2H5a2 2 0 01-2-2v-6a2 2 0 012-2m14 0V9a2 2 0 00-2-2M5 11V9a2 2 0 012-2m0 0V5a2 2 0 012-2h6a2 2 0 012 2v2M7 7h10">&lt;/path>
&lt;/svg>
&lt;/div>
&lt;div>
&lt;div class="text-sm text-primary-600 dark:text-primary-300 font-medium">Post 2 of 2 in the &lt;a href="https://heye.dev/series/semantic-public-learning-framework/" class="text-primary-600 dark:text-primary-300">Series: Semantic Public Learning Framework&lt;/a> series&lt;/div>
&lt;/div>
&lt;/div>
&lt;a href="https://heye.dev/series/semantic-public-learning-framework/" class="text-blue-600 dark:text-blue-400 hover:text-blue-800 dark:hover:text-blue-200 text-sm font-medium">
View All →
&lt;/a>
&lt;/div>
&lt;/div>
&lt;h2 id="_evolving-the-learn-in-public-method_">&lt;em>Evolving the Learn in Public method&lt;/em>&lt;/h2>
&lt;p>In my last post I introduced the &lt;a href="https://heye.dev/posts/learn-in-public-method--74hsyqc9h/">Learn in Public method&lt;/a>. While practicing it, I noticed gaps in how knowledge is created and shared, and those gaps revealed an opportunity to evolve the practice into something more powerful. The core principle of learning openly remains sound, but I have begun developing an approach I call &lt;em>Semantic Public Learning&lt;/em>: enhancing Learn in Public with academic rigor and semantic web integration.&lt;/p>
&lt;p>Semantic Public Learning builds on Learn in Public&amp;rsquo;s foundation while adding:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Rigorous citation practices&lt;/strong> with proper bibliographies&lt;/li>
&lt;li>&lt;strong>Semantic markup&lt;/strong> for machine-readable knowledge artifacts&lt;/li>
&lt;li>&lt;strong>Integration with the knowledge graph&lt;/strong> through indexable structured data (like schema.org markup for articles)&lt;/li>
&lt;li>&lt;strong>Academic standards&lt;/strong> while maintaining accessibility&lt;/li>
&lt;/ul>
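&lt;p>As a concrete sketch, the structured data for a post like this one can be embedded as JSON-LD (the values below are placeholders, not this site&amp;rsquo;s actual markup):&lt;/p>
&lt;pre>&lt;code>&amp;lt;script type="application/ld+json"&amp;gt;
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Semantic Public Learning",
  "author": { "@type": "Person", "name": "Your Name" },
  "datePublished": "2025-08-12",
  "citation": ["https://doi.org/10.1007/s10648-021-09643-4"]
}
&amp;lt;/script&amp;gt;
&lt;/code>&lt;/pre>
&lt;p>Search engines and AI agents can parse such a block directly, which is what turns a learning artifact into a node in the knowledge graph rather than just a page of prose.&lt;/p>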
&lt;p>This isn&amp;rsquo;t about replacing Learn in Public; it&amp;rsquo;s about evolving it for deeper integration into our collective knowledge infrastructure. Just as
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://medium.com/@dan_abramov/why-my-new-blog-isnt-on-medium-3b280282fbae" target="_blank" rel="noopener" itemprop="url">Dan Abramov moved his blog off Medium&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-abramovWhyMyNew2019"
title="Jump to reference 1"
aria-label="Citation 1"
itemprop="identifier">1&lt;/a>
&lt;/sup>
&lt;/cite>&lt;style>
.inline-citation {
font-style: normal;
}
.inline-citation a{
font-weight: 400;
}
.cite-num {
margin-left: -0.2em;
margin-right: 0.2em;
line-height: 0;
}
.cite-num a {
text-decoration: none;
font-weight: 500;
}
.cite-num a:hover {
text-decoration: underline;
}
&lt;/style>
to have more control over his content&amp;rsquo;s permanence and discoverability, we need to ensure our learning artifacts become lasting, findable contributions to the knowledge ecosystem.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="Semantic Public Learning: Evolving the Learn in Public method" srcset="
/posts/introducing-semantic-public-learning--74jjna98d/featured_hu15623601471117887854.webp 400w,
/posts/introducing-semantic-public-learning--74jjna98d/featured_hu8834183231680847117.webp 760w,
/posts/introducing-semantic-public-learning--74jjna98d/featured_hu7997346055494352704.webp 1200w"
src="https://heye.dev/posts/introducing-semantic-public-learning--74jjna98d/featured_hu15623601471117887854.webp"
width="760"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h2 id="the-four-pillars-of-semantic-public-learning">The Four Pillars of Semantic Public Learning&lt;/h2>
&lt;p>To bridge the gap between casual learning documentation and semantic knowledge contribution, we need a systematic approach. Based on my analysis of the research and practice, I&amp;rsquo;ve identified four interconnected mechanisms through which Semantic Public Learning enhances the Learn in Public approach:&lt;/p>
&lt;h3 id="1-semantically-enhanced-documentation">1. Semantically-Enhanced Documentation&lt;/h3>
&lt;p>Unlike private note-taking, learning in public requires
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.edutopia.org/article/how-teachers-can-use-pedagogical-documentation-reflection-and-planning/" target="_blank" rel="noopener" itemprop="url">documenting the learning process&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-martirenaHowTeachersCan"
title="Jump to reference 2"
aria-label="Citation 2"
itemprop="identifier">2&lt;/a>
&lt;/sup>
&lt;/cite>
in ways that show thinking progression and can be used for future curriculum planning by both the learner and others. Semantic Public Learning adds structured data and proper markup to ensure machine readability.&lt;/p>
&lt;h3 id="2-citation-backed-explanation">2. Citation-Backed Explanation&lt;/h3>
&lt;p>Following Feynman&amp;rsquo;s principle, learning in public demands
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.teachthought.com/learning-posts/how-to-use-the-feynman-technique-learning-by-simplifying/" target="_blank" rel="noopener" itemprop="url">learning by simplifying&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-founderWhatFeynmanTechnique2019"
title="Jump to reference 3"
aria-label="Citation 3"
itemprop="identifier">3&lt;/a>
&lt;/sup>
&lt;/cite>
, breaking concepts into core components while making them accessible. Semantic Public Learning enhances this with proper citations and references, bringing academic credibility to accessible writing.&lt;/p>
&lt;h3 id="3-community-validated-refinement">3. Community-Validated Refinement&lt;/h3>
&lt;p>Learning-by-teaching effects
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://doi.org/10.1007/s10648-021-09643-4" target="_blank" rel="noopener" itemprop="url">occur even with non-interactive audiences&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-lachnerLearningbyTeachingAudiencePresence2022"
title="Jump to reference 4"
aria-label="Citation 4"
itemprop="identifier">4&lt;/a>
&lt;/sup>
&lt;/cite>
, but boundary conditions include having sufficient prior knowledge and the ability to generate high-quality explanations. Public feedback enables continuous improvement of both understanding and explanation quality.&lt;/p>
&lt;h3 id="4-knowledge-graph-integration">4. Knowledge Graph Integration&lt;/h3>
&lt;p>Artifacts produced by learning in public
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.researchgate.net/publication/340834975_Semantic_knowledge_networks_in_education" target="_blank" rel="noopener" itemprop="url">become semantic knowledge networks&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-kivSemanticKnowledgeNetworks"
title="Jump to reference 5"
aria-label="Citation 5"
itemprop="identifier">5&lt;/a>
&lt;/sup>
&lt;/cite>
that allow analysis of connections between different disciplines and concepts. Semantic Public Learning ensures these connections are machine-readable and can be properly indexed (using semantic HTML tags like &lt;code>&amp;lt;article&amp;gt;&lt;/code>, &lt;code>&amp;lt;cite&amp;gt;&lt;/code>, and microdata).&lt;/p>
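&lt;p>A simplified version of the citation markup used on this very page illustrates the idea; the &lt;code>itemscope&lt;/code> and &lt;code>itemprop&lt;/code> attributes are what make the source machine-readable:&lt;/p>
&lt;pre>&lt;code>&amp;lt;article itemscope itemtype="https://schema.org/Article"&amp;gt;
  &amp;lt;p&amp;gt;Learning-by-teaching effects
    &amp;lt;cite itemscope itemtype="https://schema.org/Citation"&amp;gt;
      &amp;lt;a itemprop="url" href="https://doi.org/10.1007/s10648-021-09643-4"&amp;gt;occur even with non-interactive audiences&amp;lt;/a&amp;gt;
    &amp;lt;/cite&amp;gt;.
  &amp;lt;/p&amp;gt;
&amp;lt;/article&amp;gt;
&lt;/code>&lt;/pre>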
&lt;h2 id="practicing-semantic-public-learning-real-world-examples">Practicing Semantic Public Learning: Real-World Examples&lt;/h2>
&lt;p>These examples showcase aspects of an evolved version of learning in public, each demonstrating elements that Semantic Public Learning aims to integrate:&lt;/p>
&lt;p>&lt;strong>Academic Twitter&lt;/strong>: Researchers sharing work-in-progress, methodology questions, and &amp;ldquo;failed&amp;rdquo; experiments contribute to learning in public by making the typically private research process visible and collaborative
&lt;cite class="inline-citation-number" itemscope itemtype="https://schema.org/Citation">
&lt;sup class="cite-num">
&lt;a href="#ref-mojaradBeginnersGuideAcademic2020"
title="Jump to reference 6"
aria-label="Citation 6"
itemprop="identifier">6&lt;/a>
&lt;/sup>
&lt;/cite>
.
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.researchgate.net/publication/306426174_Twitter_as_Method_Using_Twitter_as_a_Tool_to_Conduct_Research" target="_blank" rel="noopener" itemprop="url">Academic Twitter creates informal peer review networks&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-stewartTwitterMethodUsing2016"
title="Jump to reference 7"
aria-label="Citation 7"
itemprop="identifier">7&lt;/a>
&lt;/sup>
&lt;/cite>
where knowledge is rapidly shared and validated, though the ephemeral nature of tweets limits long-term semantic integration.&lt;/p>
&lt;p>&lt;strong>Citation Style Language (CSL) Project&lt;/strong>: The
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://citationstyles.org/" target="_blank" rel="noopener" itemprop="url">CSL project&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-CitationStyleLanguage"
title="Jump to reference 8"
aria-label="Citation 8"
itemprop="identifier">8&lt;/a>
&lt;/sup>
&lt;/cite>
exemplifies open source documentation with academic rigor, maintaining over 10,000 citation styles with
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://github.com/citation-style-language/styles" target="_blank" rel="noopener" itemprop="url">1,000&amp;#43; unique contributors&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-CitationstylelanguageStylesOfficial"
title="Jump to reference 9"
aria-label="Citation 9"
itemprop="identifier">9&lt;/a>
&lt;/sup>
&lt;/cite>
. The project creates semantic, machine-readable citation formats and demonstrates transparent version control, extensive documentation, and community-driven development. While excellent for citation infrastructure, it focuses more on tooling than on documenting learning journeys.&lt;/p>
&lt;p>&lt;strong>NileRed&amp;rsquo;s Chemistry Experiments&lt;/strong>: YouTuber
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://nile.red" target="_blank" rel="noopener" itemprop="url">NileRed&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-NileRedOfficialWebsite"
title="Jump to reference 10"
aria-label="Citation 10"
itemprop="identifier">10&lt;/a>
&lt;/sup>
&lt;/cite>
documents his chemistry experiments in real-time, showing failed attempts and explaining his reasoning process on his
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.youtube.com/channel/UCFhXFikryT4aFcLkLw2LBLA" target="_blank" rel="noopener" itemprop="url">YouTube channel&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-brawnNileRed"
title="Jump to reference 11"
aria-label="Citation 11"
itemprop="identifier">11&lt;/a>
&lt;/sup>
&lt;/cite>
. Originally, Nigel kept this documentation for personal reasons, but in
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://starsbiopedia.com/nilered-nigel-braun-age-biography/" target="_blank" rel="noopener" itemprop="url">March 2014 decided&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-rajapakshaWhoNileRedUntold2023"
title="Jump to reference 12"
aria-label="Citation 12"
itemprop="identifier">12&lt;/a>
&lt;/sup>
&lt;/cite>
to share his experiments with the world, transforming personal lab notes into public learning resources. His videos reveal the iterative problem-solving that leads to understanding, though they lack formal citations and bibliographies.&lt;/p>
&lt;p>&lt;strong>Mozilla Developer Network (MDN) Web Docs&lt;/strong>: The
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://github.com/mdn" target="_blank" rel="noopener" itemprop="url">MDN Web Docs&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-MDNWebDocs"
title="Jump to reference 13"
aria-label="Citation 13"
itemprop="identifier">13&lt;/a>
&lt;/sup>
&lt;/cite>
demonstrate massive collaborative documentation. Each page shows contributors and modification dates, uses semantic HTML, and creates a living knowledge base. However, it rarely includes formal citations to academic sources or bibliographies.&lt;/p>
&lt;p>&lt;strong>3Blue1Brown&amp;rsquo;s Mathematical Explanations&lt;/strong>:
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.3blue1brown.com/3blue1brown.com" target="_blank" rel="noopener" itemprop="url">Grant Sanderson&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-3Blue1Brown"
title="Jump to reference 14"
aria-label="Citation 14"
itemprop="identifier">14&lt;/a>
&lt;/sup>
&lt;/cite>
transforms abstract mathematical concepts into visual narratives, documenting his journey of understanding while creating resources that help others grasp complex topics through his
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw" target="_blank" rel="noopener" itemprop="url">YouTube videos&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-3Blue1Browna"
title="Jump to reference 15"
aria-label="Citation 15"
itemprop="identifier">15&lt;/a>
&lt;/sup>
&lt;/cite>
.
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://stanforddaily.com/2020/01/24/3blue1brown-creator-grant-sanderson-15-talks-engaging-with-math-using-stories-and-visuals/" target="_blank" rel="noopener" itemprop="url">He explains&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-wei3Blue1BrownCreatorGrant2020"
title="Jump to reference 16"
aria-label="Citation 16"
itemprop="identifier">16&lt;/a>
&lt;/sup>
&lt;/cite>
concepts ranging from linear algebra to neural networks with a highly visual approach, offers his
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://github.com/3b1b/manim" target="_blank" rel="noopener" itemprop="url">Manim visualization library&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-sanderson3b1bManim2025"
title="Jump to reference 17"
aria-label="Citation 17"
itemprop="identifier">17&lt;/a>
&lt;/sup>
&lt;/cite>
as open source, and includes extensive references in video descriptions. Of the examples mentioned here, this comes closest to Semantic Public Learning, combining accessible explanation with source attribution.&lt;/p>
&lt;p>&lt;strong>This Post as Semantic Public Learning&lt;/strong>: This very post demonstrates Semantic Public Learning in action. Notice the inline citations linking to sources, the complete bibliography at the end, and the semantic markup of meta information that makes this content discoverable and citable by others. The citation tools I&amp;rsquo;m &lt;a href="https://heye.dev/projects/semantic-public-learning--74jz79y2q/">developing as part of this project&lt;/a> will enable this integration for Hugo Blox, to aid in creating knowledge artifacts that are both human-readable and machine-discoverable.&lt;/p>
&lt;h2 id="what-makes-semantic-public-learning-different">What Makes Semantic Public Learning Different&lt;/h2>
&lt;p>Semantic Public Learning occupies a unique space in the knowledge-sharing ecosystem. Unlike structured curricula that follow predetermined paths, it embraces the learner&amp;rsquo;s authentic discovery journey while adding layers of discoverability and verifiability. Think of it as sitting between traditional blogging and academic publishing, maintaining higher citation standards than typical blog posts while removing the high entry barriers of academic journals.&lt;/p>
&lt;p>You don&amp;rsquo;t need formal peer review to share your learning journey. Peer review happens organically through reader comments and community engagement. What remains crucial is grounding your insights in evidence and acknowledging sources. That&amp;rsquo;s why you&amp;rsquo;ll find proper citations with hyperlinks throughout my posts, along with a complete bibliography at the end, bringing academic rigor to accessible writing.&lt;/p>
&lt;p>The key differentiator: Semantic Public Learning creates &lt;strong>machine-readable, discoverable, and verifiable knowledge artifacts&lt;/strong> that can be:&lt;/p>
&lt;ul>
&lt;li>Found by search engines and AI agents&lt;/li>
&lt;li>Cited by others with confidence&lt;/li>
&lt;li>Built upon systematically&lt;/li>
&lt;li>Verified through source tracking&lt;/li>
&lt;/ul>
&lt;h2 id="the-broader-impact">The Broader Impact&lt;/h2>
&lt;p>When individuals practice Semantic Public Learning, their personal learning journeys become contributions to what researchers call
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.sciencedirect.com/science/article/pii/S1471772722000239" target="_blank" rel="noopener" itemprop="url">knowledge commoning&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-qureshiKnowledgeCommoningScaffolding2022"
title="Jump to reference 18"
aria-label="Citation 18"
itemprop="identifier">18&lt;/a>
&lt;/sup>
&lt;/cite>
, an iterative process between knowledge curation and dissemination guided by community demand and uptake potential.&lt;/p>
&lt;p>This creates compounding benefits:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>For Learners&lt;/strong>: Enhanced understanding through public accountability and community feedback&lt;/li>
&lt;li>&lt;strong>For Communities&lt;/strong>: Accessible knowledge resources that lower barriers to learning&lt;/li>
&lt;li>&lt;strong>For Knowledge&lt;/strong>: Dynamic, interconnected networks that reveal new connections and applications&lt;/li>
&lt;li>&lt;strong>For the Future&lt;/strong>: AI-discoverable knowledge that can be integrated into emerging systems&lt;/li>
&lt;/ul>
&lt;h2 id="the-semantic-public-learning-movement">The Semantic Public Learning Movement&lt;/h2>
&lt;p>We&amp;rsquo;re at a unique moment. Knowledge-sharing is becoming the
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.chieflearningofficer.com/2021/08/16/open-learning-and-knowledge-sharing-in-a-remote-working-world/" target="_blank" rel="noopener" itemprop="url">preferred method&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-spiroOpenLearningKnowledge2021"
title="Jump to reference 19"
aria-label="Citation 19"
itemprop="identifier">19&lt;/a>
&lt;/sup>
&lt;/cite>
over traditional learning interventions, with employees increasingly training at their own pace and supporting each other through knowledge-sharing in multiple formats.&lt;/p>
&lt;p>Semantic Public Learning harnesses this shift by making individual learning processes visible, valuable, and &lt;em>findable&lt;/em> to others. It transforms the traditional model where learning happens in isolation and knowledge remains trapped in individual minds or unsearchable formats.&lt;/p>
&lt;p>Instead of asking &amp;ldquo;What do I need to learn?&amp;rdquo; Semantic Public Learning asks: &amp;ldquo;How can my learning process create lasting, discoverable value for others while accelerating my own understanding?&amp;rdquo;&lt;/p>
&lt;h2 id="your-semantic-public-learning-challenge">Your Semantic Public Learning Challenge&lt;/h2>
&lt;p>Semantic Public Learning isn&amp;rsquo;t just a concept, it&amp;rsquo;s a practice to embrace. The most powerful way to grasp its potential is to experience it yourself.&lt;/p>
&lt;p>Here&amp;rsquo;s your challenge: &lt;strong>Pick one thing you&amp;rsquo;re currently learning and document it with semantic rigor to create a lasting, citable resource.&lt;/strong>
It could be:&lt;/p>
&lt;ul>
&lt;li>A complex concept you&amp;rsquo;re struggling with at work&lt;/li>
&lt;li>A skill you&amp;rsquo;re developing in your free time&lt;/li>
&lt;li>A research question you&amp;rsquo;re investigating&lt;/li>
&lt;li>A book or paper you&amp;rsquo;re working through&lt;/li>
&lt;/ul>
&lt;p>Start with these steps:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Choose a specific concept&lt;/strong> you&amp;rsquo;re actively learning (not something you already know)&lt;/li>
&lt;li>&lt;strong>Write a clear explanation&lt;/strong> as if teaching someone new to the topic&lt;/li>
&lt;li>&lt;strong>Add proper citations&lt;/strong> for every source that informed your understanding&lt;/li>
&lt;li>&lt;strong>Include semantic markup&lt;/strong> (proper headings, meta descriptions, structured data)&lt;/li>
&lt;li>&lt;strong>Share your questions&lt;/strong> and confusion points openly&lt;/li>
&lt;li>&lt;strong>Publish it publicly&lt;/strong> on your blog, GitHub, or any platform that supports proper formatting&lt;/li>
&lt;/ol>
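&lt;p>For step 4, the markup can stay minimal; a skeleton like the following (with placeholder values) already gives your post proper heading structure, a meta description, and a place for references:&lt;/p>
&lt;pre>&lt;code>&amp;lt;head&amp;gt;
  &amp;lt;title&amp;gt;What I Learned About Topic X&amp;lt;/title&amp;gt;
  &amp;lt;meta name="description" content="A one-sentence summary of the concept and what I learned."&amp;gt;
&amp;lt;/head&amp;gt;
&amp;lt;body&amp;gt;
  &amp;lt;article&amp;gt;
    &amp;lt;h1&amp;gt;What I Learned About Topic X&amp;lt;/h1&amp;gt;
    &amp;lt;h2&amp;gt;The core idea&amp;lt;/h2&amp;gt;
    &amp;lt;h2&amp;gt;Open questions&amp;lt;/h2&amp;gt;
    &amp;lt;h2&amp;gt;References&amp;lt;/h2&amp;gt;
  &amp;lt;/article&amp;gt;
&amp;lt;/body&amp;gt;
&lt;/code>&lt;/pre>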
&lt;p>&lt;strong>Share your learning experiment&lt;/strong>: Use &lt;a href="https://x.com/search?q=%23SemanticPublicLearning" target="_blank" rel="noopener">&lt;strong>#SemanticPublicLearning&lt;/strong>&lt;/a> or &lt;a href="https://x.com/search?q=%23LearnInPublic" target="_blank" rel="noopener">&lt;strong>#LearnInPublic&lt;/strong>&lt;/a> when you post about your learning journey so others can discover and learn from your process. Whether on social media, your blog, or any platform where you share knowledge, make your learning visible and findable.&lt;/p>
&lt;p>The knowledge you&amp;rsquo;re about to discover will not just belong to you. It will belong to everyone who might benefit from watching you discover it, now and in the future.&lt;/p>
&lt;h2 id="join-the-semantic-public-learning-journey">Join the Semantic Public Learning Journey&lt;/h2>
&lt;p>Speaking of making knowledge tools accessible: I&amp;rsquo;m using &lt;a href="https://hugoblox.com/" target="_blank" rel="noopener">Hugo Blox&lt;/a> for managing my blog and have been developing enhanced citation and bibliography features along with semantic integration capabilities. These tools power the citations you see throughout this post and enable the semantic integration that makes Semantic Public Learning possible. Together they form a framework that I&amp;rsquo;ll be sharing over the coming weeks. Follow me on &lt;a href="https://www.linkedin.com/in/hvoecking" target="_blank" rel="noopener">&lt;strong>LinkedIn&lt;/strong>&lt;/a>, &lt;a href="https://x.com/heye_dev" target="_blank" rel="noopener">&lt;strong>Twitter/X&lt;/strong>&lt;/a>, or &lt;a href="https://hvoecking.medium.com/" target="_blank" rel="noopener">&lt;strong>Medium&lt;/strong>&lt;/a> to be the first to know and see Semantic Public Learning in action. You can also have a look at the &lt;a href="https://heye.dev/projects/semantic-public-learning--74jz79y2q/">Semantic Public Learning project page&lt;/a>, which serves as a living document tracking the journey as it develops. If you prefer to stay up to date without a social media account, subscribe to my blog&amp;rsquo;s &lt;a href="https://heye.dev/posts/index.xml">RSS feed&lt;/a>. Comment below, share your perspectives, and help make this the interactive process it&amp;rsquo;s meant to be. Your questions and insights often become the catalyst for my next learning breakthrough.&lt;/p>
&lt;p>Remember: every expert was once a beginner who learned in public. By adding semantic layers to our learning, we ensure that knowledge remains discoverable for future learners. See you in the comments!&lt;/p>
&lt;section class="bibliography" itemscope>
&lt;h2>References&lt;/h2>
&lt;ol class="references" role="list">
&lt;li class="reference" itemprop="citation" id="ref-abramovWhyMyNew2019" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 1">1.&lt;/span>
&lt;span class="authors" itemprop="author">Abramov, D.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2019-%!d(float64=02)-%!d(float64=23)">
(2019, 23).
&lt;/time>
&lt;span class="title" itemprop="name">Why My New Blog Isn’t on Medium&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Medium&lt;/em>&lt;/span>.&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=08)-%!d(float64=01)">
Retrieved 1, 2025, from
&lt;/time>&lt;a href="https://medium.com/@dan_abramov/why-my-new-blog-isnt-on-medium-3b280282fbae" target="_blank" rel="noopener" itemprop="url">https://medium.com/@dan_abramov/why-my-new-blog-isnt-on-medium-3b280282fbae&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-martirenaHowTeachersCan" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 2">2.&lt;/span>
&lt;span class="authors" itemprop="author">Martirena, C. C.&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">How Teachers Can Use Pedagogical Documentation for Reflection and Planning&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Edutopia&lt;/em>&lt;/span>.&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=07)-%!d(float64=29)">
Retrieved 29, 2025, from
&lt;/time>&lt;a href="https://www.edutopia.org/article/how-teachers-can-use-pedagogical-documentation-reflection-and-planning/" target="_blank" rel="noopener" itemprop="url">https://www.edutopia.org/article/how-teachers-can-use-pedagogical-documentation-reflection-and-planning/&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-founderWhatFeynmanTechnique2019" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 3">3.&lt;/span>
&lt;span class="authors" itemprop="author">Founder, T. H. &amp;amp; TeachThought, D.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2019-%!d(float64=07)-%!d(float64=09)">
(2019, 9).
&lt;/time>
&lt;span class="title" itemprop="name">What Is The Feynman Technique?&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>TeachThought&lt;/em>&lt;/span>.&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=07)-%!d(float64=29)">
Retrieved 29, 2025, from
&lt;/time>&lt;a href="https://www.teachthought.com/learning-posts/how-to-use-the-feynman-technique-learning-by-simplifying/" target="_blank" rel="noopener" itemprop="url">https://www.teachthought.com/learning-posts/how-to-use-the-feynman-technique-learning-by-simplifying/&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-lachnerLearningbyTeachingAudiencePresence2022" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 4">4.&lt;/span>
&lt;span class="authors" itemprop="author">Lachner, A., Hoogerheide, V., Gog, T. &amp;amp; Renkl, A.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2022">
(2022).
&lt;/time>
&lt;span class="title" itemprop="name">Learning-by-Teaching Without Audience Presence or Interaction: When and Why Does it Work?&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Educational Psychology Review&lt;/em>&lt;/span>&lt;/span>&lt;span class="publication-info">&lt;em> 34 &lt;/em>(2), 575-607&lt;/span>&lt;span class="DOI" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="DOI">&lt;a href="https://doi.org/10.1007/s10648-021-09643-4" target="_blank" rel="noopener" itemprop="value">https://doi.org/10.1007/s10648-021-09643-4&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-kivSemanticKnowledgeNetworks" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 5">5.&lt;/span>
&lt;span class="authors" itemprop="author">Kiv, A., Soloviev, V., Tarasova, E., Koycheva, T. &amp;amp; Kolesnykova, K.&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">Semantic knowledge networks in education&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>ResearchGate&lt;/em>&lt;/span>.&lt;/span>&lt;span class="DOI" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="DOI">&lt;a href="https://doi.org/10.1051/e3sconf/202016610022" target="_blank" rel="noopener" itemprop="value">https://doi.org/10.1051/e3sconf/202016610022&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-mojaradBeginnersGuideAcademic2020" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 6">6.&lt;/span>
&lt;span class="authors" itemprop="author">Mojarad, S.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2020-%!d(float64=08)-%!d(float64=20)">
(2020, 20).
&lt;/time>
&lt;span class="title" itemprop="name">A Beginners Guide to Academic Twitter&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Medium&lt;/em>&lt;/span>.&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=07)-%!d(float64=29)">
Retrieved 29, 2025, from
&lt;/time>&lt;a href="https://medium.com/@smojarad/a-beginners-guide-to-academic-twitter-f483dae86597" target="_blank" rel="noopener" itemprop="url">https://medium.com/@smojarad/a-beginners-guide-to-academic-twitter-f483dae86597&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-stewartTwitterMethodUsing2016" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 7">7.&lt;/span>
&lt;span class="authors" itemprop="author">Stewart, B.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2016">
(2016).
&lt;/time>
&lt;span class="title" itemprop="name">Twitter as method: Using twitter as a tool to conduct research&lt;/span>
&lt;span class="ISBN" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="ISBN">ISBN: &lt;a href="https://www.researchgate.net/publication/306426174_Twitter_as_Method_Using_Twitter_as_a_Tool_to_Conduct_Research" target="_blank" rel="noopener" itemprop="value">978-1-4739-1632-6&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-CitationStyleLanguage" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 8">8.&lt;/span>
&lt;span class="authors" itemprop="author">&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">Citation Style Language&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Citation Style Language&lt;/em>&lt;/span>.&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=08)-%!d(float64=01)">
Retrieved 1, 2025, from
&lt;/time>&lt;a href="https://citationstyles.org/" target="_blank" rel="noopener" itemprop="url">https://citationstyles.org/&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-CitationstylelanguageStylesOfficial" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 9">9.&lt;/span>
&lt;span class="authors" itemprop="author">&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">citation-style-language/styles: Official repository for Citation Style Language (CSL) citation styles.&lt;/span>
&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=08)-%!d(float64=01)">
Retrieved 1, 2025, from
&lt;/time>&lt;a href="https://github.com/citation-style-language/styles" target="_blank" rel="noopener" itemprop="url">https://github.com/citation-style-language/styles&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-NileRedOfficialWebsite" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 10">10.&lt;/span>
&lt;span class="authors" itemprop="author">&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">NileRed - Official Website&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>NileRed&lt;/em>&lt;/span>.&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=07)-%!d(float64=29)">
Retrieved 29, 2025, from
&lt;/time>&lt;a href="https://nile.red" target="_blank" rel="noopener" itemprop="url">https://nile.red&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-brawnNileRed" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 11">11.&lt;/span>
&lt;span class="authors" itemprop="author">Brawn, N.&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">NileRed&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>YouTube&lt;/em>&lt;/span>.&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=08)-%!d(float64=01)">
Retrieved 1, 2025, from
&lt;/time>&lt;a href="https://www.youtube.com/channel/UCFhXFikryT4aFcLkLw2LBLA" target="_blank" rel="noopener" itemprop="url">https://www.youtube.com/channel/UCFhXFikryT4aFcLkLw2LBLA&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-rajapakshaWhoNileRedUntold2023" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 12">12.&lt;/span>
&lt;span class="authors" itemprop="author">Rajapaksha, M.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2023-%!d(float64=07)-%!d(float64=24)">
(2023, 24).
&lt;/time>
&lt;span class="title" itemprop="name">Who Is Behind NileRed? | The Untold Story of the TikTok Chemist Taking the Internet by Storm.&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Starsbiopedia.com&lt;/em>&lt;/span>.&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=08)-%!d(float64=01)">
Retrieved 1, 2025, from
&lt;/time>&lt;a href="https://starsbiopedia.com/nilered-nigel-braun-age-biography/" target="_blank" rel="noopener" itemprop="url">https://starsbiopedia.com/nilered-nigel-braun-age-biography/&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-MDNWebDocs" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 13">13.&lt;/span>
&lt;span class="authors" itemprop="author">&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">MDN Web Docs&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>GitHub&lt;/em>&lt;/span>.&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=08)-%!d(float64=01)">
Retrieved 1, 2025, from
&lt;/time>&lt;a href="https://github.com/mdn" target="_blank" rel="noopener" itemprop="url">https://github.com/mdn&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-3Blue1Brown" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 14">14.&lt;/span>
&lt;span class="authors" itemprop="author">&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">3Blue1Brown&lt;/span>
&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=07)-%!d(float64=29)">
Retrieved 29, 2025, from
&lt;/time>&lt;a href="https://www.3blue1brown.com/3blue1brown.com" target="_blank" rel="noopener" itemprop="url">https://www.3blue1brown.com/3blue1brown.com&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-3Blue1Browna" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 15">15.&lt;/span>
&lt;span class="authors" itemprop="author">&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">3Blue1Brown&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>YouTube&lt;/em>&lt;/span>.&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=08)-%!d(float64=01)">
Retrieved 1, 2025, from
&lt;/time>&lt;a href="https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw" target="_blank" rel="noopener" itemprop="url">https://www.youtube.com/channel/UCYO_jab_esuFRV4b17AJtAw&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-wei3Blue1BrownCreatorGrant2020" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 16">16.&lt;/span>
&lt;span class="authors" itemprop="author">Wei, P.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2020-%!d(float64=01)-%!d(float64=24)">
(2020, 24).
&lt;/time>
&lt;span class="title" itemprop="name">3Blue1Brown creator Grant Sanderson ’15 talks engaging with math using stories and visuals&lt;/span>
&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=08)-%!d(float64=01)">
Retrieved 1, 2025, from
&lt;/time>&lt;a href="https://stanforddaily.com/2020/01/24/3blue1brown-creator-grant-sanderson-15-talks-engaging-with-math-using-stories-and-visuals/" target="_blank" rel="noopener" itemprop="url">https://stanforddaily.com/2020/01/24/3blue1brown-creator-grant-sanderson-15-talks-engaging-with-math-using-stories-and-visuals/&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-sanderson3b1bManim2025" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 17">17.&lt;/span>
&lt;span class="authors" itemprop="author">Sanderson, G.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2025">
(2025).
&lt;/time>
&lt;span class="title" itemprop="name">3b1b/manim&lt;/span>
&lt;span class="URL">
&lt;a href="https://github.com/3b1b/manim" target="_blank" rel="noopener" itemprop="url">https://github.com/3b1b/manim&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-qureshiKnowledgeCommoningScaffolding2022" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 18">18.&lt;/span>
&lt;span class="authors" itemprop="author">Qureshi, I., Bhatt, B., Parthiban, R., Sun, R., Shukla, D. M., Hota, P. K. &amp;amp; Xu, Z.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2022">
(2022).
&lt;/time>
&lt;span class="title" itemprop="name">Knowledge Commoning: Scaffolding and Technoficing to Overcome Challenges of Knowledge Curation&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Information and Organization&lt;/em>&lt;/span>&lt;/span>&lt;span class="publication-info">&lt;em> 32 &lt;/em>(2), 100410&lt;/span>&lt;span class="DOI" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="DOI">&lt;a href="https://doi.org/10.1016/j.infoandorg.2022.100410" target="_blank" rel="noopener" itemprop="value">https://doi.org/10.1016/j.infoandorg.2022.100410&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-spiroOpenLearningKnowledge2021" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 19">19.&lt;/span>
&lt;span class="authors" itemprop="author">Spiro, K. &amp;amp; Bhamidi, V.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2021-%!d(float64=08)-%!d(float64=16)">
(2021, 16).
&lt;/time>
&lt;span class="title" itemprop="name">Open learning and knowledge sharing in a remote working world&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Chief Learning Officer&lt;/em>&lt;/span>.&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=07)-%!d(float64=29)">
Retrieved 29, 2025, from
&lt;/time>&lt;a href="https://www.chieflearningofficer.com/2021/08/16/open-learning-and-knowledge-sharing-in-a-remote-working-world/" target="_blank" rel="noopener" itemprop="url">https://www.chieflearningofficer.com/2021/08/16/open-learning-and-knowledge-sharing-in-a-remote-working-world/&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;/ol>
&lt;/section>
&lt;style>
.bibliography {
margin-top: 2rem;
padding-top: 1rem;
border-top: 2px solid #e5e7eb;
}
.bibliography h2 {
font-size: 1.5rem;
font-weight: 600;
margin-bottom: 1rem;
color: #374151;
}
.references {
list-style: none;
padding-left: 0;
margin: 0;
}
.reference {
margin-bottom: 1rem;
padding-left: 0;
line-height: 1.6;
font-size: 0.9rem;
}
.ref-content {
display: block;
font-style: normal;
}
.ref-number {
display: inline-block;
margin-right: 0.5em;
font-weight: 500;
}
.citation-link {
color: #2563eb;
text-decoration: none;
font-weight: 500;
}
.citation-link:hover {
text-decoration: underline;
}
.citation-error {
color: #dc2626;
font-weight: 500;
}
.citation {
white-space: nowrap;
}
&lt;/style></description></item><item><title>Learn in Public</title><link>https://heye.dev/posts/learn-in-public-method--74hsyqc9h/</link><pubDate>Tue, 05 Aug 2025 00:00:00 +0000</pubDate><guid>https://heye.dev/posts/learn-in-public-method--74hsyqc9h/</guid><description>&lt;h2 id="_from-individual-discovery-to-collective-intelligence_">&lt;em>From Individual Discovery to Collective Intelligence&lt;/em>&lt;/h2>
&lt;div class="border border-gray-200 dark:border-gray-700 rounded-lg bg-gradient-to-r from-blue-50 to-indigo-50 dark:from-gray-800 dark:to-gray-700 border-l-4 border-blue-500 dark:border-blue-400 rounded-lg p-4 mb-6 shadow-sm">
&lt;div class="flex items-center justify-between flex-wrap gap-3">
&lt;div class="flex items-center space-x-3">
&lt;div class="flex-shrink-0 text-primary-800 dark:text-primary-200">
&lt;svg class="w-5 h-5" fill="none" stroke="currentColor" viewBox="0 0 24 24">
&lt;path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M19 11H5m14 0a2 2 0 012 2v6a2 2 0 01-2 2H5a2 2 0 01-2-2v-6a2 2 0 012-2m14 0V9a2 2 0 00-2-2M5 11V9a2 2 0 012-2m0 0V5a2 2 0 012-2h6a2 2 0 012 2v2M7 7h10">&lt;/path>
&lt;/svg>
&lt;/div>
&lt;div>
&lt;div class="text-sm text-primary-600 dark:text-primary-300 font-medium">Post 1 of 2 in the &lt;a href="https://heye.dev/series/semantic-public-learning-framework/" class="text-primary-600 dark:text-primary-300">Series: Semantic Public Learning Framework&lt;/a> series&lt;/div>
&lt;/div>
&lt;/div>
&lt;a href="https://heye.dev/series/semantic-public-learning-framework/" class="text-blue-600 dark:text-blue-400 hover:text-blue-800 dark:hover:text-blue-200 text-sm font-medium">
View All →
&lt;/a>
&lt;/div>
&lt;/div>
&lt;p>&lt;strong>Picture this:&lt;/strong> You&amp;rsquo;re struggling to understand how human memory works. Mid-explanation to a friend, you say: &amp;ldquo;&amp;hellip; kind of like when we tried to piece together that party from last year: you remembered the music, I remembered the cake, Sarah remembered who was there. None of us had the full memory, but together it all came back&amp;hellip;&amp;rdquo;
Then it hits you. That might just be how the brain remembers, not stored in a single location, but distributed across a vast neural network. You just understood in 30 seconds what hours of reading couldn&amp;rsquo;t clarify.&lt;/p>
&lt;p>Now, why does explaining in the open unlock insights that studying alone never could? The answer reveals something remarkable about how knowledge is built.&lt;/p>
&lt;h2 id="the-teaching-paradox">The Teaching Paradox&lt;/h2>
&lt;p>Nobel Prize-winning physicist
&lt;a href="https://heye.dev/articles/richard-feynman/"
class="article-link internal"
title="American physicist who won the Nobel Prize for quantum work and became famous for explaining science in simple terms"
data-internal="true">Richard Feynman&lt;/a>
believed that true understanding meant being able to explain complex ideas simply. His colleague David Goodstein recounted how Feynman once said about explaining a complex physics concept: &amp;ldquo;I couldn&amp;rsquo;t reduce it to the freshman level. That means we really don&amp;rsquo;t understand it&amp;rdquo;
&lt;cite class="inline-citation-number" itemscope itemtype="https://schema.org/Citation">
&lt;sup class="cite-num">
&lt;a href="#ref-goodsteinRichardFeynmanTeacher1989"
title="Jump to reference 1"
aria-label="Citation 1"
itemprop="identifier">1&lt;/a>
&lt;/sup>
&lt;/cite>&lt;style>
.inline-citation {
font-style: normal;
}
.inline-citation a{
font-weight: 400;
}
.cite-num {
margin-left: -0.2em;
margin-right: 0.2em;
line-height: 0;
}
.cite-num a {
text-decoration: none;
font-weight: 500;
}
.cite-num a:hover {
text-decoration: underline;
}
&lt;/style>
. This philosophy, that if you can&amp;rsquo;t explain something simply, you don&amp;rsquo;t truly understand it, became the foundation for what we now call the &amp;ldquo;Feynman Technique.&amp;rdquo; However, the systematic four-step technique commonly attributed to him appears to be
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://books.google.com/books?vid=978-0-679-74704-8" target="_blank" rel="noopener" itemprop="url">a modern construction based on his general approach&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-gleickGeniusLifeScience1993"
title="Jump to reference 2"
aria-label="Citation 2"
itemprop="identifier">2&lt;/a>
&lt;/sup>
&lt;/cite>
rather than something he explicitly formulated.&lt;/p>
&lt;p>The technique involves: choose a concept, explain it as if teaching a sixth-grader, identify knowledge gaps, and simplify further. While this method is sound, I propose adding one crucial step that amplifies its benefits for everyone: document your learning journey and share it publicly.&lt;/p>
&lt;p>&lt;strong>Traditional learning&lt;/strong>: &lt;em>You&amp;rsquo;re learning how LLMs choose the next token. You write an explanation on paper, pretending to teach it to an imaginary sixth-grader: &amp;ldquo;The AI looks at all possible words and picks the most likely one based on what it learned during training.&amp;rdquo; You realize you don&amp;rsquo;t understand the probability calculation, study more, and refine your explanation. The learning remains private.&lt;/em>&lt;/p>
&lt;p>&lt;strong>Learning in public&lt;/strong>: &lt;em>You follow the same process but document and publish that explanation on your blog, explicitly noting your confusion about the probability calculation. Now the magic happens: a machine learning engineer comments with a clearer explanation, someone shares a helpful visualization, and a student asks a question that reveals another gap. Your learning becomes collaborative.&lt;/em>&lt;/p>
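&lt;p>The probability step that trips up the learner above can be sketched in a few lines of Python: a model assigns each candidate next token a raw score (a logit), the softmax function turns those scores into probabilities that sum to one, and greedy decoding picks the largest. The vocabulary and scores below are invented purely for illustration; they are not taken from any real model.&lt;/p>

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability, then normalize.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for four candidate next tokens (illustrative only).
vocab = ["cat", "dog", "the", "ran"]
logits = [2.0, 1.0, 0.5, -1.0]

probs = softmax(logits)                      # probabilities summing to 1
next_token = vocab[probs.index(max(probs))]  # greedy decoding picks "cat"
```

&lt;p>Real LLMs run this same computation over vocabularies of tens of thousands of tokens, and in practice often sample from the distribution (with temperature or top-k) rather than always taking the maximum.&lt;/p>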
&lt;p>The difference? When you teach to an imaginary audience, learning stops with you. When you learn publicly,
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.mdpi.com/2076-3387/14/1/17" target="_blank" rel="noopener" itemprop="url">knowledge spreads and grows within learning communities&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-zamiriMethodsTechnologiesSupporting2024"
title="Jump to reference 3"
aria-label="Citation 3"
itemprop="identifier">3&lt;/a>
&lt;/sup>
&lt;/cite>
, transforming individual understanding into collective intelligence.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="Learn in Public: From Individual Discovery to Collective Intelligence" srcset="
/posts/learn-in-public-method--74hsyqc9h/featured_hu7427947222941672280.webp 400w,
/posts/learn-in-public-method--74hsyqc9h/featured_hu13625682636271262260.webp 760w,
/posts/learn-in-public-method--74hsyqc9h/featured_hu10737684660099526960.webp 1200w"
src="https://heye.dev/posts/learn-in-public-method--74hsyqc9h/featured_hu7427947222941672280.webp"
width="760"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h2 id="discovering-learn-in-public">Discovering Learn in Public&lt;/h2>
&lt;p>As I developed this concept while writing this post, I initially called it &amp;ldquo;public learning.&amp;rdquo; But in the spirit of the practice itself (openly documenting my learning journey), I discovered I wasn&amp;rsquo;t the first to explore this territory.
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.swyx.io/about" target="_blank" rel="noopener" itemprop="url">Shawn Wang (swyx)&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-Swyxio"
title="Jump to reference 4"
aria-label="Citation 4"
itemprop="identifier">4&lt;/a>
&lt;/sup>
&lt;/cite>
had already articulated a framework called
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.swyx.io/learn-in-public" target="_blank" rel="noopener" itemprop="url">Learn in Public&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-LearnPublic"
title="Jump to reference 5"
aria-label="Citation 5"
itemprop="identifier">5&lt;/a>
&lt;/sup>
&lt;/cite>
that captured much of what I was describing. This discovery exemplifies the principle I&amp;rsquo;m exploring: by working through ideas openly, I found existing wisdom that sharpened my understanding. I&amp;rsquo;ve adopted the term &lt;em>Learn in Public&lt;/em>, building on this foundation while adding my perspective and contributions. I&amp;rsquo;ll use the more natural phrase &lt;em>learning in public&lt;/em> when it fits better in context, but both refer to the same core idea.&lt;/p>
&lt;p>&lt;strong>Learn in Public&lt;/strong> is the practice of deliberately documenting and sharing your learning process in real-time, creating openly accessible knowledge artifacts that benefit both your understanding and the broader learning community.&lt;/p>
&lt;h3 id="the-research-foundation">The Research Foundation&lt;/h3>
&lt;p>Learning in public&amp;rsquo;s effectiveness is backed by converging research findings that explain why this approach transforms individual learning into collective intelligence.&lt;/p>
&lt;p>The foundation starts with the act of teaching itself. Research shows that both preparing for and actually teaching academic content
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://linkinghub.elsevier.com/retrieve/pii/S0361476X13000209" target="_blank" rel="noopener" itemprop="url">significantly enhance learning outcomes&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-fiorellaRelativeBenefitsLearning2013"
title="Jump to reference 6"
aria-label="Citation 6"
itemprop="identifier">6&lt;/a>
&lt;/sup>
&lt;/cite>
. When you prepare to teach material publicly, you engage in more sophisticated cognitive processing than private study requires. The more complex the teaching activity, the more opportunities to learn by teaching
&lt;cite class="inline-citation-number" itemscope itemtype="https://schema.org/Citation">
&lt;sup class="cite-num">
&lt;a href="#ref-duranLearningbyteachingEvidenceImplications2017"
title="Jump to reference 7"
aria-label="Citation 7"
itemprop="identifier">7&lt;/a>
&lt;/sup>
&lt;/cite>
, and public documentation is inherently complex, requiring you to anticipate questions, structure information clearly, and fill knowledge gaps you might otherwise ignore.&lt;/p>
&lt;p>This enhanced processing leads directly to improved retention.
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://doi.org/10.1186/s41235-017-0087-y" target="_blank" rel="noopener" itemprop="url">Weinstein et al. showed&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-weinsteinTeachingScienceLearning2018"
title="Jump to reference 8"
aria-label="Citation 8"
itemprop="identifier">8&lt;/a>
&lt;/sup>
&lt;/cite>
that bringing information to mind directly improves memory for that information, and public documentation forces continuous retrieval and reorganization of knowledge. Unlike traditional teaching in a classroom, learning in public makes this process transparently visible and persistently accessible, creating artifacts that serve as both personal reference and community resource.&lt;/p>
&lt;p>But the real magic happens at the community level. Learning in public transforms individual study into systematic knowledge documentation where learners
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://books.google.de/books?vid=978-3-556-07336-0" target="_blank" rel="noopener" itemprop="url">acquire life skills along with subject matter&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-burowGrosseHandbuchUnterricht2018"
title="Jump to reference 9"
aria-label="Citation 9"
itemprop="identifier">9&lt;/a>
&lt;/sup>
&lt;/cite>
while creating resources others can build upon.
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.sciencedirect.com/science/article/pii/S1747938X23000660" target="_blank" rel="noopener" itemprop="url">Knowledge sharing contributes to success&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-fanKnowledgeSharingAcademics2024"
title="Jump to reference 10"
aria-label="Citation 10"
itemprop="identifier">10&lt;/a>
&lt;/sup>
&lt;/cite>&lt;style>
.inline-citation {
font-style: normal;
}
.inline-citation a{
font-weight: 400;
}
.cite-num {
margin-left: -0.2em;
margin-right: 0.2em;
line-height: 0;
}
.cite-num a {
text-decoration: none;
font-weight: 500;
}
.cite-num a:hover {
text-decoration: underline;
}
&lt;/style>
through positive impacts on creativity, learning, and performance. The public dimension creates what educational research terms
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://www.mdpi.com/2076-3387/14/1/17" target="_blank" rel="noopener" itemprop="url">learning communities&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-zamiriMethodsTechnologiesSupporting2024"
title="Jump to reference 3"
aria-label="Citation 3"
itemprop="identifier">3&lt;/a>
&lt;/sup>
&lt;/cite>&lt;style>
.inline-citation {
font-style: normal;
}
.inline-citation a{
font-weight: 400;
}
.cite-num {
margin-left: -0.2em;
margin-right: 0.2em;
line-height: 0;
}
.cite-num a {
text-decoration: none;
font-weight: 500;
}
.cite-num a:hover {
text-decoration: underline;
}
&lt;/style>
where participants experience higher retention rates, with belonging and support contributing to increased persistence and success.&lt;/p>
&lt;p>Perhaps most importantly for our digital age, documenting learning creates knowledge graphs that significantly promote collaborative knowledge building, group performance, and social interaction
&lt;cite class="inline-citation-number" itemscope itemtype="https://schema.org/Citation">
&lt;sup class="cite-num">
&lt;a href="#ref-zhengAutomaticKnowledgeGraph2023"
title="Jump to reference 11"
aria-label="Citation 11"
itemprop="identifier">11&lt;/a>
&lt;/sup>
&lt;/cite>&lt;style>
.inline-citation {
font-style: normal;
}
.inline-citation a{
font-weight: 400;
}
.cite-num {
margin-left: -0.2em;
margin-right: 0.2em;
line-height: 0;
}
.cite-num a {
text-decoration: none;
font-weight: 500;
}
.cite-num a:hover {
text-decoration: underline;
}
&lt;/style>
. Each concept you explain becomes what researchers call a
&lt;cite class="inline-citation" itemscope itemtype="https://schema.org/Citation">
&lt;a href="https://en.wikipedia.org/w/index.php?title=Semantic_network&amp;amp;oldid=1299856638" target="_blank" rel="noopener" itemprop="url">semantic node&lt;/a>
&lt;sup class="cite-num">
&lt;a href="#ref-SemanticNetwork"
title="Jump to reference 12"
aria-label="Citation 12"
itemprop="identifier">12&lt;/a>
&lt;/sup>
&lt;/cite>&lt;style>
.inline-citation {
font-style: normal;
}
.inline-citation a{
font-weight: 400;
}
.cite-num {
margin-left: -0.2em;
margin-right: 0.2em;
line-height: 0;
}
.cite-num a {
text-decoration: none;
font-weight: 500;
}
.cite-num a:hover {
text-decoration: underline;
}
&lt;/style>
in a broader knowledge network. These artifacts must be machine-readable to maximize impact, just as search engines need to understand your content to rank it effectively, your learning artifacts need proper structure, clear terminology, and semantic markup (like a Wikipedia page the internet links to). This makes them discoverable by indexers so both humans and AI agents can find them. This is commonly known as search engine optimization (SEO), but it&amp;rsquo;s more than that: by properly linking and annotating your writing, you ensure your knowledge contributions integrate into the expanding web of human understanding.&lt;/p>
&lt;p>When you learn in public, you&amp;rsquo;re improving your own understanding while participating in the collaborative construction of knowledge itself.&lt;/p>
&lt;h2 id="semantic-public-learning">Semantic Public Learning&lt;/h2>
&lt;p>While Learn in Public provides the foundation, I&amp;rsquo;ll show you in my &lt;a href="https://heye.dev/posts/introducing-semantic-public-learning--74jjna98d/">next post&lt;/a> in this series how we can enhance it further with what I call &lt;em>Semantic Public Learning&lt;/em> by adding academic rigor and machine-readability to maximize both personal learning and collective knowledge building.&lt;/p>
&lt;p>Follow me on &lt;a href="https://www.linkedin.com/in/hvoecking" target="_blank" rel="noopener">&lt;strong>LinkedIn&lt;/strong>&lt;/a>, &lt;a href="https://x.com/heye_dev" target="_blank" rel="noopener">&lt;strong>Twitter/X&lt;/strong>&lt;/a>, or &lt;a href="https://hvoecking.medium.com/" target="_blank" rel="noopener">&lt;strong>Medium&lt;/strong>&lt;/a> to be the first to know and see Semantic Public Learning in action. You can also have a look at the &lt;a href="https://heye.dev/projects/semantic-public-learning--74jz79y2q/">Semantic Public Learning project page&lt;/a>, it serves as a living document that tracks the journey as it develops. For easy integration, you can also add the &lt;a href="https://heye.dev/projects/semantic-public-learning/index.xml">RSS feed&lt;/a> of that page, or the &lt;a href="https://heye.dev/posts/index.xml">RSS feed&lt;/a> of my blog, to always stay up to date without the need for a social media account. Comment below, share your perspectives, and help make it the interactive process it&amp;rsquo;s meant to be. Your questions and insights often become the catalyst for my next learning breakthrough.&lt;/p>
&lt;p>Remember: every expert was once a beginner who learned in public.&lt;/p>
&lt;p>See you in the comments!&lt;/p>
&lt;section class="bibliography" itemscope>
&lt;h2>References&lt;/h2>
&lt;ol class="references" role="list">
&lt;li class="reference" itemprop="citation" id="ref-goodsteinRichardFeynmanTeacher1989" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 1">1.&lt;/span>
&lt;span class="authors" itemprop="author">Goodstein, D. L.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="1989">
(1989).
&lt;/time>
&lt;span class="title" itemprop="name">Richard P. Feynman, Teacher&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Physics Today&lt;/em>&lt;/span>&lt;/span>&lt;span class="publication-info">&lt;em> 42 &lt;/em>(2), 70-75&lt;/span>&lt;span class="DOI" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="DOI">&lt;a href="https://doi.org/10.1063/1.881195" target="_blank" rel="noopener" itemprop="value">https://doi.org/10.1063/1.881195&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-gleickGeniusLifeScience1993" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 2">2.&lt;/span>
&lt;span class="authors" itemprop="author">Gleick, J.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="1993">
(1993).
&lt;/time>
&lt;span class="title" itemprop="name">&lt;em>Genius: the life and science of Richard Feynman&lt;/em>&lt;/span>
&lt;span class="publisher" itemprop="publisher">
Vintage Books&lt;/span>&lt;span class="ISBN" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="ISBN">ISBN: &lt;a href="https://books.google.com/books?vid=978-0-679-74704-8" target="_blank" rel="noopener" itemprop="value">978-0-679-74704-8&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-zamiriMethodsTechnologiesSupporting2024" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 3">3.&lt;/span>
&lt;span class="authors" itemprop="author">Zamiri, M. &amp;amp; Esmaeili, A.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2024">
(2024).
&lt;/time>
&lt;span class="title" itemprop="name">Methods and Technologies for Supporting Knowledge Sharing within Learning Communities: A Systematic Literature Review&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Administrative Sciences&lt;/em>&lt;/span>&lt;/span>&lt;span class="publication-info">&lt;em> 14 &lt;/em>(1), 17&lt;/span>&lt;span class="DOI" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="DOI">&lt;a href="https://doi.org/10.3390/admsci14010017" target="_blank" rel="noopener" itemprop="value">https://doi.org/10.3390/admsci14010017&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-Swyxio" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 4">4.&lt;/span>
&lt;span class="authors" itemprop="author">&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">swyx.io/about&lt;/span>
&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=07)-%!d(float64=29)">
Retrieved 29, 2025, from
&lt;/time>&lt;a href="https://www.swyx.io/about" target="_blank" rel="noopener" itemprop="url">https://www.swyx.io/about&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-LearnPublic" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 5">5.&lt;/span>
&lt;span class="authors" itemprop="author">&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">Learn In Public&lt;/span>
&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=07)-%!d(float64=29)">
Retrieved 29, 2025, from
&lt;/time>&lt;a href="https://www.swyx.io/learn-in-public" target="_blank" rel="noopener" itemprop="url">https://www.swyx.io/learn-in-public&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-fiorellaRelativeBenefitsLearning2013" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 6">6.&lt;/span>
&lt;span class="authors" itemprop="author">Fiorella, L. &amp;amp; Mayer, R. E.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2013">
(2013).
&lt;/time>
&lt;span class="title" itemprop="name">The relative benefits of learning by teaching and teaching expectancy&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Contemporary Educational Psychology&lt;/em>&lt;/span>&lt;/span>&lt;span class="publication-info">&lt;em> 38 &lt;/em>(4), 281-288&lt;/span>&lt;span class="DOI" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="DOI">&lt;a href="https://doi.org/10.1016/j.cedpsych.2013.06.001" target="_blank" rel="noopener" itemprop="value">https://doi.org/10.1016/j.cedpsych.2013.06.001&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-duranLearningbyteachingEvidenceImplications2017" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 7">7.&lt;/span>
&lt;span class="authors" itemprop="author">Duran, D.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2017">
(2017).
&lt;/time>
&lt;span class="title" itemprop="name">Learning-by-teaching. Evidence and implications as a pedagogical mechanism&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Innovations in Education and Teaching International&lt;/em>&lt;/span>&lt;/span>&lt;span class="publication-info">&lt;em> 54 &lt;/em>(5), 476-484&lt;/span>&lt;span class="DOI" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="DOI">&lt;a href="https://doi.org/10.1080/14703297.2016.1156011" target="_blank" rel="noopener" itemprop="value">https://doi.org/10.1080/14703297.2016.1156011&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-weinsteinTeachingScienceLearning2018" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 8">8.&lt;/span>
&lt;span class="authors" itemprop="author">Weinstein, Y., Madan, C. R. &amp;amp; Sumeracki, M. A.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2018">
(2018).
&lt;/time>
&lt;span class="title" itemprop="name">Teaching the science of learning&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Cognitive Research: Principles and Implications&lt;/em>&lt;/span>&lt;/span>&lt;span class="publication-info">&lt;em> 3 &lt;/em>(1), 2&lt;/span>&lt;span class="DOI" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="DOI">&lt;a href="https://doi.org/10.1186/s41235-017-0087-y" target="_blank" rel="noopener" itemprop="value">https://doi.org/10.1186/s41235-017-0087-y&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-burowGrosseHandbuchUnterricht2018" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 9">9.&lt;/span>
&lt;span class="authors" itemprop="author">Burow, O. &amp;amp; Bornemann, S. (Eds.)&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2018">
(2018).
&lt;/time>
&lt;span class="title" itemprop="name">&lt;em>Das große Handbuch Unterricht &amp;amp; Erziehung in der Schule&lt;/em>&lt;/span>
&lt;span class="publisher" itemprop="publisher">
Carl Link&lt;/span>&lt;span class="ISBN" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="ISBN">ISBN: &lt;a href="https://books.google.de/books?vid=978-3-556-07336-0" target="_blank" rel="noopener" itemprop="value">978-3-556-07336-0&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-fanKnowledgeSharingAcademics2024" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 10">10.&lt;/span>
&lt;span class="authors" itemprop="author">Fan, Z. &amp;amp; Beh, L.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2024">
(2024).
&lt;/time>
&lt;span class="title" itemprop="name">Knowledge sharing among academics in higher education: A systematic literature review and future agenda&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>Educational Research Review&lt;/em>&lt;/span>&lt;/span>&lt;span class="publication-info">&lt;em> 42 &lt;/em>, 100573&lt;/span>&lt;span class="DOI" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="DOI">&lt;a href="https://doi.org/10.1016/j.edurev.2023.100573" target="_blank" rel="noopener" itemprop="value">https://doi.org/10.1016/j.edurev.2023.100573&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-zhengAutomaticKnowledgeGraph2023" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 11">11.&lt;/span>
&lt;span class="authors" itemprop="author">Zheng, L., Niu, J., Long, M. &amp;amp; Fan, Y.&lt;/span>
&lt;time class="year" itemprop="datePublished" datetime="2023">
(2023).
&lt;/time>
&lt;span class="title" itemprop="name">An automatic knowledge graph construction approach to promoting collaborative knowledge building, group performance, social interaction and socially shared regulation in CSCL&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">&lt;em>British Journal of Educational Technology&lt;/em>&lt;/span>&lt;/span>&lt;span class="publication-info">&lt;em> 54 &lt;/em>(3), 686-711&lt;/span>&lt;span class="DOI" itemprop="identifier" itemscope itemtype="https://schema.org/PropertyValue">
&lt;meta itemprop="propertyID" content="DOI">&lt;a href="https://doi.org/10.1111/bjet.13283" target="_blank" rel="noopener" itemprop="value">https://doi.org/10.1111/bjet.13283&lt;/a>&lt;/span>&lt;/cite>
&lt;/li>&lt;li class="reference" itemprop="citation" id="ref-SemanticNetwork" itemscope itemtype="https://schema.org/CreativeWork" role="listitem">
&lt;cite class="ref-content">
&lt;span class="ref-number" aria-label="Reference 12">12.&lt;/span>
&lt;span class="authors" itemprop="author">&lt;/span>
&lt;time class="year" itemprop="datePublished">
(n.d.).
&lt;/time>
&lt;span class="title" itemprop="name">Semantic network&lt;/span>
&lt;span class="container-title" itemprop="isPartOf" itemscope itemtype="https://schema.org/Periodical">
&lt;span itemprop="name">Wikipedia&lt;/span>.&lt;/span>&lt;span class="URL">
&lt;a href="https://en.wikipedia.org/w/index.php?title=Semantic_network&amp;amp;oldid=1299856638" target="_blank" rel="noopener" itemprop="url">https://en.wikipedia.org/w/index.php?title=Semantic_network&amp;amp;oldid=1299856638&lt;/a>
&lt;/span>&lt;span class="web-citation">&lt;time class="accessed" datetime="2025-%!d(float64=07)-%!d(float64=29)">
Retrieved 29, 2025, from
&lt;/time>&lt;a href="https://en.wikipedia.org/w/index.php?title=Semantic_network&amp;amp;oldid=1299856638" target="_blank" rel="noopener" itemprop="url">https://en.wikipedia.org/w/index.php?title=Semantic_network&amp;amp;oldid=1299856638&lt;/a>
&lt;/span>&lt;/cite>
&lt;/li>&lt;/ol>
&lt;/section>
&lt;style>
.bibliography {
margin-top: 2rem;
padding-top: 1rem;
border-top: 2px solid #e5e7eb;
}
.bibliography h2 {
font-size: 1.5rem;
font-weight: 600;
margin-bottom: 1rem;
color: #374151;
}
.references {
list-style: none;
padding-left: 0;
margin: 0;
}
.reference {
margin-bottom: 1rem;
padding-left: 0;
line-height: 1.6;
font-size: 0.9rem;
}
.ref-content {
display: block;
font-style: normal;
}
.ref-number {
display: inline-block;
margin-right: 0.5em;
font-weight: 500;
}
.citation-link {
color: #2563eb;
text-decoration: none;
font-weight: 500;
}
.citation-link:hover {
text-decoration: underline;
}
.citation-error {
color: #dc2626;
font-weight: 500;
}
.citation {
white-space: nowrap;
}
&lt;/style></description></item><item><title>Understanding Superposition in Neural Networks: A Guide Through Analogies</title><link>https://heye.dev/posts/understanding-superposition-in-neural-networks--74a4kjpn7/</link><pubDate>Tue, 08 Jul 2025 00:00:00 +0000</pubDate><guid>https://heye.dev/posts/understanding-superposition-in-neural-networks--74a4kjpn7/</guid><description>&lt;p>One of the most fascinating and challenging concepts in mechanistic interpretability is &lt;em>superposition&lt;/em>, the way neural networks cleverly pack multiple features into the same computational space. If you&amp;rsquo;re coming from a background in knowledge representation or semantic systems, superposition can seem quite alien at first. But with the right analogies, it becomes understandable and genuinely elegant.&lt;/p>
&lt;h2 id="what-is-superposition">What is Superposition?&lt;/h2>
&lt;p>In traditional knowledge systems like the semantic web, concepts are typically stored in dedicated, clearly labeled locations. You might have distinct slots for &amp;ldquo;Person,&amp;rdquo; &amp;ldquo;hasAge,&amp;rdquo; and &amp;ldquo;livesIn&amp;rdquo; - each with its own well-defined space in your ontology.&lt;/p>
&lt;p>Neural networks face a different challenge entirely. They need to represent potentially millions of concepts, but they have limited &amp;ldquo;storage space&amp;rdquo; in the form of neurons and dimensions. Superposition is their solution: &lt;em>multiple distinct features are encoded in the same set of neurons&lt;/em>, rather than each feature having its own dedicated slot.&lt;/p>
&lt;p>Think of it like having a library where you need to store a million books, but you only have shelf space for a thousand. Superposition would be like discovering a clever way to store multiple books in the same physical space, perhaps by layering them in a way that you can still retrieve individual books when needed.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="Understanding Superposition in Neural Networks: A Guide Through Analogies" srcset="
/posts/understanding-superposition-in-neural-networks--74a4kjpn7/featured_hu6767927628138745026.webp 400w,
/posts/understanding-superposition-in-neural-networks--74a4kjpn7/featured_hu5220002346455651064.webp 760w,
/posts/understanding-superposition-in-neural-networks--74a4kjpn7/featured_hu16184943129944942384.webp 1200w"
src="https://heye.dev/posts/understanding-superposition-in-neural-networks--74a4kjpn7/featured_hu6767927628138745026.webp"
width="760"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h2 id="the-compression-analogy">The Compression Analogy&lt;/h2>
&lt;p>Superposition is fundamentally a form of &lt;em>learned compression&lt;/em>. But unlike traditional compression algorithms like ZIP or JPEG, where you have explicit rules for packing and unpacking data, superposition is a compression scheme that the neural network discovers on its own.&lt;/p>
&lt;p>Imagine you&amp;rsquo;re trying to compress a massive dataset, but instead of using a pre-designed algorithm, you let the system figure out its own compression method through trial and error. The network learns to pack features together in a way that preserves the information it needs while fitting within its constraints.&lt;/p>
&lt;p>The tricky part? There&amp;rsquo;s no clean &amp;ldquo;decompression algorithm.&amp;rdquo; When multiple features are superimposed in the same neurons, the separation of them becomes a complex inference problem. It&amp;rsquo;s like having a brilliantly space-efficient storage system, but now you need to figure out how to extract individual items without clear labels.&lt;/p>
&lt;h2 id="the-river-delta-information-flow">The River Delta: Information Flow&lt;/h2>
&lt;p>Perhaps the most intuitive way to understand superposition is through the metaphor of a &lt;em>river delta with controllable dams&lt;/em>.&lt;/p>
&lt;p>In this analogy:&lt;/p>
&lt;ul>
&lt;li>&lt;em>Water flow&lt;/em> represents information/activation flowing through the network&lt;/li>
&lt;li>&lt;em>Controllable dams&lt;/em> represent the weights and biases that can be adjusted&lt;/li>
&lt;li>&lt;em>Different streams&lt;/em> represent different features or concepts&lt;/li>
&lt;li>&lt;em>The destination&lt;/em> represents the final output (like predicting the next token)&lt;/li>
&lt;/ul>
&lt;p>When you present a neural network with an image of a cat, multiple &amp;ldquo;streams&amp;rdquo; of information flow simultaneously: visual texture features, shape features, color features, and higher-level concept features. These streams don&amp;rsquo;t take separate paths, they flow through the same &amp;ldquo;channels&amp;rdquo; (neurons) in the network.&lt;/p>
&lt;p>The magic happens in how the network learns to adjust its &amp;ldquo;dam settings&amp;rdquo; (weights) so that when you need the &amp;ldquo;cat&amp;rdquo; concept, the right combination of streams naturally flows to produce that output. Multiple features share the same waterways, but the flow patterns are orchestrated so that the right information reaches the right destinations.&lt;/p>
&lt;h2 id="the-electrical-circuit-perspective">The Electrical Circuit Perspective&lt;/h2>
&lt;p>For those with electrical engineering backgrounds, superposition works remarkably like &lt;em>voltage and current in complex circuits&lt;/em>.&lt;/p>
&lt;p>Neural network activations are like voltages at different points in the circuit. Weights control the &amp;ldquo;resistance&amp;rdquo; of connections, determining how much signal passes through. Information flows through weighted connections like current through resistors. Using this analogy you can measure the &amp;ldquo;voltage&amp;rdquo; (activation level) at any point in the network just as you might use a multimeter to measure voltage at different points in a circuit. Now researchers basically use &amp;ldquo;virtual meters&amp;rdquo; to measure activation levels throughout the network. They can trace how changing the &amp;ldquo;voltage&amp;rdquo; at one point affects the final output, essentially measuring the signal flow from input to output.&lt;/p>
&lt;p>This electrical perspective helps explain why mechanistic interpretability is so challenging: you&amp;rsquo;re trying to reverse-engineer a circuit with billions of components, where multiple signals are running through the same wires simultaneously.&lt;/p>
&lt;h2 id="why-superposition-matters">Why Superposition Matters&lt;/h2>
&lt;p>Understanding superposition is crucial because it explains why neural networks are so hard to interpret. In traditional symbolic systems, if you want to know whether the system understands &amp;ldquo;cats,&amp;rdquo; you might look for a dedicated &amp;ldquo;cat&amp;rdquo; module. In neural networks with superposition, the &amp;ldquo;cat&amp;rdquo; concept might be distributed across thousands of neurons, mixed in with concepts for &amp;ldquo;furry textures,&amp;rdquo; &amp;ldquo;pointed ears,&amp;rdquo; and &amp;ldquo;domestic animals.&amp;rdquo;&lt;/p>
&lt;p>This is not a bug, it&amp;rsquo;s a feature! Superposition allows neural networks to be incredibly parameter-efficient, representing far more concepts than they have neurons. But it also means that understanding what these networks have learned requires sophisticated mathematical tools to disentangle the superimposed features.&lt;/p>
&lt;h2 id="the-path-forward">The Path Forward&lt;/h2>
&lt;p>The beauty of superposition is that it represents a fundamentally different approach to information storage and processing. Instead of the explicit, structured representations we&amp;rsquo;re used to in traditional AI systems, neural networks discover their own compression schemes that are often more efficient than anything we could design by hand.&lt;/p>
&lt;p>For researchers in mechanistic interpretability, superposition presents both the central challenge and the key to understanding these systems. By developing tools to decompose superimposed features, like sparse auto-encoders and other mathematical techniques, we&amp;rsquo;re slowly learning to read the compressed language that neural networks speak.&lt;/p>
&lt;p>The next time you interact with a language model or image classifier, remember that behind its responses lies an intricate dance of superimposed features, flowing through shared computational channels in patterns too complex for us to fully grasp - yet. The quest to understand these patterns is what makes mechanistic interpretability one of the most fascinating frontiers in AI research.&lt;/p>
&lt;h2 id="references-and-further-reading">References and Further Reading&lt;/h2>
&lt;p>The concepts explored in this article are grounded in cutting-edge research in mechanistic interpretability. Here are the key sources that support and extend these ideas:&lt;/p>
&lt;h3 id="core-theoretical-foundations">Core Theoretical Foundations&lt;/h3>
&lt;p>&lt;strong>Elhage, N., et al. (2022).&lt;/strong> &lt;em>Toy Models of Superposition.&lt;/em> arXiv:2209.10652. [&lt;a href="https://doi.org/10.48550/arXiv.2209.10652" target="_blank" rel="noopener">Paper&lt;/a>]&lt;br />
The foundational work that introduces minimal settings where polysemanticity arises from storing sparse features in superposition. This paper provides the theoretical backbone for understanding the compression analogy discussed above.&lt;/p>
&lt;p>&lt;strong>Hänni, S., et al. (2024).&lt;/strong> &lt;em>Mathematical Models of Computation in Superposition.&lt;/em> arXiv:2408.05451. [&lt;a href="https://doi.org/10.48550/arXiv.2408.05451" target="_blank" rel="noopener">Paper&lt;/a>]&lt;br />
Formalizes how neural networks can compute Boolean circuits in superposition using sub-linear neuron counts, providing mathematical rigor to the “river delta” information flow concepts.&lt;/p>
&lt;h3 id="sparse-coding-and-decomposition-techniques">Sparse Coding and Decomposition Techniques&lt;/h3>
&lt;p>&lt;strong>(2025).&lt;/strong> &lt;em>From Superposition to Sparse Codes: Interpretable Representations in Neural Networks.&lt;/em> arXiv:2503.01824. [&lt;a href="https://doi.org/10.48550/arXiv.2503.01824" target="_blank" rel="noopener">Paper&lt;/a>]&lt;br />
Recent work explaining how evidence for linear overlay of concepts motivates extraction of monosemantic features via sparse autoencoders — the natural next step after understanding superposition.&lt;/p>
&lt;p>&lt;strong>Olshausen, B. A., &amp;amp; Field, D. J. (1996).&lt;/strong> &lt;em>Emergence of simple‐cell receptive field properties by learning a sparse code for natural images.&lt;/em> Nature, 381, 607–609. [&lt;a href="https://doi.org/10.1038/381607a0" target="_blank" rel="noopener">Paper&lt;/a>]&lt;br />
The classic neuroscience foundation for sparse coding, showing that superposition isn’t unique to artificial neural networks but appears in biological vision systems.&lt;/p>
&lt;h3 id="interpretability-methods-and-visualizations">Interpretability Methods and Visualizations&lt;/h3>
&lt;p>&lt;strong>Olah, C., et al. (2018).&lt;/strong> &lt;em>The Building Blocks of Interpretability.&lt;/em> Distill. [&lt;a href="https://doi.org/10.23915/distill.00010" target="_blank" rel="noopener">Paper&lt;/a>]&lt;br />
Combines feature visualization, attribution, and dimensionality reduction to explore how individual neurons encode multiple features — the “virtual meter” approach to measuring neural activations.&lt;/p>
&lt;p>&lt;strong>Olah, C., Mordvintsev, A., &amp;amp; Schubert, L. (2017).&lt;/strong> &lt;em>Feature Visualization.&lt;/em> Distill. [&lt;a href="https://doi.org/10.23915/distill.00007" target="_blank" rel="noopener">Paper&lt;/a>]&lt;br />
Details techniques for reverse-engineering neuron-specific activation patterns, essential tools for detecting and understanding superimposed features.&lt;/p>
&lt;p>&lt;strong>Olah, C., et al. (2020).&lt;/strong> &lt;em>Zoom In: An Introduction to Circuits.&lt;/em> Distill. [&lt;a href="https://doi.org/10.23915/distill.00024.001" target="_blank" rel="noopener">Paper&lt;/a>]&lt;br />
Introduces the “circuit” metaphor for interpreting subgraphs of neurons and weights, which becomes particularly complex under superposition.&lt;/p>
&lt;h3 id="advanced-topics-and-current-research">Advanced Topics and Current Research&lt;/h3>
&lt;p>&lt;strong>Adler, M., &amp;amp; Shavit, N. (2024).&lt;/strong> &lt;em>On the Complexity of Neural Computation in Superposition.&lt;/em> arXiv:2409.15318. [&lt;a href="https://doi.org/10.48550/arXiv.2409.15318" target="_blank" rel="noopener">Paper&lt;/a>]&lt;br />
Presents theoretical bounds for computing logical operations in superposition, highlighting the computational advantages of this representational strategy.&lt;/p>
&lt;p>&lt;strong>Chang, E., et al. (2025).&lt;/strong> &lt;em>SAFR: Neuron Redistribution for Interpretability.&lt;/em> arXiv:2501.16374. [&lt;a href="https://doi.org/10.48550/arXiv.2501.16374" target="_blank" rel="noopener">Paper&lt;/a>]&lt;br />
Proposes methods to encourage monosemantic allocations in transformers, directly addressing the challenges posed by feature superposition.&lt;/p>
&lt;h3 id="broader-context">Broader Context&lt;/h3>
&lt;p>&lt;strong>Murdoch, W. J., et al. (2019).&lt;/strong> &lt;em>Interpretable Machine Learning: Definitions, Methods, and Applications.&lt;/em> arXiv:1901.04592. [&lt;a href="https://doi.org/10.48550/arXiv.1901.04592" target="_blank" rel="noopener">Paper&lt;/a>]&lt;br />
Provides taxonomies of interpretability methods that frame superposition-focused techniques within the broader landscape of explainable AI.&lt;/p>
&lt;hr>
&lt;p>&lt;em>This post represents one perspective on superposition in neural networks, built through collaborative exploration of analogies and concepts. The field of mechanistic interpretability is rapidly evolving, and our understanding of these phenomena continues to deepen.&lt;/em>&lt;/p></description></item><item><title>ChatGDT — Chatting with Graph-based Digital Twins</title><link>https://heye.dev/posts/chatting-with-graph-based-digital-twins-chatgdt--733eku6yp/</link><pubDate>Thu, 13 Mar 2025 00:00:00 +0000</pubDate><guid>https://heye.dev/posts/chatting-with-graph-based-digital-twins-chatgdt--733eku6yp/</guid><description>&lt;h2 id="_part-1-the-challenge-of-natural-language-interfaces-for-knowledge-graphs_">&lt;em>Part 1: The Challenge of Natural Language Interfaces for Knowledge Graphs&lt;/em>&lt;/h2>
&lt;p>&lt;em>This post was originally written in June 2023, and the research for this project started at the beginning of 2023, when GPT-3.5-turbo was state-of-the-art. While technology has evolved significantly since then, the core insights and methodologies remain valuable.&lt;/em>&lt;/p>
&lt;h1 id="introduction">Introduction&lt;/h1>
&lt;p>Let’s say you have a question and a database that contains all the information you need to answer it, but no idea how to query it.&lt;/p>
&lt;p>Since 2020, I’ve been working as a data engineer in the construction industry, modeling a &lt;a href="https://en.wikipedia.org/wiki/Digital_twin" target="_blank" rel="noopener">digital twin&lt;/a> using a &lt;a href="https://en.wikipedia.org/wiki/Knowledge_graph" target="_blank" rel="noopener">knowledge graph&lt;/a> expressed in &lt;a href="https://en.wikipedia.org/wiki/Resource_Description_Framework" target="_blank" rel="noopener">RDF&lt;/a> facts. A digital twin is a comprehensive digital representation of physical entities, capturing various aspects of their structure, systems, and behavior. It can act as a central database of information that can be queried using &lt;a href="https://en.wikipedia.org/wiki/SPARQL" target="_blank" rel="noopener">SPARQL&lt;/a>.&lt;/p>
&lt;p>To make a graph-based digital twin available to external parties, I went on a journey to find a translator that can take natural language queries as input, search the graph, and convert the results back into natural language.&lt;/p>
&lt;h1 id="what-youll-learn-from-this-series">What You’ll Learn From This Series&lt;/h1>
&lt;p>By reading this series, you’ll discover:&lt;/p>
&lt;ul>
&lt;li>The practical challenges of connecting knowledge graphs with LLMs and how to approach them&lt;/li>
&lt;li>Why simpler prompting strategies sometimes outperform complex ones (a counterintuitive finding that challenged my assumptions)&lt;/li>
&lt;li>How to systematically test and evaluate different prompting techniques&lt;/li>
&lt;li>Techniques for iteratively exploring knowledge graphs using LLMs&lt;/li>
&lt;li>Insights about how LLMs reason with structured information&lt;/li>
&lt;li>How more recent advances like constrained decoding might improve these approaches&lt;/li>
&lt;/ul>
&lt;p>Whether you’re working with knowledge graphs, building interfaces for complex systems, or just curious about practical LLM applications beyond standard chatbots, this series offers both technical details and real-world lessons from my journey of trial and error.&lt;/p>
&lt;h1 id="technical-foundation">Technical Foundation&lt;/h1>
&lt;p>Before diving into my experiments, it’s helpful to understand the basic building blocks of what we’re working with. This section covers the key concepts and tools that form the foundation of this exploration.&lt;/p>
&lt;h1 id="knowledge-graphs-and-rdf">Knowledge Graphs and RDF&lt;/h1>
&lt;p>Knowledge graphs represent information as interconnected entities and relationships. They model the world as a network of facts, expressing each fact as a triple: subject, predicate, and object. This structure is intrinsically more machine-readable than natural language while preserving semantic meaning.&lt;/p>
&lt;p>The &lt;a href="https://en.wikipedia.org/wiki/Resource_Description_Framework" target="_blank" rel="noopener">Resource Description Framework (RDF)&lt;/a> provides a standard model for expressing these facts. In RDF, each component of a triple can be:&lt;/p>
&lt;ul>
&lt;li>A URI (Uniform Resource Identifier) representing an entity or relationship&lt;/li>
&lt;li>A literal value (like a string or number)&lt;/li>
&lt;li>A blank node (representing an unnamed resource, though I deliberately avoided these in my examples)&lt;/li>
&lt;/ul>
&lt;p>For example, a simple fact like “Alice knows Bob” would be expressed as:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-turtle" data-lang="turtle">&lt;span class="line">&lt;span class="cl">&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">alice&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">knows&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">bob&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Where &lt;code>ex:&lt;/code> and &lt;code>foaf:&lt;/code> are namespace prefixes that expand to full URIs.&lt;/p>
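&lt;p>To make the expansion concrete, here is a minimal Python sketch (not part of any RDF toolchain) of how prefixed names expand into full URIs. The &lt;code>foaf:&lt;/code> URI is the standard FOAF namespace; the &lt;code>ex:&lt;/code> URI is a hypothetical example namespace:&lt;/p>

```python
# Minimal sketch: an RDF triple as a Python tuple, with prefixed
# names expanded to full URIs before storage.
PREFIXES = {
    "ex": "https://example.com/people#",   # hypothetical example namespace
    "foaf": "http://xmlns.com/foaf/0.1/",  # standard FOAF namespace
}

def expand(name):
    """Expand a prefixed name like 'foaf:knows' into a full URI."""
    prefix, local = name.split(":", 1)
    return PREFIXES[prefix] + local

# "Alice knows Bob" as a single (subject, predicate, object) triple.
triple = (expand("ex:alice"), expand("foaf:knows"), expand("ex:bob"))
print(triple[1])  # http://xmlns.com/foaf/0.1/knows
```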
&lt;h1 id="foaf-ontology">FOAF Ontology&lt;/h1>
&lt;p>For my experiments, I used the &lt;a href="http://xmlns.com/foaf/spec/" target="_blank" rel="noopener">Friend of a Friend (FOAF)&lt;/a> ontology, which is a simple vocabulary for describing people and their relationships. FOAF includes terms like:&lt;/p>
&lt;ul>
&lt;li>&lt;code>foaf:Person&lt;/code> - A class representing a person&lt;/li>
&lt;li>&lt;code>foaf:name&lt;/code> - A property for a person’s name&lt;/li>
&lt;li>&lt;code>foaf:knows&lt;/code> - A relationship between two people&lt;/li>
&lt;li>&lt;code>foaf:age&lt;/code> - A property for a person’s age&lt;/li>
&lt;/ul>
&lt;p>I extended this with some additional relationship types (using the &lt;code>rel:&lt;/code> prefix) to create more complex scenarios:&lt;/p>
&lt;ul>
&lt;li>&lt;code>rel:siblingOf&lt;/code> - A relationship between siblings&lt;/li>
&lt;li>&lt;code>rel:childOf&lt;/code> - A relationship between a child and parent&lt;/li>
&lt;li>&lt;code>rel:employedBy&lt;/code> - A relationship between an employee and employer&lt;/li>
&lt;li>&lt;code>rel:employerOf&lt;/code> - Inverse of &lt;code>rel:employedBy&lt;/code>&lt;/li>
&lt;/ul>
&lt;h1 id="sparql">SPARQL&lt;/h1>
&lt;p>&lt;a href="https://en.wikipedia.org/wiki/SPARQL" target="_blank" rel="noopener">SPARQL&lt;/a> (pronounced “sparkle”) is the query language for RDF graphs. It allows you to search for patterns in the graph and retrieve matching data. A basic SPARQL query looks like:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-sparql" data-lang="sparql">&lt;span class="line">&lt;span class="cl">&lt;span class="k">PREFIX&lt;/span> &lt;span class="nn">foaf&lt;/span>&lt;span class="p">:&lt;/span> &lt;span class="nl">&amp;lt;http://xmlns.com/foaf/0.1/&amp;gt;&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="k">PREFIX&lt;/span> &lt;span class="nn">ex&lt;/span>&lt;span class="p">:&lt;/span> &lt;span class="nl">&amp;lt;https://example.com/people#&amp;gt;&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="k">SELECT&lt;/span> &lt;span class="nv">?name&lt;/span> &lt;span class="k">WHERE&lt;/span> &lt;span class="p">{&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="nn">ex&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">alice&lt;/span> &lt;span class="nn">foaf&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">knows&lt;/span> &lt;span class="nv">?person&lt;/span> &lt;span class="p">.&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="nv">?person&lt;/span> &lt;span class="nn">foaf&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">name&lt;/span> &lt;span class="nv">?name&lt;/span> &lt;span class="p">.&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="p">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>This query would return the names of all people that Alice knows.&lt;/p>
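&lt;p>To illustrate what a query engine does with those two triple patterns, here is a toy Python pattern matcher. It is emphatically not a real SPARQL engine, and &lt;code>ex:carol&lt;/code> is an invented entity added only so the query returns more than one result:&lt;/p>

```python
# Toy illustration: evaluate the two triple patterns of the query
# above against an in-memory list of triples.
graph = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:name", "Bob"),
    ("ex:alice", "foaf:knows", "ex:carol"),   # invented for illustration
    ("ex:carol", "foaf:name", "Carol"),
]

def match(pattern, bindings):
    """Yield extended variable bindings for one triple pattern."""
    for triple in graph:
        b = dict(bindings)
        ok = True
        for p, t in zip(pattern, triple):
            if p.startswith("?"):        # variable: bind, or check existing binding
                if b.get(p, t) != t:
                    ok = False
                    break
                b[p] = t
            elif p != t:                 # constant: must match exactly
                ok = False
                break
        if ok:
            yield b

# SELECT ?name WHERE { ex:alice foaf:knows ?person . ?person foaf:name ?name . }
names = [
    b2["?name"]
    for b1 in match(("ex:alice", "foaf:knows", "?person"), {})
    for b2 in match(("?person", "foaf:name", "?name"), b1)
]
print(names)  # ['Bob', 'Carol']
```

&lt;p>Each pattern narrows the set of candidate bindings, which is essentially how basic graph pattern matching works in SPARQL.&lt;/p>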
&lt;h1 id="large-language-models-and-their-capabilities">Large Language Models and Their Capabilities&lt;/h1>
&lt;p>When &lt;a href="https://chat.openai.com/" target="_blank" rel="noopener">ChatGPT&lt;/a> was released in late 2022, I realized that &lt;a href="https://en.wikipedia.org/wiki/Large_language_model" target="_blank" rel="noopener">large language models (LLMs)&lt;/a> were becoming accessible tools for anyone, even non-experts, because they understand natural language. Better yet, these models can work with multiple languages and translate between them by comprehending the meaning of entire sentences and paragraphs. And importantly, “language” here isn’t limited to human languages but includes artificial ones like programming and query languages.&lt;/p>
&lt;p>The LLM I used for most of my experiments was GPT-3.5-turbo (the model powering ChatGPT at the time of writing the original post in July 2023). These models have several capabilities that make them potentially useful for knowledge graph interaction:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Natural language understanding&lt;/strong>: They can parse and comprehend human questions, even when ambiguous.&lt;/li>
&lt;li>&lt;strong>Code generation&lt;/strong>: They can generate structured code, including SPARQL queries.&lt;/li>
&lt;li>&lt;strong>In-context learning&lt;/strong>: They can adapt to new information provided in the prompt without additional training.&lt;/li>
&lt;li>&lt;strong>Reasoning&lt;/strong>: They can follow chains of relationships to draw conclusions from facts.&lt;/li>
&lt;/ol>
&lt;p>However, they also have limitations:&lt;/p>
&lt;ol>
&lt;li>&lt;strong>Context window&lt;/strong>: They have a limited context window (4,000 tokens for GPT-3.5-turbo at that time), which restricts how much of the knowledge graph can be provided at once.&lt;/li>
&lt;li>&lt;strong>Hallucinations&lt;/strong>: When uncertain, they can generate plausible but incorrect information, which is particularly problematic when generating structured queries.&lt;/li>
&lt;li>&lt;strong>Consistency&lt;/strong>: When choosing a temperature above 0, their performance can vary between runs, even with identical prompts.&lt;/li>
&lt;/ol>
&lt;p>&lt;em>Note: As of 2025, newer models have significantly larger context windows and gained the ability to do explicit reasoning. However, the core challenge of hallucinations persists, particularly for knowledge-intensive tasks.&lt;/em>&lt;/p>
&lt;h1 id="test-dataset">Test Dataset&lt;/h1>
&lt;p>I created a small knowledge graph for testing with fictional people and their relationships. The graph was intentionally designed to test different inference capabilities:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-ttl" data-lang="ttl">&lt;span class="line">&lt;span class="cl">&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rdf:&lt;/span>&lt;span class="nt">type&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">Person&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">name&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;Alice&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">age&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="mi">30&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">gender&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;female&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rel:&lt;/span>&lt;span class="nt">siblingOf&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rel:&lt;/span>&lt;span class="nt">siblingOf&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person3&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rdf:&lt;/span>&lt;span class="nt">type&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">Person&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">name&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;Bob&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">age&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="mi">35&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">gender&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;male&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rel:&lt;/span>&lt;span class="nt">siblingOf&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rel:&lt;/span>&lt;span class="nt">siblingOf&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person3&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person3&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rdf:&lt;/span>&lt;span class="nt">type&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">Person&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person3&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">name&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;Eve&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person3&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">age&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="mi">28&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person3&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">gender&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;female&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person3&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rel:&lt;/span>&lt;span class="nt">siblingOf&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person3&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rel:&lt;/span>&lt;span class="nt">siblingOf&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="c"># ...&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>&lt;em>Note: I deliberately used generic identifiers (ex:person1, ex:person2) in my test dataset rather than identifiers that include names (ex:alice, ex:bob). This was intentionally done to prevent the LLM from making assumptions based on the entity IDs. In some early experiments, I found that when using name-based identifiers, the model would sometimes correctly answer questions by pattern matching on the identifiers rather than properly traversing the relationships in the graph. Using generic identifiers ensured that the model could not cheat but had to actually understand and follow the relationships expressed in the triples.&lt;/em>&lt;/p>
&lt;p>This graph includes direct relationships (like Alice being a child of Person4) and implicit relationships that require inference (like determining that Person2 is Alice’s brother because they share a parent).&lt;/p>
&lt;p>By designing questions that required traversing multiple relationships (like “Who is the mother of the person that the colleague of Alice’s brother lives with?”), I could test the limits of LLM-based querying approaches.&lt;/p>
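&lt;p>As a sketch of the reasoning such questions demand, the following Python snippet answers the simpler question “Who are Alice’s brothers?” by chaining two hops over triples from the test dataset (sibling links, then a gender filter):&lt;/p>

```python
# Hedged sketch of multi-hop inference over the test dataset's triples
# (entity IDs and properties as introduced in the post).
triples = [
    ("ex:person1", "foaf:name", "Alice"),
    ("ex:person1", "rel:siblingOf", "ex:person2"),
    ("ex:person1", "rel:siblingOf", "ex:person3"),
    ("ex:person2", "foaf:name", "Bob"),
    ("ex:person2", "foaf:gender", "male"),
    ("ex:person3", "foaf:name", "Eve"),
    ("ex:person3", "foaf:gender", "female"),
]

def objects(subject, predicate):
    """All objects o such that (subject, predicate, o) is in the graph."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Hop 0: resolve the name "Alice" to an entity ID.
alice = next(s for s, p, o in triples if p == "foaf:name" and o == "Alice")
# Hop 1: her siblings; hop 2: keep only the male ones, then look up names.
brothers = [
    objects(sib, "foaf:name")[0]
    for sib in objects(alice, "rel:siblingOf")
    if objects(sib, "foaf:gender") == ["male"]
]
print(brothers)  # ['Bob']
```

&lt;p>An LLM answering from raw triples has to perform exactly this kind of chained lookup implicitly, which is where longer chains start to fail.&lt;/p>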
&lt;h1 id="llm-integration-tools">LLM Integration Tools&lt;/h1>
&lt;p>Two main tools were used in these experiments:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://github.com/jerryjliu/llama_index" target="_blank" rel="noopener">&lt;strong>LlamaIndex&lt;/strong>&lt;/a>: A data framework for connecting custom data sources to LLMs, which includes vector store capabilities for retrieval.&lt;/li>
&lt;li>&lt;a href="https://github.com/hwchase17/langchain" target="_blank" rel="noopener">&lt;strong>LangChain&lt;/strong>&lt;/a>: A framework for creating LLM-powered applications, which includes tools for prompting, agents, and custom tool creation.&lt;/li>
&lt;/ul>
&lt;p>These tools provide different approaches to the same fundamental challenge: how to effectively connect structured data with the natural language capabilities of LLMs.&lt;/p>
&lt;h1 id="initial-approaches">Initial Approaches&lt;/h1>
&lt;p>My journey to connect LLMs with knowledge graphs wasn’t straightforward. I explored several approaches before finding something that actually worked. Let me walk you through what I tried, where I failed, and what I learned along the way.&lt;/p>
&lt;h1 id="using-llamaindex">Using LlamaIndex&lt;/h1>
&lt;p>My colleague &lt;a href="https://maqboolkhan.github.io/" target="_blank" rel="noopener">Maqbool&lt;/a> showed me a small prototype he had built over the weekend using &lt;a href="https://github.com/jerryjliu/llama_index" target="_blank" rel="noopener">LlamaIndex&lt;/a> that could report knowledge in certain cases when the information was presented in a very simple format.&lt;/p>
&lt;p>His prototype followed a simple yet effective approach: import the RDF graph into LlamaIndex’s VectorStore and query it with unprocessed user questions. The vector store would retrieve relevant triples based on embedding similarity and send these as context to the LLM, which would then generate an answer based on this limited view of the graph.&lt;/p>
&lt;h1 id="results-and-analysis">Results and Analysis&lt;/h1>
&lt;p>This approach worked surprisingly well for small graphs and simple questions like “What is the age difference between Alice and Bob?” The LLM could use in-context learning to reason over the facts provided. For example, with these RDF triples:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-ttl" data-lang="ttl">&lt;span class="line">&lt;span class="cl">&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rdf:&lt;/span>&lt;span class="nt">type&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">Person&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">name&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;Alice&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">age&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="mi">30&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">gender&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;female&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rel:&lt;/span>&lt;span class="nt">siblingOf&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rdf:&lt;/span>&lt;span class="nt">type&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">Person&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">name&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;Bob&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">age&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="mi">35&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">gender&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;male&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person2&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rel:&lt;/span>&lt;span class="nt">siblingOf&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>The model could understand the entities and correctly answer: “The age difference between Alice and Bob is 5 years. Alice is 30, Bob is 35 years old.”&lt;/p>
&lt;p>As long as the entire graph fits within the prompt, the model can answer simple questions correctly. However, larger graphs wouldn’t fit within the 4,000 token context window of GPT-3.5-turbo.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="flex justify-center ">
&lt;div class="w-100" >&lt;img alt="Understanding Superposition in Neural Networks: A Guide Through Analogies" srcset="
/posts/chatting-with-graph-based-digital-twins-chatgdt--733eku6yp/featured_hu8527759590377031255.webp 400w,
/posts/chatting-with-graph-based-digital-twins-chatgdt--733eku6yp/featured_hu4896293434647799436.webp 760w,
/posts/chatting-with-graph-based-digital-twins-chatgdt--733eku6yp/featured_hu3465659487538655974.webp 1200w"
src="https://heye.dev/posts/chatting-with-graph-based-digital-twins-chatgdt--733eku6yp/featured_hu8527759590377031255.webp"
width="760"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;p>To give some perspective, this very short description of Alice already used 58 tokens that are competing for space with further instructions and reasoning. A graph of 60 people would barely fit into a 4,000 token context window, leaving almost no room for the actual inference.&lt;/p>
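&lt;p>The arithmetic behind that estimate is straightforward:&lt;/p>

```python
# Back-of-the-envelope context budget, using the numbers from the post:
# roughly 58 tokens per person, 4,000-token context window.
tokens_per_person = 58   # tokens for the short description of Alice above
context_window = 4000    # GPT-3.5-turbo limit at the time
people = 60

used = people * tokens_per_person
remaining = context_window - used
print(used, remaining)  # 3480 520
```

&lt;p>With roughly 520 tokens left over, there is barely room for the question, the instructions, and the answer, let alone any reasoning.&lt;/p>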
&lt;p>Storing the facts in a VectorStore and retrieving only the relevant ones overcomes this limitation in principle, but retrieval based on a simple query risks missing the facts that matter. Let me give you an example:&lt;/p>
&lt;p>Let’s say we have a graph with facts of a million people, and Alice and Bob are not located close together but far apart:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-ttl" data-lang="ttl">&lt;span class="line">&lt;span class="cl">&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rdf:&lt;/span>&lt;span class="nt">type&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">Person&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">name&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;Alice&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">age&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="mi">30&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">gender&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;female&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rel:&lt;/span>&lt;span class="nt">siblingOf&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person546398&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="c"># ... ex:person2 to ex:person546397&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person546398&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rdf:&lt;/span>&lt;span class="nt">type&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">Person&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person546398&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">name&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;Bob&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person546398&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">age&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="mi">35&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person546398&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">foaf:&lt;/span>&lt;span class="nt">gender&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="s">&amp;#34;male&amp;#34;&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person546398&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">rel:&lt;/span>&lt;span class="nt">siblingOf&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="nn">ex:&lt;/span>&lt;span class="nt">person1&lt;/span>&lt;span class="w"> &lt;/span>&lt;span class="p">.&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="w">&lt;/span>&lt;span class="c"># ... ex:person546400 to ex:person1000000&lt;/span>&lt;span class="w">
&lt;/span>&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>When querying the VectorStore with slightly more complex questions, such as “What is the age difference between Alice and &lt;em>her brother&lt;/em>?”, the vector search might retrieve the fact &lt;code>ex:person1 foaf:name &amp;quot;Alice&amp;quot; .&lt;/code> because the name “Alice” is mentioned directly in the query. Depending on the embedding model, it might even return some relationships of the form &lt;code>ex:personX rel:siblingOf ex:personY .&lt;/code>, since the word “brother” hints that siblings are relevant. It is questionable, though, whether facts like &lt;code>ex:personX foaf:gender &amp;quot;male&amp;quot; .&lt;/code> would be picked up merely because the word “brother” refers to male individuals. In any case, with a million people in the graph, it is almost certain that no facts about the specific individual &lt;code>ex:person546398&lt;/code> (Bob) would be retrieved, since vector search cannot determine that Bob is the particular sibling we’re looking for. That requires some form of reasoning as part of the retrieval process.&lt;/p>
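&lt;p>To make this retrieval gap concrete, here is a minimal, self-contained sketch. It substitutes a toy bag-of-words similarity for a real embedding model (purely an assumption for illustration), but the failure mode is the same: facts about Bob share no surface form with the question, so they are never retrieved:&lt;/p>

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words 'embedding': a crude stand-in for a real embedding model."""
    for ch in '":.?,':
        text = text.replace(ch, " ")
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

facts = [
    'ex:person1 foaf:name "Alice" .',
    'ex:person1 foaf:age 30 .',
    'ex:person546398 foaf:name "Bob" .',
    'ex:person546398 foaf:gender "male" .',
    'ex:person546398 rel:siblingOf ex:person1 .',
]

q = embed("What is the age difference between Alice and her brother?")
scores = {f: cosine(q, embed(f)) for f in facts}

# Facts mentioning "Alice" or "age" get a nonzero score...
assert scores['ex:person1 foaf:name "Alice" .'] > 0
# ...but nothing ties ex:person546398 (Bob) to the question: "brother" shares
# no surface form with "male" or "siblingOf", so Bob's facts score zero.
assert scores['ex:person546398 foaf:name "Bob" .'] == 0.0
assert scores['ex:person546398 foaf:gender "male" .'] == 0.0
```

&lt;p>Real embedding models capture more semantics than this toy version, but the core problem remains: retrieval by similarity alone cannot perform the multi-hop inference needed to identify Bob.&lt;/p>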
&lt;p>At best, the model would admit that it had insufficient information when given such an incomplete subset of facts. However, due to the aforementioned problem of hallucinations, it might just as well confidently produce an answer like “The age difference between Alice and her brother is 3 years” without any factual basis.&lt;/p>
&lt;h1 id="key-insight">Key Insight&lt;/h1>
&lt;p>I learned something important: the model could perform complex inference if all the necessary knowledge was available. Determining that Alice was Bob’s sibling and that Bob, being male, was Alice’s brother might seem trivial to humans, but it represented genuine logical reasoning for the model.&lt;/p>
&lt;p>Finding Alice’s brother is intuitive for a human looking at a graph: we start with Alice and follow the most promising relationships. But programming this flexibility is challenging. What if the relationship was labeled &lt;code>ex:hasBrother&lt;/code> instead of &lt;code>rel:siblingOf&lt;/code>? What if siblings weren’t directly connected but shared common parents? The complexity grows quickly.&lt;/p>
&lt;h1 id="learnings">Learnings&lt;/h1>
&lt;p>I realized I needed to retrieve a small but complete subgraph containing all facts relevant to answering a question. As long as this subgraph fits into the context window, the LLM could reason over it effectively. Unfortunately, as described before, a simple VectorStore using embeddings wasn’t sophisticated enough to retrieve the right subgraph for slightly advanced queries.&lt;/p>
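&lt;p>One way to make “small but complete subgraph” concrete is a k-hop neighborhood around the entities matched in the question. The following is a hypothetical sketch (the helper and its naivety are mine, not part of the original system); a real implementation would, among other things, avoid hopping through shared literal values:&lt;/p>

```python
def khop_subgraph(triples, seeds, k=2):
    """Collect all triples within k hops of the seed entities (undirected)."""
    nodes = set(seeds)
    subgraph = set()
    for _ in range(k):
        new_triples = {(s, p, o) for (s, p, o) in triples if s in nodes or o in nodes}
        subgraph |= new_triples
        # NOTE: naive; a real implementation would skip literals here, so that
        # two people who happen to share the age "30" are not connected by it.
        nodes |= {s for (s, _, _) in new_triples} | {o for (_, _, o) in new_triples}
    return subgraph

triples = [
    ("ex:person1", "foaf:name", '"Alice"'),
    ("ex:person1", "foaf:age", "30"),
    ("ex:person1", "rel:siblingOf", "ex:person546398"),
    ("ex:person546398", "foaf:name", '"Bob"'),
    ("ex:person546398", "foaf:age", "35"),
    ("ex:person2", "foaf:name", '"Carol"'),  # unrelated entity
]

sub = khop_subgraph(triples, seeds={"ex:person1"}, k=2)
assert ("ex:person546398", "foaf:age", "35") in sub       # Bob's age is reachable
assert ("ex:person2", "foaf:name", '"Carol"') not in sub  # unrelated facts stay out
```

&lt;p>The hard part, of course, is choosing the seeds and the radius so that the subgraph stays small yet complete, which is exactly where simple embedding-based retrieval fell short.&lt;/p>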
&lt;h1 id="using-langchain">Using LangChain&lt;/h1>
&lt;p>My next attempt involved LangChain, which offers prompting chains, agents, and custom tools. I created a &lt;code>sparql_query&lt;/code> tool that accepted valid SPARQL queries as input and returned results as &lt;em>observations&lt;/em> for the model.&lt;/p>
&lt;p>While this sounded promising, LangChain’s prompts were quite verbose, and the parsing of responses was inconsistent. I spent too much time trying to inject additional instructions into LangChain prompts, eventually deciding it would be easier to craft my own from scratch:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-txt" data-lang="txt">&lt;span class="line">&lt;span class="cl">=== PROMPT ===
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Situation
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">You are a knowledge graph expert. Your high-level objective is to answer the following question given by a non-expert user: &amp;#34;Who is Alice&amp;#39;s brother?&amp;#34;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Tools
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">split_question: You can split the question into a list of subquestions. Arguments:
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">- [&amp;lt;list of subquestions&amp;gt;]
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">vector_store: You have access to a vector store, which you can query with natural language questions to gain some insights that help you understand the structure of relevant parts of the graph. Arguments:
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">- &amp;lt;natural language question&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">sparql_query: You can query the graph by writing lines of SPARQL that will be inserted into a template. Arguments:
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">- `&amp;lt;SPARQL query&amp;gt;`
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Task
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">You have to decide what to do next.
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Knowledge
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">=== RESPONSE FORMAT ===
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Thoughts
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&amp;lt;your thoughts about the current state of knowledge and progress in the strategy&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Progress
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&amp;lt;Estimate progress in percent&amp;gt;%. &amp;lt;Message with how you would formulate the final answer with your findings so far; add as much concrete information as possible&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Next Step
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&amp;lt;Which tool would you use next and why?&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Tool Selection
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&amp;lt;split_question|vector_store|sparql_query&amp;gt; &amp;lt;arguments&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Final Remarks
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&amp;lt;Anything else you want to say to the user&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">=== END ===
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>This approach was better, especially since I could easily modify the prompt. I was expecting the model to write queries like:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-sparql" data-lang="sparql">&lt;span class="line">&lt;span class="cl">&lt;span class="k">SELECT&lt;/span> &lt;span class="nv">?age_alice&lt;/span> &lt;span class="nv">?age_brother&lt;/span> &lt;span class="k">WHERE&lt;/span> &lt;span class="p">{&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="nv">?alice&lt;/span> &lt;span class="nn">rdf&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">type&lt;/span> &lt;span class="nn">foaf&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">Person&lt;/span> &lt;span class="p">;&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="nn">foaf&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">name&lt;/span> &lt;span class="s">&amp;#34;Alice&amp;#34;&lt;/span> &lt;span class="p">;&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="nn">foaf&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">age&lt;/span> &lt;span class="nv">?age_alice&lt;/span> &lt;span class="p">;&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="nn">rel&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">siblingOf&lt;/span> &lt;span class="nv">?brother&lt;/span> &lt;span class="p">.&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="nv">?brother&lt;/span> &lt;span class="nn">rdf&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">type&lt;/span> &lt;span class="nn">foaf&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">Person&lt;/span> &lt;span class="p">;&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="nn">foaf&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">age&lt;/span> &lt;span class="nv">?age_brother&lt;/span> &lt;span class="p">;&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="nn">foaf&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">gender&lt;/span> &lt;span class="s">&amp;#34;male&amp;#34;&lt;/span> &lt;span class="p">;&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="nn">rel&lt;/span>&lt;span class="p">:&lt;/span>&lt;span class="nt">siblingOf&lt;/span> &lt;span class="nv">?alice&lt;/span> &lt;span class="p">.&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="p">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Maybe even add an additional line to calculate the age difference, like &lt;code>BIND( ?age_alice - ?age_brother AS ?age_diff )&lt;/code>. And sometimes it actually did, or nearly did. If you have ever tried to execute code written by an LLM, you have probably experienced that it often doesn’t work on the first try. Unfortunately, a query that is syntactically correct but semantically wrong simply returns an empty result set. The model therefore can’t tell whether it made a mistake or whether the information it was querying for does not exist in the graph, and it will simply assume that the graph does not contain the information required to answer the question. It happens all too often that the model uses a predicate that doesn’t exist, such as &lt;code>rel:sibling&lt;/code> (as opposed to &lt;code>rel:siblingOf&lt;/code>). While the intention is clear to an expert user, a non-expert will likely not be able to guide the model in the right direction to fix the SPARQL query.&lt;/p>
&lt;p>UPDATE (2025): Newer approaches using constrained decoding could potentially solve the specific issue of hallucinated predicates. This technique restricts the model to generating SPARQL queries that only use predicates actually present in the ontology. By filtering the model’s output to allow valid predicates like &lt;code>rel:siblingOf&lt;/code> and prevent invalid ones like &lt;code>rel:sibling&lt;/code>, we could guarantee that generated queries reference only terms that exist in our specific knowledge graph. While this wasn’t available during my original experiments, it represents a promising direction for improving the reliability of LLM-to-knowledge-graph interfaces.&lt;/p>
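&lt;p>Even without token-level constrained decoding, a crude pre-flight check in the same spirit can catch this class of error. The sketch below (hypothetical helper names, regex-based, and far simpler than real constrained decoding) validates the vocabulary terms in a generated query against the predicates the graph actually contains:&lt;/p>

```python
import re

# Predicates that actually exist in the graph; in practice these could be
# collected once via: SELECT DISTINCT ?p WHERE { ?s ?p ?o }
KNOWN_PREDICATES = {"rdf:type", "foaf:name", "foaf:age", "foaf:gender", "rel:siblingOf"}

def unknown_predicates(sparql):
    """Flag vocabulary terms (rdf:/foaf:/rel:) that the graph has never seen.
    A crude pre-flight check: full constrained decoding would instead restrict
    token generation, but even this catches the rel:sibling class of errors."""
    used = set(re.findall(r"(?:^|\s)((?:rdf|foaf|rel):\w+)", sparql))
    return used - KNOWN_PREDICATES

query = 'SELECT ?b WHERE { ?a foaf:name "Alice" ; rel:sibling ?b . }'
assert unknown_predicates(query) == {"rel:sibling"}
```

&lt;p>Instead of an empty result set, the model could then be given targeted feedback (“&lt;code>rel:sibling&lt;/code> does not exist; did you mean &lt;code>rel:siblingOf&lt;/code>?”) before the query is ever executed.&lt;/p>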
&lt;p>However, these were not the only hallucinations. Sometimes, in fact quite often, the model simply invented entity names, e.g., &lt;code>Bob&lt;/code> (which, I must admit, was a good guess) or &lt;code>John&lt;/code> for Alice’s brother. Less original &lt;em>names&lt;/em>, such as &lt;code>Alice's brother&lt;/code>, came up as well. It is highly unlikely that such a string would be the name of the entity in question, but the model does not consider this. In the end, it just generates more or less plausible text, which surprisingly often is good enough for a human who can easily do the needed interpretation on their end, but unfortunately not good enough for a computer program such as a query engine.&lt;/p>
&lt;h1 id="iterative-query-building">Iterative Query Building&lt;/h1>
&lt;p>My next idea mimicked how humans typically explore knowledge graphs. When I write queries, I start with something simple to get a feel for the structure of the graph, then refine step by step. I encoded this strategy in the following prompt:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-txt" data-lang="txt">&lt;span class="line">&lt;span class="cl">=== PROMPT ===
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### System Message
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">You are an expert in exploring RDF knowledge graphs using SPARQL. Your high-level objective is to answer the following question: &amp;#34;What is the age difference between Alice and her brother?&amp;#34;.
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Capabilities
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">You can query the graph by writing lines of SPARQL that will be inserted into the following template:
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&amp;#34;&amp;#34;&amp;#34;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">prefix rdf: &amp;lt;http://www.w3.org/1999/02/22-rdf-syntax-ns#&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">prefix rdfs: &amp;lt;http://www.w3.org/2000/01/rdf-schema#&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">prefix foaf: &amp;lt;http://xmlns.com/foaf/0.1/&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">prefix rel: &amp;lt;http://purl.org/vocab/relationship/&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">prefix ex: &amp;lt;https://example.com/people#&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">SELECT * WHERE {
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> {INPUT_LINES_INSERTION}
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">}
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&amp;#34;&amp;#34;&amp;#34;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">The query will be executed, and the results will be returned to you. You only have to concern yourself with modifying lines in the WHERE clause.
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### History
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">You can learn from this history of previous attempts and their respective results:
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">{get_previous_queries_and_results()}
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Task
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">You should iteratively improve the query step by step. It will be executed, and its results will be available in the history in the next iteration. This is the current query and the results it returned:
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">{get_last_working_query_and_results()}
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Strategy
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">First, write out your thoughts. Does the current query yield results that will help you answer the question with some improvements? Is it going in the right direction when considering the results of previous queries? If not, what has to be changed? If yes, what can be improved to get closer to your goal? Consider previous queries so that you don&amp;#39;t try things again that did not yield results previously. You are done when you think you have enough knowledge to answer the question.
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">To be able to iteratively improve the query, you should only do one of the following things: add a line, remove a line, or change a line.
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">=== RESPONSE FORMAT ===
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Thoughts
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&amp;lt;your thoughts about the current query and its results. Does it provide **all** needed information, or how would you improve it?&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Are we done?
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&amp;lt;yes|no&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Current state of knowledge
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&amp;lt;Message with how you would formulate the final answer with your findings so far; add as much concrete information as possible&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">If not done, come up with 3 different ideas on how to improve the query.
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Idea &amp;lt;1/2/3&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&amp;lt;your idea&amp;gt;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">Select the idea that you believe is the simplest yet most effective. Explain your reasoning for this choice and describe the modifications you would make to the query. In order to provide you with more detailed feedback from the execution, each line will be added one after the other, and the partial query will be executed. Every execution with a non-empty result set will be added to the provided history.
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">First, write down step-by-step what you want to achieve with each line added and order the lines in a way that allows you to gain the most insights about the graph with each execution, as mentioned before. Also, add your reasoning for the order of each step.
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Step-by-step description
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">a. &amp;lt;First step&amp;gt; (&amp;lt;reasoning why this step should be first&amp;gt;)
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">b. &amp;lt;Second step&amp;gt; (&amp;lt;reasoning why this step should be second&amp;gt;)
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">...
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">Remember to end each line with a `.`.
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">### Modifications
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">[
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">{{&amp;#34;line&amp;#34;: &amp;lt;line number&amp;gt;, &amp;#34;action&amp;#34;: &amp;#34;&amp;lt;add&amp;gt;&amp;#34;, &amp;#34;text&amp;#34;: &amp;#34;&amp;lt;new text&amp;gt; .&amp;#34;}},
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">{{&amp;#34;line&amp;#34;: &amp;lt;line number&amp;gt;, &amp;#34;action&amp;#34;: &amp;#34;&amp;lt;edit&amp;gt;&amp;#34;, &amp;#34;text&amp;#34;: &amp;#34;&amp;lt;modified text&amp;gt; .&amp;#34;}},
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">{{&amp;#34;line&amp;#34;: &amp;lt;line number&amp;gt;, &amp;#34;action&amp;#34;: &amp;#34;&amp;lt;delete&amp;gt;&amp;#34;, &amp;#34;text&amp;#34;: &amp;#34;&amp;lt;text to be deleted&amp;gt; .&amp;#34;}},
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">...
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">]
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">=== END ===
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>At the placeholders for history and previous queries (&lt;code>{get_previous_queries_and_results()}&lt;/code> and &lt;code>{get_last_working_query_and_results()}&lt;/code>), I’d insert the actual history of queries and results. My idea was to provide all the information needed for the model to understand how I wanted to collaborate.&lt;/p>
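&lt;p>For illustration, applying such a modifications list on the backend might look like the following sketch (a hypothetical helper; the post doesn’t show the actual implementation):&lt;/p>

```python
def apply_modifications(lines, modifications):
    """Apply the model's line-based edits (add/edit/delete, 1-based line numbers)
    to the WHERE-clause lines, in the order they were given."""
    for mod in modifications:
        i = mod["line"] - 1
        if mod["action"] == "add":
            lines.insert(i, mod["text"])
        elif mod["action"] == "edit":
            lines[i] = mod["text"]
        elif mod["action"] == "delete":
            del lines[i]
    return lines

where = ['?alice foaf:name "Alice" .']
where = apply_modifications(where, [
    {"line": 2, "action": "add", "text": "?alice rel:siblingOf ?brother ."},
    {"line": 3, "action": "add", "text": '?brother foaf:gender "male" .'},
])
assert where == [
    '?alice foaf:name "Alice" .',
    "?alice rel:siblingOf ?brother .",
    '?brother foaf:gender "male" .',
]
```

&lt;p>After each applied modification, the partial query would be inserted into the template, executed, and the non-empty results appended to the history for the next iteration.&lt;/p>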
&lt;p>This complex approach was token-intensive, but after many attempts, it sometimes managed to find Alice’s brother. However, it tended to overcomplicate things, such as using SPARQL as a calculator to compute simple age differences in needlessly convoluted ways, or retrieving unnecessary information. It wasn’t efficient or robust, but it was the first approach so far that produced somewhat useful results.&lt;/p>
&lt;p>I considered building on this foundation, letting the model gradually gain knowledge about the graph until it felt confident to answer. However, the whole system fell apart once I introduced more complex queries. The model started running in circles, creating increasingly complex queries, effectively wandering aimlessly &lt;em>hoping&lt;/em> to stumble upon relevant information. Realistically speaking, I wasn’t getting any closer to a viable solution. So, I was back to square one.&lt;/p>
&lt;h1 id="whats-next">What’s Next&lt;/h1>
&lt;p>After multiple attempts with sophisticated prompts and complex query strategies, I was still stuck without a reliable solution. The approaches I’d tried so far all had fundamental limitations: either they couldn’t handle large graphs, generated hallucinated information, or simply wandered aimlessly without finding the correct answer.&lt;/p>
&lt;p>But I wasn’t ready to give up. I realized I needed to approach the problem in a completely different way. If complex prompting strategies weren’t working, perhaps I should try something radically simpler? How about creating a systematic testing approach with incrementally more complex questions so that I could get a comprehensive understanding of the model’s capabilities and limitations?&lt;/p>
&lt;p>In Part 2, I’ll share how I developed a rigorous testing methodology that led to a surprising discovery: sometimes, the most straightforward approach is the most effective. I’ll reveal how my most sophisticated prompting techniques were consistently outperformed by something far more basic and how this counterintuitive finding completely changed my approach to connecting LLMs with knowledge graphs.&lt;/p>
&lt;p>I’ll also share my process for creating a systematic testing framework that allowed me to measure performance across different prompting strategies with varying question complexity. This data-driven approach led to better results and revealed interesting insights about how LLMs actually reason with structured knowledge.&lt;/p></description></item></channel></rss>