<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://markusharrer.de/feed.xml" rel="self" type="application/atom+xml" /><link href="https://markusharrer.de/" rel="alternate" type="text/html" hreflang="en" /><updated>2026-03-14T18:54:01+00:00</updated><id>https://markusharrer.de/feed.xml</id><title type="html">Markus Harrer</title><subtitle>Passionate about improving legacy systems through software analytics, architectural thinking, and agentic software modernization.</subtitle><author><name>Markus Harrer</name><email>hello@markusharrer.de</email></author><entry><title type="html">New Talks Section on markusharrer.de</title><link href="https://markusharrer.de/blog/2026/03/14/talks-section/" rel="alternate" type="text/html" title="New Talks Section on markusharrer.de" /><published>2026-03-14T11:00:00+00:00</published><updated>2026-03-14T11:00:00+00:00</updated><id>https://markusharrer.de/blog/2026/03/14/talks-section</id><content type="html" xml:base="https://markusharrer.de/blog/2026/03/14/talks-section/"><![CDATA[<p>There’s now a <a href="/talks/">Talks section</a> on this site. I’m collecting my presentations there – with slides and the key points from each one side by side.</p>

<p>First up: my talk <strong><a href="/talks/softwaremodernisierung-ki-sibb-2025/">Softwaremodernisierung mit GenAI – The Good, the Bad, the Unexpected</a></strong>, given at SIBB // Digitalverband Berlin-Brandenburg in May 2025.</p>]]></content><author><name>Markus Harrer</name><email>hello@markusharrer.de</email></author><category term="news" /><summary type="html"><![CDATA[I've added a new Talks section to the site – with slides and the spoken essence of each talk, slide by slide.]]></summary></entry><entry><title type="html">AI Productivity Gains in Different Situations</title><link href="https://markusharrer.de/blog/2026/02/18/ai-productivity-gains-in-different-situations/" rel="alternate" type="text/html" title="AI Productivity Gains in Different Situations" /><published>2026-02-18T12:25:53+00:00</published><updated>2026-02-18T12:25:53+00:00</updated><id>https://markusharrer.de/blog/2026/02/18/ai-productivity-gains-in-different-situations</id><content type="html" xml:base="https://markusharrer.de/blog/2026/02/18/ai-productivity-gains-in-different-situations/"><![CDATA[<p>Where does LLM-assisted software development actually help with development productivity, and where does it fall short of expectations? Rather than viewing AI in software development as a one-dimensional productivity accelerator, we explore these questions along several dimensions offered by a Stanford-affiliated study: project maturity, task complexity, and programming language popularity. The goal is to give software developers and engineering leaders alike a more realistic picture of what to expect from AI, beyond the current hype.</p>

<p>Where do you even begin measuring productivity gains from using Large Language Models (LLMs) in software development? For this short analysis, I’m drawing on data from the talk <a href="https://www.youtube.com/watch?v=tbDDYKRFjhk">Does AI Actually Boost Developer Productivity? (Stanford 100k Devs Study)</a> by Yegor Denisov-Blanch. In the study, 136 teams from 27 countries were asked whether they see productivity improvements from using AI (more precisely: LLM-assisted software development).</p>

<p>The following charts are the ones most relevant to my exploration of what actually matters. I show and interpret them in this short article.</p>

<h2 id="1-the-context-brake">1. The Context Brake</h2>

<p>One of the most interesting findings from the talk is a 2×2 matrix that shows in which situations AI assistance actually delivers a productivity benefit for software developers. Rather than making blanket claims about AI productivity, the matrix breaks the question down along two dimensions: how mature the codebase is, and how complex the task at hand is. The results are more nuanced than the usual promises on the glossy brochures (or websites) of various AI tool vendors would suggest.</p>

<figure class="figure-indented">
  <img src="/assets/images/posts/2026-02-18_ai-productivity/productivity_matrix_task_mat.png" alt="Software Engineering Productivity Increases from AI Use by Project Maturity and Task Complexity" />
  <figcaption>Productivity increases from AI use by project maturity and task complexity</figcaption>
</figure>

<h3 id="my-interpretation">My interpretation</h3>

<p>The matrix shows that AI productivity gains are highest in greenfield projects with low task complexity, where study participants report a 35–40% increase. The reason is obvious to me: low-complexity tasks are often repetitive and clearly defined, so AI can reliably generate boilerplate-heavy code with minimal risk of error. On top of that, I think we’re in the league of to-do list apps here: programmed a thousand times, and a thousand times nothing further came of it.</p>

<p>However, gains diminish significantly as project maturity increases and/or task complexity rises (i.e., once things get serious):</p>

<ul>
  <li>In brownfield and legacy projects, gains drop to 15–20% even for simple maintenance tasks, as outdated code and intricate dependencies constrain what AI can safely contribute.</li>
  <li>For high-complexity tasks in systems that already resemble a Big Ball of Mud, gains shrink to just 0–10%, because the AI struggles to reason about tangled architectures, unclearly implemented ideas, and deeply nested logic.</li>
</ul>

<p>This is hardly surprising to me at this point: the underlying training data comes in large part from publicly accessible code repositories. There’s a clear bias in what gets shared: code you wouldn’t be embarrassed about in public (at least that’s how it is for me). The actual mass of code that follows other ideals remains locked away in the closed software systems of enterprises. The first encounter with this kind of code can therefore be disorienting for an LLM, making it harder to adapt familiar patterns from its training data to the existing codebase. Or as Ludwig Wittgenstein said over a hundred years ago:</p>

<blockquote>
  <p>The limits of my language mean the limits of my world.</p>
</blockquote>

<p>But even in ideal greenfield environments, high-complexity work limits AI’s impact to 10–15%, because such tasks demand deeper human judgment that mechanical automation cannot replace. AI can assist, but it cannot yet replace the architectural thinking and contextual judgment that complex engineering and domain knowledge require. This is also related to the limited amount of available context capacity (see also my assessment in <a href="https://markusharrer.de/blog/2026/02/17/agentic-software-modernization-chances-and-traps/">“Agentic Software Modernization: Chances and Traps”</a>).</p>

<p><strong>TL;DR:</strong> AI delivers the most when the problem is well-scoped and the codebase is clean. High task complexity and legacy code are the two primary productivity killers when using AI — especially in combination (which is likely the reality for most of us).</p>

<h2 id="2-the-niche-penalty">2. The Niche Penalty</h2>

<p>The second chart shifts the lens from project maturity to programming language choice. It turns out that the popularity of the language you work in has a substantial impact on how much an LLM can actually help, driven primarily by how much training data exists for that language.</p>

<figure class="figure-indented">
  <img src="/assets/images/posts/2026-02-18_ai-productivity/productivity_matrix_task_lang.png" alt="Impact of Language on AI Gains" />
  <figcaption>Impact of programming language on AI productivity gains</figcaption>
</figure>

<h3 id="my-interpretation-1">My interpretation</h3>

<p>In popular languages (e.g., Python, Java), LLMs deliver their highest value: productivity gains of 20–25% on simple tasks thanks to abundant training data (e.g., through Reinforcement Learning on thousands of simple question-and-answer pairs), and 10–15% on complex ones. LLMs can still provide good support even on complex tasks, thanks to the vast amount of diverse training data available for these popular languages. But even in this best case, complex tasks require human judgment, meaning AI acts as an accelerator rather than a replacement.</p>

<p>Conversely, niche languages (e.g., COBOL — though that’s already mainstream to me personally) see negligible gains of 0–5% for simple work due to limited training data. For high-complexity tasks, the situation deteriorates further: productivity can actually drop to as low as -5%, as the AI enters a hallucination-prone zone where it confidently produces plausible but incorrect output. This illustrates that without sufficient training data, AI tools can become a liability rather than an asset for complex engineering work. Personally, I don’t see this changing for the better in the near future either. It’s also becoming apparent that even <a href="https://github.com/IBM/rpg-genai-data">actively soliciting code in niche programming languages</a> doesn’t yield decent training data (and let’s be honest: which insurance company wants to put its COBOL-written calculation engine on GitHub?).</p>

<p>The underlying driver across all four quadrants is the same: the more training data available for a given language and task type, the more reliably AI can contribute. Language popularity is therefore not just a matter of personal preference but a direct indicator of how productive LLM-assisted software development will be.</p>

<h2 id="3-heaven-or-hell">3. Heaven or Hell</h2>

<p>For the third chart, I informally combine the mean productivity gains from the two previous 2×2 charts into a third perspective. This shows productivity gains broken down by programming language popularity and project maturity. This perspective is particularly interesting to me for a concrete reason: I’m partly involved in projects that use programming languages that don’t even make it into the top 50 of the <a href="https://www.tiobe.com/tiobe-index/">TIOBE Index</a>, as well as languages that will never appear there because they only exist within a single company. It goes without saying that these are decades-old, massive software systems that are now slowly due for modernization.</p>

<p><em>Note: This combined view is not a formally validated model but rather a pragmatic thought experiment, blending two independent data sources through simple averaging. It is meant to provide orientation rather than serve as a precise prediction.</em></p>
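<p>For transparency, here is a minimal sketch of that averaging, using the midpoints of the ranges quoted above. The exact numbers and weighting in my notebook (linked at the end of this article) may differ slightly; this is only meant to show the mechanics:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Minimal sketch: blend the two 2x2 matrices into the language-vs-maturity
# view by simple averaging of the range midpoints quoted above.
import numpy as np

# Rows: low/high task complexity; columns: greenfield/brownfield (gains in %)
maturity = np.array([[37.5, 17.5],
                     [12.5,  5.0]])

# Rows: low/high task complexity; columns: popular/niche language (gains in %)
language = np.array([[22.5,  2.5],
                     [12.5, -2.5]])

# Average each matrix over task complexity, then average both dimensions:
# combined[lang, mat] = (language_gain[lang] + maturity_gain[mat]) / 2
combined = (language.mean(axis=0)[:, np.newaxis] +
            maturity.mean(axis=0)[np.newaxis, :]) / 2

# Rows: popular/niche language; columns: greenfield/brownfield.
# Top left is "AI Heaven", bottom right is "AI Hell".
print(combined)
</code></pre></div></div>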

<figure class="figure-indented-wide">
  <img src="/assets/images/posts/2026-02-18_ai-productivity/productivity_matrix_lang_mat.png" alt="AI Assistant Productivity Gain Matrix" />
  <figcaption>AI assistance productivity gain matrix</figcaption>
</figure>

<p><small><em>Note: I shared a previous version of this chart with numbers in the quadrants <a href="https://www.linkedin.com/feed/update/urn:li:activity:7429940242922704896/">on LinkedIn</a>. The numbers have been removed in this version as they could give a misleading impression of precision. I also missed the opportunity to use the right colors for heaven and hell.</em></small></p>

<h3 id="my-interpretation-2">My interpretation</h3>

<p>When combining both dimensions — project maturity (greenfield vs. brownfield) and programming language popularity — four interesting quadrants emerge. The best-case scenario, “AI Heaven,” occurs when working in a popular language on a greenfield project, yielding the highest productivity gains. This is the ideal state: abundant training data meets a clean, unencumbered codebase. AI can operate at full potential. This is also why vibe coding and prototyping with languages like TypeScript and friends work so brilliantly.</p>

<p>Moving to brownfield projects in popular languages, gains drop noticeably. You’re now paying the bill for letting best practices around code hygiene slide (there’s another excellent talk by Yegor Denisov-Blanch on this: <a href="https://www.youtube.com/watch?v=JvosMkuNxF8">“Can you prove AI ROI in Software Eng?”</a>). The LLM still handles the well-known programming language well, but the complexity and technical debt of the existing codebase limit its contribution.</p>

<p>Interestingly, niche languages on greenfield projects still yield meaningful gains, only marginally lower than the legacy code scenario. This suggests that a clean codebase can partially compensate for weaker training data, but the language barrier still sets a meaningful ceiling. My bias here is that it’s simply always easier to start on a green field, regardless of the programming language (I still remember the days when people used to say “with Scala / F# we’re just faster,” which left me cold even back then. It gets interesting once you have a mountain of code that goes beyond a to-do list).</p>

<p>The worst-case scenario is “AI Hell”: a niche language combined with a brownfield codebase, producing only minimal gains. Here, both obstacles compound each other. The AI lacks sufficient training data for the language and simultaneously struggles to reason about a tangled legacy codebase — the result is unreliable outputs and a high risk of doing more harm than good.</p>

<p>The key takeaway is that language popularity and project maturity are both independently significant, and their negative effects are additive: each dimension reduces AI productivity on its own, and facing both together pushes gains to the lowest tier. Teams working in niche languages on legacy systems should be especially cautious about over-relying on AI tooling (see for example my article <a href="https://www.innoq.com/en/blog/2025/09/software-analytics-going-craizy/">“Software Analytics going crAIzy!”</a>).</p>

<p>PS: Did I mention that I’m a follower of the <a href="https://www.tqdev.com/2018-the-boring-software-manifesto/">Boring Software Manifesto</a> and have been preaching for years that everyone should join it? I believe that in the age of agentic software modernization, the manifesto is becoming more relevant than ever before. 😉</p>

<p><em>If you are interested in the charts, you can find the <a href="https://github.com/feststelltaste/software-analytics/blob/master/notebooks/AI%20Productivity%20Gains%20in%20different%20Situations.ipynb">Jupyter Notebook</a> that created the images based on the existing data from the talk.</em></p>]]></content><author><name>Markus Harrer</name><email>hello@markusharrer.de</email></author><category term="articles" /><category term="ai" /><category term="productivity" /><category term="legacy-systems" /><summary type="html"><![CDATA[Where does AI actually move the needle on developer productivity, and where does it fall short? An analysis across project maturity, task complexity, and programming language popularity.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://markusharrer.de/assets/images/posts/2026-02-18_ai-productivity/productivity_matrix_lang_mat_thumb.png" /><media:content medium="image" url="https://markusharrer.de/assets/images/posts/2026-02-18_ai-productivity/productivity_matrix_lang_mat_thumb.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Agentic Software Modernization: Chances and Traps</title><link href="https://markusharrer.de/blog/2026/02/17/agentic-software-modernization-chances-and-traps/" rel="alternate" type="text/html" title="Agentic Software Modernization: Chances and Traps" /><published>2026-02-17T16:44:23+00:00</published><updated>2026-02-17T16:44:23+00:00</updated><id>https://markusharrer.de/blog/2026/02/17/agentic-software-modernization-chances-and-traps</id><content type="html" xml:base="https://markusharrer.de/blog/2026/02/17/agentic-software-modernization-chances-and-traps/"><![CDATA[<p>Modernizing legacy software (often massive, undocumented “brownfield” projects in languages like COBOL or even older RPG in all its beautiful, different versions) is one of the toughest disciplines in software engineering. The promise of “AI agents” is tantalizing: Can autonomous AI agents automate this exhausting modernization process?</p>

<p>I watched several talks on YouTube (see the end of this article) and reflected on them against my own experience. I think my answer, and the answer from the experts behind recent Stanford studies and leading AI engineering firms (such as OpenHands and HumanLayer), is a big <strong>YES</strong>, but not in the way most people think.</p>

<p>Simply unleashing AI agents on an old codebase and hoping for a miracle is a recipe for disaster. Successful <a href="https://github.com/feststelltaste/awesome-agentic-software-modernization">Agentic Software Modernization</a> requires a fundamental shift in modernization workflows: away from vibe coding towards disciplined preparation and execution.</p>

<p>Based on current findings from the field, here are the essential Do’s and Don’ts for deploying AI agents in software modernization.</p>

<h2 id="the-core-problem-the-context-bottleneck">The Core Problem: The Context Bottleneck</h2>

<p>Before diving into the topic, we must understand the central constraint. AI models (LLMs) are “stateless.” They only know what exists in their current context window.</p>

<p>In complex legacy systems, it is impossible to cram the entire context (millions of lines of code, dependencies, business logic) into this window. When the window becomes too full (according to Dex Horthy of HumanLayer, often above ~40% utilization), the model enters the “Dumb Zone”, where response quality degrades rapidly and hallucinations increase.</p>
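<p>A tiny sketch of what guarding that threshold can look like in practice. Both the token estimate (a crude characters-per-token heuristic) and the window size are assumptions for illustration, not properties of any specific model:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Minimal sketch: guard against the "Dumb Zone" by watching context utilization.
# The ~40% threshold follows Dex Horthy's rule of thumb; the token estimate is
# a crude heuristic and the window size is an assumed example value.

def estimate_tokens(text: str) -> int:
    return len(text) // 4  # roughly four characters per token for English/code

def should_compact(context: str, window_size: int = 200_000) -> bool:
    return estimate_tokens(context) / window_size > 0.40

context = "... accumulated research notes, file contents, tool outputs ..."
if should_compact(context):
    print("Past the 40% mark: summarize the state and restart fresh")
</code></pre></div></div>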

<p>While approaches like Retrieval-Augmented Generation (RAG) and agentic search (essentially smart use of glob and grep) help with larger datasets, they face a more fundamental problem identified by recent Stanford studies: Entropy. When AI agents work within existing low-quality codebases, the produced code mirrors the low standards of the existing environment (leading to a death spiral).</p>

<p>Consequently, more and more harmful code is produced in a short amount of time, effectively automating the creation of technical debt. The art of Agentic Software Modernization, therefore, must not be about generating more legacy code faster, but about surgically managing the AI agent’s access to the right context to understand and improve the system.</p>

<h2 id="emerging-practices-for-agentic-modernization-workflows">Emerging Practices for Agentic Modernization Workflows</h2>

<h3 id="1-implement-a-rpi-workflow-research-plan-implement">1. Implement a RPI Workflow (Research, Plan, Implement)</h3>

<p>The biggest trap is letting the agent code immediately. Instead, the process should be divided into strict phases:</p>

<ul>
  <li>
    <p><strong>Phase 1: Research (Understanding):</strong> The agent analyzes <em>only</em> the existing codebase to understand how a feature works. The output is not code, but a summary (e.g., a Markdown document) explaining where the relevant logic resides. This is also where you as a developer can participate: data-driven approaches like Software Analytics help to put rigorous data science practices in place for analyzing software systems at scale. You can also enrich the codebase or the summary beforehand to guide an agent through it (e.g., signaling which parts are outdated, which areas are off-limits, and which code reflects the current ideas of the software system).</p>
  </li>
  <li>
    <p><strong>Phase 2: Plan (Intent Compression):</strong> Based on the research, the agent creates a detailed plan of which files need to be changed and how. This plan represents the “compressed intent” of the modification. A personal tip from me: make sure you scope those activities down. And in the best case, you can switch from agentic workloads to rule-based search-and-replace workloads, letting an agent craft change recipes that are then executed deterministically rather than non-deterministically.</p>
  </li>
  <li>
    <p><strong>Phase 3: Implement (Coding):</strong> Only now does the agent change or write code, based strictly on the approved plan or on your deterministic transformation rules.</p>
  </li>
</ul>

<p><strong>Why should you do this?</strong> If the plan is missing, wrong, or too vague, 1,000 lines of generated code are worthless, and the resulting changes are tedious to review because of all the sloppy code. Therefore, invest human intelligence in reviewing the research and planning steps: this creates alignment between human and AI early on, not just at the end when reviewing the final code.</p>
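<p>Put together, the three phases with human review gates in between could look roughly like the following sketch. The helpers <code class="language-plaintext highlighter-rouge">run_agent</code> and <code class="language-plaintext highlighter-rouge">human_approves</code> are stand-ins for whatever agent runtime and review process you use, not a real API:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Sketch of the Research -> Plan -> Implement workflow with review gates.
# Both helpers are assumptions to be wired to your agent runtime.

def run_agent(role: str, instructions: str, inputs: str) -> str:
    raise NotImplementedError("call your agent runtime (Claude Code, OpenHands, ...)")

def human_approves(artifact: str) -> bool:
    print(artifact)
    return input("Approve? [y/n] ").strip().lower() == "y"

research = run_agent("researcher",
                     "Analyze only the existing code. Output a Markdown summary "
                     "of where the relevant logic lives. Do not write code.",
                     inputs="src/")
if human_approves(research):                      # review gate 1: the research
    plan = run_agent("planner",
                     "List which files must change and how, step by step. "
                     "Output a plan, no code.",
                     inputs=research)
    if human_approves(plan):                      # review gate 2: the plan
        run_agent("implementer",
                  "Change the code strictly following the approved plan.",
                  inputs=plan)
</code></pre></div></div>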

<h3 id="2-use-iterative-refinement-techniques-for-continuous-feedback">2. Use Iterative Refinement Techniques for Continuous Feedback</h3>

<p>Never attempt a complex migration (e.g., a COBOL file to Java) in a single “One-Shot” prompt. The team at OpenHands demonstrated that this almost always leads to hallucinations.</p>

<p>Instead, use an iterative loop with specialized roles:</p>

<ul>
  <li><strong>Engineer Agent:</strong> Attempts to solve the task (e.g., migrating code).</li>
  <li><strong>Critic Agent:</strong> A separate agent that <em>only</em> reads. It analyzes the generated code, runs tests, and provides harsh feedback (scores).</li>
</ul>

<p>The process runs in loops: The Engineer delivers -&gt; The Critic evaluates and sends feedback back -&gt; The Engineer improves -&gt; Repeat until a quality standard is met.</p>

<p>I’m personally interested in automating as much of this as possible by providing immediate feedback within the agentic loop. Mechanisms like compilation errors, code duplication detection, and architectural violation checks are the key levers for guiding the agent mechanically.</p>
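<p>A compressed sketch of such a loop is shown below. The two agent functions are again stand-ins for your agent runtime; the mechanical check via a build tool is the part I would automate first:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Sketch of the Engineer/Critic loop with deterministic feedback mixed in.
# The agent functions are assumptions; the build command is just an example.
import subprocess

def engineer_agent(task: str, feedback: str) -> str:
    raise NotImplementedError("agent that writes or migrates code")

def critic_agent(code: str) -> tuple[float, str]:
    raise NotImplementedError("read-only agent that scores and critiques")

def mechanical_feedback() -> str:
    # Deterministic signals: does it compile, do the tests pass?
    build = subprocess.run(["mvn", "-q", "verify"], capture_output=True, text=True)
    return "" if build.returncode == 0 else build.stdout[-2000:]

def refine(task: str, max_rounds: int = 5, threshold: float = 0.8) -> str:
    feedback = ""
    for _ in range(max_rounds):
        code = engineer_agent(task, feedback)   # the Engineer delivers
        score, critique = critic_agent(code)    # the Critic evaluates
        errors = mechanical_feedback()
        if score >= threshold and not errors:
            return code                         # quality standard met
        feedback = critique + "\n" + errors     # loop with the feedback
    raise RuntimeError("quality bar not reached; escalate to a human")
</code></pre></div></div>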

<h3 id="3-invest-in-codebase-hygiene-first">3. Invest in “Codebase Hygiene” First</h3>

<p>AI is not a magic wand that turns bad code into good code. The Stanford study by Yegor Denisov-Blanch shows a clear correlation: In clean environments (high test coverage, good modularity, typing), AI can autonomously drive a large share of sprint tasks.</p>

<p>In “dirty” environments (high entropy, technical debt), the AI struggles, produces more errors, and can actually accelerate technical debt (the “Rework” trap). Before scaling AI, you must clean up the foundation. This is what we developers have felt for decades: clean code has always amplified developer productivity, and now AI amplifies those gains further.</p>

<p>Personally, I’m a big fan of enriching codebases with more semantic meaning. Renaming cryptic one-letter variables to reflect the actual technical or business domain is a high-leverage move and in many cases a no-brainer. Building out higher-level concepts or refactoring towards well-known patterns or idioms is also something I’m very into. Most of these activities are safe refactorings, meaning they usually don’t break the code (unless you’re storing code or class names in the database or using reflection voodoo in Java).</p>

<h3 id="4-practice-active-context-compaction">4. Practice Active Context Compaction</h3>

<p>When an agent strays off the path, the human impulse is often to correct it within the same chat (“No, do it differently,” “That was wrong”). This is a mistake. Every failed attempt clutters the context window with “noise.”</p>

<p>A better approach is active <strong>context compaction</strong> (or as I call it: context reset and starting over):</p>

<ol>
  <li>Have the agent summarize the current state and findings into a compact file (<code class="language-plaintext highlighter-rouge">state.md</code> or the like).</li>
  <li>Start a completely new chat with a fresh context.</li>
  <li>Feed in only the summary as the starting point.</li>
</ol>

<p>This keeps the agent in the “Smart Zone” of its context window.</p>
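<p>The mechanism itself is simple enough to sketch. The two session functions are assumptions standing in for your agent runtime; only the workflow is the point:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Sketch of active context compaction: summarize, persist, restart.
# `ask_agent` and `start_fresh_session` are assumed stand-ins for your runtime.
from pathlib import Path

def ask_agent(session: object, prompt: str) -> str:
    raise NotImplementedError("send a prompt within the given session")

def start_fresh_session() -> object:
    raise NotImplementedError("open a brand-new chat with an empty context")

def compact_and_restart(session: object) -> object:
    # 1. Have the agent compress its own state into a compact summary.
    summary = ask_agent(session,
                        "Summarize the goal, what is confirmed to work, what "
                        "failed and why, and the concrete next steps. Be terse.")
    # 2. Persist the state outside any context window.
    Path("state.md").write_text(summary)
    # 3. Feed only the summary into a completely new session.
    fresh = start_fresh_session()
    ask_agent(fresh, f"Continue this work. Current state:\n\n{summary}")
    return fresh
</code></pre></div></div>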

<p>I actually need to do this for some side projects, where I’m using SOTA models from DeepSeek, Minimax, or Moonshot with Claude Code. I find them really refreshing, but they are limited in context window size. So my workflow needs active context compaction and rigid, external management of the current state and the next steps to get good results from these LLMs.</p>

<h3 id="5-maintain-traceability-links">5. Maintain “Traceability Links”</h3>

<p>When migrating legacy code (e.g., COBOL to Java), the connection to the original business logic must never be lost. OpenHands recommends that the agent insert comments in the new code that link exactly to the line numbers of the old code where that logic originated. This is essential for future debugging and audits.</p>

<p>When I run, e.g., graph analytics on the whole codebase or create flowcharts for interesting parts of the code, I also want to make sure those results are correct. For this, I include simple line numbers, identifiers, or file names in the generated outputs, so I can quickly check that they are not hallucinated and then have something concrete to work with.</p>
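<p>A trivial sketch of how such a traceability stamp could look; the comment format is my own convention, not an OpenHands feature:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Sketch: stamp migrated code with a link back to the original source lines.
def with_traceability(java_code: str, source_file: str,
                      start_line: int, end_line: int) -> str:
    header = (f"// MIGRATED-FROM: {source_file}:{start_line}-{end_line}\n"
              f"// Verify against the original logic before removing this link.\n")
    return header + java_code

print(with_traceability("BigDecimal premium = base.multiply(riskFactor);",
                        "PREMCALC.CBL", 1210, 1242))
</code></pre></div></div>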

<h2 id="traps-to-avoid">Traps to Avoid</h2>

<h3 id="1-falling-into-the-vibe-coding-trap">1. Falling into the “Vibe Coding” Trap</h3>

<p>“Vibe Coding” describes the back-and-forth chatting with a model, guided more by feelings than by specifications (“Make that prettier,” “No, that feels wrong”). This leads to bloated context windows and confused models. AI Engineering in legacy system environments requires precision, not “vibes.”</p>

<p>So don’t get lost in trying to convince an AI agent to work on legacy code as if it were a greenfield project: the agent sees years of old habits manifested in code and needs different guidance.</p>

<h3 id="2-underestimating-rework">2. Underestimating “Rework”</h3>

<p>The Stanford studies (listed below) clearly show that while AI tools increase output (more Pull Requests), they often dramatically increase rework: the time developers spend repairing or rewriting AI-generated code. If you only look at speed/volume, you miss the massive cost of quality assurance.</p>

<p>As mentioned above, try to automate as much as possible to get rid of manual work.</p>

<h3 id="3-rely-blindly-on-line-by-line-code-reviews">3. Rely Blindly on Line-by-Line Code Reviews</h3>

<p>In a world where an agent can generate 20,000 lines of TypeScript code in minutes, traditional human line-by-line review is no longer scalable.</p>

<p>Do not rely solely on reviewing the final product. The <strong>Hierarchy of Leverage</strong> from Dex Horthy states: 1 Bad Line of Plan == 100 Bad Lines of Code. Shift the focus of human review “left”, to the research results and the plan, before the code is even written.</p>

<p>I like to go even a step further: in the research stage, look at how to systematically identify the refactoring spots. If you can derive rule-based changes from that, most of them will be structurally identical across the entire codebase. This means you don’t have to review line by line, but change pattern by change pattern, which is very efficient to do in a short amount of time (see the sketch below).</p>
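<p>As a toy sketch of this: normalize each change by masking identifiers and literals, then group structurally identical changes, so a reviewer signs off once per pattern instead of once per occurrence. The normalization here is deliberately crude:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Toy sketch: group changes by structural pattern for pattern-wise review.
import re
from collections import defaultdict

def normalize(line: str) -> str:
    line = re.sub(r"[A-Za-z_]\w*", "ID", line)  # mask identifiers
    return re.sub(r"\d+", "N", line)            # mask numeric literals

def change_pattern(before: str, after: str) -> str:
    return f"{normalize(before)}  ==>  {normalize(after)}"

changes = [  # (old line, new line) pairs, e.g. taken from a generated diff
    ("wrk1 = wrk1 + 1;", "lineCounter = lineCounter + 1;"),
    ("wrk2 = wrk2 + 1;", "recordCounter = recordCounter + 1;"),
]

by_pattern = defaultdict(list)
for before, after in changes:
    by_pattern[change_pattern(before, after)].append((before, after))

for pattern, instances in by_pattern.items():
    print(f"{len(instances)}x  {pattern}")  # review each pattern only once
</code></pre></div></div>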

<h3 id="4-expect-magic-in-niche-languages">4. Expect Magic in Niche Languages</h3>

<p>AI model performance depends heavily on training data. For popular languages (Python, Java, JS), they work excellently. For niche languages or very old dialects (specific COBOL variants, obscure DSLs, and my new favorite one: RPG), using AI can actually decrease productivity according to Stanford data, because the agent hallucinates and the human spends all their time correcting it.</p>

<p>To gauge beforehand how to approach a legacy modernization project, I like to take a look at the corresponding tags on StackOverflow and at the TIOBE programming language popularity index. Any language that hasn’t been in the top 10 over the last few years calls for a different approach: maybe a broader reverse engineering effort towards specs or tests is needed, or you may even want a more traditional transpiler that converts the niche language into a more popular one that an AI agent can then work with.</p>

<h2 id="conclusion-from-coder-to-architect-of-intent">Conclusion: From Coder to Architect of Intent</h2>

<p>Agentic Software Modernization works, but it requires discipline. The role of the human developer is shifting. We are becoming less the writers of syntax and more the architects of intent.</p>

<p>Here is how I like to look at the current state of agentic coding: yes, in greenfield projects you could treat AI agents as overmotivated junior developers. In legacy system environments, however, I like to think of AI agents as senior developers who are new to your company: they can do really amazing things but need decent onboarding, with a step-by-step introduction to the system, the background of the existing code, and a careful tour of the nasty parts of the system over time.</p>

<p>I think with this image of AI agents in mind and the Do’s and Don’ts from above, we can expect fewer complete disasters when using AI agents to tame complex legacy systems. And remember: Those who just click “Refactor all this” will end up in chaos.</p>

<h3 id="-sources--further-watching">📚 Sources &amp; Further Watching</h3>

<ul>
  <li><strong><a href="https://www.youtube.com/watch?v=4LUtguF160A">Calvin Smith / OpenHands: Refactoring COBOL to Java with Agentic AI with an Iterative Refinement Workflow</a></strong>
    <ul>
      <li>Topics: Iterative Refinement, Critic Agents, Traceability Links</li>
    </ul>
  </li>
  <li><strong><a href="https://www.youtube.com/watch?v=VvkhYWFWaKI">Dex Horthy (HumanLayer): Context Engineering SF: Advanced Context Engineering for Agents</a></strong>
    <ul>
      <li>Topics: Hierarchy of Leverage, 1 Bad Line of Plan vs Code</li>
    </ul>
  </li>
  <li><strong><a href="https://www.youtube.com/watch?v=rmvDxxNubIg">Dex Horthy (HumanLayer): No Vibes Allowed: Solving Hard Problems in Complex Codebases</a></strong>
    <ul>
      <li>Topics: RPI Workflow (Research, Plan, Implement), Context Compaction</li>
    </ul>
  </li>
  <li><strong><a href="https://www.youtube.com/watch?v=JvosMkuNxF8">Yegor Denisov-Blanch (Stanford): Can you prove AI ROI in Software Eng? (Stanford 120k Devs Study)</a></strong>
    <ul>
      <li>Topics: Measuring ROI, The danger of “Rework”, Entropy in codebases</li>
    </ul>
  </li>
  <li><strong><a href="https://www.youtube.com/watch?v=tbDDYKRFjhk">Yegor Denisov-Blanch (Stanford): Does AI Actually Boost Developer Productivity? (100k Devs Study)</a></strong>
    <ul>
      <li>Topics: Productivity stats, Niche vs. Popular languages, Codebase Hygiene</li>
    </ul>
  </li>
</ul>]]></content><author><name>Markus Harrer</name><email>hello@markusharrer.de</email></author><category term="creaitions" /><category term="software-modernization" /><category term="ai-agents" /><category term="legacy-systems" /><summary type="html"><![CDATA[Separating practices that work from marketing buzz in AI-powered software modernization.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://markusharrer.de/assets/images/posts/2026-02-17-agentic-software-modernization-chances-and-traps.png" /><media:content medium="image" url="https://markusharrer.de/assets/images/posts/2026-02-17-agentic-software-modernization-chances-and-traps.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Code Transformation Tools Landscape</title><link href="https://markusharrer.de/blog/2026/02/10/code-transformation-tools-landscape/" rel="alternate" type="text/html" title="Code Transformation Tools Landscape" /><published>2026-02-10T11:30:00+00:00</published><updated>2026-02-10T11:30:00+00:00</updated><id>https://markusharrer.de/blog/2026/02/10/code-transformation-tools-landscape</id><content type="html" xml:base="https://markusharrer.de/blog/2026/02/10/code-transformation-tools-landscape/"><![CDATA[<p><img src="/assets/images/posts/2026-02-10_codetransformationtools.png" alt="The Landscape of Code Transformation Tools &amp; Methods" /></p>

<p>I’ve just published a new reference resource: <a href="https://github.com/feststelltaste/codetransformationtools/">Code Transformation Tools Landscape</a> on GitHub.</p>

<h2 id="what-is-it">What Is It?</h2>

<p>A visual guide that maps the ecosystem of code transformation and refactoring tools across two key dimensions:</p>

<ul>
  <li><strong>Scale &amp; Automation</strong>: How fast can we change things?</li>
  <li><strong>Structural &amp; Semantic Understanding</strong>: How well do we understand what we are changing?</li>
</ul>

<h2 id="why-this-matters">Why This Matters</h2>

<p>Instead of framing tools in a binary “human vs. AI” way, this landscape helps you find the right refactoring approach for your specific context. Whether you need quick text replacements, structural code search, AI-assisted transformations, or semantic refactoring—this guide shows where each tool fits.</p>

<p>Check it out at: <strong><a href="https://github.com/feststelltaste/codetransformationtools/">github.com/feststelltaste/codetransformationtools</a></strong></p>]]></content><author><name>Markus Harrer</name><email>hello@markusharrer.de</email></author><category term="news" /><category term="tools" /><category term="refactoring" /><category term="code-transformation" /><summary type="html"><![CDATA[Published a visual reference guide mapping the ecosystem of code transformation and refactoring tools to help developers find the right tool for their context.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://markusharrer.de/assets/images/posts/2026-02-10_codetransformationtools-thumb.png" /><media:content medium="image" url="https://markusharrer.de/assets/images/posts/2026-02-10_codetransformationtools-thumb.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Finding Isolated Code in Legacy Systems</title><link href="https://markusharrer.de/blog/2026/02/09/rpg-island-prototype/" rel="alternate" type="text/html" title="Finding Isolated Code in Legacy Systems" /><published>2026-02-09T13:00:00+00:00</published><updated>2026-02-09T13:00:00+00:00</updated><id>https://markusharrer.de/blog/2026/02/09/rpg-island-prototype</id><content type="html" xml:base="https://markusharrer.de/blog/2026/02/09/rpg-island-prototype/"><![CDATA[<p>I’m excited to share a rough prototype I’ve been working on: <a href="https://github.com/feststelltaste/rpgisland">RPG Island</a>, a tool that helps identify “islands” of isolated code in legacy RPG/SQL systems.</p>

<p><img src="/assets/images/posts/2026-02-09-rpg-island-prototype-overview.png" alt="RPG Island visualization showing program dependencies and isolated clusters" /></p>

<h2 id="the-challenge">The Challenge</h2>

<p>When modernizing large legacy codebases, one of the biggest questions is: where do we start? Monolithic systems can have thousands of programs with complex dependencies. Some code is tightly coupled to the entire system, while other parts might be surprisingly isolated—perfect candidates for incremental migration.</p>

<h2 id="the-approach">The Approach</h2>

<p>RPG Island takes a graph-based approach to this problem:</p>

<ol>
  <li><strong>Parse</strong> - Extracts dependencies from RPG/SQL source code (fixed-format, free-format, and mixed-mode)</li>
  <li><strong>Graph</strong> - Loads program-to-program calls and program-to-table accesses into Neo4j</li>
  <li><strong>Cluster</strong> - Runs Weakly Connected Components algorithms to find isolated subsystems</li>
  <li><strong>Summarize</strong> - Uses LLM analysis (DeepSeek API) to generate descriptive names and functionality summaries for each island by analyzing the actual source code</li>
  <li><strong>Explore</strong> - Provides interactive visualization and Cypher queries to analyze the results</li>
</ol>

<p>The idea is simple: if a group of programs only talk to each other and don’t interact with the rest of the codebase, they form an “island” that can potentially be migrated independently. The LLM-powered summarization step helps teams quickly understand what each isolated component cluster is responsible for without manually reading through all the code.</p>
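<p>To make the clustering step concrete, here is a minimal sketch using the Neo4j Graph Data Science library from Python. Connection details, the projected graph name, and the property name are illustrative; the actual queries in the repository may differ:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Minimal sketch of the clustering step with Neo4j Graph Data Science (GDS).
# Connection details and the projected graph name are illustrative.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

with driver.session() as session:
    # Project programs/tables with their call and access relationships.
    session.run("CALL gds.graph.project('islands', "
                "['Program', 'Table'], ['CALLS', 'ACCESSES'])")
    # Weakly Connected Components: each component is a candidate island.
    session.run("CALL gds.wcc.write('islands', { writeProperty: 'islandId' })")
    # Small islands are often the best first candidates for migration.
    result = session.run("MATCH (p:Program) "
                         "RETURN p.islandId AS island, count(*) AS programs "
                         "ORDER BY programs ASC LIMIT 10")
    for record in result:
        print(record["island"], record["programs"])

driver.close()
</code></pre></div></div>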

<h2 id="technology-stack">Technology Stack</h2>

<ul>
  <li><strong>Neo4j</strong> for graph database and clustering algorithms</li>
  <li><strong>Python</strong> for parsing and data processing</li>
  <li><strong>DeepSeek API</strong> for AI-powered island summarization and naming</li>
  <li><strong>Jupyter Notebook</strong> for interactive analysis</li>
  <li><strong>Docker/Dev Containers</strong> for reproducible environment</li>
</ul>

<p>The tool tracks four relationship types: <code class="language-plaintext highlighter-rouge">CALLS</code> (program-to-program), <code class="language-plaintext highlighter-rouge">ACCESSES</code> (program-to-table), <code class="language-plaintext highlighter-rouge">DEFINED_IN</code> (program-to-file), and <code class="language-plaintext highlighter-rouge">PART_OF</code> (nodes-to-islands). Line numbers are captured for each dependency, enabling precise navigation back to the source code.</p>

<h2 id="current-status">Current Status</h2>

<p>This is a <strong>prototype</strong>—it uses regex-based parsing and may need customization for real-world codebases with dynamic calls, vendor-specific extensions, or complex ILE concepts. But it’s a starting point for teams looking to understand their legacy system structure before making migration decisions.</p>

<h2 id="next-steps">Next Steps</h2>

<p>I’m interested in testing this approach on more diverse RPG codebases and refining the clustering logic. If you’re working with legacy RPG systems, I’d love to hear your thoughts!</p>

<p>Check out the project on GitHub: <a href="https://github.com/feststelltaste/rpgisland">feststelltaste/rpgisland</a></p>]]></content><author><name>Markus Harrer</name><email>hello@markusharrer.de</email></author><category term="thoughts" /><category term="legacy-systems" /><category term="modernization" /><category term="neo4j" /><category term="graph-analysis" /><summary type="html"><![CDATA[A prototype tool that uses graph databases and clustering algorithms to identify isolated subsystems in legacy RPG codebases—making migration safer and easier.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://markusharrer.de/assets/images/posts/2026-02-09-rpg-island-prototype-overview-thumb.png" /><media:content medium="image" url="https://markusharrer.de/assets/images/posts/2026-02-09-rpg-island-prototype-overview-thumb.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Idiomatic Transpiling</title><link href="https://markusharrer.de/blog/2026/02/08/idiomatic-transpiling/" rel="alternate" type="text/html" title="Idiomatic Transpiling" /><published>2026-02-08T15:45:00+00:00</published><updated>2026-02-08T15:45:00+00:00</updated><id>https://markusharrer.de/blog/2026/02/08/idiomatic-transpiling</id><content type="html" xml:base="https://markusharrer.de/blog/2026/02/08/idiomatic-transpiling/"><![CDATA[<p>Have you ever seen auto-generated code that technically works but looks nothing like what a human would write? That’s the “JOBOL problem”—COBOL code mechanically translated to Java that violates every Java convention, creating what Federico Tomassetti calls “Frankenstein code.”</p>

<p><strong>Idiomatic Transpiling</strong> solves this by producing code that looks and behaves as if written by a language expert—prioritizing <strong>readability</strong>, <strong>maintainability</strong>, and <strong>language-specific conventions</strong> over strictly literal translation.</p>

<p>This isn’t just about aesthetics. It’s about creating maintainable code that development teams can actually work with long-term.</p>

<hr />

<h3 id="1-the-concepts-breakdown">1. The Concepts Breakdown</h3>

<p>To understand the term, it helps to separate the two words:</p>

<ul>
  <li>
    <p><strong>Transpiling:</strong> The process of converting source code from one language to another (e.g., Java to Python) or one version to another (e.g., ES6 to ES5). Traditional transpilers follow a three-step architecture: <strong>parse</strong> the source into an Abstract Syntax Tree (AST), <strong>transform</strong> it to the target AST, and <strong>generate</strong> output code. Standard transpilers usually care only about <strong>functional equivalence</strong>—ensuring the code <em>runs</em> the same way, regardless of how the output looks.</p>
  </li>
  <li>
    <p><strong>Idiomatic:</strong> Following the natural style, distinct grammar, and best practices of a specific language. Code idioms are more than syntax—they’re recognized patterns that experienced developers process as single “chunks,” reducing cognitive load. As research in cognitive science shows, our working memory can only handle 5-9 items simultaneously. Idioms compress complex constructs into single concepts, freeing mental capacity for higher-level problems.</p>
  </li>
</ul>

<p><strong>Therefore, Idiomatic Transpiling is:</strong></p>

<blockquote>
  <p>The automated translation of code where the output utilizes the unique features and “sugar” of the target language, rather than just simulating the logic of the source language. It employs a <strong>dual transformation strategy</strong>: recognizing and translating complete idioms first, then falling back to construct-by-construct mapping for unrecognized elements.</p>
</blockquote>

<h3 id="2-literal-vs-idiomatic-a-comparison">2. Literal vs. Idiomatic: A Comparison</h3>

<p>Imagine you are translating a <code class="language-plaintext highlighter-rouge">For Loop</code> from <strong>Java</strong> to <strong>Python</strong>.</p>

<h4 id="source-code-java">Source Code (Java)</h4>

<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nc">List</span><span class="o">&lt;</span><span class="nc">String</span><span class="o">&gt;</span> <span class="n">names</span> <span class="o">=</span> <span class="nc">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="s">"Alice"</span><span class="o">,</span> <span class="s">"Bob"</span><span class="o">);</span>
<span class="k">for</span> <span class="o">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="n">names</span><span class="o">.</span><span class="na">size</span><span class="o">();</span> <span class="n">i</span><span class="o">++)</span> <span class="o">{</span>
    <span class="nc">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="n">names</span><span class="o">.</span><span class="na">get</span><span class="o">(</span><span class="n">i</span><span class="o">));</span>
<span class="o">}</span>

</code></pre></div></div>

<h4 id="-literal-transpilation-non-idiomatic">❌ Literal Transpilation (Non-Idiomatic)</h4>

<p>A standard transpiler might prioritize strict logic preservation. It creates code that works, but looks like “Java written in Python.”</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Technically works, but no Python developer writes this.
</span><span class="n">names</span> <span class="o">=</span> <span class="p">[</span><span class="sh">"</span><span class="s">Alice</span><span class="sh">"</span><span class="p">,</span> <span class="sh">"</span><span class="s">Bob</span><span class="sh">"</span><span class="p">]</span>
<span class="n">i</span> <span class="o">=</span> <span class="mi">0</span>
<span class="k">while</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="nf">len</span><span class="p">(</span><span class="n">names</span><span class="p">):</span>
    <span class="nf">print</span><span class="p">(</span><span class="n">names</span><span class="p">[</span><span class="n">i</span><span class="p">])</span>
    <span class="n">i</span> <span class="o">+=</span> <span class="mi">1</span>

</code></pre></div></div>

<h4 id="-idiomatic-transpilation">✅ Idiomatic Transpilation</h4>

<p>An idiomatic transpiler understands the <em>intent</em> of the loop and utilizes Python’s iterator protocol.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Clean, readable, and "Pythonic"
</span><span class="n">names</span> <span class="o">=</span> <span class="p">[</span><span class="sh">"</span><span class="s">Alice</span><span class="sh">"</span><span class="p">,</span> <span class="sh">"</span><span class="s">Bob</span><span class="sh">"</span><span class="p">]</span>
<span class="k">for</span> <span class="n">name</span> <span class="ow">in</span> <span class="n">names</span><span class="p">:</span>
    <span class="nf">print</span><span class="p">(</span><span class="n">name</span><span class="p">)</span>

</code></pre></div></div>
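<p>To make “understands the intent” tangible, here is a toy sketch (my own illustration, not taken from a specific tool) of the dual transformation strategy on Python’s own AST: first try to recognize a whole idiom, otherwise fall back to construct-by-construct mapping:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Toy sketch of the dual strategy: recognize a whole idiom on the AST first,
# fall back to literal construct mapping otherwise. Here: index-based iteration.
import ast

def is_index_iteration(node: ast.AST) -> bool:
    """Matches `for i in range(len(xs)): ...`, a candidate for for-each."""
    return (isinstance(node, ast.For)
            and isinstance(node.iter, ast.Call)
            and isinstance(node.iter.func, ast.Name)
            and node.iter.func.id == "range"
            and len(node.iter.args) == 1
            and isinstance(node.iter.args[0], ast.Call)
            and isinstance(node.iter.args[0].func, ast.Name)
            and node.iter.args[0].func.id == "len")

tree = ast.parse("for i in range(len(names)):\n    print(names[i])")
loop = tree.body[0]
if is_index_iteration(loop):
    seq = loop.iter.args[0].args[0]  # the expression inside len(...)
    print(f"idiom recognized: iterate directly over {ast.unparse(seq)}")
else:
    print("no idiom match: fall back to construct-by-construct mapping")
</code></pre></div></div>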

<h3 id="3-why-does-this-matter">3. Why Does This Matter?</h3>

<p>Standard transpilation is often a one-time operation—you treat the output as a binary executable that you never touch. Idiomatic transpilation is different because the output is meant to be <strong>maintained by humans</strong>.</p>

<p>Code idioms serve two critical functions in modernization:</p>

<ul>
  <li>
    <p><strong>Familiarization:</strong> Identifying and preserving idioms creates an “interpretation key” that helps developers navigate unfamiliar systems quickly. When migrating legacy code, understanding the original idioms is as important as the syntax.</p>
  </li>
  <li>
    <p><strong>Quality Migrations:</strong> Without idiom awareness, transpilers produce “Frankenstein code”—mechanically translated output that doesn’t belong in its target language. This creates technical debt from day one.</p>
  </li>
</ul>

<p><strong>Practical Benefits:</strong></p>

<ul>
  <li><strong>Legacy Modernization:</strong> If you are migrating a million lines of COBOL to Java, you don’t want the resulting Java to look like COBOL. You want it to look like modern Java so your team can actually work on it.</li>
  <li><strong>Debugging:</strong> It is significantly easier to debug code that follows standard conventions than code filled with generated wrapper functions and polyfills.</li>
  <li><strong>Performance:</strong> Often, the “idiomatic” way of doing things is also the most optimized path in that specific language (e.g., using Rust’s ownership model correctly vs. trying to force garbage collection patterns into Rust).</li>
  <li><strong>Objective Progress Tracking:</strong> By measuring coverage of recognized and translated idioms, teams can quantify migration progress and predict timelines more accurately.</li>
</ul>

<h3 id="4-identifying-and-translating-idioms">4. Identifying and Translating Idioms</h3>

<p>Historically, idiomatic transpiling was incredibly difficult to achieve with rule-based algorithms because it required understanding “intent” rather than just syntax.</p>

<p><strong>Human-Centric Approaches:</strong></p>

<ul>
  <li><strong>Expert Review:</strong> Experienced developers identify common patterns, though they may overlook everyday idioms they take for granted—similar to how native speakers don’t consciously analyze their own language idioms.</li>
  <li><strong>Fresh Perspectives:</strong> Junior developers unfamiliar with a codebase can identify unexpected patterns that experts consider “obvious.”</li>
</ul>

<p><strong>Machine-Centric Approaches:</strong></p>

<p>Algorithmic idiom mining analyzes Abstract Syntax Trees to identify recurring patterns. Modern approaches use:</p>

<ul>
  <li><strong>Fact-based clustering:</strong> Recording boolean properties of AST nodes (method naming conventions, parameter counts, return types), vectorizing them, and clustering to identify pattern families</li>
  <li><strong>Sequence analysis:</strong> Identifying statistically significant combinations of statements that reveal idiomatic usage</li>
  <li><strong>Iterative refinement:</strong> Treating discovered clusters as pseudo-types for deeper pattern discovery</li>
</ul>
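<p>A minimal sketch of the fact-based clustering idea follows. The fact set, the sample source, and the clustering algorithm are illustrative; real idiom miners such as the FactsVector approach referenced below use far richer fact sets:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Sketch of fact-based idiom mining: boolean facts per AST node, vectorized
# and clustered. The fact set and sample source are purely illustrative.
import ast
import numpy as np
from sklearn.cluster import KMeans

SOURCE = '''
def get_name(self): return self.name
def get_id(self): return self.id
def process(batch):
    try:
        run(batch)
    except Exception:
        pass
'''

def facts(fn: ast.FunctionDef) -> list[bool]:
    return [
        fn.name.startswith("get"),                             # accessor naming
        len(fn.args.args) == 1,                                # single parameter
        any(isinstance(n, ast.Return) for n in ast.walk(fn)),  # returns a value
        any(isinstance(n, ast.Try) for n in ast.walk(fn)),     # error handling
    ]

tree = ast.parse(SOURCE)
fns = [n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
vectors = np.array([facts(fn) for fn in fns], dtype=float)

# Each cluster is a candidate pattern family worth inspecting for an idiom.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(vectors)
for fn, label in zip(fns, labels):
    print(label, fn.name)
</code></pre></div></div>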

<p><strong>The AI Revolution:</strong></p>

<p>Generative AI (LLMs) has made idiomatic transpiling significantly more accessible. Modern AI coding assistants act as “Idiomatic Transpilers,” understanding context and intent rather than just syntax. They don’t just translate code—they swap out language-specific libraries for ecosystem alternatives and adjust coding style automatically.</p>

<p>However, AI has limitations in distinguishing between design patterns, code clones, and true idioms without extensive training data on idiom mining. Purpose-built algorithms often provide more reliable pattern recognition for systematic migration projects.</p>

<h3 id="5-challenges--limitations">5. Challenges &amp; Limitations</h3>

<p>Idiomatic transpiling faces several inherent challenges:</p>

<ul>
  <li><strong>Context Loss:</strong> Complex business logic or domain-specific patterns may not translate perfectly. Idioms carry implicit assumptions about the runtime environment and ecosystem.</li>
  <li><strong>Library Differences:</strong> Direct equivalents don’t always exist between ecosystems (e.g., Java’s Spring Framework vs. Node.js Express). Migration requires understanding the target ecosystem’s conventions.</li>
  <li><strong>Testing Required:</strong> Transpiled code still needs comprehensive testing—functional equivalence isn’t guaranteed, especially when idioms are reinterpreted for the target language.</li>
  <li><strong>Domain Knowledge:</strong> Both source and target language idioms must be understood, along with domain-specific patterns unique to your codebase.</li>
  <li><strong>Pattern Recognition Noise:</strong> Algorithmic approaches generate thousands of potential patterns. Distinguishing meaningful idioms from coincidental code clusters requires sophisticated filtering and validation.</li>
  <li><strong>Scale vs. Quality Trade-offs:</strong> Hybrid approaches balance idiom-to-idiom translation with construct-to-construct fallbacks to ensure comprehensive coverage while maintaining quality.</li>
</ul>

<h3 id="summary-table">Summary Table</h3>

<table>
  <thead>
    <tr>
      <th>Feature</th>
      <th>Standard Transpilation</th>
      <th>Idiomatic Transpilation</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Primary Goal</strong></td>
      <td>Functional exactness (it runs).</td>
      <td>Maintainability (it reads well).</td>
    </tr>
    <tr>
      <td><strong>Output Style</strong></td>
      <td>Verbose, machine-like.</td>
      <td>Concise, human-like.</td>
    </tr>
    <tr>
      <td><strong>Maintenance</strong></td>
      <td>Output is usually discarded/regenerated.</td>
      <td>Output becomes the new source of truth.</td>
    </tr>
    <tr>
      <td><strong>Example Tool</strong></td>
      <td>Babel (JS), GWT (Java to JS).</td>
      <td>AI Code Converters, Modern Migration Tools.</td>
    </tr>
  </tbody>
</table>

<hr />

<h2 id="key-takeaway">Key Takeaway</h2>

<p>🎯 <strong>Idiomatic transpiling transforms code migration from a one-time conversion into a sustainable modernization strategy</strong>—because code that looks native to its language is code that teams can actually maintain.</p>

<p>The combination of algorithmic idiom mining and AI-powered translation creates a powerful toolkit for legacy modernization. By understanding the science behind code idioms—how they reduce cognitive load through chunking—and applying systematic transformation strategies, teams can produce migrations that are both comprehensive and genuinely maintainable.</p>

<hr />

<h2 id="references--further-reading">References &amp; Further Reading</h2>

<p>This article draws on insights from:</p>

<ul>
  <li>
    <p><strong>“Using Code Idioms to Define Idiomatic Migrations”</strong> by Federico Tomassetti (January 2025) - Explores the FactsVector algorithm for idiom mining, the JOBOL problem, and dual transformation strategies for transpilers. <a href="https://tomassetti.me/code-idioms-to-define-idiomatic-migrations/">Read on tomassetti.me</a></p>
  </li>
  <li>
    <p><strong>“RPG-Encoder: Research on idiomatic code translation and migration patterns”</strong> - Academic research on code migration approaches and pattern recognition. <a href="https://ayanami2003.github.io/RPG-Encoder/">View project</a></p>
  </li>
</ul>

<p>The concept of “chunking” and cognitive load in programming is detailed in Felienne Hermans’ <em>The Programmer’s Brain</em>, while pioneering work on idiom mining appears in Allamanis et al.’s <em>Mining Idioms from Source Code</em> (2014).</p>]]></content><author><name>Markus Harrer</name><email>hello@markusharrer.de</email></author><category term="creaitions" /><category term="meta" /><category term="jekyll" /><category term="ai" /><category term="code-generation" /><summary type="html"><![CDATA[How AI-powered transpilers generate maintainable, idiomatic code instead of literal translations—and why it matters for legacy modernization.]]></summary></entry><entry><title type="html">Welcome to My New Site</title><link href="https://markusharrer.de/blog/2026/02/08/welcome-to-my-new-site/" rel="alternate" type="text/html" title="Welcome to My New Site" /><published>2026-02-08T13:30:00+00:00</published><updated>2026-02-08T13:30:00+00:00</updated><id>https://markusharrer.de/blog/2026/02/08/welcome-to-my-new-site</id><content type="html" xml:base="https://markusharrer.de/blog/2026/02/08/welcome-to-my-new-site/"><![CDATA[<p>I’m excited to announce that I’ve migrated my website to Jekyll! This will make it much easier for me to share regular insights, experiences, and thoughts on topics I’m passionate about.</p>

<h2 id="what-you-can-expect">What You Can Expect</h2>

<p>I’ll be writing about:</p>

<ul>
  <li><strong>Software Analytics &amp; Modernization</strong>: How to use data-driven approaches to understand and improve legacy systems</li>
  <li><strong>AI-powered Development</strong>: Exploring how generative AI and agentic tools are transforming how we write and evolve software</li>
  <li><strong>Architecture Evolution</strong>: Practical strategies for modernizing complex systems without the big rewrite</li>
  <li><strong>Wardley Mapping</strong>: Using strategic thinking to guide software evolution decisions</li>
  <li>and many more.</li>
</ul>

<h2 id="why-an-update">Why an Update?</h2>

<p>Under the hood, I moved to Jekyll, which provides a perfect balance of simplicity and power. It allows me to:</p>

<ul>
  <li>Write in Markdown (my preferred format)</li>
  <li>Version control everything with Git</li>
  <li>Keep the clean, minimal design I love</li>
  <li>Focus on content rather than infrastructure</li>
</ul>

<p>This also enables me to co-create content using Claude Code on the command line. So hopefully you’ll see plenty of content here on this blog.</p>

<h2 id="the-migration-journey">The Migration Journey</h2>

<p>I started with a single <code class="language-plaintext highlighter-rouge">index.html</code> file live-coded with Claude Code and then had Claude Code migrate it to Jekyll through a few additional prompts.</p>

<h2 id="stay-tuned">Stay Tuned</h2>

<p>I have several posts in the pipeline about:</p>

<ul>
  <li>Using AI for legacy system analysis</li>
  <li>Practical patterns for agentic software modernization</li>
  <li>Lessons learned from consulting engagements in modernization projects</li>
</ul>

<p>You can subscribe to the <a href="/feed.xml">RSS feed</a> to stay updated.</p>

<p>Thanks for reading!</p>]]></content><author><name>Markus Harrer</name><email>hello@markusharrer.de</email></author><category term="news" /><category term="meta" /><category term="announcement" /><category term="jekyll" /><summary type="html"><![CDATA[I've migrated my website to Jekyll to make it easier to share thoughts on software architecture, AI-driven modernization, and legacy system evolution.]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://markusharrer.de/assets/images/posts/2026-02-08_startpage-thumb.png" /><media:content medium="image" url="https://markusharrer.de/assets/images/posts/2026-02-08_startpage-thumb.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>