I've never seen anything like this before. Nothing has stayed the same since Google started rolling out AI Overviews: impressions are going through the roof, CTR is plummeting, clicks are disappearing. And then there's AI Mode, which will accelerate this trend and is already being aggressively advertised in the USA, stirring up controversy.
The future of websites in the age of generative AI
“The Internet is dying - not slowly, but right now,” ran a headline in the FAZ last week. Until four weeks ago I would have disagreed - until I analyzed the traffic data of our university customers from the last few months.
One thing's for sure: the SEO industry is in turmoil. If you move in the same LinkedIn bubble as I do these days, you'll hear people shouting “SEO is dead” or “GEO is the new SEO” or “The rules have completely changed” or, more thoughtfully, “It's a new game with the old rules”.
This transformation has affected me personally: it led me to give up my self-employment and devote my full attention to this topic at in2code. As part of that SEO crowd, my take is: “It's partly the old game with partly new rules”.
What remains is the old phrase: “It depends”
AI crawlers take over - whether you like it or not
Many of our customers are primary sources: they research, study the world and publish their findings. This is how research centers, universities and colleges create new knowledge. It is precisely these primary sources and this KNOWLEDGE that will matter in the future. Björn Ommer explained this very well at re:publica. We are in the middle of the transition from the information age to the knowledge age.
Unlike publishers, who argue that their journalistic work - protected by copyright - must remain their intellectual property, many of our customers don't face this question in the same form. When researchers discover something, they want the knowledge to spread.
Even the article linked above does little more than summarize the results of the actual primary source and highlight the contribution of a DZNE researcher who was involved. Do we still need these summaries? In the future, won't AI crawlers search only the primary sources themselves - presumably as databases - and distill the knowledge that is relevant to each of us individually, process it and perhaps even carry out suitable actions directly?
So will we still need websites as we know them, and for whom will we build them in the future?
For crawlers and AI agents!
People who have dementia, or are affected by it, and who are looking for the latest findings and treatment methods want to KNOW what exactly will help them - not a collection of information or ten blue links on a Google results page. Nobody wants texts optimized for search engines, stuffed with keywords and tidy subheadings, clickbait and dubious expertise.
Fortunately, those days are over. Now it's all about:
- packaging knowledge succinctly,
- making it accessible to crawlers,
- and being a recognized source.
But then why still write articles like this one?
genAI is not (yet) a human being
Of course, virtual influencers have been around for a long time, and not just since GPT-3. Nevertheless, we will continue to follow human influencers in the future. By that I don't just mean traditional social media influencers, but opinion leaders and luminaries in our fields of interest, whether PHP framework developers or gaming streamers. What remains are the unadulterated ideas and thoughts of the people who influence us. That is also the intention behind these lines: I am giving my opinion and my vision of the future of the free internet for and by colleges, universities and research centers.
Independent of Google or other intermediaries, I want to inspire those who are interested in the topic. Sure, we could ask an AI about it and, with the right prompting, be inspired too - but the human, personal and controversial touch would be lost.
There are two dangerous reasons for this:
1. Pleasant answers ensure more interaction.
ChatGPT is becoming more and more sycophantic - even to the point where it affirms suicidal thoughts. Human nature thus becomes a self-destructive guide for good AI. In the tension between giving a good answer and generating more interaction, commercial providers will always choose the path that promises more profit.
2. The knowledge of the world is based on averages.
Even if an individual working with an AI produces better results than an entire team without one, the pull remains towards the average - towards generally accepted opinion and consensus. An AI avoids provocation and thinking outside the box, partly because it has no data basis for it.
This leaves plenty of room for human-generated content that inspires, and for creative ideas.
One man's inconvenience is another man's joy
I hate shopping. Maybe it's my color blindness. So I reorder the same black BOSS shirt and the same blue Levi's jeans as soon as the old ones are faded or worn out. So, ChatGPT: “Check my stock and reorder the same items whenever I throw a few of them in the bin.”
I am all the more enthusiastic about exploring geographical features: I follow the courses of rivers and glaciers, watch how lakes and coastlines change, and regularly revisit the same places to see how they have changed over time. Having an AI book me the cheapest package tour is out of the question.
If I were a high school graduate looking for a degree program today, I would give an AI my grades and my interests and tell it: “Apply to the right universities for me.” Others would rather visit the university of their choice in person, soak up the atmosphere on campus or rely on the experiences of influencers (parents, friends or social media). So it depends.
What some people would rather have automated is, for others, the joy of discovering things for themselves.
This can be seen in many trends, such as online retail:
Despite all the efforts of Amazon and the like, the majority of retail still takes place offline. There are many reasons for this, of course, but it is becoming clear that online shopping will account for only a small share of retail in the future as well. I am certain that AI will follow a similar path.
AI will do a lot, but not everything
The line between automation and AI is often blurred, and in the hype cycle we are currently in, AI, genAI and related terms get thrown at all kinds of things. Suddenly everything is labeled AI, and AI is built in everywhere, regardless of how useful it is and whether users even want it.
On the other hand, there are the perpetual sceptics who regularly gloat over the failures of AI - be it wrong dates or too many fingers on one hand - only to be overtaken by developments a few weeks later. My current personal favorite:
“AI can't write clean code”
The biggest mistake is to conclude that because AI can't do something perfectly right now, it will NEVER be able to do it. On the other hand, genAI is neither necessary nor even suitable in every situation. genAI opens up incredibly exciting new possibilities, and only those who engage with the topic seriously and extensively will be able to separate the wheat from the chaff.
That's exactly what we're doing!
in2code: researching and developing for AI
Last Friday, our well-known Freaky Friday took place in a new format. This monthly in2code ritual means spending the entire Friday on topics of your own choosing - a practice that regularly produces new developments such as our boilerplate, controlling tools and much more.
This time, I shook up the morning a bit: we invited AI extension developers such as Autodudes and t3planet and scheduled short internal AI keynotes. That Friday became our kick-off for structured AI development, in which we tackled the central topics around AI - something I would recommend to every agency:
- Influence on the business model
- New data protection findings
- Ideas for AI front-end concepts
- Own TYPO3 extensions
- Use of AI tools
- and much more
The result:
We are already fully immersed in the transformation without really realizing it.
We are already optimizing websites for crawling by AI.
Technically and visually, a major rethink is underway.
JavaScript-heavy pages that pull content in dynamically from all sides or only load it after scrolling are currently hard for AI crawlers to read. So we build websites so that the server delivers finished HTML pages instead of having the content assembled in the user's browser first. That way, crawlers can read all the content without any problems.
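A minimal sketch of the principle, assuming a Node.js server with Express purely for illustration (not our actual TYPO3 stack): the route handler returns complete HTML, so even a crawler that executes no JavaScript sees all of the content.

```typescript
import express from "express";

const app = express();

// Hypothetical course data; in a real project this would come from the CMS.
const courses = [
  { title: "Computer Science B.Sc.", start: "2026-10-01" },
  { title: "Neuroscience M.Sc.", start: "2026-04-01" },
];

app.get("/courses", (_req, res) => {
  // Render the complete list on the server instead of shipping an empty
  // <div> that client-side JavaScript would have to fill in afterwards.
  const items = courses
    .map((c) => `<li>${c.title} (starts ${c.start})</li>`)
    .join("\n      ");
  res.send(`<!doctype html>
<html lang="en">
  <body>
    <h1>Study programs</h1>
    <ul>
      ${items}
    </ul>
  </body>
</html>`);
});

app.listen(3000);
```

The same effect can of course be achieved with classic server-side rendering in TYPO3 or with pre-rendering setups; the point is that the very first response already contains the content.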
Structured data has long been the bread and butter of good websites, and machine-readable cues will remain crucial for successful sites in the future. Which protocols, interfaces or formats will prevail cannot yet be said. Google, Anthropic and others are trying to standardize communication between stores, AI and websites with various protocols. The resulting ideas, such as llmstxt.org, OpenAPI, the Model Context Protocol (MCP) or A2A, are nowhere near as widespread as JSON-LD. Even if these new standards serve other purposes, JSON-LD is still the way to improve machine readability, and it will remain important. This is how we tell the crawlers: “This is the study program, this is the author, this is an event,” and so on.
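As a hedged illustration of what such a cue looks like (the program name, provider and URL are invented), here is a schema.org course description built and serialized as JSON-LD:

```typescript
// Describe a study program with schema.org vocabulary.
// All field values are invented for illustration.
const courseJsonLd = {
  "@context": "https://schema.org",
  "@type": "Course",
  name: "Neuroscience M.Sc.",
  description: "Two-year master's program on the biology of the brain.",
  provider: {
    "@type": "CollegeOrUniversity",
    name: "Example University",
    url: "https://www.example.edu/",
  },
};

// Embedded in the page's <head>, this is the cue a crawler reads:
// "this is the study program, this is who offers it".
const scriptTag = `<script type="application/ld+json">
${JSON.stringify(courseJsonLd, null, 2)}
</script>`;

console.log(scriptTag);
```

In practice, a CMS like TYPO3 would generate this markup from the record itself, so the structured data never drifts out of sync with the visible content.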
Our own tools are already being developed
We are already developing AI-powered search, TYPO3 integrations and dynamic UI ourselves.
Anyone still thinking in terms of target groups today will soon be caught up by reality. Dynamic UI can generate a unique website visit for each individual visitor, based on their input and interests. We have already submitted our first concept for this, and we are certain that this is how we will use websites in the future.
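As a deliberately simplified sketch of the concept (the interest profile and content blocks are invented, and a real implementation would involve generative components rather than a lookup), the page assembles itself from the visitor's stated interests:

```typescript
// Minimal illustration of a dynamic UI: the same route produces a
// different page depending on the visitor's declared interests.
type Block = { id: string; tags: string[]; html: string };

const blocks: Block[] = [
  { id: "labs", tags: ["research"], html: "<section>Our laboratories…</section>" },
  { id: "dorms", tags: ["campus-life"], html: "<section>Living on campus…</section>" },
  { id: "sports", tags: ["campus-life", "sports"], html: "<section>University sports…</section>" },
];

function renderVisit(interests: string[]): string {
  // Keep only the blocks matching this visitor and order them by
  // how many of their interests each block hits.
  const score = (b: Block) => b.tags.filter((t) => interests.includes(t)).length;
  return blocks
    .filter((b) => score(b) > 0)
    .sort((a, b) => score(b) - score(a))
    .map((b) => b.html)
    .join("\n");
}

console.log(renderVisit(["campus-life", "sports"])); // sports first, then dorms
console.log(renderVisit(["research"])); // labs only
```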
Our TYPO3 extensions are widely used, and we are giving a lot of thought to what impact AI will have on them - be it web analytics and marketing automation with LUX, or studyfinder, femanager and powermail. We are getting these popular TYPO3 extensions ready for a time when AI agents fill out forms and interact with websites, as sketched below.
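What “agent-ready” means concretely is still taking shape. One plausible building block - our illustration here, not the actual output of powermail or femanager - is rendering form fields with explicit labels, stable names and standard autocomplete hints, so an agent can map its data onto the right inputs without guessing:

```typescript
// Render a form whose fields carry machine-readable semantics
// (label/for pairs, stable names, standard autocomplete tokens).
type Field = {
  name: string;
  label: string;
  type: string;
  autocomplete?: string;
};

const contactFields: Field[] = [
  { name: "name", label: "Full name", type: "text", autocomplete: "name" },
  { name: "email", label: "E-mail address", type: "email", autocomplete: "email" },
];

const formHtml = `<form method="post" action="/contact">
${contactFields
  .map(
    (f) =>
      `  <label for="${f.name}">${f.label}</label>\n` +
      `  <input id="${f.name}" name="${f.name}" type="${f.type}"` +
      (f.autocomplete ? ` autocomplete="${f.autocomplete}"` : "") +
      `>`
  )
  .join("\n")}
  <button type="submit">Send</button>
</form>`;

console.log(formHtml);
```

The same semantics that help screen readers and browser autofill today are a reasonable bet for what AI agents will rely on tomorrow.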
Data protection and security always come first
Thanks to our data protection, server and security team, we are well positioned in this area too.
- What data can we use where and in what context?
- Which providers use data for training purposes?
- Which GDPR-compliant hosters are available to us?
- Which models and interfaces can we use?
We take every step with a watchful eye on developments in the USA and on legal requirements in the EU. This is how we achieve a responsible yet rapid transformation.
A glimpse into the future
“It's half past seven,” my AI assistant wakes me gently. “But don't worry, I let you sleep a little longer because your bus is running late.” It has already rescheduled the jour fixe with Sandra, and Sandra's assistant has already confirmed.
I'm excited, because the new university interface goes live today. Since most public institutions no longer have websites built, we help with the communication between the assistants. With the new interface, students are automatically enrolled in the right lectures and moved along according to their performance. They receive personalized support and learning objectives tailored to their profile. Every transition - from school to college or university, or from one course to the next - is seamless and fully automated. The drop-out rate falls and the administrative workload shrinks drastically.
And me? I'm continuing to work on making myself obsolete. That's always been the best way to get exciting new tasks.
![SEO and search engine optimization](/fileadmin/_processed_/c/b/csm_seo-search-engine-optimization-2025-06-05-13-10-44-utc_208cc50996.jpg)