Here is an AI-generated podcast I created using the articles I have on this blog so far.
There are some minor mistakes that I mentioned in the video’s description. Overall it’s a good intro to my ideas about the new Web that can be built in the near future.
add backup options to LZ Desktop (right now, backups have to be created manually)
add multiple worlds to LZ Desktop (a world is a zoomable square where you organise your content).
add all the missing functionality and fix bugs in LZ Desktop
create a web browser (a separate desktop app) that supports both regular (Web 2) pages and new static pages (HDOC, CDOC, SDOC)
add an option to integrate Google Analytics with the Static Web Publisher plugin
promote the Static Web Publisher plugin so that by the end of this year we could have a list of hundreds of websites that support Web 1.1
Crowdfunding this project
I need about a year—maybe less—to accomplish these goals. The problem is, I don’t have enough money to keep working on the project full-time.
I started this project in late November 2023. Initially, I worked on it while taking on freelance jobs, but by summer 2024, my workload became overwhelming, forcing me to pause development for a few months. Since October 2024, I’ve been on a sabbatical, working on this project full-time. I’m burning through my savings, and that can’t go on forever.
So, I’ll try crowdfunding. I need about $600 per month to cover basic expenses.
Why you should consider donating
If I don’t get enough funding, I’ll have to take on freelance work within a month or two. I’ll try to keep developing this project on the side, but realistically, there will be huge gaps in progress. What could take a year might end up taking years. Worse, the project might never properly launch if I can’t dedicate enough time to it.
Another risk: now that I’ve put the idea of Web 1.1 out there, others might jump in and start developing their own software. This could lead to a mess of incompatible standards—which is the last thing we need. At least in the beginning, until all the new data types are well-defined, you don’t want to have too many cooks in the kitchen.
So, it makes sense to crowdfund my project and complete it as fast as possible, as delays can lead to bad outcomes.
Can I commercialise this project?
Eventually, I may explore ways to monetize services built around this project. For example, I could offer seamless content transfer from a mobile phone to a user’s zoomable desktop, or streamline content sharing between users. Instead of exporting content to a file and emailing it, you could send it directly from the app, and it would appear in your friend’s app. These are features I could integrate into LZ Desktop.
Another idea is a Xanadu-like publishing platform, also discussed in the same post.
These ideas, however, are a bit more ambitious.
The main issue with commercialising the project right now is the lack of users. Any early attempts at monetization would be a distraction—I’d spend too much time setting things up without generating meaningful revenue.
At this stage, crowdfunding is the most viable way to sustain the project, at least for the first year.
How to donate
Currently, I don’t have any payment methods configured. I will first start discussions about Web 1.1 on different forums. Then, if I see positive feedback, I’ll think about ways to accept donations. So, stay tuned.
You may have noticed that URLs for accessing content on Web 1.1 start with sw:// or sws:// instead of http:// or https://.
What is the reason for having those new URL schemes?
First of all, sw:// stands for “Static Web”. And sws:// means “Static Web Secure”. There is no special protocol behind those URL schemes. Instead, every time you make a request, the client app automatically replaces sw:// with http:// and sws:// with https:// in the URL, before making that request.
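To make that concrete, here is a minimal sketch of the substitution (illustrative only, not the actual LZ Desktop code):

// Replace the Static Web schemes with their HTTP equivalents
// before handing the URL to a regular HTTP client.
function toHttpUrl(url: string): string {
  if (url.startsWith("sw://")) return "http://" + url.slice("sw://".length);
  if (url.startsWith("sws://")) return "https://" + url.slice("sws://".length);
  return url; // already a regular http(s) URL
}

console.log(toHttpUrl("sws://example.com/article.hdoc"));
// -> https://example.com/article.hdoc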
Having new URL schemes serves a couple of purposes:
You can visually distinguish static from non-static content.
URL schemes help client apps decide which app should handle each specific piece of content.
To better understand why new URL schemes are needed, let’s look at the following diagrams.
Client web software as it is now
Web 1 (I use the terms Web 1 and Web 1.1 interchangeably) consists of static web pages of the new types (HDOC, CDOC, SDOC). Web 2 consists of regular HTML pages, which may contain CSS and scripts.
Currently, you use regular browsers to view Web 2 pages (quadrant II) and LZ Desktop to view static web pages (quadrant III), which you have to save on your zoomable desktop first.
We don’t have any software that would work in quadrant I yet. And quadrant IV is not very important right now. Its only use currently is this: an HDOC can have an associated Web 2 page on its side, which you can load in LZ Desktop. So, you can say that LZ Desktop operates in quadrants III and IV. In the future, quadrant IV will also be used for other technical reasons, such as authentication, to get Web 1 content that is accessible only to authenticated users.
If you want to save a Web 2 page in a spatial storage app (SSA), you would be saving a URL of that page. An SSA can be developed that also shows you Web 2 pages, but that would be like merging an SSA and a regular browser, which doesn’t make much sense, since you can instead jump from SSA to your regular browser to view your Web 2 page.
For the purposes of our current discussion quadrant IV may be ignored.
How you navigate the Web now
Currently you can jump between quadrants II and III using URL schemes. If you browse Web 2 in your regular browser and click a link that starts with sw:// or sws://, the URL will be passed to LZ Desktop where the content will be loaded.
And vice versa: if you click a link in LZ Desktop that starts with http:// or https://, that link will be opened in your default browser.
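The exact handoff mechanism depends on the operating system: each app registers itself as the handler for the URL schemes it owns. Purely as an illustration, if a client app were built on Electron (an assumption on my part, not a statement about how LZ Desktop works), the registration could look like this:

import { app, shell } from "electron";

// In the main process: claim the Static Web schemes so the OS routes
// sw:// and sws:// links clicked in other apps to this app.
app.setAsDefaultProtocolClient("sw");
app.setAsDefaultProtocolClient("sws");

// In the other direction, hand http(s) links to the default browser.
function openWeb2Link(url: string): void {
  void shell.openExternal(url);
}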
Client web software in the future
In the near future I plan to develop a browser that supports both regular web pages (Web 2) and static web pages (Web 1). That browser will operate in quadrants I and II. In a more distant future, when a lot of websites support static data formats (HDOC, CDOC, SDOC), mainstream browsers will also start supporting those formats. At least that is the goal.
Also, other SSAs may be developed by other people and companies.
How you will navigate the Web in the future
In a browser that operates in quadrants I and II you navigate between Web 1 and Web 2 seamlessly. Under the hood, Web 1 and Web 2 pages are loaded differently, but from a user’s perspective, when you click a link, you either load a new page in the same tab or a new tab is opened, depending on the ‘target’ attribute of the link.
When viewing a Web 1 page, somewhere in the interface of your browser there will be a Save button. By clicking it, you pass the entire document from your browser to your SSA. In other words, you navigate from quadrant I to quadrant III. That mechanism is not yet implemented. We will also probably need a mechanism to pass a document from an SSA to a browser (from quadrant III to quadrant I).
You may still need to be able to pass some links to Web 2 pages you find in your SSA to your browser (navigation from quadrant III to quadrant II).
An alternative to using SW and SWS links
Introducing new URL schemes is generally discouraged unless necessary. Can we manage without them? The alternative to using new URL schemes would be to always use http:// and https:// links for all types of content. The browser or SSA would then determine how to handle the content based on its Content-Type.
To see if this alternative is convenient, we need to keep in mind the navigational diagrams for both the present Web and the future Web. Refer to the two diagrams above that have arrows.
Let’s look at different navigation scenarios in the present-day Web and in the future Web, using only http:// and https:// links.
Present day Web
If you need to go from quadrant II to quadrant III, you click a link in your regular browser, which doesn’t know anything about Web 1.1. Since the link is just a regular http link, your browser will either download the content as a file or show you the source code in a tab. That is not how we want to view a Web 1 page.
To actually load the content in LZ Desktop, you’d have to copy the link address and paste it into the sliding panel in LZ Desktop. Super inconvenient.
We may create a browser extension that opens LZ Desktop when you click on certain links, but then we’d need to mark those links somehow, probably by giving them a special CSS class name. Who’s going to do it for every link that leads to static data types?
Second scenario: going from quadrant III to quadrant II. If you are in LZ Desktop and you click some link, the app must determine whether the link leads to Web 1 content or Web 2 content. We can make a request and examine the content type in the response. If it is Web 1 content, we open it in the app. If it is Web 2 content, we pass the URL to the default browser. In this scenario we would be making the same request twice: first in LZ Desktop and then in the browser.
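In code, that dispatch might look roughly like this (a sketch; media types for HDOC, CDOC and SDOC are not standardized yet, so the values below are made up):

// Hypothetical media types for Web 1.1 documents; nothing official exists yet.
const WEB1_TYPES = ["application/hdoc", "application/cdoc", "application/sdoc"];

async function openLink(url: string): Promise<void> {
  const response = await fetch(url); // first request
  const contentType = response.headers.get("content-type") ?? "";
  if (WEB1_TYPES.some((t) => contentType.startsWith(t))) {
    renderInApp(await response.text()); // Web 1: we already have the content
  } else {
    openInDefaultBrowser(url); // Web 2: the browser will request the same URL again
  }
}

// Placeholders for the app-specific parts of the handoff.
declare function renderInApp(content: string): void;
declare function openInDefaultBrowser(url: string): void;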
To avoid double requests we can make users mark links manually as Web 1 or Web 2 links. But that’s annoying and error-prone.
Future Web
In the future Web you navigate between quadrants I and II inside your browser. In this case, if all links are http:// or https:// links, the browser will determine the data type of the downloaded content. If it’s a Web 2 page, the content will be injected into a webview. If it’s a Web 1 page, it will be loaded without using a webview.
So, in this future case you don’t need sw:// and sws:// links.
To navigate between quadrants I and III, a different mechanism will be used. New URL schemes are not needed here either.
There is also the case of navigating from quadrant III to quadrant II. If your SSA forwards every link to your browser, then new URL schemes are not needed here either, assuming all browsers in the future work with both Web 1 and Web 2. But there may be different types of SSAs. Imagine an app that only works with Web 1 pages and sends Web 2 links to a browser.
In short, there are different possible apps in the future, and some SSAs may still need to distinguish between Web 1 and Web 2 pages. You do that either by making the user mark links, which is a very poor user experience, or by using different URL schemes. In the latter case, the website owner has to decide which URL scheme to use. But that decision is often automated by the software on the backend.
Considering All Phases of Web Evolution
Even if we were convinced that in the future all browsers would have spatial storage functionality, so that one app would cover all 4 quadrants, avoiding the need for new URL schemes, what about the present day Web? Currently you have to use two different apps: a browser for navigating the Web, and LZ Desktop for saving content. I worry that if we only use http:// and https:// URL schemes, the navigation between apps would be so inconvenient, that it could negatively affect the adoption of the new data types.
Final Thoughts on URL Schemes
I’ve spent a lot of time thinking about how best to handle navigation between apps. While I’m not entirely convinced that introducing new URL schemes is the perfect solution, they do address key usability issues. Without them, early adoption of Web 1.1 could suffer due to poor user experience.
Note: The functionality discussed in this post is not yet implemented in LZ Desktop.
Because pages on the Static Web (Web 1.1) are self-sufficient and don’t rely on a live connection to the server, they can have a life of their own once downloaded. This will lead to a practice I call republishing. Imagine that anybody can take a page from your website and publish it on their website. Wait, what?!
It may sound crazy at first, but let me explain.
Why would people publish someone else’s content?
Let’s say you want to publish a commentary on someone’s article. You can create an HDOC, write your commentary, then create a connection to the article in question and create floating links between the two pages. Then you publish the HDOC on your website. When someone downloads it, they will see that your document references another document, download that document as well, and then they will be able to see the visible connections (floating links) that you created.
Here is an example of floating links between documents:
The Problem: Content Instability
All well and good, but what if the author of the article changes something in it? That may break your floating links. There is a self-healing mechanism that can fix broken links, but it doesn’t work in 100% of cases. Or what if they completely delete their page? All your work writing commentary and adding links would go to waste. You need some way of stabilising the content of their article.
In a centralised system like Ted Nelson’s Xanadu this problem is solved by simply saving every version of every document and never deleting anything. But in a decentralised system like the World Wide Web you don’t have a guarantee that a document on the other end of a link will not change or will even exist in the future.
The Solution: Republishing
The best way to ensure stability in a decentralized system is to host a copy of the article on your own site and connect your commentary to that copy rather than the original.
Is this even legal?
I believe republishing can become an accepted and expected practice, just like linking to webpages is today.
By the way, linking wasn’t always a settled issue. In the early days of the Web some people seriously debated whether it was legal to link to someone else’s page without permission.
Why would that be a problem? Imagine I have a popular website, and you run an obscure one. If you link to my site, your site becomes more useful, possibly gaining popularity. Do you now owe me something for benefiting from my content?
Or what if I publish a private webpage meant only for friends? If you link to it from your popular website, you bring unwanted attention. Should you have asked first?
Today, the consensus is that if you publish content on the Web, you should expect others to link to it. If you want privacy, use authentication. And maybe you should even be thankful that somebody links to your content, because that brings you more traffic.
Why should you be OK with republishing?
The key is how republishing is done and what the republisher gains from it.
When republishing someone’s page, you must not alter its content. In an HDOC, sections like <head>, <html>, <panels>, and <connections> remain intact. However, a <copy-info> section is added, containing the original page’s URL.
Client software (browsers and storage apps like LZ Desktop) will clearly indicate that the page is a copy, displaying the original URL as its primary address. The page will look as though it was fetched from the original site, while making it obvious that it’s a copy. Users will be able to view detailed information and see its true source.
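For instance, a republished HDOC might carry something like this (a sketch; the exact <copy-info> structure is described on the HDOC spec page):

<copy-info>
  <source copied-at="2025-01-01T12:00:00Z">sws://original-site.com/article.hdoc</source>
</copy-info>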
Search Engines and Republishing
Currently, search engines don’t index HDOCs, CDOCs, or SDOCs, but once they do, they’ll be able to distinguish between a website’s native content and republished copies. That means republished pages won’t impact the search ranking of the host site.
More importantly, republishers gain nothing from copying content other than stabilizing it for their floating links. Copying content is simply a technical detail of maintaining floating links, not theft. And just like with linking, you might even be grateful that others are preserving your content for free.
Finding webpages that no longer exist
Search engines could track every republished copy of an original webpage they find, ensuring that if that page disappears, users can still access reliable backups. However, this creates a risk: spammers might try to generate fake copies of recently vanished pages. To counter this, search engines may record multiple versions of each page, storing them as timestamped hashes. This way, when a page is lost, the search engine can analyze a network of its copies, identifying the most recent authentic version. If a spammer attempts to pass off a fake page, hash mismatches will expose the deception.
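As a sketch, the record a search engine keeps for each page version might look like this (entirely hypothetical):

// A hypothetical per-version record for tracking republished copies.
interface PageVersionRecord {
  originalUrl: string;
  contentHash: string;   // SHA-256 of one version of the page’s content
  recordedAt: string;    // ISO 8601 timestamp of when this version was seen
  knownCopies: string[]; // URLs of republished copies matching this hash
}

// A fake copy of a vanished page fails this check: its hash won’t match
// any version the engine recorded while the original page was still alive.
function isAuthenticCopy(record: PageVersionRecord, copyHash: string): boolean {
  return record.contentHash === copyHash;
}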
A Backup System for the Web
Republishing can serve as a redundancy mechanism, solving the problem of broken links.
Random websites republishing pages will preserve only some of them.
But in the future, there may exist services similar to the Web Archive that could store vast collections of static pages. These could be non-profits, commercial entities charging for access, or services that you pay to host backups of your content. Different business models could emerge.
Such services could do more than passively store backups. Imagine your browser encountering a broken link. Instead of displaying a “404 Not Found” error, it could automatically request a copy from a backup service and seamlessly load the missing page. The page would be marked as a copy but still deliver the content the user was seeking.
The Interplanetary Web
Now, let’s take this a step further. Imagine a future where humans colonize the Solar System. If we don’t do anything about our Web before that happens, there will be a separate Web on each planet, because of time delays in communication between planets.
Many regular web pages are too dependent on live server connections. To have such pages available on Mars, for example, you’d have to have a copy of your entire web server there.
Some popular websites like Wikipedia will probably be hosted this way on multiple planets. But most website owners won’t bother to host a copy of their websites on another planet.
And so, the Web on Mars will be mostly separate and different from the Web on Earth.
However, if we turn our Web into a web of static documents, time delays won’t be a problem. We’ll be able to use the republishing mechanisms discussed above to have a copy of the entire Web in many places across the Solar System.
Sure, some things that have to run in containers won’t work across large distances. For example, people on Earth and Mars won’t be able to play real-time online games together. But that’s expected, and nothing can be done about it.
When I’m ready to implement this functionality, I plan to publish a license or a declaration of principles to clarify the expectations around republishing.
In my view, Web 1.1 is fundamentally about sharing. Readers should be able to download, cache, and even republish content by default.
However, there will also be an option to opt out on a case-by-case basis, ensuring flexibility for content creators who prefer to restrict republishing.
Disclaimer
Of course, none of this is legal advice. I’m not saying you can republish content today without consequences. If you think you could get in trouble for doing so, don’t do it. What I am saying is that republishing could one day become as normal as linking, helping to create a more stable, and scalable Web.
HDOC, CDOC and SDOC may all have a <connections> section. It contains a list of documents the current document wants to connect to. Each connection may have a set of floating links.
Child Element: <doc> (multiple)
Contains information about a document.
Attributes:
title(optional): Connected document’s title
url(required): Connected document’s URL
hash(optional): SHA256 hash of the connected document’s content.
HDOC: Hash is calculated over textContent, not the HTML or innerText (to avoid whitespace modifications affecting highlight indices).
CDOC: Hash covers the entire <svg> section, including the <svg> tag.
Currently, the hash is generated upon export but is not verified when loading documents. This feature will be added later.
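As an illustration, a client could compute the HDOC hash like this (a sketch assuming a Node environment with the jsdom library; the exact canonicalization rules are not spelled out here):

import { createHash } from "crypto";
import { JSDOM } from "jsdom";

// Hash an HDOC’s <html> section over its textContent, as described above,
// so that markup-only changes don’t shift highlight indices.
function hdocContentHash(htmlSection: string): string {
  const dom = new JSDOM(htmlSection);
  const text = dom.window.document.body.textContent ?? "";
  return createHash("sha256").update(text, "utf8").digest("hex");
}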
Child Elements of <doc>
A <doc> may contain floating links, which link:
Text segments in HDOCs
Points in collages (CDOCs)
Points in 3D scenes (SDOCs, not supported currently)
Floating links are presented as lines with key-value pairs (see the examples below).
Which end you use depends on the document. For text documents you use a text end, for collages – a point end. There may be different combinations.
Point-to-point links, for visible connections between two collages, are currently not supported but may be supported in the future.
The two ends of a floating link are separated by an underscore.
Point end
Example:
p|x:45.462;y:218.567;r:0.209
p → Point end type
x, y → 2D coordinates in a collage
r → Radius of a visible marker
Text end
Example:
t|i:365;hi:363;l:15;hl:17;h:47c1c8
t → Text end type (default, can be omitted)
i → Index of the first character of the highlighted text
l → Length of the highlighted text
hi → Index of hashed range
hl → Length of hashed range
h → SHA256 hash
Most text ends will appear without the t prefix:
i:365;hi:363;l:15;hl:17;h:47c1c8
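A minimal parser for these end strings, based only on the examples above, might look like this:

// Parse one end of a floating link, e.g. "p|x:45.462;y:218.567;r:0.209"
// or "i:365;hi:363;l:15;hl:17;h:47c1c8" (the type prefix defaults to "t").
function parseLinkEnd(end: string): { type: string; fields: Record<string, string> } {
  const [type, body] = end.includes("|") ? end.split("|", 2) : ["t", end];
  const fields: Record<string, string> = {};
  for (const pair of body.split(";")) {
    const [key, value] = pair.split(":");
    fields[key] = value;
  }
  return { type, fields };
}

// The two ends of a link line are separated by an underscore.
function parseFloatingLink(line: string): { type: string; fields: Record<string, string> }[] {
  return line.split("_").map(parseLinkEnd);
}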
Hashes in text ends
Hashes are used so we can tell whether a link is broken because the text of a document was changed. And if the link is broken, in many cases the hash can help fix it by moving it to another index: the client app can simply move a range of a known length across the text and check at each tested index whether the hash of the text within that range matches the known hash.
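Here is a minimal sketch of that search (not the actual LZ Desktop implementation; the examples above show short hash values, so I assume a truncated hex prefix is stored and matched):

import { createHash } from "crypto";

function sha256Hex(text: string): string {
  return createHash("sha256").update(text, "utf8").digest("hex");
}

// Try to relocate a broken text end: slide a window of length `hl` over
// the document text and return the first index where the hash matches.
function healTextEnd(docText: string, hl: number, h: string): number | null {
  for (let i = 0; i + hl <= docText.length; i++) {
    if (sha256Hex(docText.slice(i, i + hl)).startsWith(h)) {
      return i; // the new value for `hi`
    }
  }
  return null; // the hashed range no longer exists in this document
}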
Hashes are generated for text segments that are unique and at least 10 characters long (this 10-character minimum is what the LZ Desktop app uses when creating floating links; it is not a requirement that will be set as a Web standard for all client apps to follow).
If the highlighted text is too short or non-unique, the hash is computed over a larger surrounding range.
Default behavior: The hashed range extends left unless near the start of the document, in which case it can grow right as well.
When the highlighted text is both long enough and unique, the hashed range coincides with it, making hi and hl unnecessary:
i:35;l:22;h:abb7b7
For text-to-text links, if both ends have the same hash, the second hash can be omitted:
i:35;l:22;h:abb7b7_i:6771;l:22
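Putting it all together, a <connections> section might look like this (the <link> wrapper element is my assumption; this page only defines the <doc> attributes and the link-line format):

<connections>
  <doc title="Referenced article" url="sws://example.com/article.hdoc">
    <!-- the <link> wrapper element is hypothetical; each line has two ends
         separated by an underscore, as described above -->
    <link>i:365;hi:363;l:15;hl:17;h:47c1c8_i:120;l:22;h:abb7b7</link>
  </doc>
</connections>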
How this format can be extended
I have only implemented floating links for the simplest possible use cases. In the future a lot more options can be added.
We may want to be able to have multiple ends for one floating link. For example, you may want to create one commentary that is connected to multiple places in another document.
We may need to distinguish different types of links. So, a type field can be added to floating links.
Types can be, for example, Reference Link, Commentary Link, Correction Link, and many others. Some links may not even be links between two documents, but simply annotations within one document.
New floating link ends
For CDOCs (2D collages) a lot more link ends can be added besides a simple point marker. For example, you may want to frame something with a rectangle. You may want to add texts as overlays. All such cases can be handled by introducing new floating link ends.
A collage may contain texts, so maybe we should be able to have text ends that are used in collages.
Also, it may be useful to be able to target specific images within a collage instead of using absolute coordinates. This way, if an image position was changed, the link will not be broken.
In 3D scenes (SDOCs), support for 3D point ends and possibly other types of ends may be added in the future.
Proper Xanalinks
As I mentioned in other posts, this project is inspired by Ted Nelson’s project Xanadu. In Xanadu, there was a completely different mechanism for stabilising the content of documents so that links are never broken. That mechanism can be used in Web 1.1 as well. It probably won’t be widespread, but I think it should exist as an option.
Documents that support that mechanism will simply be HDOCs that have an <edl> section.
A new floating link end will be introduced for Xanalinks. It will be more complex than a regular text end with a hash.
Because it’s just one end, it can be combined with other types of ends. So, you’ll be able to connect a Xanadoc (HDOC with an EDL section) to a regular HDOC. Or to a collage, 3D scene, or another Xanadoc.
If for whatever reason you don’t want to use stabilized content addresses from EDL you’ll be able to use a simple text end over a Xanadoc. But in this case you won’t be using all the features a Xanadoc can provide.
The way I have presented Web 1.1 so far might make it seem like it’s all about saving documents locally in spatial environments. But that’s not the whole picture.
The new proposed data types – HDOC, CDOC, and SDOC – can be used anywhere. Specifically, they can function within a regular browser, inside tabs. These data types are just special types of web pages.
If all goes well, I plan to develop the first Web 1.1-compatible browser this year. However, this is only a small step and not an urgent one, since there isn’t yet enough Web 1.1 content to necessitate a dedicated browser.
For now, we can use Web 2 for navigation and discovery, and Web 1.1 will be used for saving. But that’s a temporary arrangement.
The Big Goal
Ultimately, the aim is to encourage mainstream browsers to support Web 1.1 data types. The way to achieve this is by making Web 1.1 widespread on existing websites first.
Initially, Web 1.1 will require a special app like LZ Desktop for saving documents in spatial environments. Not many people will use such apps, as many don’t even own personal computers. But that’s not a problem. Websites may still support Web 1.1 to accommodate those few who want to use Static Web.
I’ve discussed incentives for website owners to support Web 1.1 in another post.
What to expect in the future
As more websites adopt Web 1.1, mainstream browsers will have more reasons to support its data formats—just as they currently support viewing PDFs. Eventually, they could support HDOCs, CDOCs, and SDOCs, even displaying visible connections between them.
Once Web 1.1 documents can be used in regular browsers, search engines may start indexing them, just as they do with HTML pages and PDFs.
A New Type of Website
With browser and search engine support, Web 1.1 documents will become first-class citizens of the web. You’ll be able to build an entire website using only these new data types, without any traditional HTML pages.
Technically, you could do this right now, but it wouldn’t be practical without browser and search engine support—most users wouldn’t be able to find or view your content.
The Three Stages of the Web’s Evolution
I like to compare this transformation to the lifecycle of a butterfly. Why? Because just as a butterfly looks nothing like its initial form—a caterpillar—the Static Web, in its final form, will be vastly different from what it is today. So, the three stages of the web’s evolution can be named:
Caterpillar
Pupa
Butterfly
Caterpillar Stage (Current Phase)
At this stage, Web 1.1 is used only for saving content. Navigation and discovery still rely on Web 2 pages and traditional browsers.
At the Caterpillar stage, a lot of regular web pages (red) get alternative static versions (blue) that you can save
Pupa Stage
This stage begins when mainstream browsers start supporting Web 1.1 data types. Users will then be able to navigate both Web 1.1 and Web 2 within their browsers. However, website owners will still need to maintain Web 2 versions, as many users will continue using browsers that don’t yet support Web 1.1.
At the Pupa stage you have two Webs, which you can freely navigate in your browser. There is a lot of duplication, but you have to keep the old versions for now.
Butterfly Stage
This final stage begins when all major browsers and search engines fully support Web 1.1. At this stage you can remove Web 2 pages from your website and replace them with redirects to the Web 1.1 versions of the same pages. Web 2 will shrink.
At the Butterfly stage the old web (Web 2) shrinks. The Web is mostly made of static documents.
Timeline for Transformation
I don’t know how long this transition will take. It could fail early on if Web 1.1 doesn’t gain momentum. But since there’s no limit on retries, we can keep pushing forward.
If we manage to get a critical mass of websites to support Web 1.1, I think we can duplicate the entire Web rather quickly, because users and website owners are incentivised to use the Static Web, and there will be an element of social proof at play: you will see download buttons all over the Web.
Then we may get stuck at the Pupa stage. If a major browser or search engine refuses to support Web 1.1 formats, the transition could stall. However, if enough of the web shifts to Web 1.1, they will have to adopt it eventually.
Overall, this transformation will take years—possibly many years. But Web 1.1 offers immediate benefits at every stage, even in its early form.
This isn’t some far-off vision that depends on universal adoption. A small group of enthusiasts can start by publishing their websites on Static Web. Other website owners will follow because it enhances user engagement and retention. Eventually, this process may transform the entire web.
What Web 1.1 Might Look Like in the Future
I see two major developments on the horizon for Web 1.1:
Web 1.1 as a Decentralized Social Network
Xanadufication of the Web
I will briefly describe them here, but I will discuss them in more detail in other posts.
Web 1.1 as a Decentralized Social Network
Several projects, such as Bluesky’s AT Protocol and Tim Berners-Lee’s Solid project, aim to decentralize the web. They give users control over their data, hosting it on personal servers while allowing apps to access it.
However, these projects face adoption challenges because they require individuals to host their own content—something most people won’t do.
Web 1.1 can present an alternative solution to the same problems those projects aim to solve. In Web 1.1 the pages (HDOCs) are much more standardised than regular HTML pages. They have only one column of text and standard navigation panels. It reminds me of social networks where you just provide your content and don’t have a say in how the website looks.
To make Web 1.1 a social network we need only to standardize a few things.
For instance, every Web 1.1 site could include a standardized News page for posting short updates (like tweets). A standard “Follow” button would allow users to subscribe to these updates. Aggregator services could scrape these updates and generate personalized feeds—similar to Twitter’s “For You” section.
Unlike other decentralization efforts, Web 1.1 targets website owners who already host content. They only need to install a plugin to participate, making adoption much easier.
More than that, those people have other incentives to support Web 1.1 beyond simply participating in a social network.
I plan to study the AT Protocol and the Solid project to understand whether our approaches can be merged or at least aligned somehow. One possibility I see for them is to start providing a CMS (content management system) that allows you to create a website made of HDOCs. But that will be practical only at the Butterfly stage. If you do it now, you’ll have to support Web 2 as well, and that may be too complicated.
Xanadufication of the Web
Web 1.1 is heavily inspired by Ted Nelson’s Xanadu project, which introduced concepts like visible connections between texts, transclusions, and versioning.
In Web 1.1 we have visible connections between texts. But those are simplified links. They are not as powerful as proper xanalinks. But there is a way to add more Xanadu-like features to the Web. I have a more or less detailed plan of how it can be done.
I will dive deeper into the details in another post, but here I will just mention the key differences between my proposed implementation and that of the original Xanadu project. Here I assume that you have detailed knowledge of the Xanadu project. If you don’t, I’ll explain everything in another post.
Here are the main differences:
Documents will not be assembled on the client, only on the server.
EDL will not be a separate document, instead it will be included into an HDOC.
Layout (headers, paragraphs, etc.) will be done using HTML. ODL may be stored on the server, but overlays will apply HTML tags to texts. So, there will be, for example, H1 overlay, P overlay, A overlay, and so on. ODL will not be included in HDOC, because all the necessary information will already be in the HTML.
Many servers will not store texts separately from EDLs. They will pretend to do so. Texts in HDOCs will have stabilized addresses as if they are stored in some files somewhere but in many cases they will be only stored in their HDOCs.
Websites will decide whether to adopt Xanadu-style features. Client apps will distinguish between simple HDOCs and HDOCs with an EDL section. Both simple links and proper xanalinks will be supported by client apps.
A lot of smaller websites will either not use additional features at all, or use them in a very limited way. In a decentralized setting features like transclusions and selling pieces of content may be too impractical.
But if somebody creates a big publishing platform where people can publish and sell their content, they can design it very close to the original specifications of the Xanadu project and plug it into the Web 1.1 ecosystem that will support the new advanced features.
In short, you will have transclusions, micro-transactions, and versioning on big websites. But it will be possible to create visible connections from those big websites to content from other, smaller websites that don’t have those features.
A lot of Xanadu’s features require a degree of centralization. So, it only makes sense that you won’t find them everywhere on the Web, but only on some larger websites.
Conclusion
There are three stages of the Web’s transformation. Beyond that, there are two possible future developments: the Web turning into a decentralized social network, and the Xanadufication of the Web.
I will focus on these two developments only when it becomes clear that Web 1.1, in its simplest form, is gaining traction.
In previous posts I talked about the issues with the modern Web, how Web 1.1 (Static Web) can solve them, and how easy it is to publish existing websites on this new Web. But will people actually do that?
In this post I will talk about incentives. I believe that, at the end of the day, everybody has an incentive to use Web 1.1.
Saving documents is a big deal
Saving doesn’t work well on the modern Web. When you try to save a page, it often appears broken. Even if it isn’t, where do you save it? In the cluttered Downloads folder, where you’ll never find it again? Or in some manually created folder structure that still isn’t much better? The reality is that people don’t save web pages—not because they don’t need to, but because it’s inconvenient.
With Continuous Space Interface (CSI) and new static data formats, saving web pages becomes a seamless experience. You can store pages you like directly on your zoomable desktop or even in a 3D environment, making them easy to find and interact with later.
Imagine browsing the Web and effortlessly collecting your favorite pieces of content, arranging them visually, and accessing them whenever you want. Many people, myself included, would love this ability.
The Role of Continuous Space Interface
Looking at demos of LZ Desktop you may think that this new interface is too exotic and won’t be very popular. I agree that CSI probably won’t become the primary interface for most people. It’s not well-suited for small-screen devices like mobile phones, and many users don’t own desktop computers. But that’s fine. As I mentioned in another post, CSI doesn’t need to be mainstream to have a significant impact on the Web.
Website owners want you to save their content
If you run a business, create content, or influence an audience, you want people to engage with your material. You want them to return to your website repeatedly.
Think about offline businesses. They hand out business cards, flyers, and brochures to stay on customers’ minds. Why? Because having a physical object increases the likelihood that a customer will remember them and return.
Now, imagine if every page on your website could serve the same purpose. People could download an article, place it on their desktop, and revisit it whenever they want. You wouldn’t know how often they engage with your saved content, but when they do, they might click a link and return to your site.
By supporting Web 1.1, website owners can boost engagement and retention. Instead of relying on users to remember and navigate back, they can let users save content in a meaningful way that encourages repeated interaction.
Backend Support Matters More Than Frontend Adoption: The RSS Example
Web technologies gain traction in two ways: frontend popularity and backend support. Frontend popularity is about how many people actively use the technology. Backend support is about how many websites implement it.
Take RSS, for example. It allows users to subscribe to website updates, yet in terms of direct usage, RSS is relatively unpopular. But if you look at website adoption, it’s widespread—almost every blog or news site supports it.
Why? Because RSS was implemented at the platform level long ago. If you run a WordPress site, RSS support is built-in by default. Most site owners don’t bother disabling it, so the technology persists, even if its user base is niche.
This same principle applies to Web 1.1. If backend support becomes widespread—through simple implementations like a WordPress plugin—it doesn’t matter how many users actively seek it out at first. As new data formats gain traction, mainstream browsers like Chrome and Safari will have more reason to support them. For most users, their first encounter with Web 1.1 will be through these browsers rather than dedicated apps.
One is bigger than zero
In the example of RSS, if even one person wants to subscribe to your website, why not let them? Saying “one is too low a number, so I’ll have zero instead” makes no sense.
The same logic applies to Web 1.1. Implementing support for it is often as easy as installing a WordPress plugin. And there’s no reason not to. If even one person wants to download your content, why deny them that option?
Chicken-and-egg problems
What if Web 1.1, despite all its benefits, never becomes popular? Many good ideas have failed due to chicken-and-egg problems.
You might ask: Why would someone install an app for a new kind of Web that doesn’t really exist yet? Or why would a website owner publish their site on the new Web that seemingly nobody uses?
Come for the tool, stay for the network
There is a concept called “Come for the tool, stay for the network”. Here is a blog post about it. Take Instagram, for example. Initially, it was promoted as a tool for adding filters to photos. People installed it because they wanted the filters, not because their friends were already on the platform.
Likewise, when we are talking about initial stages of Web 1.1 adoption, we shouldn’t worry about how popular this new Web is.
The only app that can use Web 1.1 data formats currently is LZ Desktop. As a tool, this app is useful even if we never create the new Web. It is just a big desktop where you can collect all the things that matter to you: text notes, web links, images. Web pages of the new kind (HDOC), which you can use in this app, come as a bonus. Xanadu-inspired visible connections between texts from different web pages come as another bonus.
Bring Your Own Network (BYON)
If you’re a website owner, you don’t need to wait for Web 1.1 to become mainstream. Assume no one has heard of it. Simply add a download link to your website, allowing visitors to save pages in the new format. Next to it, provide a link explaining how they can install the software to use Web 1.1 (for WordPress all of that is done by installing a plugin). In other words, promote Web 1.1 yourself. I call this approach BYON—Bring Your Own Network. Your network is whoever visits your website. Some of those people will start using Web 1.1 because of you.
Another scenario: Imagine you’re a teacher creating learning materials with features unavailable on the modern Web—such as visible connections between pages. You publish these materials on your website and instruct your students to download LZ Desktop to access them. Once again, Web 1.1 serves as a tool first. Your students are your network that you bring to Web 1.1.
Conclusion
Hopefully, you can now see that there is no insurmountable barrier to the widespread adoption of Web 1.1. There is no unsolvable chicken-and-egg problem—Web 1.1 simply needs to be used as a tool for saving web pages, and adoption will follow naturally.
In the early days of the World Wide Web, Tim Berners-Lee—the man who invented it—maintained a website where he listed newly created websites. Back then, the addition of each new site was an event worth noting.
I plan to do something similar for Web 1.1. Anyone who installs my plugin or provides static content in any other way can send me a link, and I will maintain a directory of such websites. This list will be accessible from the LZ Desktop client app via the ‘Explore Static Web’ button.
As the list grows, I may eventually develop a simple search engine to help users navigate it more easily.
This is yet another reason not to worry about the initial size of the Web 1.1 network. Even though it starts small, it will be highly navigable from the beginning, making it easy for early adopters to explore and contribute. As more websites join, discovery tools will improve, and organic growth will follow naturally.
SDOC is a format for 3D scenes on the web. Just like HDOC and CDOC it is a static format, meaning that it cannot include any scripts.
SDOC is currently not defined, but it will be similar to CDOC in that the main content will probably be located in a section that will use some popular format for 3D scenes, just like CDOC uses a popular 2D vector graphics format (SVG).
SDOC will probably have <head>, <copy-info>, and <connections> sections, just like HDOC and CDOC.
SDOCs may contain HDOCs, CDOCs and even other SDOCs. All those documents may be included by reference.
What will 3D scenes be used for?
One use case I can think of is having something like a site map. Only instead of links to different pages you could have a 3D scene where you can surround your reader with your content.
In the video example, I move things around. This will be possible with local scenes that the user creates. SDOCs downloaded from the web will probably be immutable.
Also, in the demo the 3D scene looks basic. But since the SDOC format will be based on some popular 3D format, you should be able to create scenes of any complexity.
CDOC is a format for 2D collages on the web. Just like HDOC, it is a static format, meaning that it cannot include any scripts. A CDOC may contain images, texts and anything an SVG file could contain. But it may also contain documents like HDOCs, SDOCs, and possibly even other CDOCs. Those documents are included by reference: the parent CDOC only defines their URLs and how they look and are positioned within its collage. Documents are never included inline.
This format is currently implemented only partially. For example, adding other documents inside a CDOC is not yet supported in LZ Desktop.
HEAD
CDOC has a head section just like HDOC and HTML. Inside it, it may have a <title> tag. Other things that may go into the head section are not yet defined.
<cdoc>
  <head>
    <title>Title of my post</title>
  </head>
</cdoc>
SVG
The content goes into one inline SVG. You can use any elements SVG supports. There may be some restrictions on the use of classes. Scripts are not supported.
Documents included by reference will probably be represented by any valid SVG element (for example, an image) wrapped in an anchor tag with a specific class.
Panels like those in HDOC are probably not needed. However, there may be a need for a side panel for some interactivity, so a way to specify the URL of a side panel webpage may be added in the future. For now, panels are not supported in CDOC.
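Given all of the above, a CDOC might look something like this (a sketch; the class marking an embedded document is made up, since it isn’t defined yet):

<cdoc>
  <head>
    <title>My collage</title>
  </head>
  <svg width="1000" height="800" xmlns="http://www.w3.org/2000/svg">
    <image x="40" y="60" width="300" height="200" href="photo.jpg"/>
    <text x="40" y="300">A caption in the collage</text>
    <!-- hypothetical embedded document, included by reference -->
    <a class="embedded-doc" href="sws://example.com/article.hdoc">
      <image x="400" y="60" width="240" height="320" href="article-preview.png"/>
    </a>
  </svg>
</cdoc>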
COPY-INFO
This section will be similar to that of HDOC, but the mappings may include URLs not only of media files but of the embedded documents as well.
CONNECTIONS
This is an XML structure that contains information about the documents the current document wants to connect to, as well as floating links that connect those documents with the current document.
The connections section will be supported by all three document types: HDOC, CDOC, and SDOC. For that reason, its description is on a separate page.
HDOC stands for ‘HTML document’. It is a static format, which means you can’t add a script to it. You can use CSS classes from a list of predefined classes, but you can’t define your own CSS classes or use inline CSS.
HEAD
HDOC has a head section just like a regular HTML file. Inside it, it may have a <title> tag. Other things that may go into the head section are not yet defined.
<hdoc>
  <head>
    <title>Title of my post</title>
  </head>
</hdoc>
HTML
While a regular HTML file contains a <body> tag, HDOC has an <html> tag that serves the same purpose. Inside it you can write HTML, usually starting with an <h1> header.
<hdoc>
  <html>
    <h1>Title of my post</h1>
    <p>This is a paragraph.</p>
  </html>
</hdoc>
PANELS
Optionally, an HDOC may have a panels section. It is responsible for the header and footer, which are standardised and will look the same on all websites. As an author, you can only specify colors, a main logo or website name, and a list of links. The section may also contain a link to a paired web page (usually used for comments).
Root Element: <panels>
Attributes:
bgColor(optional): A color string representing the background color of both top and bottom panels.
textColor(optional): A color string representing the text color in both top and bottom panels.
Child Elements:
<top-panel>(optional): Defines the top panel of the page.
<side-panel>(optional): Defines the URL of a page that will be shown on the side of the main document.
<bottom-panel>(optional): Defines the bottom panel of the page.
Child Element: <top-panel>
Defines the top section of the webpage.
Attributes:
bgColor(optional): A color string for the background color of the top panel.
textColor(optional): A color string for the text color in the top panel.
Child Elements:
<site-name>(optional): Represents the site name.
Attributes:
href(optional): URL to navigate to when the site name is clicked.
Content: The text of the site name.
<logo>(optional): Represents a site logo.
Attributes:
src(required): URL to the logo image.
href(optional): URL to navigate to when the logo is clicked.
<a>(optional, multiple): Represents a hyperlink in the top panel.
Attributes:
href(required): URL of the hyperlink.
Content: The text of the link.
Child Element: <side-panel>
Defines a side panel of the webpage, typically used for a comments section.
Attributes:
side(optional): Specifies which side the panel appears on.
Values:
"left": The panel is on the left side.
"right" (default): The panel is on the right side.
Content:
The URL of the webpage to be displayed in the side panel.
Child Element: <bottom-panel>
Defines the bottom section of the webpage.
Attributes:
bgColor(optional): A color string for the background color of the bottom panel.
textColor(optional): A color string for the text color in the bottom panel.
Child Elements:
<section>(optional, multiple): Defines a section within the bottom panel.
Attributes:
title(optional): The title of the section.
Child Elements:
<a>(optional, multiple): Represents a hyperlink in the section.
Attributes:
href(required): URL of the hyperlink.
Content: The text of the link.
<bottom-message>(optional): Defines a message at the bottom of the panel.
Content: The text of the message.
All panels – <top-panel>, <side-panel>, <bottom-panel> – are optional.
You should use either <site-name> or <logo>, but not both. The default background color for all panels is white, and the default text color is black. The bgColor and textColor attributes of the <panels> tag define colors for both the top and bottom panels. The same attributes of the <top-panel> and <bottom-panel> tags override those set in the <panels> element. So if both panels should have the same colors, define them only globally in the <panels> element; if they must be different, define them for each individual panel.
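Here is a hypothetical <panels> section assembled from the elements described above (URLs, colors, and texts are made up):

<panels bgColor="#222222" textColor="#ffffff">
  <top-panel>
    <site-name href="sws://example.com">Example Site</site-name>
    <a href="sws://example.com/about">About</a>
    <a href="sws://example.com/archive">Archive</a>
  </top-panel>
  <side-panel side="right">https://example.com/comments/my-post</side-panel>
  <bottom-panel>
    <section title="Links">
      <a href="sws://example.com/contact">Contact</a>
    </section>
    <bottom-message>© 2025 Example Site</bottom-message>
  </bottom-panel>
</panels>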
COPY-INFO
An optional section that is used only when the HDOC is a copy of some other HDOC, in which case this section is required.
This section is currently not supported by LZ Desktop, but will be in the future.
Child Elements:
<source>(required, multiple): A URL of the source page. If the HDOC represents a copy of a copy of some HDOC, then multiple source tags must be used. In general, you should avoid making copies of copies if the original document is available. But if it’s unavailable and you have to make a copy of a copy, using multiple source tags allows us to preserve the history of the document.
Attributes:
copied-at(required): An ISO 8601 timestamp of the moment when the copy was made. For example, 2025-01-01T12:00:00Z (UTC) or 2025-01-01T12:00:00+02:00 (with a timezone offset).
<media-mappings>(optional): This element represents a collection of mappings for media files, allowing you to replace old URLs with new ones. It contains one or more <m> elements, each defining a mapping between an old URL and a new URL.
Since this section is currently not supported, the details may change when support is finally implemented.
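With that caveat, a <copy-info> section for a copy of a copy might look like this (the attribute names on <m> are my assumption):

<copy-info>
  <source copied-at="2024-06-10T09:30:00Z">sws://original-site.com/article.hdoc</source>
  <source copied-at="2025-01-01T12:00:00Z">sws://first-copy.com/article.hdoc</source>
  <media-mappings>
    <!-- hypothetical attribute names; the <m> element is not yet finalized -->
    <m old="sws://original-site.com/img/fig1.png" new="sws://my-site.com/img/fig1.png"/>
  </media-mappings>
</copy-info>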
CONNECTIONS
This is an XML structure that contains information about the documents the current document wants to connect to, as well as floating links that connect those documents with the current document.
The connections section will be supported by all three document types: HDOC, CDOC, and SDOC. For that reason, its description is on a separate page.