I was wondering whether decentralization would negatively affect SEO, since people can access the same post from many different instances.
https://lemmy.ml/robots.txt, https://lemmy.world/robots.txt, etc. don't seem to disallow posts, so the text-based content should be easy to index, at least for these instances.
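If you want to spot-check that yourself, here is a rough sketch in TypeScript (Node 18+, which ships `fetch`). The parsing is deliberately naive (it ignores User-agent groups, Allow rules, and wildcards), so treat it as a quick sanity check rather than a real robots.txt parser:

```typescript
// Quick check: does an instance's robots.txt disallow a given path?
// Naive on purpose: it collects every Disallow rule regardless of User-agent
// and ignores Allow rules and wildcard patterns.
async function isDisallowed(instance: string, path: string): Promise<boolean> {
  const res = await fetch(`${instance}/robots.txt`);
  const rules = (await res.text())
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.toLowerCase().startsWith("disallow:"))
    .map((line) => line.slice("disallow:".length).trim())
    .filter((rule) => rule.length > 0);
  return rules.some((rule) => path.startsWith(rule));
}

// Example: are post pages on lemmy.ml blocked for crawlers?
isDisallowed("https://lemmy.ml", "/post/123").then((blocked) =>
  console.log(blocked ? "posts are disallowed" : "posts look crawlable"),
);
```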
Related news: Google is getting a lot worse because of the Reddit blackouts.
How long does it usually take for Google to index websites? I tried the query

`lemmy site:lemmy.ml after:2023-06-15`

and only one post turned up for me, and it was "Memes". … The current state of affairs does not seem promising 😔 And when I tried another instance with the same keywords,

`lemmy site:kbin.social after:2023-06-15`

nothing turned up at all.

I wonder, though: will search engines adapt to Lemmy and its fediverse model? Or will search engines die? Or will we see dedicated search engines for searching the fediverse?
> How long does it usually take for Google to index websites?
Anything from a couple of hours to more than a week. I don't think having a "real-time feed" through Google is important, though; other than World Cup scores, their results were never about speed.
I tried searching for the title of this post verbatim, and it isn't in Google results, period.
That could just be because of lag between when the post is created and when the Google crawler finds/indexes the page.
Oh, good point. Yes, probably? We cannot simply assume that search engines know all of these point to the same content:
- https://slrpnk.net/c/technology
- https://feddit.de/c/[email protected]
- https://sopuli.xyz/c/[email protected]
- https://beehaw.org/c/[email protected]
Or even worse, due to defederation, they may not all point to the exact same content.
Without further investment from either Lemmy's side or the search engines' side, they are probably treated as distinct sources rather than aggregated, which makes each one individually less relevant and less likely to show up.
Also note that none of the addresses above contain "lemmy". How would users search for Lemmy content in these cases? You can't do "technology site:lemmy", can you?
But I can say that Lemmy content is visible. I haven't seen it on the first page of Ecosia yet, but it does show up on page 2 or 3.
This is relatively simple to solve from a technology perspective: federated sites just need to include a canonical link tag (`<link rel="canonical">`) that references the source URL. It'd be trivial to implement, provided the authoritative URL is known.
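For illustration, here is a minimal sketch of that idea in TypeScript. The `Post` shape and field names are made up for the example (the authoritative URL would roughly be the ActivityPub object id the receiving instance already stores), and the tag would need to end up in the server-rendered HTML so crawlers see it without running JavaScript:

```typescript
// Sketch only, not Lemmy's actual implementation: emit a canonical link tag
// that points federated copies of a post back to its home instance.
interface Post {
  localUrl: string; // URL on the instance currently rendering the page
  apId: string;     // authoritative URL on the post's home instance (hypothetical field)
}

function canonicalTag(post: Post): string {
  // If the post originated locally, apId and localUrl would be the same URL.
  const canonical = post.apId || post.localUrl;
  return `<link rel="canonical" href="${canonical}">`;
}

// Example: a slrpnk.net post being viewed through feddit.de
console.log(
  canonicalTag({
    localUrl: "https://feddit.de/post/123456",
    apId: "https://slrpnk.net/post/7890",
  }),
);
// -> <link rel="canonical" href="https://slrpnk.net/post/7890">
```

With something like that in place, search engines are likely to fold the duplicate copies into the home instance's URL instead of ranking each federated copy as a separate, weaker result.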
Maybe you could use site:lemmy.ml? Since they federate with most instances, they're likely to have most of Lemmy's content.