I read this post and wrote a simple Tampermonkey script as a solution.
// ==UserScript==
// @name         Fix community link
// @version      0.1
// @description  Rewrite remote Lemmy community links to local /c/name@host links
// @match        https://sh.itjust.works/post/*
// ==/UserScript==
(function () {
    'use strict';

    // Rewrite any link whose path starts with /c/ so it points at the local
    // instance: https://other.host/c/name -> /c/name@other.host, which the
    // browser resolves against the current origin.
    const fixLink = (aTags) => {
        for (const aTag of aTags) {
            if (aTag.pathname.startsWith("/c/")) {
                aTag.href = aTag.pathname + "@" + aTag.host;
            }
        }
    };

    // Links in the post body; skip the ones the UI already renders as
    // community links. Guard against the element not being on the page.
    const postContent = document.getElementById("postContent");
    if (postContent) {
        fixLink(postContent.querySelectorAll("a:not(.community-link)"));
    }

    // Same treatment for links inside each comment.
    for (const comment of document.getElementsByClassName("comment-content")) {
        fixLink(comment.querySelectorAll("a:not(.community-link)"));
    }
})();
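For example (a hypothetical link; run in the page console on sh.itjust.works), a remote community link gets rewritten like this:

const a = document.createElement("a");
a.href = "https://lemmy.world/c/linux"; // a remote community link (made up)
console.log(a.pathname + "@" + a.host); // "/c/linux@lemmy.world"
a.href = a.pathname + "@" + a.host;     // relative, so the browser resolves it locally
console.log(a.href);                    // "https://sh.itjust.works/c/linux@lemmy.world"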
Any advice? I especially hate that the way to check whether a link points to a Lemmy community is through the pathname, but I don’t think there can be a real solution besides listing all the Lemmy instances or actually making a request somehow (I’ve sketched the request idea below).
Any input is welcome!
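One alternative I can think of to the pathname heuristic, just a sketch and untested: most fediverse servers expose the standard NodeInfo discovery documents, which report the software name, so the script could confirm a host actually runs Lemmy before rewriting. The lemmyHosts cache and isLemmyInstance helper are names I made up here.

// Sketch: confirm a host runs Lemmy via NodeInfo before treating /c/... as
// a community link. Assumes the instance serves CORS-friendly responses.
const lemmyHosts = new Map(); // host -> Promise<boolean>, hypothetical cache
function isLemmyInstance(host) {
    if (!lemmyHosts.has(host)) {
        const check = fetch(`https://${host}/.well-known/nodeinfo`)
            .then((res) => res.json())
            .then((doc) => fetch(doc.links[0].href)) // follow the advertised document
            .then((res) => res.json())
            .then((info) => info.software.name === "lemmy")
            .catch(() => false); // network/CORS failure: assume not Lemmy
        lemmyHosts.set(host, check);
    }
    return lemmyHosts.get(host);
}

fixLink would then await isLemmyInstance(aTag.host) before rewriting; in a real userscript, GM_xmlhttpRequest with the matching @grant is probably the safer way around CORS.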
I’m not trying to be combative; I’m trying to understand. I’d like to see the failure in action so I can appreciate and pursue the proposed solution.
But when I added the community bit to the first URL, the browser resolved it and stripped the credentials, so the resulting URL matched. The example credentials weren’t real, though, so it’s not a great test; if there’s a real example I can test, please share it. That said, I don’t see why I’d authenticate to an instance just to view it from a different instance.
When I added the community bit to the second URL, it was not a match, as it shouldn’t be. The pattern must match the entire URL.
You can find it in action on regex101, with the regex indeed matching the query string in the maliciouswebsite URL and not matching even a URL with just the port and no user/password.
It is valid (just weird and not recommended) to give a user:pw combo to a website that doesn’t ask for one in the headers. Browsers stripping it off is a different thing.
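To make that concrete, the URL parser accepts credentials whether or not the server asked for them (the URL here is made up):

// Credentials are parsed, not rejected; stripping them before the request
// is a separate browser behavior.
const u = new URL("https://user:hunter2@example.com:8080/c/community?q=1");
console.log(u.username); // "user"
console.log(u.password); // "hunter2"
console.log(u.host);     // "example.com:8080" (host includes the port)
console.log(u.hostname); // "example.com"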
The sheer number of things you have to take into account to properly parse a URL should convince you not to use regexes for it.
The fact that it’s less code, more correct, faster, and more readable to use new URL() should also be enough to convince you not to use regexes.
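For illustration (both URLs are made up, and this naive pattern is just an example, not one from the thread), here is how a prefix regex gets fooled while new URL() does not:

// A naive prefix check believes this link points at lemmy.world; the parser
// reveals that everything before the "@" is just userinfo.
const evil = "https://lemmy.world@maliciouswebsite.example/c/foo";
console.log(/^https:\/\/lemmy\.world/.test(evil)); // true (fooled)
console.log(new URL(evil).hostname);               // "maliciouswebsite.example"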
I meant to communicate that the Redirector addon uses the given pattern to see if the entire URL string matches, not part of it. So the malicious URL does not match.
I’m wondering if there’s a real URL for which the Redirector approach will not work.
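The actual pattern isn’t quoted in this thread, but the full-match behavior described above is equivalent to anchoring a regex at both ends, something like this hypothetical one:

// Hypothetical anchored pattern: userinfo or an extra port breaks the match.
const pattern = /^https:\/\/lemmy\.world\/c\/([^\/?#]+)$/;
console.log(pattern.test("https://lemmy.world/c/linux"));                        // true
console.log(pattern.test("https://lemmy.world@maliciouswebsite.example/c/foo")); // false
console.log(pattern.test("https://lemmy.world:8443/c/linux"));                   // false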