
Cannon is an idea for a project attempting to compute canonical/normalised URLs and extract some information from them ('entities'), purely by looking at the URL itself, and ideally without relying on the "rel=canonical" metadata.
I describe the problem it tries to solve here: "urls are broken"; also see "motivation".

At the moment it's a subproject of promnesia: see
and tests/

If anyone knows of similar efforts/prior art, please let me know! I'd really like to avoid reinventing the wheel here.


promnesia as a primary application (for me) [[promnesia]]

. [[linkrot]]

* motivation

Once you are sold on the motivation in this section, and wondering why this would require a separate library/database, check out the "testcases" section.

[2020-04-04] I want urls that represent information, regardless of the way it's presented

let alone all the tracking/etc crap

[2020-05-23] "document equivalence" is a good term: How to establish (or avoid) document equivalence in the Hypothesis system : Hypothesis

why not use "rel=canonical" metadata field?

[2020-05-27] Google no longer providing original URL in AMP for image search results

[2019-10-11] mobile versions of sites sometimes have different "canonical", e.g.

No one would argue that a tweet is the same regardless where it's presented, yet there is no easy way to unify this

[2020-05-28] is messing with canonical [[cannon]]

[2019-11-02] e.g. this link doesn't have 'canonical' even though it's a mirror:

[2019-11-08] no canonical on gist

same as – hmm, this thing redirects now..

[2019-08-19] parent and sibling relations can be determined from the URL [[cannon]] [[promnesia]]

e.g. subreddit-post/user-comment/user-tweet, etc.

[2019-11-01] if the original page is gone I can still easily link my saved annotations (Instapaper/Pocket/Hypothesis) to archived page

[2019-09-07] urls are a good candidate for determining 'entities' because they are at least somewhat curated [[cannon]]

[2019-02-24] normalization is tricky: for some urls, stuff after # is important; for others, it's utter garbage

however we can sort of get away with normalizing on the server only?

[2019-08-07] The Problem With URLs

[2020-01-02] motivation: siloing: instapaper 'imports' pages and assigns an id:

so you can't connect your annotations on instapaper to notes etc

[2021-03-07] could normalize historic URLs which are already down? [[linkrot]]

perhaps not super useful if we can't access them, but still

* projects that could benefit from it

Apart from Promnesia, I believe it could be quite useful for other projects.

[2019-06-27] Hmm could be helpful for hypothesis? [[hypothesis]]

[2021-01-16] discuss about cannon (maybe on Slack)? [[hypothesis]] [[cannon]]

[2019-05-24] Annotation of content on sites like Facebook or Twitter? - Google Groups [[hypothesis]]

kinda related since they basically want canonical urls

[2021-01-30] Ignore URL parameters - Feature Requests - Memex Community [[worldbrain]]

[2021-01-22] wonder if we could cooperate? [[agora]] [[cannon]]

[2021-01-24] would be useful to use the same normalising engine for #archivebox for example? [[webarchive]]

[2021-02-07] could be useful for surfingkey/nyxt browser to hint 'interesting' urls?

[2019-12-26] [[linkrot]]

e.g. if the link is not present in, it doesn't mean it's not archived under a different canonical

if it's implemented as a helper extension/library, it could be useful for many other extensions

e.g. blockers, various highlighters, hypothesis, etc

[2020-12-07] einaregilsson/Redirector: Browser extension (Firefox, Chrome, Opera, Edge) to redirect urls based on regex patterns, like a client side mod_rewrite

[2020-11-20] could reuse URL underlying etc with ampie? [[ampie]]

* prior art

The URL normalization algorithm should be shared with other projects to the maximum extent possible.
If not the exact algorithm, then at least the 'curated' parts of it, like regexes, testcases, etc., should be shared.
It's boring grunt work that should only be done once (like the timezone database).

[2020-06-30] ClearURLs / Addon: looks super super promising

Once ClearURLs has cleaned the address, it will look like this:

[2021-03-10] Not super convinced JSON would work well in general, but anyway it's already pretty good.

[2020-11-22] WorldBrain/memex-url-utils: Shared URL processing utilities for Memex extension and mobile apps. [[worldbrain]]

[2019-07-09] h/ at 0fc8a0d345741d43b4f80856a7cbb8f5afa70f80 · hypothesis/h [[hypothesis]]

[2019-07-09] excluded query params!

[2019-07-09] right, I could probably reuse hypothesis's canonify and contribute back. looks very similar to mine

[2020-05-12] coleifer/micawber: a small library for extracting rich content from urls

[2021-03-10] ok, pretty interesting. it probably uses the network, but could at least use it for testing (or maybe even 'enriching'?)

[2019-03-27] sindresorhus/compare-urls: Compare URLs by first normalizing them

compareUrls('HTTP://', '');

[2019-12-25] sindresorhus/normalize-url

stripWWW can't handle amp etc

[2019-07-09] hypothesis: h/

[2019-04-16] niksite/url-normalize: URL normalization for Python

[2020-04-27] john-kurkowski/tldextract: Accurately separate the TLD from the registered domain and subdomains of a URL, using the Public Suffix List.

hmm could use this for better extraction…

[2019-03-27] rbaier/python-urltools: Some functions to parse and normalize URLs.

* ideas

[2021-03-07] maybe we can achieve 95% accuracy with generic rules and by handling the most popular websites

for the rest
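A sketch of such generic rules, using only the stdlib; the dropped parameter names and stripped subdomains below are assumptions for illustration, not a curated list:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# tracking parameters that are almost never part of the page identity
# (assumed list, not exhaustive)
DROP_PARAMS = {'utm_source', 'utm_medium', 'utm_campaign', 'utm_term',
               'utm_content', 'fbclid', 'gclid'}

def canonify(url: str) -> str:
    """Generic best-effort normalisation: lowercase the host, strip
    presentation-only subdomains, drop tracking params and the fragment."""
    parts = urlsplit(url)
    netloc = parts.netloc.lower()
    for prefix in ('www.', 'm.', 'mobile.', 'amp.'):
        if netloc.startswith(prefix):
            netloc = netloc[len(prefix):]
            break
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
                       if k not in DROP_PARAMS])
    return urlunsplit((parts.scheme, netloc, parts.path.rstrip('/'), query, ''))
```

e.g. `canonify('')` gives `''`.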

if 'children' relations can't be determined by substring matching, perhaps cannon should generate 'virtual' urls? [[promnesia]] [[cannon]]

a special service to resolve siloed links like ? [[linkrot]]

Could also be useful for But a bit out of scope for this project..

just specify admissible regexes for urls so it's easier to unify?


maybe normalise to this? – huh, normalise to this?
TODO m.reddit/old.reddit

maybe strip off the subdomain completely?—wpDQ&list=PL0kyDgrqAiUEF5d7krLIds1ebhTxCjm&shuffle=221

[2019-11-09] also this to summarize

sqlite3 promnesia.sqlite 'select domain, count(domain) from (select substr(normurl, 0, instr(normurl, "/")) as domain from visits) group by domain order by count(domain)'

rethinking the whole approach…


ok so how do we generalize from two examples?
e.g. say we also have -> youtube/abacaba
we get
youtube | keep
ru | drop
watch | drop
v=abacaba | keep
I suppose it could guess that if we keep a query parameter once, we'll keep it always?
and if we extracted a certain substring without a query parameter, we'll also always keep it as is?

TODO how about this?
it's a reply to
which is a comment to

use shared JS/python tests for canonifying? [[ffi]] [[promnesia]]

[2019-09-03] should be idempotent?
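Idempotency (canonify(canonify(u)) == canonify(u)) is cheap to check as a property over any url corpus; a minimal sketch, with the canonify function left abstract:

```python
def non_idempotent(canonify, urls):
    """Return the urls for which applying canonify twice differs from once."""
    return [u for u in urls if canonify(canonify(u)) != canonify(u)]

# e.g. plain lowercasing is idempotent, so nothing is flagged:
assert non_idempotent(str.lower, ['HTTP://Example.Com/Watch?v=ABC']) == []
```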

hmm, maybe the extension can learn normalisation rules over time? by looking at canonical and refining the rules?

sample random links and their canonicals for testing

background thing that sucks in canonical urls and provides data for testing? [[promnesia]]

how do we prune links that are potentially not safe to store? like certain URL parameters

need checks that urls don't contain stupid shit like trailing colons etc

hmm could use this api for checking normalization? [[cannon]]

http get ''
{
    "archived_snapshots": {
        "closest": {
            "available": true,
            "status": "200",
            "timestamp": "20210219235548",
            "url": ""
        }
    },
    "url": ""
}

* testcases

Some tricky cases which would be nice to get right

[2020-11-15] Wendover Productions - YouTube

[2020-04-19] roam links

[2021-02-07] [[cannon]]

[2021-02-16] A Relational Turn for Data Protection? by Neil M. Richards, Woodrow Hartzog :: SSRN [[cannon]]


[2019-06-23] A Brief Intro to Topological Quantum Field Theories. - YouTube

eh, rules might be a bit complicated. E.g. if both v and list are present, we wanna ditch list, otherwise keep list

[2020-11-16] normalise DOI

Ah sure: This DOI:  should lead to this paper: .

m.wikipedia normalisation could also be useful for hypothesis? [[hypothesis]]


[2019-07-23] mm, it's got canonical though..

[2019-07-23] perhaps promnesia should respond both to canonical and its own idea of normalised (preferring canonical)

[2019-04-20] fragments: Aharonov-Bohm Experiment

url normalising… this is an example where fragments are important

[2019-08-26] here I guess it could yield url with hash + parent url?

[2019-08-26] always assume that parents in uri hierarchy are actual parents? I guess that's fairly reasonable

[2019-08-25] stuff like this:

[2019-08-25] this is also motivation for canonifying. this is a redirect link in tweet, and there is no way to associate it with canonical

[2020-05-02] [[cannon]]

[2020-04-30] Writing well | defmacro

support for and test on this page

[2020-05-28] Wayback Machine*/

[2019-11-15] maybe ?

[2019-11-09] github: this ends up trimmed with … :(

[2019-11-07] github:

[2021-01-24] [[cannon]] : notes: SecureBoot - Debian Wiki

[2021-02-28];sid=20170930133438 [[cannon]]

'sid' matters here

hmm, server doesn't normalise properly?? (url escaping) Грамматикализация

semiconductors video should be unified properly. well, or again hierarchical thing? might be too spammy for 'watch later'

[2019-12-23] need parent link to trigger on this in cannon

[2020-06-16] hmm, both id and # ?

[2020-02-08] : ugh need to keep id

[2020-01-12] old.reddit and new reddit

[2019-06-02] handle

[2020-11-30] the ? is sneaky

[2020-11-22] # is just redundant?

[2019-08-25] Lisp Language ? is sneaky

better regex for url extraction

eh, urls can have commas… e.g.,_applicatives,_and_monads_in_pictures.html
so, for csv, we need a separate extractor.

[2020-11-18] Vanquishing ‘Monsters’ in Foundations of Computer Science: Euclid, Dedekind, Frege, Russell, Gödel, Wittgenstein, Church, Turing, and Jaśkowski didn’t get them all … by Carl Hewitt :: SSRN

should be more defensive

ValueError: netloc ' +79869929087,' contains invalid characters under NFKC normalization

[2019-08-26] did I do it?

[2020-12-09] 'bug' parameter


[2019-02-18] make sure ? extracted correctly


id here is important

[2021-03-15] pages don't even have canonical? [[cannon]]

* misc

would be convenient to normalise reddit annotations so annotations from all comments would be collected

[2019-09-03] potential pypi project?

hypothesis: wonder how it works on timestamped stuff?

hmm some local and remote pages may overlap

e.g. this is very likely to be mapped to normal py docs

[2020-05-11] Vision, Mission & Values — 2020 Update - - Medium

fragments are often random and useless
even default org-mode is guilty

[2019-07-09] Changed how threading works. by JakeHartnell · Pull Request 952 · hypothesis/h [[hypothesis]] [[reddit]]

reddit: tested on [[hypothesis]]

huh, so reddit seems to normalise to the main page, and displays annotations as 'orphaned' for comment views?

[2019-07-09] so it looks like reddit refers to the 'post' page as canonical. Right.


[2021-03-26] URLTeam - Archiveteam [[cannon]]

[2021-03-25] seomoz/url-py: URL Transformation, Sanitization [[cannon]]

[2021-03-03] Jon Borichevskiy (@jondotbo) / Twitter [[promnesia]] [[cannon]]

hmm how to resolve twitter renames?…