Isn't this more like a port of `html5ever` from Rust to Python using an LLM, as opposed to creating something "new" based on the test suite alone?
If yes, wouldn't the distinction be rather important?
The first iteration of the project created a library from scratch, from the tests all the way to 100% test coverage. So even without the second iteration, it's possible to create something new from the test suite alone.
In an attempt to speed it up, I (with a coding agent) rewrote it again based on html5ever's code structure. It's far from a clean port, because html5ever is heavily optimized Rust code that can't be ported directly to Python (Rust macros). And it still depended on a lot of iteration and rerunning of tests to get it anywhere.
I'm not pushing any agenda here, you're free to take what you want from it!
It looks to me like this is the last commit before the rewrite: https://github.com/EmilStenstrom/justhtml/tree/989b70818874d...
The commit after that is https://github.com/EmilStenstrom/justhtml/commit/7bab3d2 "radical: replace legacy TurboHTML tree/handler stack with new tokenizer + treebuilder scaffold"
It also adds this document called html5ever_port_plan.md: https://github.com/EmilStenstrom/justhtml/blob/7bab3d22c0da0...
Here's the Codex CLI transcript I used to figure this out: https://gistpreview.github.io/?53202706d137c82dce87d729263df...
You also mention that the current "optimised" version is "good enough" for everyday use (I use `bs4` for working with HTML). Was the first iteration also usable in that way? Did you look at `html5ever` because the LLM hit a wall trying to speed it up?
As for bs4: if you don't change the default, you get the stdlib html.parser, which doesn't implement the HTML5 parsing algorithm. It only works reliably for valid HTML.
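To make the difference concrete, here's a small sketch using standard bs4 parser selection (it assumes html5lib is installed as the alternative backend; nothing here is specific to JustHTML):

```python
from bs4 import BeautifulSoup

broken = "<table><tr><td>cell"  # unclosed, spec-invalid markup

# Stdlib parser: no HTML5 tree-construction rules
# (no implied <html>/<head>/<body>, no inserted <tbody>)
print(BeautifulSoup(broken, "html.parser"))

# html5lib backend: rebuilds the tree the way a browser would
print(BeautifulSoup(broken, "html5lib"))
```

The first call keeps the fragment roughly as written; the second produces a full document with the implied elements an HTML5-compliant parser inserts.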
$ uv run run_tests.py --check-errors -v
FAILED: 8337/9404 passed (88.6%), 13 skipped
It seems the parser is creating errors even when none are expected:

=== INCOMING HTML ===
<math><mi></mi></math>
=== EXPECTED ERRORS ===
(none)
=== ACTUAL ERRORS ===
(1,12): unexpected-null-character
(1,1): expected-doctype-but-got-start-tag
(1,11): invalid-codepoint
This "passes" because the output tree still matches the expected output, but it is clearly not correct.The test suite also doesn't seem to be checking errors for large swaths of the html5 test suite even with --check-errors, so it's hard to say how many would pass if those were checked.
That said, the example you are pulling out does not match that either. I'll make sure to fix this bug and others like it! https://github.com/EmilStenstrom/justhtml/issues/20
There's also something off about your benchmark comparison. If one runs pytest on html5lib, which uses html5lib-tests plus its own unit tests and does check whether errors match exactly, the pass rate appears to be much higher than 86%:
$ uv run pytest -v
17500 passed, 15885 skipped, 683 xfailed,
These numbers are inflated because html5lib-tests/tree-construction tests are run multiple times in different configurations. Many of the expected failures appear to be script tests similar to the ones JustHTML skips.
Emil Stenström wrote it with a variety of coding agent tools over the course of a couple of months. It's a really interesting case study in using coding agents to take on a very challenging project, taking advantage of their ability to iterate against existing tests.
I wrote a bit more about it here: https://simonwillison.net/2025/Dec/14/justhtml/
I cloned the repo and ran `wc -l` on the src directory and got closer to 9,500. Am I missing something?
Edit: maybe you meant just the parser.