We’ve been searching for a memory-safe programming language to replace C++ in Ladybird for a while now. We previously explored Swift, but the C++ interop never quite got there, and platform support outside the Apple ecosystem was limited. Rust is a different story. The ecosystem is far more mature for systems programming, and many of our contributors already know the language. Going forward, we are rewriting parts of Ladybird in Rust.
Sloppybird
Birdpoop browser, you say? Never heard of it.
Rest in peace to this browser.
Yeah seems about right for this project. I really wanted this to be a serious browser, but nothing about this dude is serious.
Also I know he backed this statement up with much better testing but these AI brainrot things people say kill me: “I ran multiple passes of adversarial review, asking different models to analyze the code for mistakes and bad patterns.”
You’re not the first I’ve heard say he’s bad news / not serious. AFAIK I hadn’t heard a thing about him until Ladybird. What did I miss?
Those were some disappointing reads
God that screenshot is giving me severe second hand embarrassment
“I coded this with hundreds of handcrafted AI prompts.”
“That sounds hazardous, but did you test it?”
“I had multiple AIs test it!”
Artisanal AI prompts. We’re supposed to be paying extra!
Let us all hope Servo does not go down the same path.
Is it a good sign for Rust code when it’s described as having “a strong ‘translated from C++’ vibe”? Or when the developer says too much Rust might be something they “can’t merge”?
out of context?
Please coordinate with us before starting any porting work so nobody wastes their time on something we can’t merge.
If you look at the code, you’ll notice it has a strong “translated from C++” vibe. That’s because it is translated from C++. The top priority for this first pass is compatibility with our C++ pipeline. The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode.
that seems reasonable to me
I think my statement came across as more alarmist than I meant it to. E.g.
Is it a good idea to just translate something from C++ like that? It seems technically feasible but there’s something “off” about the whole thing. Apparently you can translate C++ directly to Rust, but anecdotal statements claim that while Rust supports C++ conventions, you wouldn’t typically build a Rust app using them.
Looking back, the developer originally talked about switching to Swift, then decided not to.
And in the past, “Ladybird devs have been very vocal about being ‘anti-rust’ (I guess more anti-hype, where Rust was the hype).”
It all just suggests rudderlessness from the developers right now. Must Rust be a priority? Did Swift need to be?
Why wouldn’t it be? Surely not having idiomatic Rust doesn’t eliminate the other benefits of switching to the language, like better tooling, memory safety, and perhaps more people willing to contribute. Over time the codebase can be improved, but the main goal of the transition seems to be not breaking existing functionality, which they seem to have accomplished for LibJS.
I don’t think “why not” is a great response in general - especially when the same developer also invested time in Swift that was ultimately wasted.
It’s not a “why not” response. I’m asking back: why do you think it wouldn’t be worth it even as a literal translation from C++? In my view, that would be a first step towards a proper Rust port, and it still brings benefits to the table.
I haven’t looked at the code, but the memory safety benefit may be out the window if the translation just slapped unsafe and transmute everywhere.
And “working code” is often very hard to replace, it can be hard to justify code changes when the original “works just the same”. So, I would expect the weird ported code to live on unless there is a major effort to rewrite it.
There’s no reason to believe it’s mostly unsafe. And even if that’s the case, changing unsafe Rust to safe Rust is less of a leap than C++ to Rust.
Having done some C to rust auto-translation some time ago, it definitely was wildly unsafe. Maybe it’s better now, but there is no reason to assume it’s mostly safe now either. Even recently I did some regular vibe coding to test it out, and it generated some very questionable code.
Even if there is zero “unsafe”, there could be loads of unchecked array accesses, or unwraps causing panics, which while “safe”, will cause crashes.
Fixing unsafe can be a mixed bag, some will be easy, some will require much deeper changes. And without looking at the code, impossible to say which it will be.
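To make the panic point concrete: here is a minimal sketch (unrelated to Ladybird’s actual code) showing that out-of-bounds indexing and bare `.unwrap()` are perfectly “safe” Rust yet still crash at runtime, which is why a mechanical translation can be memory-safe and still fragile. The non-panicking alternatives are shown alongside.

```rust
fn main() {
    let v = vec![1, 2, 3];

    // `v[10]` would compile fine but panic at runtime: memory-safe, yet a crash.
    // The idiomatic alternative, `get`, returns an Option instead of panicking:
    match v.get(10) {
        Some(x) => println!("got {x}"),
        None => println!("index 10 is out of bounds, handled without crashing"),
    }

    // Likewise, .unwrap() on an Err is "safe" Rust that still aborts the thread;
    // unwrap_or (or the ? operator) is the non-panicking alternative:
    let parsed: Result<i32, _> = "not a number".parse();
    let value = parsed.unwrap_or(0);
    println!("parsed value: {value}");
}
```

So a translated codebase full of direct indexing and unwraps would not have memory-corruption bugs, but it could still crash on unexpected input.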
Guess I got excited about this browser for nothing.
I was enthusiastic about this project. But I am afraid these recent tangents will only reduce momentum.
@Beep@lemmus.org @technology@lemmy.world
Ah, the smell of irony in the morning! Adopting a programming language often praised for its “safety”, while the entire pretension of “safety” is alchemically transmuted into sewage and deliberately flushed up (not down) by a clanker who drinks from the cesspool with the same determination and thirst as a Chevy Opala gurgling down entire Olympic pools’ worth of gasoline.
Being serious now, the foreseeable future for Web browsing is definitely depressing: Chromium needs no introduction (it used to be an interesting browser until Google’s “don’t be evil” mask fell and straightforwardly revealed their corporate face and farce), Firefox has been “welcoming the new AI overlords” for a while, text browsers (such as Lynx) are far from feasible for a CAPTCHA- (and Anubis-) driven web… and now one of the latest and fewest glimmers of hope, an alternative Web browser engine, is becoming the very monster whose defeat was promised as its launchpad purpose (“Those who fight with monsters should be careful lest they thereby become monsters”). I wouldn’t be surprised if Servo were to enshittify, too. Being able to choose among the sameness is such a wonderful thing, isn’t it?
I mean, I’m not the average Lemmy user with an (understandable) deep hatred of AI; I’m able to hold a nuanced view and find quite interesting uses for the clankers (especially the “open-weighted” ones, particularly when it comes to linguistics). However, shoving AI everywhere and using AI to “code for you” is a whole different story. Software should be programmed the way programming (as posited by Ada Lovelace) was intended, not “vibe coded” by a fancy auto-completer that can’t (yet) deal with Turing completeness, especially when it comes to the whole miniature operating system that browsers have become nowadays. When coding a whole OS, AI shouldn’t be touched with a two-million-light-year pole, let alone a two-foot pole.
Why leave WebKit out?
@paraphrand@lemmy.world @technology@lemmy.world
Oh, right, WebKit, I forgot to mention it, thanks for reminding me!
It’s the engine I likely used the least throughout my digital existence. I mean, I likely used Lynx more than I used WebKit, hence my forgetfulness.
However, if we’re talking about the WebKit-based Linux browsers (such as Konqueror), IIRC, they’re a bit out of spec when it comes to the “modern Web”: WebKit’s adoption of latest specs tends to be slower than Firefox and Chromium.
Now, if we’re talking about Safari specifically, then… it’s part of Apple’s walled garden, one where even “Firefox from App Store” is actually a reskinned Safari (at least in iOS).
Be it Safari or Konqueror, deep inside, the WebKit engine seems to me like the “Apple’s Chromium”, so mentioning WebKit doesn’t really improve the awful prospect for browser engines that we’re facing nowadays.
However, if we’re talking about the WebKit-based Linux browsers (such as Konqueror), IIRC, they’re a bit out of spec when it comes to the “modern Web”: WebKit’s adoption of latest specs tends to be slower than Firefox and Chromium.
WebKit-GTK is up to date. 30 seconds of research in your favorite search engine and you would have found it out.
Chromium needs no introduction (used to be an interesting browser until Google’s mask “don’t be evil” fell and straightforwardly revealed their corporate face and farce), Firefox have been “welcoming the new AI overlords” for a while, text browsers (such as Lynx) are far from feasible for a CAPTCHA(and Anubis)-driven web…
To me the hope lies in Firefox forks
@INeedMana@piefed.zip @technology@lemmy.world
Yeah, me too. Unfortunately, the forks can only get so far in removing upstream AI garbage and other proprietary/corporate-oriented bells and whistles. If, say, some AI feature becomes so ingrained in Firefox upstream, so deeply that it ends up becoming a hard dependency for the fundamental functioning of the browser (i.e. a feature that, if removed at the code level, would render Firefox simply unable to function), no WaterFox, IronFox, Fennec or LibreWolf would be able to keep up with the latest versions: they’d either need to do a hard fork and try to independently maintain an entire browser codebase, or they’d need to use downgraded versions.
That’s not even to mention the licensing shenanigans. We’ve seen many open-source projects suddenly change their licensing to include legalese fine print. We’ve seen open-source projects require developers to sign some kind of NDA before being allowed to contribute code. Seems like initially-open licenses aren’t written in stone when it comes to big projects, and Firefox is a big project.
The universe of open-source software is being slowly hijacked by corporate interests. This is no different with Firefox, which (as I said in another reply in this thread a few minutes ago) is Mozilla’s main product (if not the main product, it’s certainly among their main projects). The same Mozilla that has been pivoting to AI (e.g. the acquisition of Anonym, and the subtle phrasing changes on the “About Firefox” page, which used to state that “Firefox will never sell your data”; that phrase is now gone).
I use WaterFox on a daily basis. It’s by far the best browser I’ve used. I tried LibreWolf but it doesn’t really like my Portuguese ABNT2 keyboard (which has accents I use often), even after disabling ResistFingerprinting, so I ended up sticking with WaterFox. On mobile, I use Fennec daily, and I’m worried that the end of “sideloading” on Android will likely mess with its installation. But I’m aware of how both browsers rely on upstream code from Mozilla Firefox, whose enshittification is already an ongoing phenomenon. And that’s really depressing when it comes to the future of the browser landscape, because we’re hoping for a true alternative. Servo is the last bastion of said hope (until it gets EEE’d by corporate interests, given how the Linux Foundation itself is increasingly surrounded by corpos).
I’m more of a GNU/Stallman person who values autonomy and libreness as non-negotiable principles. I’m only using Android because I’m stuck with it due to certain societal impositions (banks and gov apps); otherwise I’d long be using a custom phone, which wouldn’t even run Linux, but something way more “unorthodox” for a phone such as FreeBSD or Illumos/OpenIndiana, systems I’ve already used in a PC environment and got quite fond of.
We’ve seen open-source projects require developers to sign some kind of NDA before being allowed to contribute code.
you mean the DCOs? those are nothing like NDAs. if not, which ones do you mean?
Someone could theoretically fork ladybird and strip out the AI, but it would be a lot of work.
Not that much, there’s a git log, just find when they started doing AI and fork from just before then
it’s not that they are adding AI to the browser. it’s that they are letting an AI develop the browser
they are letting an AI develop the browser
The port from C++ to Rust was assisted by an LLM. That’s different from “ChatGPT, write me a web browser”.
I cannot speak to the quality of that code, nor am I advocating for it; I merely want to make that distinction.
Andreas Kling’s ladybird? Don’t wanna touch that with a 10ft pole.
https://hyperborea.org/reviews/software/ladybird-inclusivity/
If any creator can separate work from personal life, and the product is good, I really couldn’t care less about what they use their own time for.
I’m pretty sure you could find people with other unsavoury opinions in the dev teams for both Chrome and Firefox. What then? Lynx?
Hyperborea.org: is that owned by a racist, or is the domain meant to be making fun of that?
Based on the content of that article alone (especially near the bottom), I don’t think it’s run by a racist at all. Can confirm via their social media presence.
Weird. I would not feel comfortable using that domain. I thought I was reading a neo-Nazi’s blog on the issue.
Can you explain why you feel that way? “Hyperborea” is not a term I’m familiar with vis a vis Nazism.
It’s a very common meme with neo-Nazis and white nationalists. They are the only ones I see mention Hyperborea. This explains the usage, if you skip over the explanation of what Hyperborea is: https://knowyourmeme.com/sensitive/memes/hyperborea
The author explains why he chose the name on his site. I don’t think this is a neo-nazi / white nationalist thing despite the irony.
Yeah, he picked it back in the 2000s and it’s aged badly. Not his fault.
Sigh of course it’s a Nordic thing. I should have guessed. White nationalists also love other Heathen/Norse symbolism.
Good to be careful, so thanks for educating me.
Thanks for the reference!
Good grief, the irrational ai kneejerk hate in this community is insane. This seems like a perfect use case - a code base with good test coverage and well defined output expectations, where a human has guided the translation and checked the results. The human in question has saved a lot of time. And still all the comments are “hurr durr slop amirite”. SMH fucking head.
He’s not even using the Rust part either. It’s still using C++. It’s just being developed on the side.
I think it’s important to note that human review != guaranteed quality; it’s case by case, and a lot of the time the nuance and potential problems don’t appear until the code has been fiddled with manually, in my experience.
Basically, I’m saying it depends on how it was used, but my hunch is that using AI for a new language for production use, without an expert to help, is a bad use case, since the newcomer has no idea what nuances exist. Unless that’s not the case here.
Nah.
no u
yeah, this part of the community is as insufferable as the overly enthusiastic vibe coders
And the people complaining about “insufferable” critics of AI, and assuming all criticism is “knee-jerk”, are themselves contributing to the cycle of negativity.
no shit, pick any post that mentions AI here and see who are the first ones to comment every single time.
If you don’t want a “cycle of negativity”, maybe let’s not throw rocks at anything for just bringing up the topic?
How is acknowledging this herd behavior contributing to it? Some of us are also tired of reading the same crap in the comments.
I’ve suddenly lost all interest in this browser’s development. From what I’ve heard, LLMs are pretty bad at generating Rust code for some reason. If they used an LLM to bulk-convert C++ code to Rust, the quality of the code is questionable at best.
Surely you read the article?
"The requirement from the start was byte-for-byte identical output from both pipelines."
The bytecode from C++ is identical to the Rust output.
I don’t think it’s possible to write Rust code that compiles to the exact same binary as C++. Compilers make different optimizations and produce an overall different structure, especially across languages.
I think they meant the rust library produces the same output from the same input as the c++ library.
If LLMs indeed generate worse Rust code than for other languages, that’s not that big of a problem, because the compiler will catch a lot of mistakes. If it compiles, it will run, with no memory safety bugs unless unsafe is also used. The LLM could pick the wrong functions for some uses, but that should be caught relatively easily with testing, which can be partly automated.
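For what it’s worth, the “identical output from both pipelines” guarantee discussed above is classic differential testing: feed the same input to both implementations and fail loudly if the bytes differ. A minimal sketch of the idea; the function names and the toy “bytecode” here are made up for illustration and are not Ladybird’s actual API.

```rust
// Toy stand-ins for the two compiler pipelines; the real ones would emit
// JavaScript bytecode. Both must produce byte-for-byte identical output.
fn cpp_pipeline(source: &str) -> Vec<u8> {
    source.bytes().map(|b| b.wrapping_add(1)).collect()
}

fn rust_pipeline(source: &str) -> Vec<u8> {
    source.bytes().map(|b| b.wrapping_add(1)).collect()
}

fn main() {
    let inputs = ["function add(a, b) { return a + b; }", "let x = 42;"];
    for input in inputs {
        let a = cpp_pipeline(input);
        let b = rust_pipeline(input);
        // Byte-for-byte comparison: any divergence between the ports fails here.
        assert_eq!(a, b, "pipelines diverged on: {input}");
    }
    println!("all {} inputs produced identical output", inputs.len());
}
```

The strength of this approach is that it checks behavior, not style: the Rust code can look like translated C++ and still be verified equivalent across a large corpus of inputs.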
The things people criticize are so fucking brainless these days. AI this, slop that.
Not a single one of you made fun of “let’s rewrite it in Rust.” You can’t even elevate to the level of mildly funny parroting.
“Let’s stop halfway through our multi-year project to rewrite it in another language” is peak nerd shiny distraction. I say this as one who resists the urge every day. Way to delay your project by several more years, clown.
All things considered the way they’re approaching the migration is fine enough - they’re only moving specific portions at a time, they’re not stopping C++ development, and they’re making sure it doesn’t introduce regressions. Adopting a memory-safe language for something like a browser makes sense because it completely eliminates that class of vulnerabilities.
The problem is the way they’re approaching the code itself. From their wording, it sounds like they’re relying on AI heavily for both writing and reviewing the code. Rust has a steeper learning curve than most languages and is very different from C++. They even mention in the blog that their current Rust code looks like C++ code ported over. If they don’t take the time to actually learn Rust before adopting it, it’ll just lead to security logic issues that their AI couldn’t catch because C++ and Rust don’t always behave the same way. And that’s completely ignoring all of the other ethical/technical issues with AI
Be that as it may, the time to choose Rust was at the beginning. It existed then, but they made their technology choice. Continuing to develop in C++ while doing the migration just means more throwaway code and duplicated effort. This decision is truly the worst of both worlds.
Gross
Every minute that passes, Gemini (not the Google one!) looks more viable, which is already a shame because as I described in lemm.ee before it went down, that itself feels like “Gopher but in the format of a brutalist buttplug”.
What we need is some sort of return to HTML + CSS 1.0, or a web engine that simply ditches JS, so that development can be tackled by individuals again.
Separation of server styles, server markup and client styles is definitely something Gemini lacks, not having server styles at all.
But it’s not as much a problem of browsers as it is of the environment in which information is shared and propagated. While we still connect to websites using a browser, those websites will behave however their owners wish, inflating web standards and requiring complex browsers.
I was dreaming of something like “hypertext Usenet” and writing descriptions of another system I was interested in trying to build. I’m still not even close to that, and I’m not sure I’m still interested, because it appears NOSTR now has much of what I wanted in its standards.
Basically if you imagine a system for propagating posts addressable by ids and with markup inside, referring to styles and containing hyperlinks by ids to other posts, you can throw away the idea of a website, and still have the hypertext web. That markup can be anything, while the URLs in the links leading to images and such (and other pages) are using those ids or are at least Blossom-compliant.
Of the new protocols, I think NOSTR is the one most likely to eventually attain such functionality. People here wouldn’t like it, I suppose, because of the huge overlap with the Bitcoin community and because most clients and client libraries are for the web. But there’s now a C client library, functional enough, and architecturally NOSTR is worlds above the thinking of the designers of Lemmy, for example.
https://github.com/DioxusLabs/blitz - This kinda covers the “engine without JS” part; it’s a bit more limited, and still in alpha. It’s part of the Dioxus project; they use it for rendering UI on desktop.
Oh now that’s interesting. I’ve always wondered (not knowing much about how they’re made) if it would be possible to make a desktop compositor for Linux that simply uses HTML and CSS. One could even natively embed local pages in it.
Gnome Shell uses JS and CSS (I believe, anyway), and the Windows 11 start menu is supposedly React Native, so I think the answer is definitely yes.
Gnome shell uses JS
I knew there was a reason why it sucked so much!