The fix consists of implementing an XXX comment that has been present since the code was added:
/*
* XXX validate that domain name only contains valid characters
* for two reasons: 1) correctness, 2) we do not want to pass
* possible malicious, unescaped characters like `` to a script
* or program that could be exploited that way.
*/
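A minimal sketch of the kind of check that comment asks for (a hypothetical helper, not the actual patch, which is linked below): restrict the name to the characters a hostname may legitimately contain before it is ever handed to an external script.

    #include <ctype.h>
    #include <stdbool.h>
    #include <string.h>

    /*
     * Hypothetical validator: accept only letters, digits, '-' and '.'
     * (the classic letters-digits-hyphen rule for DNS names). Anything
     * else -- backticks, '$', ';', whitespace -- is rejected before the
     * name can reach resolvconf(8) or any other script.
     */
    static bool
    dname_is_safe(const char *name)
    {
        size_t len = strlen(name);

        if (len == 0 || len > 255)
            return (false);
        for (size_t i = 0; i < len; i++) {
            unsigned char c = (unsigned char)name[i];

            if (!isalnum(c) && c != '-' && c != '.')
                return (false);
        }
        return (true);
    }

The real patch may be stricter or structured differently; the point is that the validation happens in rtsold itself, before any shell is involved.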
https://www.freebsd.org/security/patches/SA-25:12/rtsold.pat...
They should probably go through their whole system and verify that there aren't more shell scripts being used, e.g. in the init system. Ideally a default distro would have zero shell scripts.
It doesn't mean getting away from scripting languages; it means getting away from shell scripts in particular (the parent poster said specifically "zero shell scripts"). If the script in question was written in Lua, or heck even Javascript, this particular issue most probably wouldn't have happened, since these scripting languages do not require the programmer to manually quote every single variable use.
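The same distinction shows up on the calling side. A hedged C sketch (the helper path is invented for illustration): handing an untrusted string to a shell for re-parsing is the problem; passing it as a plain argv element is not, and languages like Lua or JavaScript keep strings as plain values in much the same way.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* DANGEROUS: the network-supplied name is spliced into a shell
     * command line, so backticks, $(...), ';' and friends are
     * interpreted by /bin/sh. */
    void run_via_shell(const char *domain)
    {
        char cmd[512];

        snprintf(cmd, sizeof(cmd), "/libexec/dns-helper %s", domain);
        system(cmd);
    }

    /* SAFER: the name is a single argv element and no shell ever
     * parses it; the helper still has to treat it as data, but there
     * is no implicit re-parsing step for quoting to go wrong in. */
    void run_via_exec(const char *domain)
    {
        if (fork() == 0) {
            execl("/libexec/dns-helper", "dns-helper", domain,
                (char *)NULL);
            _exit(127);
        }
    }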
And cesarb is correct - the issue isn't scripts; it's shell scripts, especially Bash and similar. Something like Deno/Typescript would be a decent option for example. Nushell is probably acceptable.
Even Python - while a terrible choice - is a better option than shell scripts.
Probably never going to happen. There is a dearth of good scripting languages, and I would imagine any POSIX committee is like 98% greybeard naysayers who think 70s Unix was the pinnacle of computing.
Also the shell isn't even "the primary interface of your OS". For Linux that's the Linux ABI, or arguably libc.
Unless you meant "human interface", in which case also no - KDE is the primary interface of my OS.
This is an extremely naive take as are the rest of your comments. Any language in the wrong hands is error prone.
Talk about naive!
I've always believed sh, csh, bash, etc., are very bad programming languages that require excessive effort to learn how to write code in without unintentionally introducing bugs, including security holes.
> vulnerable to remote code execution from systems on the same network segment
Isn't almost every laptop these days autoconnecting to known network names like "Starbucks" etc., because the user used it once in the past? That would mean that every FreeBSD laptop in proximity of an attacker is vulnerable, right? Since the attacker could just create a hotspot with the SSID "Starbucks" on their laptop and the victim's laptop would connect to it automatically.
Joking, but not that much :)
As far as I know, access points only identify via their SSID. Which is a string like "Starbucks". So there is no way to tell if it is the real Starbucks WiFi or a hotspot some dude started on their laptop.
Aka "unknown" or "public" Network....don't do that.
[1] except availability, we still can't get it right in setups used by regular people.
And when you connect to a non-public WiFi for the first time - how do you make sure it is the WiFi you think it is and not some dude who spun up a hotspot on their laptop?
Take eduroam, which is presumably the world's largest federated WiFi network. A random 20 year old studying Geology at Uni in Sydney, Australia will have eduroam configured on their devices, because duh, that's how WiFi works. But, that also works in Cambridge, England, or Paris, France or New York, USA or basically anywhere their peers would be because common sense - why not have a single network?
But this means their device actively tries to connect to anything named "eduroam". Yes, it is expecting to eventually connect to Sydney to authenticate, but meanwhile how sure are you that it ignores everything it gets from the network, even these low-level discovery packets?
Anyways, this feels like a big issue for "hidden" FreeBSD installs, like pfSense or TrueNAS (if they are still based on it though). Or for servers on hosting providers where they share a LAN with their neighbors in the same rack.
And it's a big win for jailbreaking routers :D
It's a coordination problem. Note that the original notion of resolvconf, IIUC, was that it was only stitching together trusted configuration data. That's no excuse, of course, for not rigorously isolating data from execution, which is more difficult in shell scripts--at least, if you're not treating the data as untrusted from the get-go. It's not that difficult to write shell code to handle untrusted data, you just can't hack it together without keeping this in mind. And it would be much easier if the resolver infrastructure in libc had a proper API for dealing with resolv.conf (and others), which could be exported by a small utility which in turn could be used to slice and dice configurations from shell scripts.
The problem with the new, alternative monoliths is they very quickly run off into the weeds with their crazy features and configuration in ways that create barriers for userland applications and libraries to rely upon, beyond bootstrapping them to query 127.0.0.1:53. At the end of the day, resolv.conf can never really go away. So the proper solution, IMO, is to begin to carefully build layers around the one part that we know for a fact won't go away, rather than walking away with your ball to build a new playground. But that requires some motivated coordination and cooperation with libc developers.
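A rough sketch of what such a small utility could look like (the tool itself is my invention, not something that exists today): it asks libc's own resolver for its parsed view of resolv.conf and prints it, so shell scripts can consume the result instead of parsing the file themselves.

    /* print-resolv.c -- dump libc's parsed view of resolv.conf.
     * Illustrative only; link with -lresolv on older glibc. */
    #include <sys/types.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <resolv.h>
    #include <stdio.h>

    int main(void)
    {
        if (res_init() != 0) {
            fprintf(stderr, "res_init failed\n");
            return 1;
        }
        /* IPv4 nameservers as parsed by the stub resolver
         * (IPv6 servers live in OS-specific extensions). */
        for (int i = 0; i < _res.nscount; i++)
            printf("nameserver %s\n",
                inet_ntoa(_res.nsaddr_list[i].sin_addr));
        /* Search list, already split into individual domains. */
        for (int i = 0; i < MAXDNSRCH && _res.dnsrch[i] != NULL; i++)
            printf("search %s\n", _res.dnsrch[i]);
        printf("ndots %u\n", (unsigned)_res.ndots);
        return 0;
    }

That's the kind of thin layer that could be grown around the file without anyone having to abandon it.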
Why not? And I don't mean this tongue-in-cheek, but as a genuine question: why not go the macOS/systemd route?
DNS is a complex topic. Much more complex than people admit, and one that definitely cannot be expressed fully with resolv.conf. I do agree that it is too late to get rid of it (and that was not my concern, actually), but it is too limited to be of actual use outside of the simple "I have a single DNS server with a single search domain" case. IMHO, a dedicated local daemon with its own bespoke config definitely has value, even if it solely provides a local cache for applications that don't have one already (i.e. most of them outside of browsers). And for more complex cases, simple integration with the network configuration daemon provides actual value, e.g. knowing that a specific server is reachable through a specific interface that has a specific search domain. That is, native routing of queries to the correct servers, avoiding the timeout dance as soon as you have split networks.
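For the split-network case, this is roughly what that per-interface routing looks like with systemd-resolved (interface names and addresses are invented for the example):

    # Queries under corp.example (and only those) go to the VPN's resolver
    resolvectl dns tun0 10.8.0.1
    resolvectl domain tun0 '~corp.example'

    # Everything else keeps using the LAN resolver
    resolvectl dns eth0 192.168.1.1

    # Inspect the resulting per-link configuration
    resolvectl status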
Also, for the local ad-hoc configuration part: we already have nsswitch, which is its own can of worms that pretty much nobody has ever heard of, let alone touched its configuration. Heck, I've written DNS servers but only looked once at nsswitch. resolved's configuration is integrated into the systemd ecosystem, approachable and well documented, and pretty useful in general.
Anyways, the main gripe I had was not really with the mess that is DNS on Linux, but with the general stance in the UNIX-like world against anything that's not a lego of shell scripts because "that's not the unix philosophy". Yeah, you can write an init system fully in sh and have its "units" also all be written in sh, but oh lord has stuff like systemd improved the situation for the init + service part. Having a raw string from a network packet land in a shell script is a recipe for disaster, seeing how famously difficult quoting in shell scripts is.
> The problem with the new, alternative monoliths is they very quickly run off into the weeds with their crazy features and configuration
Agreed on the crazy features. systemd is a godsend for the modern Linux world, but I'm skeptical when I see the likes of systemd-home. Yet the configuration is not where I'd pick at those systems, because they tend to be much more configurable. They are opinionated, yes, but the configuration is an actual configuration and not a patchwork of shell scripts somewhere in /etc, when they're not direct patches to the foundational shell scripts!
> in ways that create barriers for userland applications
How so? In the specific example of resolved, I'd argue it's even less work for applications, because they don't need to query multiple DNS servers at once (it'll handle it for them), don't need to try resolution with and without search domain, etc.
In the end, I find that resolved's approach of symlinking its stub resolv.conf is the most elegant approach for our current setups.
PS: I talk a lot about resolved because it's the one I know best, not the one I think is the best! It has loads of shortcomings too, yet it's still a net improvement over whatever was in place before.
So far tailscale magicdns just works on FreeBSD.
I second that systemd is great, for services. Anything beyond that? Just a gargantuan opaque buggy overreach.
resolv.conf is limited, but it's also been highly stable for decades, and it's sufficient if not ideal for controlling how getaddrinfo works (at least for on-the-wire requests), including controlling things like EDNS0, parallel requests, etc. Most if not all libc resolvers support things like parallel querying and other simple knobs which are configurable (if at all--see musl libc) through resolv.conf, demonstrating that it's expressive enough for most if not all common requirements shared among various client-side stub resolvers.
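As an illustration, a plausible resolv.conf exercising those knobs (addresses are documentation placeholders; `single-request` is glibc-specific, the rest are widely supported):

    # illustrative values only
    nameserver 192.0.2.53
    nameserver 2001:db8::53
    search example.net corp.example.net
    # shorter timeouts, EDNS0, and (on glibc) sequential rather than
    # parallel A/AAAA queries for middleboxes that choke on the latter
    options timeout:2 attempts:2 edns0 single-request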
> And for more complex cases, simple integration with the network configuration daemon
But which one? Are you suggesting integration by way of loading its configuration(s) (which puts us back at square one), or by a modified query protocol, or by interfacing with the broader but even more diverse native configuration systems? None of the options seem remotely practical from the perspective of most open source projects, unless they're specifically targeting a single environment like Linux/systemd/resolved. I don't see a viable pathway to make that happen. By contrast, embracing and hopefully improving resolv.conf as an integration point could be done piecemeal, environment by environment. The syntax is already effectively universal across systems, with the options directive providing most of the knobs. We could even make an initial push through POSIX by officially standardizing the syntax, which may even convince musl libc to make its resolver actually configurable.
> In the specific example of resolved, I'd argue it's even less work for applications, because they don't need to query multiple DNS servers at once (it'll handle it for them), don't need to try resolution with and without search domain, etc.
Yes, in most cases it's sufficient for userland applications to just make simple requests to the locally managed resolver service defined in resolv.conf. But the number of cases and projects needing more control over how they do their requests, using their own resolvers, only grows, especially with the proliferation of DNS schemes--see, e.g., the various recent HTTP-related DNS records, which often require multiple queries and can benefit from parallel queries managed internally. A prime example is getaddrinfo itself, some implementations of which do parallel queries for A/AAAA lookups. Which brings us back to my main point: resolv.conf is the only common centralized point across almost all environments (Windows being the major exception) for configuring basic DNS services.
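For reference, a minimal example of the getaddrinfo pattern mentioned above: one AF_UNSPEC call, and the stub resolver is free to issue the A and AAAA queries in parallel behind it (host and port are placeholders).

    #include <netdb.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int main(void)
    {
        struct addrinfo hints, *res, *ai;
        char host[1025];

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;      /* both A and AAAA */
        hints.ai_socktype = SOCK_STREAM;

        int err = getaddrinfo("www.example.com", "443", &hints, &res);
        if (err != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return 1;
        }
        for (ai = res; ai != NULL; ai = ai->ai_next) {
            /* Print each returned address in numeric form. */
            if (getnameinfo(ai->ai_addr, ai->ai_addrlen, host,
                    sizeof(host), NULL, 0, NI_NUMERICHOST) == 0)
                printf("%s\n", host);
        }
        freeaddrinfo(res);
        return 0;
    }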
I'm not arguing for improving resolv.conf integration as a way to replace local DNS services or their configuration. Just that for decades the staleness of resolv.conf has been a conspicuous and growing pain point from both a system configuration and userland integration perspective, and a little coordinated love & attention across the ecosystem, if only firmly committing to what's already there (especially for glibc and FreeBSD) as a reliable and more easily leveraged source of truth for code that needs it, would go a long way.
> IPv6 users that do not configure the system to accept router advertisement messages, are not affected.
Maybe I'm missing something, but isn't that a workaround?
"PC or computers or hardware that uses OS that consume FreeBSD, has a faulty software for the router's firmware?"
"The router's software performs ad distributions?"
"The version of internet, the router uses, is updated, whereas, the target machine, or the user's machine is still running a old version"
"The security patch works for the modern but not the precursor version?"
"This leaves older systems obsolete in the market?"
"is this a step-by-step instructions to business owners to introduce new products, selling that older products are obsolete" ?
If you are a real human, the most interesting question you're bringing up is "What about all the appliances backed by FreeBSD?" Yes, they are obsolete if they use IPv6 and accept RAs and if they don't get updates.
Google tracks IPv6 adoption at almost 50% globally and over 50% in the USA (https://www.google.com/intl/en/ipv6/statistics.html)
IPv6 is mainstream.
It truly is a bad one, but I really appreciate Kevin Day for finding/reporting this, and all the volunteer work that went into fixing it.
All I had to do was "freebsd-update fetch install && reboot" on my systems and I could continue my day. Fleet management can be that easy for both pets and cattle. I do however feel for those who have deployed embedded systems. We can only hope the firmware vendors are on top of their game.
My HN addiction is now vindicated, as I would probably not have noticed this RCE until after Christmas.
This makes me very grateful and gives me a warm fuzzy feeling inside!
You should go into comedy, this would kill at an open mic!
As for noticing it quickly, add `freebsd-update cron` to crontab and it will email you the fetch summary when updates are available.
Always makes sense to subscribe to the security-announce mailing list of major dependencies (distro/vendor, openssh, openssl etc.) and oss-security.