Where? Let's take a random example: https://hub.docker.com/hardened-images/catalog/dhi/traefik
Ok, where is the source? Open source means I can build it myself, maybe because I'm working in an offline/airgapped/high compliance environment.
I found a "catalogue" at https://github.com/docker-hardened-images/catalog/blob/main/... but this isn't a build file, it's input to some... specialized DHI tool? Nothing at https://github.com/docker-hardened-images shows me docs for building it myself, or any sort of "dhi" tool.
Vanilla docker/buildkit works just fine; we use it in Stagex with plain makefiles and Containerfiles, which makes it super easy for anyone to reproduce our images with identical digests and audit the process. The only non-default thing we do is have Docker use the containerd backend that ships with Docker distributions, since that allows deterministic digests without pushing to a registry. This lets us have the same digests across all registries.
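For anyone who wants to try that containerd image store themselves, enabling it is a small daemon config change. This is a generic sketch of the documented Docker feature, not Stagex's exact setup; the daemon.json path assumes a standard Linux install:

```shell
# Enable Docker's containerd image store via the daemon config.
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "features": {
    "containerd-snapshotter": true
  }
}
EOF
sudo systemctl restart docker

# Verify: the driver status should mention the containerd snapshotter.
docker info -f '{{ .DriverStatus }}'
```

Note that switching image stores hides images built under the old store until you switch back, so it's worth doing on a clean host first.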
Additionally, our images are actually container native, meaning they are "from scratch" all the way down, avoiding any trust in upstream build systems like Debian or Alpine, their non-deterministic package management schemes, or their single-point-of-failure trust in individual maintainers.
We will also be moving to LLVM-native builds shortly, removing a lot of the complexity of multi-arch images for build systems. Easily cross-compile all the things from one image.
Honestly we would not at all be mad if Docker just white labeled these as official images as our goal is just to move the internet away from risky and difficult to audit supply chains as opposed to the "last mile" supply chain integrity that is the norm in all other solutions today.
A big part of this for us is transparency. That’s why every image ships with VEX statements, extensive attestations, and all the metadata you need to actually understand what you’re running. We want this to be a trustworthy foundation, not just a thinner base image.
We’re also extending this philosophy beyond base images into other content like MCP servers and related components, because the more of the stack that is verifiable and hardened by default, the better it is for the ecosystem.
A few people in the thread asked how this is sustainable. The short answer is that we do offer an enterprise tier for companies that need things like contractual continuous patching SLAs, regulated-industry variants (FIPS, etc.), and secure customizations with full provenance and attestations. Those things carry very real ongoing costs, so keeping them in Enterprise allows us to make the entire hardened catalog free for the community.
Glad to see the conversation happening here. We hope this helps teams ship software with a stronger security posture and a bit more confidence.
Don't you personally feel disgust mentioning AI stuff?
Yeah, I realize it is mandatory to mention AI today in every piece of communication of any company; but on a personal level, isn't that something that requires a bit of dying every time?
To the point that Red Hat created Podman, which can do what you want.
With Bitnami discontinuing their offer, we recently switched to other providers. For some software we are using a Helm chart, and this new offer provides some Helm charts, but for other software just the image. I would be interested in giving this a try, but e.g. the Python image only offers various '(dev)' variants while the guide mentions the non-dev images. So this requires some planning.
EDIT: Digging deeper, I notice it requires a PAT, and a PAT is bound to a personal account. I guess you need the enterprise offering for organisation support. I am not going to waste my time contacting them about an enterprise offer for a small start-up. What is the use case for CVE-hardened images that you cannot properly run in CI/CD, only on your dev machine? Are there companies that need to follow compliance rules or need this security guarantee but don't have CI/CD in place?
The enterprise hardened images license seems to be a different offering for offline mirroring or more strict compliance…
The main reason for CVE-hardened images is that it's hard to trust individuals to do it right at scale, even with CI/CD. You're having to wire together your own scan-and-update process. In practice teams will use pinned versions, delay fixes, turn off scanning, etc. This is easy mode.
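The scan-and-update wiring referenced above is what hardened-image vendors are replacing. As a rough sketch of the do-it-yourself version in a CI step (image name and severity gate are illustrative, with Trivy shown as one common scanner):

```shell
# Hypothetical CI step: build, then fail the pipeline on serious CVEs.
# Registry name and severity threshold are assumptions, not from the thread.
IMAGE="registry.example.com/myapp:${GIT_SHA:-latest}"

docker build -t "$IMAGE" .

# trivy exits non-zero when findings at/above the given severities exist,
# so the push below never runs for a vulnerable image.
trivy image --exit-code 1 --severity HIGH,CRITICAL "$IMAGE"

docker push "$IMAGE"
```

The commenter's point is that every team re-implements some variant of this, and it quietly decays (pinned bases, ignored alerts) unless someone owns it.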
1. 'generous' initial offering to establish a userbase/ecosystem/network-effect
2. "oh teehee we're actually gonna have to start charging for that sorry we know that you've potentially built a lot of your infrastructure around this thing"
3. $$$
Our view is that this was largely a marketing maneuver by Docker aimed at disrupting Chainguard’s momentum.
The deeper issue in the container security space is a lack of genuine innovation. Most offerings are incremental (and often inferior) variations on what Chainguard has already proven.
When Chainguard's funding round last February drew significant industry attention, it triggered a rush into "secure images" as a category. We know because VCs have been reaching out to us incessantly. That, in turn, pushed Bitnami to attempt monetization of what had historically been free images, and Docker to offer free images to fill the vacuum Bitnami left.
We were monitoring Docker closely and suspect that after their "Docker Hardened Images" splash they realized it was a lot harder to sell into the industry than they had expected.
The reason source code is rarely shared in this space is straightforward: once it's open-sourced, a meaningful barrier to entry to the hardened image industry largely disappears.
Truthfully, at current prices you're 100% paying for quality of life. From all public pricing figures I've seen, it's cheaper to build hardened images in-house than to buy from a vendor.
Our offering at VulnFree is technically priced below the cost to build in-house, but our real value add is meeting dev teams where they are with our custom hardened images.
Meanwhile, nix has already packaged more software than any other distro, and the vast majority of its software can be put into a container image with no additional dependencies (i.e. "hardened" in the same way as these are) with exactly zero extra work per package.
The nixpkgs repository already contains the instructions to build and isolate outputs, there's already a massive cache infrastructure setup, builds are largely reproducible, and docker will have to make all of that for their own tool to reach parity... and without a community behind it like nix has.
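As a sketch of what that zero-extra-work path looks like, nixpkgs ships a `dockerTools` helper that turns any package closure into an image. The package (`hello`) and attribute values here are illustrative, and this requires a working Nix install:

```shell
# Sketch: build a single-package container image with nixpkgs' dockerTools.
cat > image.nix <<'EOF'
{ pkgs ? import <nixpkgs> {} }:
pkgs.dockerTools.buildLayeredImage {
  name = "hello";
  tag  = "latest";
  # Only the closure of these packages ends up in the image: no distro base.
  contents = [ pkgs.hello ];
  config.Cmd = [ "/bin/hello" ];
}
EOF

nix-build image.nix -o result   # result is a tarball loadable by docker
docker load < result
```

The point being made in the thread is that this recipe is essentially identical for all ~100k nixpkgs packages, whereas per-image hardening pipelines need per-package work.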
Offering image hardening to custom images looks like a reasonable way for Docker to have a source of sustained income. Regulated industries like banks, insurers, or governmental agencies are likely interested.
Bait and switch once the adoption happens has become way too common in the industry.
It's what the people who created OG Docker are building now
Not a problem for casual users but even a small team like mine, a dozen people with around a dozen public images, can hit the pull limit deploying a dozen landscapes a day. We just cache all the public images ourselves and avoid it.
https://www.docker.com/blog/revisiting-docker-hub-policies-p...
> Is Docker sunsetting the Free Team plan?
> No. Docker communicated its intent to sunset the Docker Free Team plan on March 14, 2023, but this decision was reversed on March 24, 2023.
There's an excellent reason: They're login gated, which is at best unnecessary friction. Took me straight from "oh, let me try it" to "nope, not gonna bother".
Docker is just grasping at straws. Chainguard is worth more than Docker. This is just a marketing ploy (and it's clearly working, given the number of devs messaging me).
https://www.docker.com/blog/security-that-moves-fast-dockers...
Note: I work at Docker
This would be like expecting AWS to protect your EC2 instance from a postinstall script
Update the analogy to “like EC2 but we handle the base OS patching and container runtime” and you have Fargate.
There's a "Make a request" button, but it links to this 404-ing GitHub URL: https://github.com/docker-hardened-images/discussion/issues
oh well. hope it's good stuff otherwise.
We can harden that image for you. $800/img/mth for standard setups. Feel free to reach out on our contact form and our automations will ping our phones, so you can expect a quick response (even on weekends).
But, we pay for support already.
Nice from docker!
None of the alternatives come anywhere close to what we needed to satisfy a threat model that trusts no single maintainer or computer, so we started over from actually zero.
I've also noticed it's downloading many different versions of the same set of packages, which seems odd for bootstrapping a build. I finally lost patience and stopped it. Sure, in the real world I'll probably start from a stage3 container, but so far, trying it out for myself has been pretty disappointing.
For the shorter term, we are starting to archive at archive.org and CERN, and hope to have the fetch script able to fail over to those soon.
The GNU servers are the worst: unreliable for hours at a time, with lots of rate limiting.
At the moment collecting all the sources directly from upstreams, while great for trust building, is the biggest pain point. Sorry about that!
For the super short term join #stagex:matrix.org and anyone would be happy to wormhole you their "fetch" directory.
From scratch is ideal, distroless is great too
Then use firewalls around your containers as needed
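A minimal version of that "from scratch" pattern, assuming a statically linked Go binary (the toolchain tag and binary name are illustrative, not from the thread):

```shell
# Sketch: multi-stage build shipping only a static binary on a scratch base.
cat > Dockerfile <<'EOF'
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO disabled so the binary has no libc dependency and can run on scratch.
RUN CGO_ENABLED=0 go build -o /app .

FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF

docker build -t app:scratch .
```

With no shell, package manager, or libc in the final image, the attack surface (and the CVE scanner output) is reduced to the binary itself.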
Chainguard still has better CVE response time and can better guarantee you zero active exploits found by your prod scanners.
(No affiliation with either, but we use chainguard at work, and used to use bitnami too before I ripped it all out)
I work at Chainguard. We don't guarantee zero active exploits, but we do have a contractual SLA we offer around CVE scan results (those aren't quite the same thing unfortunately).
We do issue an advisory feed in a few versions that scanners integrate with. The traditional format we used (which is what most scanners supported at the time) didn't have a way to include pending information so we couldn't include it there.
The basic flow was: scanner finds CVE and alerts, we issue statement showing when and where we fixed it, the scanner understands that and doesn't show it in versions after that.
So there wasn't really a spot to put "this is present"; that was the scanner's job. Not all scanners work that way, though, and some just rely on our feed and don't do their own homework, so it's hit or miss.
We do have another feed now that uses the newer OSV format, in that feed we have all the info around when we detect it, when we patch it, etc.
All this info is available publicly and shown in our console, many of them you can see here: https://github.com/wolfi-dev/advisories
You can take this example: https://github.com/wolfi-dev/advisories/blob/main/amass.advi... and see the timestamps for when we detected CVEs, in what version, and how long it took us to patch.
Do with that knowledge what you may.
What about a safer container ecosystem without Docker?
Podman solved rootless containers and everything else under the sun by now.
All docker is doing is playing catch-up.
But guess what? They are obsolete. It's just time until they go the way of HashiCorp's Vagrant.
Docker is only making money off enterprise whales by now, and eventually that profit will dry up, too.
If you are still relying on docker, it is time to migrate.
I did work for a client recently where they were using Podman Desktop and developers are using Macbooks (Mx series).
They tried to run an amd64 image on their machines. When building a certain Docker image they had, it was segfaulting with a really generic error, and it wasn't specific to one RUN command: if you kept commenting out the failing line, the segfault would appear on the next one. The stack trace was related to Podman Compose's code base.
Turns out it's a verified bug with Podman with an open issue on GitHub that's affecting a lot of people.
I've been using Docker for 10 years with Docker Engine, Compose, Desktop, Toolbox, etc. and never once have I seen a single segfault, not even once.
You know what's interesting? It worked perfectly with Docker Desktop. Literally install Docker Desktop, build it and run it. Zero issues and up and running in 10 minutes.
That company got to pay me for a few hours of debugging and now they are happily paying clients for Docker Desktop because the cost for the team license is so low that having things "just work" for everyone is a lot cheaper than constant struggles and paying people to identify problems.
Docker Desktop is really robust, it's a dependable tool and absolutely worth using. It's also free until you're mega successful and are generating 8 figures of revenue.
Shouldn't be using podman compose. It's flimsy and doesn't work very well, and I'm pretty sure it doesn't have Red Hat's direct support.
Instead, activate Podman's Docker API compatibility socket, set your `DOCKER_HOST` env var to it, and use your usual Docker client commands such as `docker`, `docker compose`, and anything else that speaks the Docker API. Very few things don't work this way, and the ones that don't are advanced setups.
[0] https://github.com/containers/podman/blob/main/docs/tutorial...
[1] https://github.com/eriksjolund/podman-caddy-socket-activatio...
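Concretely, the socket setup described above looks roughly like this on a systemd-based Linux host (rootless; on macOS, `podman machine` exposes the socket differently, see the linked tutorials):

```shell
# Enable Podman's rootless Docker-compatible API socket (systemd user unit).
systemctl --user enable --now podman.socket

# Point Docker CLI tooling at it; /run/user/$UID is the usual runtime dir.
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"

# Plain docker / docker compose now talk to Podman instead of dockerd.
docker ps
docker compose up -d
```

This sidesteps Podman Compose entirely: the official `docker compose` plugin does the orchestration, and Podman only has to implement the API.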
Chainguard came to this first (arguably by accident, since they had several other offerings before they realized that people would pay (?!!) for an image that reported zero CVEs).
In a previous role, I found that the value of this for startups is immense. Large enterprise deals can quickly be killed by a security team that replies with "scanner says no". Chainguard offered images that report 0 CVEs and would basically remove this barrier.
For example, a common CVE that I encountered was a glibc High CVE. We could pretty convincingly show that our app did not use this library in a way that made it vulnerable, but it didn't matter. A High CVE is a full stop for most security teams. Migrated to a Wolfi image and the scanner reported 0. Cool.
But with other orgs like Minimus (founders of Twistlock) coming into this it looks like its about to be crowded.
There is even a govt project called Ironbank to offer something like this to the DoD.
Net positive for the ecosystem but I don't know if there is enough meat on the bone to support this many vendors.
https://docs.docker.com/dhi/features/#dhi-enterprise-subscri...
Paying for something "secure" comes with the benefit of risk mitigation: we paid X to give us a secure version of Y, hence it's not our fault "bad thing" happened.
I recall being an infra lead at a Big Company that you've heard of and having to spend a month working with procurement to get like 6 Mirantis / Docker licenses to do a CCPA compliance project.
The question I'd be interested in is, outside of markets where there's a lot of compliance requirements, how much demand is there for this as a paid service...
People like lower-CVE images, but are they willing to pay for them? I guess that's an advantage of Docker's offering: if it's free, there is less friction to trying it out compared to a commercial offering.
That includes anyone who wants to sell to the US government (and probably other governments as well).
FedRAMP essentially[1] requires using "hardened" images.
[1]: It isn't strictly required, but without it, things like passing security scans and FIPS compliance are more difficult.
Note that you don't have to be DoD to use Iron Bank images. They are available to other organizations too, though you do have to sign up for an account.
Some images, like Vault, are pretty bare (e.g. no shell).
My company makes its own competing product that is basically the same thing, and we (and I specifically) were pretty heavily involved in early Platform One. We sell it, but it's basically just a free add-on to existing software subscriptions, an additional inducement to make a purchase; it costs nothing extra on its own.
In any case, I applaud Docker. This can be a surprisingly frustrating thing to do, because you can't always just rebase onto your pre-hardened base image and still have everything work without taking some care to understand the application you're delivering, which is not your application. It was always my biggest complaint with Ironbank and why I would not recommend anyone actually use it. They break containers constantly, because hardening to them just means copying binaries out of the upstream image into a UBI container they patch daily to ensure it never has any CVEs. Sometimes this works, but sometimes it doesn't, and the failures are fairly predictable: every time Fedora takes a new glibc version that RHEL doesn't have yet, everything that links against it starts segfaulting when you try to copy from one to the other. I've told them this many times, but they still don't seem to get it and keep doing it. Plus, they break tags with the daily patching of the same application version, and you can't pin to a sha because Harbor only holds onto three orphaned shas that are no longer associated with a tag.
So short and long of it, I don't know about meat on the bone, but there is real demand and it's getting greater, at least in any kind of government or otherwise regulated business because the government itself is mandating better supply chain provenance. I don't think it entirely makes sense, frankly. The end customers don't seem to understand that, sure, we're signing the container image because we "built" it in the sense that we put together the series of tarballs described by a json file, but we're also delivering an application we didn't develop, on a base image full of upstream GNU/Linux packages we also didn't develop, and though we can assure you all of our employees are US citizens living in CONUS, we're delivering open source software. It's been contributed to by thousands of people from every continent on the planet stretching decades into the past.
Unfortunately, a lot of customers and sales people alike don't really understand how the open source ecosystem works and expect and promise things that are fundamentally impossible. Nonetheless, we can at least deliver the value inherent in patching the non-application components of an image more frequently than whoever creates the application and puts the original image into a public repo. I don't think that's a ton of value, personally, but it's value, and I've seen it done very wrong with Ironbank, so there's value in doing it right.
I suspect it probably has to be a free add-on to some other kind of subscription in most cases, though. It's hard for me to believe it can really be a viable business on its own. I guess Chainguard is getting by somehow, but it also kind of feels like they're an investor darling getting by on the reputations of its founders based on their past work more than the current product. It's the container ecosystem equivalent of selling an enterprise Linux distro, and I guess at least Redhat, SUSE, and Canonical have all managed to do that, but not by just selling the Linux distro. They need other products plus support and professional services.
I think it's a no-brainer for anyone already selling a Linux distro to do this on top of it, though. You've already got the build infrastructure and organizational processes and systems in place.
I've been in contact with some of the security folks at Iron Bank. The last time we dug into Iron Bank images, they were simply worse than what most vendors offered. They just check the STIG box.
I'm not sure if Chainguard was first, but they did come early. The original pain point we looked into when building our company was pricing, but we've since pivoted because there are significant gaps in the market that remain unaddressed.