I like that it's a bash script only - there was this bash-based Docker reimplementation, so maybe these two fit well together?
But come on, how many levels of abstraction do we want to pile on top of one another? Namespaces, containers, Dockerfiles, compose, swarm, k8s, helm, cloudfront, terraform, auth servers/OAuth, ad-hoc "REST" auth kludges, dynamic languages, distro and language package managers/ecosystems, blabla - because IT'S SO MUCH EASIER LOL. All of which are one-of-a-kind snowflake solutions to self-inflicted problems on top of an OS that is already highly portable, for good reasons. We're just kicking the can down the road, or alternatively, blowing up cyclical/generational bubbles to trap freshmen and idiots. Cloud tooling seems like a zero-sum game at this point, wasting talent so that three "cloud providers" can make loads of cash. The whole thing is antithetical to humanist, local-first, site-autonomy principles, in the name of growth for the very few.
The problem, imo, is the non-openness and (too) quick development of cloud services. Cloud IaaS is genius and a necessary step to increase efficiency. But imagine an open-source standard for these services, instead of BigQuery, Lightsail, bla bla, and I'm sure you wouldn't need 50% of the "tools" and bloat that are around today..?
> there was this bash-based Docker reimplementation
https://github.com/p8952/bocker? Last commit was 7 years ago, and even then this was more of an experiment / PoC - I really don't think this was ever meant as a viable replacement. (IMHO Bash is a horrible language as well.)
I agree with your sentiment on container/cloud tooling. It doesn't need to be this complicated, the needs of the 1% are dictating the experience for the 99%. However Docker (the basic CLI interface) and Dockerfile (the format) did a lot to bring containers to the mainstream, and you can still get quite a lot out of it by just sticking to the basics. For self-hosting simple services, or even just deploying your application, on plain old VPS/box-under-the-desk, it's still just plain brilliant; at least compared to loading files into a shotgun and aiming in the general direction of /opt, /usr/local, /home/app, /etc.
So, if you would, say, spend 10 minutes installing `dock`, I guarantee you'd love how uncomplicated it makes everyday container usage. It doesn't get much simpler than a single four-letter command with no arguments.
I don't see the point of evading Dockerfiles. All of these customizations seem to me like they're just doing what you would do in a Dockerfile when creating a base image, but in a way that's unfriendly to people not using Dock.
From reading, there seems to be planned support for VMs, which is where evading Dockerfiles clicks for me; but when creating VMs you have the same thing. The VM equivalent of this (in my mind) already exists as Multipass, where you pass a cloud-init file for configuration.
I think it's cool for playing around, but if I could create a Dockerfile in the project's directory, as simple as "FROM ubuntu:latest", and when I run Dock it takes this Dockerfile and builds a new image on top of it (with all these customizations, dropping me into a shell) - that is where this would shine for me. It may just be a workflow thing, but if I am building a container, I am already writing Dockerfiles. They're not just for Docker; many, many tools use them for containers more generically.
The painful part when building a container is when I am doing something decently involved (e.g. implementing a proprietary software daemon & driver for a German camera manufacturer that has to have custom udev rules and many, many customizations) and just want to extend my natural terminal environment into that space to explore what's up. Then, when I am done, I should be able to run docker build and get the clean, bare container. It's very rare that I just need an arbitrary container that will go away when working on a project. Periodically I may spin up an ubuntu pod or something, but it goes away fast and generally lives for less than 10 commands.
With that being said, I don't mean to poo-poo, as I do really like the aesthetic, the documentation is beautiful, and the thought put into it. I just don't see where it fits into my workflow (or that of many people I know), pretty much strictly because of the Dockerfile thing. Some of the other tools look cool though, so I threw a buck or two in the hat :)
Thank you. Yeah, I guess it's a matter of what you're used to. For me personally, I've never gotten used to Dockerfiles. It seemed like another configuration file format invented for nothing. With rules I had to remember. Entrypoint? Ugh. So I've just done everything in Bash, made it imply which container/image I want from $PWD and it saved me a ton of time. And, actually, I do tend to create/destroy containers quite often because of how easy it is (no editing files or even providing cli arguments to `dock`).
As for supporting VMs - you misunderstood. I meant support for other container engines. To me, the most notable would be BSD jails. In fact, if I get to it, I'd like to do something much cooler... Perhaps it would make sense to have a FreeBSD VM running on a machine and then multiple BSD jails running inside - all managed by a relatively small tool-set, such as `dock` - I'd probably have to rename or fork it. But the cool part of it all is that containers wouldn't even depend on the host architecture, since they'd be running inside a VM.
Maybe I am missing the use case, but a dockerfile seems so useful for having an easily readable declaration of what exactly is inside the container. If I need to upgrade a package at a later date, I can simply change a line and rebuild the container. The same goes if I need to remove something; just delete the line in the dockerfile. You can copy things from one dockerfile to another. You can see how it has changed over time by checking it into git. You can have multiple people make changes to it and develop it over time. You can reorder things. You can build containers from other containers, etc.
With this tool, how do you even know what has been installed in the container? How do you share them, or make changes, or update them? Is this just for personal use where you don't care about sharing your work?
That's the point. This tool is not for teams. It's for your personal development needs. I often find myself working on multiple projects which require roughly the same environments. Say, I may need a container with PHP for multiple projects, but each would differ slightly in some way. Maybe some configuration file, maybe something else. I would just `cd` to the project's dir, type `dock php8` and it would create a container based on the "dock/php8:stable" image. Then I'd install whatever else I need. I wouldn't need to bother editing a Dockerfile. I would get a nice shell immediately as I type the command. And I wouldn't need to remember that I'm running a container (even if it was stopped), because if I go to the same directory and type `dock` (no arguments), it automatically starts/connects to the right container.
I've tried using Dockerfiles with teams. Over time it gets absolutely ugly, because you have to keep track of yet another configuration file collectively. It's all fun and games until someone creates a branch with an updated Dockerfile, you check it out, it alters your container and then you go back to master only to find out nothing works. That's just one example of madness.
We cannot rely on containers as a collective development tool. They're not one; it was a dumb idea in the first place. Dockerfiles effectively make system configuration and setup a part of your vcs repo. And that is wrong. We might as well check the root filesystem into the repo altogether then.
Every developer works on a team by default: a team made up of you and future-you. Even for personal projects it is very important to document and commit this stuff.
I've never used Docker in production; there was no need for containers. We did use heavier VMs, but kept the environment and configuration such that they wouldn't need any special handling. That is to say, once a team member cloned the project(s), set up the environment on their machine and successfully made the project run, there wouldn't be any constant tweaking of the environment afterwards.
Even if you're running a lot of micro-services, your goal is NOT to make them impossible to run without hours and hours of environment setup. In fact, even though they are different micro-services, you'd probably want to make their environments similar, or the same where possible.
The purpose of containers for me personally is to isolate potential crap coming my way from various package managers, such as nodejs and, if I were to use them in production, to minimize a potential security breach impact.
But the idea that it's the same environment on the developer's machine and that we should deploy containers and not code, is absurd to me.
> It's all fun and games until someone creates a branch with an updated Dockerfile, you check it out, it alters your container and then you go back to master only to find out nothing works.
Ticks some boxes for me - the author prefers/works best in solo mode; otherwise such a change would be reviewed by team members before being merged into the master branch.
I think you're not seeing the actual point. The review isn't solving the issue. If you have two branches with alternating environments and you switch back and forth between them, how do you suppose your Dockerfile would handle it? What if that change isn't as simple as switching the Ruby version, but "compile and install some software" or some obscure config changes? I mean, it would be breaking all of the time, you'd end up having two containers anyway and you'd be scratching your head as to how to switch between them easily.
I'm pretty sure everyone prefers "check out branch, run build command" to transparently do the right thing rather than mental book-keeping of "does the state of my docker container match the Dockerfile of my feature branch"?
> Perhaps it would make sense to have a VM with FreeBSD running on a machine and then multiple BSD jails running inside
I was recently considering creating this specifically for FreeBSD on ARM, or at least getting FreeBSD working on QEMU+arm64 and running some jails in there.
I’ve been working with lima and colima on macOS for a Linux VM docker type wrapper and wanted a FreeBSD equivalent.
You could get something like this using a Bubblewrap script. I have one which marks everything but the current directory read only and drops me into a shell in that container.
So a similar idea would be to overlay mount your system for that container so you can edit, and then on exit the changes vanish. Might try this actually :)
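A minimal sketch of what such a Bubblewrap wrapper could look like (the flag set here is illustrative, not the commenter's actual script). The block only generates and syntax-checks the wrapper, so it runs even where the bubblewrap package isn't installed:

```shell
# Generate the sandbox wrapper; executing it requires bubblewrap (bwrap),
# but creating and syntax-checking it works anywhere.
cat > sandbox.sh <<'EOF'
#!/usr/bin/env bash
# Bind the whole filesystem read-only, then re-bind only the current
# directory read-write, and drop into a shell inside that namespace.
exec bwrap \
  --ro-bind / / \
  --dev /dev \
  --proc /proc \
  --tmpfs /tmp \
  --bind "$PWD" "$PWD" \
  --chdir "$PWD" \
  --unshare-pid \
  /bin/bash
EOF
chmod +x sandbox.sh
bash -n sandbox.sh && echo "syntax OK"
```

Later binds override earlier ones, which is why the read-write `--bind` of `$PWD` can come after the read-only bind of `/`.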
This looks very interesting and I love the care that was put into avoiding more dependencies than strictly needed. Just make the program do what it needs to do and nothing else, and make the effort to do it portably and future-proof.
This gave me a feeling that each day is less and less easy to get: that this software was not just written, but carefully crafted.
> This website was written in pure html and css by hand, contains no Javascript, doesn't use cookies, doesn't track you and still works pretty damn well.
Definitely subscribing to the mentioned feeling. This is clearly an effort by the author to get away from superfluous layers, an ideal that is sadly getting lost in a world of prebuilt bloat. Props for that!
Indeed, you got the vibe just right. I wanted this tool-set to be time-proof. It's been working "on my machine" for 1.5 years. I had to do some serious clean-up and re-writing so that it started working on other people's machines, but once you get it running - it does not break. I've long noticed that things that are simple - such as tiny scripts or libraries I'd write - would outlive larger projects I would work on. So these days I only try to write software that can last. Bash isn't perfectly portable, to be honest (FreeBSD doesn't even have it by default, and MacOSX has an outdated version of it), but what else is there? I've recently refreshed my memory of Perl, wrote a few scripts with it and, somehow, it felt harder to deal with than Bash...
Thank you, typo fixed. I think there are more. I usually do try and proof-read, but there's never enough time.
Hardly a cross-platform solution like what the current Docker app offers - though that comes with tons of moving parts, where you're one upgrade away from breaking the whole installation, and then it's back to re-installing it or, in the worst case, wasting time digging down to fix what went wrong for just one distro.
> You may not be able to run this default image under MacOSX, although dock scripts themselves are fully compatible.
Developers using Windows are probably unable to run this script as well. So I guess they're no better off than with a 1-click install of Docker Desktop.
It seems the video I saw had more than 2 steps. And if it really is a 2-step installation, it's 50 steps back when it all breaks.
Author here: it actually doesn't break. I've been running it for 1.5 years. It's bash; there isn't much that can break. And it uses the Docker CLI with no hackery or experimental options.
The only thing I'm working on right now is a way to avoid building special images for MacBooks or ARM machines. Instead, there would be a patch-tool (a bash script, essentially) which would pull any image you want from Docker Hub and then run patches on it. Patches are also simple, readable bash scripts which work kind of like migrations for DBs. That way you'd quickly have a Dock-compatible image, and if you wanted a Python or a Ruby env, you'd use an extra set of included patches or write your own.
It sounds more complex than it actually is, honestly.
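The migrations-for-DBs analogy can be illustrated with a toy runner (everything here - file names, layout, the `.applied` ledger - is invented for illustration; it patches a plain directory rather than a Docker image, but the ordering and skip-if-applied logic is the same idea):

```shell
# Toy illustration of "patches as migrations": numbered bash scripts are
# applied in order, and applied ones are recorded so a re-run is a no-op.
mkdir -p patches root
printf 'echo patched-1 >> root/state\n' > patches/001_base.sh
printf 'echo patched-2 >> root/state\n' > patches/002_extra.sh

applied=root/.applied            # ledger of already-applied patches
touch "$applied"
for p in patches/*.sh; do        # glob expands in sorted (numeric) order
  if ! grep -qxF "$p" "$applied"; then
    bash "$p"                    # apply the patch
    echo "$p" >> "$applied"      # record it, migration-style
  fi
done
cat root/state                   # patched-1, then patched-2
```

In dock's case each patch would presumably run inside the pulled container instead of the current directory, with the result committed as a new image layer.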
Under Windows, WSL/WSL2 is the new (5-year-old) big thing, and it's integrated with Docker Desktop as well.
I've tried to run `dock` under WSL2: the install works fine, container creation works fine, but connecting through SSH fails, due to the fact that Docker is actually running in a different VM and is not directly accessible (ssh just times out).
It looks like some small modifications to the way ssh is exposed/connected to should make it work there as well.
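One plausible workaround (untested; the port number, image and user names are placeholders, not anything dock actually ships): publish the container's SSH port on localhost, which Docker Desktop forwards out of its VM, and ssh to that instead of the container's internal IP. The block only generates and syntax-checks the wrapper, so it runs without Docker present:

```shell
# Generate a wrapper; running it needs Docker Desktop, but creating and
# syntax-checking it works anywhere.
cat > wsl2-ssh.sh <<'EOF'
#!/usr/bin/env bash
# Publish the container's sshd on localhost:2222. Docker Desktop forwards
# published ports out of its hidden VM, so WSL2/Windows can reach them,
# unlike the container's internal IP address.
docker run -d --name dock-wsl2-demo -p 127.0.0.1:2222:22 my-sshd-image
ssh -p 2222 user@127.0.0.1
EOF
bash -n wsl2-ssh.sh && echo "syntax OK"
```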
On a side note - the bash scripting is very nicely done! But running it through `shellcheck` gives some hints on safer handling, e.g.
echo "StrictHostKeyChecking no" >> ~/.ssh/dock_config_$ssh_suffix
^---------^ SC2086: Double quote to prevent globbing and word splitting.
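For the record, the fix shellcheck is hinting at is just quoting the expansion. A tiny demonstration (the suffix value is made up, and a temp dir stands in for ~/.ssh):

```shell
# The unquoted original (flagged as SC2086):
#   echo "StrictHostKeyChecking no" >> ~/.ssh/dock_config_$ssh_suffix
# If $ssh_suffix ever contains whitespace or glob characters, the redirect
# target becomes ambiguous. Quoting the expansion makes it deterministic:
ssh_suffix="demo host"              # a value with a space shows the hazard
conf_dir="$(mktemp -d)"             # stand-in for ~/.ssh
echo "StrictHostKeyChecking no" >> "$conf_dir/dock_config_$ssh_suffix"
ls "$conf_dir"                      # one file: 'dock_config_demo host'
```

With the unquoted form, bash would reject the same value with an "ambiguous redirect" error.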
I wasn't paying much attention to all sorts of safety issues, like potential RCEs, because this was initially intended for local development, not production use. But do let me know if you see how someone would be able to exploit those scripts without the `dock` user knowing about it.
Also, thank you for trying it out and reporting it works even on Windows, pleasantly surprised.
I did some monkey-patching for exposing ports and it works even further - it now automagically logs in via ssh as expected. I can probably send it as-is via Github/Gitlab so you have an idea.
Disclaimer: was not considering any edge cases with my changes.
> But do let me know if you see how someone would be able to exploit those scripts
My (and my team's) rule of thumb is simple here - if shellcheck complains, that had better be reviewed; it usually complains for a reason, and fixing it makes the code better/safer.
Or just use podman, skopeo and buildah, because that's going to let you investigate the pod issues you'll get in K8s, and makes you conformant with things like rootless, or with your company's devops-blessed load-balancer container config in your pod.
edit: Sorry, I'm posting this from a bar and I think it sounds really arrogant :( these guys probably worked damn hard on dock so I'm cashing out and going home to try it.
Buildah is the way to script your containers. Like most/all Red Hat products, it's a best-practice solution invented for dealing with the pains of some-Docker-thing, and you get big corporate backing, so you don't need to worry about deprecation or a single dev jumping ship.
Since it's aiming for Linux, I wish there was a comparison to two similar technologies: toolbox and plain systemd-machine. I'm not sure if dock provides anything they don't?
I'm not sure either, I've never even heard about toolbox. And systemd... Yeah. I never really cared about whether or not "it is", until configuring things started to get out of hand.
But you are not correct in assuming this is aiming only for Linux - my next goal is adding BSD jails. I see absolutely no reason why different container engines should bother the user with architecture, different APIs and apps, etc. And then, I think there's actually something that, although it sounds crazy, might work wonderfully on any OS; see my comment here: https://news.ycombinator.com/item?id=31625920
Fun story. I gave Nix a try - my standard 2 weeks - and realized it's not solving my problems, while creating a set of different ones. Ironically, `dock` was conceived while I was experimenting with Nix. But Nix... and Docker, in part... They are manifestations of this need for certainty we humans have, it doesn't lead us to good places :)
Isn’t that a different problem? Like containerization is useful for dependency + config management, which I understand nixos is good for, but containers are also useful for “spin up a web server (with X configuration) and spin up a db server (with B configuration) at the SAME time on my local machine (configuration Z)”
Well for example you could set up your nix derivation for a thing and run it on more or less any OS, without needing to also ship Ubuntu or whatever base layer of your Docker image.
It's not on Github/Gitlab, otherwise would open issue there, so:
[ 17:52:08 ] git clone -b stable \
> https://gitea.orion3.space/DOCK/dock.git && cd dock && \
> git submodule init && git submodule update
Cloning into 'dock'...
fatal: Remote branch stable not found in upstream origin
and with curl, it seems the `\` is extra / should be followed by a newline
[ 17:52:17 ] curl https://dock.orion3.space/releases/dock_stable.tgz \
dock_s> -o dock_stable.tgz && \ tar xvzf dock_stable.tgz &&
> rm dock_stable.tgz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 235k 100 235k 0 0 87414 0 0:00:02 0:00:02 --:--:-- 87382
Command ' tar' not found, did you mean:
Thank you, both issues fixed. At least that's how you know someone tried it. Issue #1 was extra \ without the newline character that was supposed to follow and Issue #2 was me forgetting to push the branch after I recently migrated the site.
I wholeheartedly agree. My initial motivation for building `dock` was actually looking at all the vulnerabilities in packages from various package managers, such as nodejs's, and thinking "I would not like that on my system". Then I decided it would generally be a good idea to isolate other stuff too. You can actually run your browser from a container, without it having access to your filesystem and, possibly, other information your OS provides. Dock for me wasn't about NOT setting up the environment - I WAS actually setting it up for every single container I use. It was about not having to set it up again and again, let alone managing conflicts.
I tend to use Podman/Buildah more than Docker these days. But I've been pretty happy with Toolbox, which seems similar. I agree that for many things, it's a lot more convenient than using a Dockerfile.
I also thought of podman/buildah when reading this. The idea is you can create container images (and commit changes) using the shell. No need for Dockerfiles if you don't want them.
That is significantly more steps than Distrobox or Toolbox (2:39 vs probably 0:30), and it uses Docker and Ubuntu. Folks, we should be forever grateful for what Docker and Ubuntu achieved, but they have lived well into "become the villain" territory.
I have my containers built by Gitlab, and I jump into them with Distrobox. It honestly couldn't be simpler.
Wait, 2 steps for the installation are too many? :)
On a serious note, I think it has been made clear this isn't intended just for Docker, but potentially for any other container engine too. And from my experience using it every day, it saves a ton of time compared to regular Docker usage patterns. I just navigate to my project directory and type `dock`. There is no need for any external service, such as Gitlab or even Docker Hub.
Docker: almost any thread about Docker on HN has someone complaining about it. It doesn't work properly on M1 (which is arguably Apple's fault, but the likes of Rancher have done a much better job than Docker). Several of their tools are in a half-baked state. Other tools are abandonware.
I worked on a project recently that did its own roll-your-own equivalent. It was in Python, and many of the devs didn't have a Python env installed. The leads provided everyone with an image with the env, PyCharm, and assorted configs all pre-installed. You just needed to redirect DISPLAY to :0. There were a few teething moments, but once it was running, it worked well.
I can see something that might help standardize setting that up being very useful.
Despite its original idea of managing multiple versions of, say, Python, I find it useful for installing the `starship` command-prompt tool and easily updating it later.
After reading most of the first half of the docs (skimming some parts, because on occasion it delves too much into fine details; I feel it lacks a bit of conciseness), I'm still not sure I understand why this tool needs specifically built images, instead of just grabbing whatever the current `ubuntu:20.04` image is and installing stuff on top of it.
The page didn't convey this to me clearly. I might be a bit obtuse today, but as I went on reading, the question remained: why exactly is this tool not just starting from well-known (and trusted) Docker images? The "trusted" part is important: I kind of trust that official Ubuntu images haven't been tampered with.
You're quite correct in addressing the issue, indeed, why wouldn't it just grab an image from Docker Hub and make it dock-compatible? And that's what I'm working on. See my comment here: https://news.ycombinator.com/item?id=31625086#31625750
Best ideas aren't always the first ones in the queue. Remember, I built it for myself first and never thought I'd do a release, so there was no point to it initially.
Very nice to hear that. Being able to build from base images adds a level of confidence.
I see that it is first and foremost a tool written for personal productivity, so congrats on deciding to release it! No doubt, improvements can be added progressively, when/if they happen. Well done.
I really don't understand why I'd need to use this "tool"... IMHO Docker Desktop and some other desktop tools can do more, as can other open-source TUI tools.
We download, install and run scripts and binaries written in a variety of languages ALL of the time. This one's in Bash - even a child can sort of check it out. It is intentionally written in Bash and the code base is small (for this kind of project), so as not to obscure much.
What's the difference between that and running make? I'm sure you wouldn't have complained if the website asked you to run make install instead. You can still read the script, like you would a makefile.
> While everyone hates tar, because it's impossible to remember the correct cli-options to it
Huh? You must be kidding me, there are people who have trouble with tar, enough to warrant an XKCD?
`tar xvf filename.tar` is so trivial I don't even think about it - it's just some muscle memory at this point. There's `x` for eXtract, `v` for Verbose and `f` for the Filename to follow. I only check manpages if I need to exclude some directories or do filename transformations - because my brain is too limited to remember all those details.
Well, I guess I'm really one of those people. There is no way I'm ever remembering the correct arguments to `tar`. In fact, that's why I have aliases and even an `untar` script in the bashjazz/utils set... And even then I'd still be mentally trembling, wondering if it's really going to do what I expect. Compared to that, `gzip` is light-years ahead. I have no idea why someone decided users need to enter four cli-arguments/flags for tar to do its job. There must have been a reason; it'd be interesting to look it up. But yes, everyone should pick their battles, and `tar` ain't mine.
I need to be familiar with those options about once every year or so, practically guaranteeing that I will forget by the next time I need it. With .zip, there's nothing to remember. I wish tar were capable of detecting what flags you need based on file extensions or contents.
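For what it's worth, GNU tar (and bsdtar) auto-detect the compression on extraction these days, so the subset worth memorizing is tiny. A quick round-trip:

```shell
# Pack a directory: c=create, z=gzip, f=archive file name.
mkdir -p demo && echo "hello" > demo/file.txt
tar czf demo.tar.gz demo

# On extraction, modern GNU tar / bsdtar sniff the compression themselves,
# so plain xf is enough for .tar.gz, .tar.xz and .tar.bz2 alike.
rm -rf demo
tar xf demo.tar.gz
cat demo/file.txt   # hello
```

Detecting the format on *creation* from the file extension is the part classic tar doesn't do (GNU tar's `-a`/`--auto-compress` flag covers that case).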
I once had copious amounts of free time and tried Linux from Scratch. I typed "tar -xzf" so many times (who needs verbosity), now it's burned into my brain.