@gsuberland @dragonarchitect It really just sounds like a bad implementation in this case, one that disallows proper customization of installed containers in the GUI. I've never done any work with any CI/CD or IaaS implementations and I still think Docker is great for single instance applications /because/ the containers can reliably be set up in a repeatable way on any machine the engine runs on, /because/ they are immutable outside of mount points and env variables. Less friction than DIYing it IMO
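(Roughly what I mean, with a made-up image name and paths just to sketch it: everything host-specific goes through `-v` mounts and `-e` env vars, and the image itself is identical on every machine.)

```
# hypothetical image and paths; all per-host state lives in the
# bind mount and the env var, so the same immutable image runs
# identically anywhere the engine does
docker run -d \
  -v /srv/myapp/data:/var/lib/myapp \
  -e MYAPP_BASE_URL=https://example.com \
  myapp/server:1.2.3
```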
Snep :floofHappy:'s Post
In Reply To: this post
Likes: 0
Boosts: 0
Hashtags:
Mentions:
@gsuberland@chaos.social
@dragonarchitect@rubber.social
Comments
Displaying 1 of 1 comments
Graham Sutherland / Polynomial
@snep @dragonarchitect so one example of a major friction point here is when things have bugs. a while back I had installed a popular webapp thing in a jail. they also publish it via docker. there was a bug that affected my install. I reported it and sent a PR but it didn't get fixed in a release for 9 months.
on CORE: `iocage console thething; nano /path/to/broken.file` and patch it
on SCALE: haha hope you like maintaining and publishing your own dockerfiles (also you don't get updates now)
@gsuberland Yeahh, that definitely is annoying whenever it happens. I'm surprised the bug remained for that long, but some dev teams are like that, I s'pose.
Though, at least with regular Docker, you should still be able to get updates even when you've built a patched Dockerfile on top of another image. If your Dockerfile always uses the latest version of the upstream image as its base, your changes will be applied on top of the most up-to-date version with each rebuild you kick off.
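Something like this, for example (image name and file path are made up, just to sketch the idea):

```
# rebuilding this pulls in whatever upstream :latest currently is,
# then re-applies the local fix on top of it
FROM upstream/thewebapp:latest
COPY fixed.file /path/to/broken.file
```

...and then rebuild with `docker build --pull` so it actually fetches the newest base image instead of reusing a cached one.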
by Snep :floofHappy:
Mentions: @snep@y.diskcat.com @dragonarchitect@rubber.social
Likes: 0
Replies: 1
Boosts: 0