Just to brag about it, did it 10 years ago: https://github.com/ponsfrilus/arch-novnc
A fun idea might be to combine something like this with Tailscale & their Mullvad add-on, so you get ephemeral browsing environments with VPN connectivity. It could make it easy to test from various countries simultaneously on a single host.
Worth mentioning Jess Frazelle was running desktop applications in docker a while ago. Not a full desktop, but also quicker to rebuild individual apps.
https://blog.jessfraz.com/post/docker-containers-on-the-desk... https://github.com/jessfraz/dockerfiles
I've been running stuff in LXC for ages (and before that, custom chroots). A while ago I made the switch to Wayland - and now started moving things over to podman, which has the added benefit of being able to share the stuff easily:
https://github.com/aard-fi/tumbleweed-images/tree/master/way...
I use two different setups - on some systems I only run things like browsers in containers, on others I also run the desktop itself in a container. Not published yet are my helper scripts; those will need some more cleaning up.
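For the browser-in-container case, the Wayland socket hand-off can be sketched in one rootless podman invocation. This is a sketch of a typical setup, not the unpublished helper scripts - the image name, the /tmp runtime dir, and the flag choices are assumptions:

```shell
# Share the host compositor's Wayland socket with the container.
# --userns=keep-id keeps file ownership on the socket sane for
# rootless podman; label=disable avoids SELinux denials on the bind.
podman run --rm -it \
  --userns=keep-id \
  --security-opt label=disable \
  -e XDG_RUNTIME_DIR=/tmp \
  -e WAYLAND_DISPLAY="$WAYLAND_DISPLAY" \
  -v "$XDG_RUNTIME_DIR/$WAYLAND_DISPLAY:/tmp/$WAYLAND_DISPLAY" \
  localhost/my-wayland-browser
```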
> The usability was surprisingly decent. LibreOffice and GIMP worked fine, although there was a bit of a lag. I would estimate about 70% of native performance, but still very usable.
You can get better performance and lower latency for your remote desktop and eliminate lag by using a gaming-focused remote desktop server, e.g. using Sunshine: https://github.com/LizardByte/Sunshine .
You will need to give it access to a gpu though.
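With Docker that typically means passing the DRI render node through; a sketch (the image name is a placeholder, and host networking sidesteps Sunshine's port list - check its docs for the exact ports if you want to map them individually):

```shell
# Pass the GPU render node into the container so Sunshine can use
# hardware video encoding; image name and layout are assumptions.
docker run -d \
  --device /dev/dri:/dev/dri \
  --network host \
  my-sunshine-desktop
```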
This article is bad in a lot of ways, but all of a sudden it jumps to running Linux in a browser:
“After my failed custom image attempt, I found a promising XFCE-based Debian image on Docker Hub. I downloaded it in minutes and, with a few commands, launched it. When I opened the URL, I was greeted by a fully functional Linux desktop, running right in my browser. The pure geek joy of seeing a complete OS served from inside a Docker container was a feeling I won’t forget. It worked”
And on Windows, docker for Linux containers is just a VM, so this part isn’t really true:
“Containers are not isolated Operating Systems: Docker containers share the host kernel. This is what makes them lightweight and great for single services. Whereas desktop environments expect system services like (systemd, logind, udev, DBus) and device access to be available. Containers don’t provide that by default.”
Can you not use the X11 server packaged with WSL as your display driver, and avoid piping this all into the web browser?
Seems very inefficient to have to render everything through the browser
WSL doesn't have an X Server, it has a Wayland compositor. That said, yes, you can use that. You can even run a different compositor nested so you get one single window with a desktop if you want.
Ah, Wayland. Much has changed since the time I was using Linux in my professional work. Does Wayland support network connectivity, though? I.e., can you display a Wayland session on another computer via TCP/UDP? If not, then Wayland won't work with WSL2, which is basically a VM.
Not in the same way that X does. However, Microsoft specifically implemented an RDP backend into Weston, which ships with WSL2. So the Linux windows you see are actually being sent over RDP transparently.
Waypipe[0] offers a Wayland alternative for sharing windows over the network.
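Usage mirrors X forwarding closely; with waypipe installed on both ends, a remote Wayland app can be launched with a single wrapper command (host and app here are placeholders):

```shell
# waypipe proxies the Wayland protocol over the ssh connection;
# the remote app's windows render on your local compositor.
waypipe ssh user@remote-host foot
```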
> WSL doesn't have an X Server, it has a Wayland compositor
Which has Xwayland support. You can still run X11 apps.
On Windows, doesn't this technically mean OP is running Linux inside a Linux VM inside Windows? From what I understand Docker is Linux tech and to use it anywhere else a (small) Linux VM is required. If true, I would just dispense with the extra layer and just run a Linux VM. Not to discourage experimentation though!
Almost.
For one thing, Docker is not really "Linux inside Linux". It uses Linux kernel features to isolate the processes inside a container from those outside. But there is only one Linux kernel which is shared by both the container and its host (within the Linux VM, in this case).
For another, running Linux containers in a Linux VM on Windows is one (common) way that Docker can work. But it also supports running Windows containers on Windows, and in that case, the Windows kernel is shared just like in the Linux case. So Docker is not exactly "Linux tech".
I think GP is likely referring to Docker Desktop, which is probably the most common way to use Docker on Windows.
Running Linux containers using Docker Desktop has a small Linux VM in which the containers are run and then Docker does some mucking about to integrate that better with the Windows host OS.
I thought Docker only supports Windows as a host if you enable WSL, in which case you're running on Hyper-V and a Linux kernel as part of WSL2 - so absolutely Linux tech on a Linux VM on Windows... Am I wrong?
You are. You can run Docker for Windows, and run Windows binaries in reasonably isolated containers, without involving Linux at all [1]. Much like you run Linux containers on Linux without involving Windows.
It's Docker Desktop that assumes WSL; the Docker engine does not. Also, you seem to need Windows Server; I don't know if it can be made to work on a Pro version.
[1]: https://learn.microsoft.com/en-us/virtualization/windowscont...
Docker Desktop defaults to WSL2 but it makes no such assumptions whatsoever. You can run it with Hyper-V.
You are. Docker Desktop supports two different container platforms: usual Linux ones and Windows Containers.
With the former a Linux kernel is required. You have two options: using WSL2 and benefiting from all the optimizations and integrations that Microsoft made, or running a full Hyper-V VM that gives absolute control and isolation from rest of the system.
For the latter, you need a Pro license and need to enable the Containers feature (deployment requires the more expensive Server licenses). Then you can run slimmed-down Windows images like "nano server", which doesn't have GUI APIs.
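For illustration, a Windows container sharing the Windows host kernel directly - no Linux, no VM. Process isolation requires the image tag to match the host build:

```shell
# Runs a Windows binary in a Windows container; --isolation=process
# shares the host's Windows kernel, analogous to Linux-on-Linux.
docker run --isolation=process mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c echo hello
```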
Docker supports either hyper-v, or wsl2 as a host for the Linux kernel - they generally push people towards wsl2. I vaguely recall wsl2 uses a subset of hyper-v the name of which escapes me atm.
Can he install Wine in the Docker container to run Windows games from it?
Steam and its remote play options seem more enticing for me to set up.
Isn’t this the case on macOS too?
I desperately wish I could run docker properly (CLI) on the Mac rather than use docker desktop, and while we are making a dream list, can I just run Ubuntu on the Mac mini?
I’ve been using colima for CLI docker on my ARM Mac. It’s pretty straightforward using Homebrew.
Colima is great. However, in the upcoming macOS 26 Tahoe - and for the most part already in macOS 15 Sequoia - Apple is beginning to provide a first-party solution:
https://github.com/apple/container
I've been experimenting with it in macOS 15, and I was able to replace Colima entirely for my purposes. Running container images right off of Docker Hub, without Docker / Podman / etc.
(And yes, it is using a small Linux VM run under Apple's Virtualization framework.)
I ran into various issues I think, but my main objective was running a full k3s cluster this way, reckon this is achievable with full networking support now? Also if I already had colima setup, does new apple container provide any benefits beyond just being made by apple?
Try OrbStack. It is fast, and it has a Kubernetes cluster feature.
This thread is amazing - thank you all.
I’m surprised I didn’t stumble into any of these options, I searched and didn’t find.
It might not be Ubuntu but Asahi Linux runs Fedora pretty well on M2 Pro and older Apple Silicon Mac Minis: https://asahilinux.org/fedora/#device-support
No, WSL2 does not run "inside Windows", but on the "Virtual Machine Platform", a sort of mini hyper-v.
Sup dawg, I heard you like OSes.
I develop my apps in the most native way I possibly can: deb packages, apt repo, systemd, journald, etc. However, I would also like to be able to run them in docker/a VM. Is there a good systemd-in-docker solution for this, to basically not run anything differently and not have to maintain two sets of systems?
Have you looked at systemd-nspawn[0]? It's not docker, so it wouldn't be useful for writing Dockerfiles, but it is light containers that work beautifully with systemd.
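A minimal flow, assuming a Debian-family host with debootstrap installed:

```shell
# Bootstrap a minimal Debian userland into the standard machines dir
sudo debootstrap stable /var/lib/machines/testbox

# First run a single command inside (chroot-like) to set a root password
sudo systemd-nspawn -D /var/lib/machines/testbox passwd

# Then boot it: -b runs systemd as PID 1 inside the container,
# so units, journald, logind etc. behave as on a real system
sudo systemd-nspawn -b -D /var/lib/machines/testbox
```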
Thanks, this looks awesome! Will play around on my CI/CD first to see if it's any good for the build-server to add trixie builds. Might use in prod deploys later.
Update: I looked at it. Overall it looks very promising on paper.
However, the bootstrapping process is so much work that I've ended up ditching it for now. I don't want to "automate the automation" yet.
Trying out podman now.
Containers with systemd as the init process are considered first-class citizens by the Podman ecosystem (the base images are named accordingly: e.g., ubi10-init vs ubi10).
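A quick sketch (the exact registry path for the init image is an assumption - check the Red Hat container catalog):

```shell
# podman notices the systemd entrypoint and sets up the mounts
# (/run, cgroups, tmpfs) that systemd expects inside a container
podman run -d --name initbox registry.access.redhat.com/ubi9/ubi-init
podman exec -it initbox systemctl status --no-pager
```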
You might be better served by Incus/LXD which run "Linux containers" (ie: a full distro including systemd, SSH etc) as opposed to OCI containers.
https://github.com/Azure/dalec
Build system packages and containers from those packages for a given target distro.
Behind the scenes it uses buildkit, so it's no extra stuff you need, just docker (or any buildkit daemon).
You could use Nix to build the package and provide a nixos module and a docker image from the same derivation. Now you only have to manage three systems instead of two. /s
Samsung DeX had a Linux desktop package in 2018. It was an LXD container based on Ubuntu 16.04. They developed it in collaboration with Canonical. Unfortunately they deprecated it shortly after, maybe already in 2018. The next Android update would remove it.
It worked but Android killed it mercilessly if it used too much memory or the rest of the system needed it.
Some current Android devices that have USB-C 3.1+ and support dp-alt-mode (USB-C to HDMI) will detect when an external display is connected and provide a full extended desktop. [0]
You can connect mouse, keyboard, and display to the Android device through an unpowered USB-C hub that offers the respective ports. Battery life depends on the make/model of Android device.
I have a Motorola phone and the experience is very nice.
[0] https://uperfect.com/blogs/wikimonitor/list-of-smartphones-w...
I still remember how much I liked the idea. Really tried to use it, but the experience with both browsers and vscode was... not that great.
Kinda hope they revisit this idea in the near future.
Google is implementing a full Linux VM in Android 16. This is probably how we'll get something similar.
https://www.androidauthority.com/android-16-linux-terminal-d...
I use this https://www.reddit.com/r/selfhosted/comments/13e25l9/tutoria...
My clients are an RPi 4 and an older iPad. Sometimes I use an Android phone as well. Works really well.
> Google acts as a meet-me point and also provides the authentication mechanisms including MFA.
On one hand, it made me chuckle a bit. On the other hand, it could be reasonable in many scenarios.
I run my server on a connection that's a cgnat and nat by home router. So, no option for me other than chrome remote desktop. It also does p2p.
If you create an outbound tunnel, your options are whatever you want. nat and cgnat only affect inbound routing.
check into tailscale or cloudflare tunnels/argo
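Either way the box only dials out, which is why CGNAT doesn't matter; roughly (the tunnel name is a placeholder and assumes a prior `cloudflared tunnel create`):

```shell
# Tailscale: the node dials out to the coordination server; peers
# then reach it over the tailnet via NAT traversal or DERP relays
sudo tailscale up

# Cloudflare Tunnel: cloudflared keeps an outbound connection open
# and Cloudflare routes your public hostname down through it
cloudflared tunnel run my-tunnel
```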
You know, it's funny—I always hear people say they want to keep their Windows-only applications and run Linux alongside it, but I made the switch almost a decade ago and honestly can't say I'm worse off for it. And frankly, there's never been a better time to make that leap; the Linux desktop has finally hit its stride and become genuinely mature, with the polish and features one would expect from a modern operating system.
Apart from a handful of games, I haven't actually needed Windows for anything. So I'm curious—what Windows-only software is keeping you on it, OP?
Not OP; outside of games I keep a dual boot pretty much exclusively for Visual Studio - imo it's one of the best debuggers I've ever used. I know gdb is just as powerful, if not more, but it is so much less ergonomic, even with the different frontends I've tried. Edit and continue for C/C++ is such a killer feature too. I stay away from msbuild and from using it as an editor; it's purely my debugger.
Granted it helps that a lot of the time I need "advanced" debugging relates to msvc/windows specific issues, which while I could run it in wine, it's just easier if I'm on windows anyway.
I’m not OP, but for me I end up having trouble with games, and maintaining a dual boot for it isn’t worth it. Most recently I was trying to install gamescope on PopOS LTS for retro gaming, but it was too old of a distribution for gamescope's dependencies, so I upgraded to Cosmic and it broke my software KVM. I use PopOS because it has great NVIDIA support and I’ve run into issues before with other distros.
At that point I switched back to windows but I’ll try again after a few months. I always keep trying.
I think if I didn’t play games I’d be fine with Linux. I hate Windows except that everything just works.
With Steam, I haven't seen a game that doesn't work yet. I was just playing Clair Obscur yesterday, it worked great. I don't know about Gamescope, but I think you can run whatever Windows thing you want through Proton and it'll probably work well.
It’s been hit or miss for me. I mostly play strategy games. I often get weird game graphics, flickering and things like that that go away when I give up and reinstall windows.
If you play anything multiplayer (and especially anything competitive), it’ll break periodically or not work in the first place due to anti-cheat.
The remastered C&C used to work but now the launcher crashes. No idea why!
In general, when that happens, I select "force specific compatibility tool" and select the last version, and then it works.
Not OP, but for me it's the music plugin industry, which almost never provides a Linux VST. Some will work with wine, some won't.
But for everything else I’m on linux as well.
Even better, I have justification to buy yet more hardware synths I dont need
I absolutely love the direction KDE Plasma Wayland session is headed; I think it looks great, it definitely runs great, and it really is just packed with features. I do have some personal KDE gripes I'd like to work on, mainly just improving the KIO fuse integration more, but wow have things progressed fast.
Still, I caution people to not just jump to Linux. The actual very first problem is not software. It's hardware.

Firstly, running cutting-edge motherboards and GPUs requires a more bleeding-edge setup than typical LTS distros give you; you'll be missing audio codec drivers and the GPU drivers will be missing important updates, if things even boot.

Secondly, NVIDIA GPUs are currently in a weird place, leaving users with trade-offs no matter what choices they make, making it hard to recommend Linux to the vast majority of users with NVIDIA GPUs.

Thirdly, and this one is extremely important: nobody, Nobody, should EVER recommend people run Linux on random Windows laptops. This is cruel and unusual punishment, and it's a bad idea. It's a bad idea even if the Arch wiki says it works pretty well. It's a bad idea even if a similar SKU works well, or hell, even if a similar SKU ships with Linux out of the box. It's just a bad idea. The only two big vendors that even really do a good job here are System76 and Framework, and they still have to use a bunch of components from vendors that DGAF about desktop Linux.

It is impressive that you can more or less run whatever desktop hardware and things usually work OK, but this logic doesn't apply to laptops. This point can't be stressed enough. I have extensive experience with people trying to switch from Windows to Linux, and it's genuinely a challenge to explain to people how this doesn't work; they don't have the frame of reference to understand just how bad of an idea it is, and learning the hard way will make them hate Linux for no reason.
Still, even with good hardware, there's plenty of software woes. You'll be missing VSTs. You might have to switch to Davinci Resolve to edit video, Krita to do digital painting, and Blender to do... Well, a lot of stuff. All good software, but a very non-trivial switch.
I'm really glad to see a lot more people interested in running Linux and I hope they have a good experience, but it's worse if they go in with inflated expectations of what they can do and still have a good experience. Being misleading about how well Linux or WINE will work for a given task has never really helped the cause, and has probably hurt it a lot.
I won't argue about Proton/Steam, though, that shit's miraculous. But honestly, a lot of people like playing competitive multiplayer games, and those anti-cheat vendors don't give a damn about your Linux, they're thrilled to start integrating Secure Boot with TPM attestation as it lets them try to milk more time out of the "maybe we can just secure the client" mindset. (I personally think it's going to die soon, in a world where ML has advanced to the point where we can probably do realtime aimbots that require absolutely no tampering with the client or the computer it runs on, but we'll see I guess.) But for me who doesn't care, yep, it's pretty good stuff. Whenever there's a hot new thing out chances are it already works on Proton; been playing a lot of Peak which was somewhat popular in the last couple months.
> (I personally think it's going to die soon, in a world where ML has advanced to the point where we can probably do realtime aimbots that require absolutely no tampering with the client or the computer
I pray you are right; I fear other bullshit excuses will be found, though...
What issues have you seen recently in the wild where Linux just does not work on a laptop? I have pretty good experiences just putting Ubuntu on old laptops for people who asks me to fix their computer.
For old laptops it's significantly less bad - but they do need to be old. 5+ years is usually a good start, though these days it may be better if they're even older.
The most significant issues:
- Peripherals simply don't work at all, as in, no touchpad or keyboard, or at least no touchscreen. This is definitely an issue with a variety of laptops including some Microsoft Surface and Dell laptops.
- Power management. Frequently, machines fail to sleep or resume reliably.
- Audio is low quality and quiet. This problem was publicized pretty well by the Asahi Linux project, but it is far from unique to MacBooks: a lot of laptops now require OS-level audio processing to have good audio quality. Even my Framework 16 partly has this issue, though it can be alleviated partly with a BIOS option. I believe this also impacts some System76 laptops.
- WiFi/Bluetooth instability. This issue is probably worst with some Realtek radios, but I've also seen it from time to time with Mediatek.
- Sometimes, issues booting at all. Yep. Sometimes it just won't boot, as sometimes the kernel will just break support, and maybe unbreak it later. That's the nature of just running random shit, though.
I think that illustrates enough so I'll stop there, but also don't forget the hurdles to even get started. Often times the very first thing you want to do is disable secure boot which differs a bit per system. This isn't always truly necessary, but even if you're using a Linux distribution that works with Secure Boot it's often a good idea, as there are a variety of things that you can't do easily with Secure Boot on Linux.
Older laptops are less of an issue since Linux tends to get a lot more mature with older hardware as time goes on, but it's still a little hit or miss, especially with vaguely recent laptop hardware that has weird stuff like Intel IPTS. But that having been said: Linux doesn't support old hardware literally forever, either. Old hardware sometimes stops working and gets pulled from the kernel, or moves out of mainline Mesa, and so forth. So even that isn't a 100% panacea.
Not trying to contradict anything you’re saying, which I agree is true for Microsoft and Dell machines, but to provide an anecdotal counter-example, the Asus Zephyrus, Flow and ProArt lines run pretty well on Linux provided you replace the Realtek WiFi.
One place to check is the nixos-hardware repo for machines with reasonable support.
I’ll just leave this here, thank me later:
https://docs.linuxserver.io/images/docker-webtop/
These images are top-notch, well documented, and have recently been refactored to use Selkies under the hood. Even with gamepad support, I’ve used these for running DOSbox, RetroArch, streaming video, and many other things.
There’s even a mature extensibility layer for using their images as a base layer to add services and apps.
Can’t speak highly enough of the linuxserver.io folks.
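For reference, bringing one up is roughly a one-liner (tag and ports from memory of their docs - double-check the current README):

```shell
# XFCE desktop served in the browser at http://localhost:3000;
# --shm-size avoids in-container browsers crashing on shared memory
docker run -d \
  --name=webtop \
  -p 3000:3000 \
  -v /path/to/config:/config \
  --shm-size=1gb \
  lscr.io/linuxserver/webtop:ubuntu-xfce
```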
I actually wanted to try webtop for a long time, and only did it recently. I could not figure out Selkies for the life of me. It wanted a bunch of ports, and was complaining about something all the time (I don't remember what; it's been a month). Might be a skill issue, but I've been using docker for the past 10 years. Moreover, they want root access to the host system, which kind of defeats one of the reasons to use containers. Is there a video that explains the benefits in a good way? I mean, if it's for gaming only, I can understand the use-case, but say you just want to run something like GNU Radio in a container - why would I need 60 fps and root permissions for that?
Related posts:
- [How to Run GUI Applications Directly in Containers](https://github.com/hemashushu/docker-archlinux-gui)
- [GUI Application Development Environment in a Container](https://github.com/hemashushu/docker-archlinux-gui-devel)
I recently learned about doing this because I was looking for a MATLAB container, and there's one that provides the full GUI experience.
I’ve done it for almost a decade now, to the point of packaging “stacks” inside Docker for specific tasks: https://github.com/rcarmo/azure-toolbox
These days I have a Docker container with Remmina that I use as a bastion (fronted by Cloudflare and Authelia for OIDC), but everything else is LXC with xrdp and hardware acceleration for both application rendering and desktop streaming (xorgxrdp-glamor is much, MUCH better than VNC).
I am, however, struggling to find a good way to stream Wayland desktops over RDP.
With SKIFF_CONFIG=intel/desktop,skiff/core you get a Debian desktop running within a Docker container - see https://github.com/skiffos/skiffos
Does anybody have a good writeup/tutorial on doing similar things with Wayland? From my limited knowledge that might be with RDP instead, but there hasn't been anything more distilled as far as I know?
I've also done xpra in docker before; that's always felt as hacky as it sounds though.
I don't use it much, but I've glued together sway+wayvnc+novnc in a container and it worked fine (exposing both raw VNC and the webified novnc interface).
That sounds useful, do you have the Dockerfile for it pushed anywhere?
Here ya go:)
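Roughly, a from-memory sketch of the glue (Alpine package names and flags are assumptions, not the exact file):

```dockerfile
FROM alpine:3.20
# sway renders to a headless wlroots backend; wayvnc exports it as
# VNC; websockify bridges that to noVNC's browser client
RUN apk add --no-cache sway wayvnc novnc websockify foot
ENV WLR_BACKENDS=headless \
    WLR_LIBINPUT_NO_DEVICES=1 \
    XDG_RUNTIME_DIR=/tmp
# 5900 = raw VNC, 6080 = webified noVNC
EXPOSE 5900 6080
CMD sh -c "sway & sleep 2; wayvnc 0.0.0.0 5900 & websockify --web /usr/share/novnc 6080 localhost:5900"
```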
I run full-headed Puppeteer sessions in Docker, with VNC for debugging and observation. I keep the instances as light as I can, but I suspect I'm most of the way there toward a "full" desktop experience. Probably just need to add a more full-featured window manager (currently I use fluxbox)
For me this raises the question: are there any good remote desktop solutions for multiuser systems around? Rustdesk is single-user; TurboVNC works, but there can be lag.
I use ThinLinc. It's free for under 5 concurrent users.
Xrdp works well for this purpose
Sheesh. Just use LXC.
This is the first thing that came to my mind. Why pick an OCI container instead of an LXC container since it's a stateful workload?
Going OCI here only makes sense for temporary, disposable sessions.
I just carry around a pwnagotchi on a keychain, and use my iPad to access it to do Linux development work, including running a full Raspbian desktop, dev tools, etc.
I’m a dummy. Can you explain your setup? How does the Pi fit on a keychain?
I searched for the term and it seems to be a DIY kit to do reinforcement learning to try to crack WPA keys?
It's running on a Pi; as with every other general computing device, you can also run other stuff on it.
I get that.
I was just wondering about the comment that it can be carried around on a keychain.
Looking at the pictures, some have a little screen attached and maybe a battery, so thought it might be a bit too big for a keychain.
We did this as a way to break the 'are you human' captcha test a while back. Nothing nefarious, just a hacking problem we were working on back then.
> Containers are not isolated Operating Systems: Docker containers share the host kernel. This is what makes them lightweight and great for single services. Whereas desktop environments expect system services like (systemd, logind, udev, DBus) and device access to be available. Containers don’t provide that by default.
I thought Docker images always run in a VM on non-Linux systems, no? This guy is running Windows as the host, right? Confusing.
They are talking just about what's in the container there, not the whole stack.
I run Arch under WSL2 and then in ~/.bashrc:
WINDOWS_IP=$(ip route | awk '/^default/ {print $3}')
export DISPLAY="$WINDOWS_IP:0"  # export, so GUI apps launched from the shell see it
Now I can use the mighty mobaxterm from https://www.mobatek.net to just run whatever and pipe it back to Windows.
One caveat is that the $PATH gets polluted with space characters by 'Doze, so I have to do something like this for QGIS:
PATH=/usr/local/sbin:/usr/local/bin:/usr/bin qgis -n &
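A more surgical alternative to hardcoding PATH is to filter out the Windows-injected entries - they're the ones containing spaces. A small sketch (the qgis line matches the workaround above):

```shell
# Drop $PATH entries containing spaces (the Windows-injected ones)
# before launching picky apps; pure string filtering.
clean_path() {
  printf '%s' "$1" | tr ':' '\n' | grep -v ' ' | paste -s -d ':' -
}

# usage:
#   PATH=$(clean_path "$PATH") qgis -n &
```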
This sounds interesting. But I don’t fully follow?
What are your use cases? To run Linux GUI apps?
Does mobaxterm allow you to view those GUI apps?
Yes, Moba provides the X11 functionality to allow me to run QGIS under Arch and see the maps.
Just had a look at QGIS. It appears to be cross platform. Why not run it directly on Windows?
Are there benefits running it under Arch and then 'stream' it to Windows?
Assuming it's faster running QGIS in Linux? Or is it because all your other dev stuff is in Linux?
(Sorry if these questions are basic - just curious).
This is a much less efficient way of running Linux GUI apps over WSL since it will use software rendering.
WSL2 provides a GPU accelerated Wayland server. If your Mesa build (ver > 22) has d3d12 drivers you can use Windows DirectX as your OpenGL driver. Combined with the WSLg Wayland server you get near native desktop performance out of GUI apps.
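A quick way to confirm which driver you're actually on (glxinfo comes from mesa-utils; on a d3d12-backed setup the renderer string typically names D3D12 plus the Windows GPU it maps to):

```shell
# Print only the brief GL info and pick out the active renderer
glxinfo -B | grep "OpenGL renderer"
```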
One other nit was that WSL2 does not expose the USB ports to the Arch guest very easily.
So I had to install USBIP on the Windows 11 host, and bring in a toolchain and compile a Linux 6.x kernel in order to add external storage for my QGIS data.
WSL2's default kernel has USBIP modules built-in. I am using it daily at work for FW development. So I don't think that's your problem. However, WSL kernel might be missing specific USB device classes. Luckily they rolled out kernel module support lately and you can compile modules instead of the full kernel now.
What's the best way to forward the X server over ssh?
ssh -YC user@$host
Has worked since .... forever. Interestingly, it works with WSL2 on windows, too!
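The flags can also live in ~/.ssh/config so a plain `ssh host` does it (the host name is a placeholder):

```shell
Host myhost
    # -Y (trusted X11 forwarding); ForwardX11 alone is the stricter -X
    ForwardX11 yes
    ForwardX11Trusted yes
    # -C
    Compression yes
```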