- Use Docker Compose as the core tool to model and deploy multi-container applications on remote VPS servers over SSH or CI/CD.
- Leverage platforms like Plesk and Portainer for GUI-based management of local and remote Docker engines, ports, volumes and stacks.
- Combine Docker Desktop, WSL 2 and VS Code Dev Containers to mirror production containers in local development environments.
- Control networking, firewalls, memory usage and persistence carefully to run Dockerized apps securely and reliably in production.
If you’ve just learned Docker and now want to deploy your containers to a remote server, it’s totally normal to feel a bit lost at first. You suddenly have to combine Docker, Docker Compose, SSH, CI/CD pipelines like GitHub Actions, maybe even control panels such as Plesk or tools like Portainer, and on top of that still keep Nginx, firewalls and volumes under control.
The good news is that there’s a very clear mental model and a set of repeatable workflows for deploying Dockerized apps to remote VPS servers from your laptop. Once you understand these patterns, it doesn’t really matter whether you trigger deployments from GitHub Actions, from Docker Desktop, from Plesk or from a simple SSH session: the underlying concepts are always the same.
Big picture: how remote Docker deployment workflows usually look
At a high level, deploying Docker containers to a remote VPS always boils down to a few recurring steps. You package your app in containers, you send the code or images to the server, you start those containers (ideally with Docker Compose) and you route traffic to them through something like Nginx or a panel such as Plesk.
A very typical beginner workflow starts with a simple SSH-based pattern, often automated via GitHub Actions or another CI system. For example, on every push to a development branch, your CI pipeline connects to an Ubuntu server over SSH, pulls images from Docker Hub, stops and removes any old containers, then runs new containers with docker run, while Nginx on the server reverse-proxies traffic to the right container ports.
When you switch from raw docker run to Docker Compose, the deployment pattern stays roughly the same, but becomes more organized and repeatable. Instead of juggling multiple commands for each service, you keep everything in a single docker-compose.yml file and run docker compose up -d (or docker-compose on older installs) on the remote host to bring the full stack up.
The main decision you have to make is where and how that Compose file gets executed on the remote server. The most robust option is usually to keep the repo cloned on the VPS, have your CI/CD pipeline SSH into the server, cd into that repo and then run the Compose commands there, so the deployment state lives together with your code.

From manual docker run to Docker Compose on a remote Ubuntu server
If your current pipeline is just pulling images and running them with docker run, Docker Compose is your next big quality-of-life upgrade. It lets you describe all your services, ports, volumes and environment variables in one file, so restarting or updating your stack becomes a single command rather than a fragile sequence of manual steps.
To recap that beginner CI pattern: a GitHub Actions workflow triggers on pushes to a development branch, connects to your VPS via SSH and runs all the deployment logic there. Inside that SSH session it pulls images from Docker Hub, stops and removes the running containers, then starts new ones with raw docker run commands, while Nginx is preconfigured to forward requests to the container ports.
Once you adopt Docker Compose, you don’t actually need to radically change the CI flow, but you should change what runs on the server. Instead of issuing several docker run commands, your Action can simply run docker compose pull (if using registry images) and docker compose up -d --remove-orphans from the root of your project where docker-compose.yml lives.
The easiest pattern is usually: clone the same repo that GitHub Actions uses onto the Ubuntu server, then let your workflow SSH into that host, cd into the project directory and execute Compose commands. This keeps the remote state, configuration, Compose file, and any environment files in sync with your Git repository, and it also makes debugging deployments by SSHing manually much simpler.
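As a concrete sketch, a workflow implementing this pattern could look like the following. The appleboy/ssh-action step, the branch name, the repo path on the server and the secret names are all assumptions to adapt to your own setup:

```yaml
# .github/workflows/deploy.yml (hypothetical names throughout)
name: Deploy to VPS

on:
  push:
    branches: [development]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Run Compose on the server over SSH
        uses: appleboy/ssh-action@v1.0.3
        with:
          host: ${{ secrets.VPS_HOST }}
          username: ${{ secrets.VPS_USER }}
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd /home/deploy/myapp          # repo already cloned on the VPS
            git pull origin development    # sync code and the Compose file
            docker compose pull            # fetch updated registry images, if any
            docker compose up -d --remove-orphans
```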
There are more advanced options like running Docker-in-Docker inside your CI environment or using remote Docker contexts, but for most small and medium projects the “SSH into server + run Compose in the repo” pattern is more than enough. It’s simple to understand, works with any VPS provider and plays nicely with Nginx or any other reverse proxy already running on the machine.
Designing a real-world Docker app and Compose stack

Before you worry about deployment, you need a containerized app that actually makes sense to run in production. One common example is a small Node.js backend served via Express, but the same pattern works for any stack (Python, PHP, Go, Java, etc.).
A simple way to structure the project is to keep a root folder for the whole stack and a subdirectory for the actual app code, for instance a folder called app. Inside app you initialize the Node.js project with npm init, install dependencies like express and create your main file (for example index.js) that listens on a given port like 3030 and responds with a basic “Hello world” message.
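A minimal version of that app, matching the file name and port used above, might look like this:

```javascript
// app/index.js: a tiny Express server that answers on port 3030
const express = require('express');

const app = express();
const PORT = 3030;

app.get('/', (req, res) => {
  res.send('Hello world');
});

app.listen(PORT, () => {
  console.log(`Listening on http://localhost:${PORT}`);
});
```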
Once the app works locally (e.g. running node index and hitting http://localhost:3030), you can turn it into a container by writing a Dockerfile in the app directory. A typical Dockerfile chooses a Node base image such as node:12, sets a working directory like /usr/src/app, copies the project files, runs npm install and declares port 3030 with EXPOSE (note that EXPOSE only documents the port; it's the -p flag or a Compose port mapping that actually publishes it).
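A sketch of that Dockerfile could be the following; node:12 matches the example above, but it has reached end of life, so prefer a current LTS image such as node:20 for new projects:

```dockerfile
# app/Dockerfile (node:12 as in the example; use a current LTS today)
FROM node:12

WORKDIR /usr/src/app

# Copy the manifests first so the dependency layer is cached between builds
COPY package*.json ./
RUN npm install

COPY . .

# EXPOSE only documents the port; publishing happens via -p or Compose
EXPOSE 3030

CMD ["node", "index.js"]
```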
To avoid unnecessary bloat in your image, you should also add a .dockerignore file to skip folders like node_modules when building. This keeps build times down, prevents local artifacts from leaking into the container and produces lighter images that are easier to push, pull and redeploy.
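A typical .dockerignore for this layout is short; the entries below are common defaults rather than an exhaustive list:

```text
# app/.dockerignore: keep local artifacts out of the build context
node_modules
npm-debug.log
.git
.env
```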
With that in place, Docker Compose becomes the tool that glues everything together into a coherent stack. In your project root, you create a docker-compose.yml that defines at least one service (for example express), points its build context to ./app, maps ports (like 3030:3030) and sets the command to start your app (e.g. node index).
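Under those assumptions, the Compose file might look like this (the restart policy is an addition that's sensible for production):

```yaml
# docker-compose.yml in the project root
services:
  express:
    build: ./app
    command: node index.js
    ports:
      - "3030:3030"
    restart: unless-stopped
```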
Expanding to multi-container setups with Docker Compose
Real-world deployments rarely consist of a single container; you’ll almost always end up with multiple services that need to talk to each other. A classic pattern is a front-end container, a back-end API container, and a database such as MongoDB or PostgreSQL, all orchestrated with Docker Compose.
In those scenarios, Compose becomes even more valuable because it guarantees that your services start with the right dependencies, networks and volume mappings. For instance, your docker-compose.yml might define a frontend service, a backend service and a mongodb service, and Compose automatically creates an internal network so that the backend can reach the database via the hostname mongodb rather than exposing the DB port on the public internet.
Volumes are especially important here, because they let you persist data outside the lifecycle of any single container. A bind mount maps a host directory of your choosing into a path inside the container, while a named volume is a directory managed by Docker itself; either way, you can safely remove or recreate containers without losing database files, uploaded assets or logs, as long as that data lives under the mounted paths.
Another critical aspect is environment configuration, which in Docker Compose is handled via environment variables defined directly in the compose file or in external .env files. This makes it straightforward to pass database URLs, API keys, secrets and feature flags to your services, and in GUI environments like Plesk you can manage these variables via forms instead of editing YAML by hand.
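Pulling the last three paragraphs together, a sketch of such a stack might look like the following; image names, build paths and variable names are illustrative:

```yaml
services:
  frontend:
    build: ./frontend
    ports:
      - "80:80"

  backend:
    build: ./backend
    environment:
      # The backend reaches the DB via the service hostname on the internal network
      - MONGO_URL=mongodb://mongodb:27017/appdb
    env_file:
      - .env               # API keys and other config kept out of the YAML
    depends_on:
      - mongodb

  mongodb:
    image: mongo:7
    volumes:
      - mongo-data:/data/db   # named volume: data survives container recreation
    # no "ports:" entry, so the DB is never exposed on the public internet

volumes:
  mongo-data:
```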
Once your stack is modeled in Compose, deployments to a remote VPS are just about copying the repository or pulling the latest changes, then running docker-compose up -d (or docker compose up -d). The first run will build images and fetch dependencies, while subsequent runs will only update what’s changed, making continuous deployment both predictable and fast.
Deploying Dockerized apps on a cloud server and opening access
When you’re ready to run your containers on a real VPS or cloud server, the first tasks are to provision a machine with Docker installed and then move your project onto it. Many providers offer ready-made images with Docker pre-installed, so spinning up a server with Docker can be as simple as choosing that image in the control panel and waiting a few minutes.
Getting your app’s code to the server is usually done either by cloning your Git repository over SSH or by copying files via SCP/rsync. Cloning the repo is preferred in most cases because it keeps your deployments reproducible and lets you roll back to, or inspect, the exact commit that is running in production.
If your cloud image only has the Docker engine but not Docker Compose, you’ll need to install Compose manually. On many Linux distributions you can download the Compose binary with curl from the official GitHub releases URL, save it to /usr/local/bin/docker-compose and make it executable with chmod +x, after which the docker-compose command becomes available.
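On a typical Ubuntu VPS that would look roughly like this; the version number is pinned purely as an example, so check the latest release on the Compose GitHub page first:

```bash
# Download the standalone docker-compose binary (pin a real release version)
sudo curl -L "https://github.com/docker/compose/releases/download/v2.27.0/docker-compose-$(uname -s)-$(uname -m)" \
  -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version   # verify the install
```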
Once Compose is installed and your project files are in place, starting the stack is typically just a matter of running docker-compose up (optionally with -d for detached mode). The very first run can take a while because Docker must pull base images and install application dependencies (for example npm install in your Node container), but subsequent deploys will be much faster thanks to Docker’s layer caching.
At this point your application is probably listening on some internal port (like 3030) inside the container, mapped to the same port on the host, but that doesn’t guarantee it’s reachable from the internet. You still need to make sure that any firewalls, security groups or network policies in front of the server allow inbound traffic on that port, or that you route the requests through an HTTP reverse proxy on port 80/443 such as Nginx.
Networking, ports and firewalls in remote Docker deployments
Properly exposing your containerized app to users without accidentally opening every port on your VPS is one of the key operational skills you’ll need. By default, Docker’s port mapping feature lets you bind a container’s internal port to any port on the host, with rules like -p 3030:3030 or the equivalent in Compose.
For simple setups you might map a container directly to a public port, but in most production setups you’ll want an HTTP reverse proxy (like Nginx) or a panel such as Plesk or a hosting firewall to be the entry point. In that model, your app listens only on localhost or an internal Docker network, and the reverse proxy forwards traffic from standard ports 80/443 to the container’s port.
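A minimal Nginx server block for that model might look like this; the domain and upstream port are placeholders:

```nginx
# /etc/nginx/sites-available/myapp (hypothetical domain and port)
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:3030;   # container bound to localhost only
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```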
When you configure port mappings manually, you can often choose whether the port should be accessible only from the local host interface or from all network interfaces. Binding to 127.0.0.1:PORT means the port is not reachable from the public internet, which is safer for admin tools or internal services; binding to 0.0.0.0:PORT makes it accessible from outside (subject to firewall rules).
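In Compose, that choice is expressed directly in the port mapping, for example:

```yaml
services:
  express:
    ports:
      - "127.0.0.1:3030:3030"   # reachable only from the host, e.g. behind Nginx
      # - "3030:3030"           # shorthand binds 0.0.0.0: reachable from outside
```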
On top of Docker’s port bindings, your cloud provider or hosting platform may enforce its own firewall rules, which you have to configure separately. For instance, if your application listens on port 3030 and you don’t use a reverse proxy, you’ll need to explicitly open 3030 on the VPS firewall or in your cloud network security configuration before http://SERVER_IP:3030 becomes reachable.
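On an Ubuntu VPS with ufw, that could be as simple as the sketch below. One caveat worth knowing: Docker writes its own iptables rules for published ports, which can bypass ufw entirely, so binding sensitive ports to 127.0.0.1 remains the more reliable safeguard:

```bash
# Only needed if you expose the app port directly instead of proxying via 80/443
sudo ufw allow 3030/tcp
sudo ufw status   # confirm the rule is active
```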
For remote admin tools such as Portainer, which often run on ports like 9000 or 8000, it’s especially important to think through firewall exposure and maybe restrict access via IP allowlists or VPN, since these dashboards usually give full control over Docker on that host.
Using Plesk to manage local and remote Docker services
If you’re hosting sites on Plesk, you can integrate Docker directly into that environment, both on the same server and on remote Docker hosts. Plesk supports Docker on a range of Linux distributions, including CentOS 7, RHEL 7, Debian 10-12, and Ubuntu 18.04, 20.04, 22.04 and 24.04, as well as AlmaLinux and Rocky Linux 8.x/9.x, plus certain Virtuozzo versions.
On Plesk for Windows, you cannot run Docker directly on the same machine, but you can connect to Docker that’s installed on a separate remote host. This is handy if you’re locked into Windows for the panel but still want the flexibility of Linux containers on another node that Plesk controls via Docker’s remote API.
Be aware that when Plesk itself is running inside a Docker container, the Docker integration features are not available. Docker support also requires an additional Plesk license or pack, such as Hosting Pack, Power Pack or Developer Pack, and it only works on 64-bit (x64) systems.
Plesk’s Docker extension makes it very easy to search for images both in your local Docker repository and on Docker Hub. You can find images using a search box, see which versions (tags) are available, and choose exactly which tag you want to run, which is important for reproducible deployments and avoiding “latest” surprises.
Running and configuring containers through Plesk
When you want to spin up a container from Plesk, you typically head to the Docker section, choose “Containers” and then “Run container”. There you search for an image, pick the desired tag, and Plesk handles pulling the image and creating the container for you.
Once the container is created, Plesk gives you a configuration screen where you can tweak key runtime options before hitting the final “Run” button. That’s where you set up environment variables, port mappings, volumes, memory limits and restart behavior, much like you would in a Compose file but using a GUI instead of YAML.
By default, containers have no memory limit and are allowed to use as much RAM as the host will give them, which might be fine for small setups but dangerous at scale. In Plesk you can flip a “Memory limit” option and specify a maximum in megabytes, preventing a runaway container from starving the whole server of resources.
Another important Plesk option is whether a container should automatically restart when the system reboots. If you don’t enable auto-start, sites that depend on that container may stay down after a reboot until someone logs in and manually restarts the container, so for production workloads it’s common to keep auto-start enabled.
Plesk also makes it easy to revisit container configuration later: from the containers list you can open logs, check resource usage, rename containers, recreate them, save them as new images, download snapshots and delete them when they’re no longer needed. These actions mirror what you’d typically do from the command line with Docker and Compose, but they’re wrapped in a panel that might be more approachable for teams who aren’t comfortable with SSH.
Ports, volumes and environment variables in Plesk-managed containers
When Plesk creates a container, it can automatically map internal ports to random high ports on the host, or you can override that and set up manual port mappings. Automatic mapping is convenient for quick experiments, but for production you usually want predictable, manual mappings or a reverse proxy in front.
When you map ports manually, Plesk lets you decide whether each port is bound only to the host’s local interface, making it reachable just from the server itself, or exposed on all interfaces and thus accessible from the internet; getting this right is crucial for security-sensitive services.
Volumes in Plesk are configured by specifying an absolute path on the host and a target path inside the container. This works just like standard Docker volumes: data stored in these mount points survives container recreation or removal, making them ideal for databases, uploaded files, cache directories and any persistent state your application needs.
Environment variables are configured through a dedicated section where you can add, edit or remove as many variables as your containerized app requires. This provides a straightforward way to inject configuration values, secrets (though for real secrets you might still prefer external stores) or flags without baking them into your images.
Behind the scenes, Plesk typically implements HTTP proxy rules in the web server config (for example in Nginx’s nginx.conf for a specific domain) to forward traffic from a domain to one of your containers. This proxying usually works fine even if the server is sitting behind a NAT, as long as the external firewall and port forwarding are set correctly.
Managing remote Docker nodes from Plesk
One of the more advanced capabilities in Plesk’s Docker integration is its support for managing remote Docker engines, not just the one on the same server as Plesk. This is useful when you want Plesk to stay as a control plane while the heavy workloads run on other nodes.
To set this up, you first have to configure the remote host’s Docker daemon to listen securely for remote connections, usually by editing /etc/docker/daemon.json and enabling TLS with certificate files in .pem format. You generate the certificates, configure the daemon to use them and restart Docker so it starts listening on a TCP port in addition to the usual local Unix socket.
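A sketch of such a daemon.json is shown below; the certificate paths are illustrative, and 2376 is the conventional port for TLS-protected Docker API traffic:

```json
{
  "tls": true,
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem",
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"]
}
```

On systemd-based distributions, note that the hosts key clashes with the -H flag baked into the default docker.service unit, so you may need a small systemd override that clears that flag before the daemon will start.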
Once the remote daemon is configured, you save the certificate outputs on your local machine so that your Docker client or Plesk can authenticate against that host. With those files ready, you go into Plesk, open the Docker “Environments” section, and add a new server using the remote host’s address and TLS credentials, marking it as active if you want Plesk to use it right away.
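Before wiring the host into Plesk, it's worth verifying the TLS connection from any machine holding the client certificates, using the flags Docker's CLI provides for exactly this:

```bash
# Replace REMOTE_HOST with your server's address; cert file names may differ
docker --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=cert.pem \
  --tlskey=key.pem \
  -H=tcp://REMOTE_HOST:2376 version
```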
After you’ve added several Docker environments, you can switch between them from the same Plesk interface, with one environment acting as the active Docker service at any given time. This lets you centralize management of multiple Docker nodes while still keeping their workloads physically separated.
Plesk also offers an images view for each Docker environment where you can list all local images, inspect which tags are present and how much disk space they consume, and remove unused images to reclaim storage. This is particularly handy if your CI pipeline is frequently building new images and leaving old ones behind on your remote nodes.
Deploying Docker Compose stacks from Plesk
Beyond managing single containers, Plesk can also deploy full multi-container stacks described by Docker Compose files. This bridges the gap between “clickable GUI” and “infrastructure as code”, letting you keep Compose files in version control while still having a panel to control deployments.
To do this, you go to the Docker “Stacks” section in Plesk and choose to add a new stack, giving it a project name and selecting how you want to provide the Compose file. Your options usually include typing or pasting the file directly in an editor, uploading a local YAML file, or pointing to a Compose file that already lives in the web space of a particular domain.
When you deploy a Compose stack via Plesk, you can declare and create fully custom containers, and any build artifacts produced during the process are stored under the website’s root directory. This keeps your deployment assets close to your site files and makes it easier to inspect what’s been generated.
While Plesk abstracts much of the complexity, the underlying Compose file still has to respect Docker’s format and semantics. For detailed options such as networks, build args, health checks and advanced logging, it’s still worth reading the official Docker Compose specification and using Plesk mostly as a friendly front-end on top of it.
If you prefer a more dedicated UI for Docker management, you can complement or even replace Plesk’s Docker tools with Portainer, which is essentially a web-based dashboard for Docker and Docker Compose that also supports remote nodes via the Docker API or an agent.
Working with remote containers from Docker Desktop and VS Code
If you’re on Windows, Docker Desktop combined with WSL 2 and VS Code gives you a very polished workflow for building and running Linux containers locally, which indirectly helps with remote deployment as well. Docker Desktop uses a lightweight Linux VM through WSL 2, so you can build and test the same Linux containers on your laptop that you’ll push to a remote VPS later.
After installing WSL 2 and Docker Desktop, you enable the WSL 2 backend in Docker’s settings, choose which WSL distributions should integrate with Docker, and then verify the installation by running docker --version and a test container like docker run hello-world from a WSL terminal. This confirms that your development environment is using the same Docker CLI and engine across Windows and Linux contexts.
VS Code’s WSL extension lets you open a project that lives inside your WSL filesystem and work with it as if it were local, hiding away cross-OS path and binary compatibility headaches. On top of that, the Dev Containers extension allows you to open the project itself “inside” a container, effectively turning the container into your development environment.
In practice, the usual flow is: clone your project inside WSL, open it with VS Code via the WSL extension, then use the “Reopen in Container” command from the Dev Containers extension. VS Code will create a .devcontainer folder with a Dockerfile and a devcontainer.json, build that image, and then re-open your project in a dev container that has all the tools you need.
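A minimal devcontainer.json for the Node example from earlier might look like this; the name, forwarded port and post-create command are assumptions:

```jsonc
// .devcontainer/devcontainer.json
{
  "name": "node-app",
  "build": {
    "dockerfile": "Dockerfile",   // lives next to this file
    "context": ".."               // project root as the build context
  },
  "forwardPorts": [3030],
  "postCreateCommand": "npm install"
}
```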
From there you can run and debug your app directly inside the container using VS Code’s Run and Debug panel, which will launch your server (for example a Django or Node app) and let you hit it on http://127.0.0.1:PORT from your browser. This gives you high confidence that the container you deploy to your remote VPS will behave the same way as the one you’re using for development.
Troubleshooting common Docker Desktop and WSL issues
Sometimes when migrating from older Docker for Windows previews, you might run into an obsolete Docker context named “wsl” that points to a non-existent pipe. Commands like docker context ls will reveal it, and error messages might mention docker_wsl pipes that can’t be found.
The fix is usually as simple as removing that outdated context with docker context rm wsl so that the default Docker context is used for both Windows and WSL. After that, interactions with Docker from inside WSL distributions should again go through the proper Docker Desktop integration.
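In practice that's just two commands:

```bash
docker context ls        # look for a stale context named "wsl"
docker context rm wsl    # remove it; the "default" context takes over
```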
Another frequent confusion is where Docker Desktop actually stores its WSL-backed data, including images and volumes. Docker creates distributions named something like docker-desktop and docker-desktop-data, which you can browse through Windows Explorer by running explorer.exe . from a WSL prompt or entering \\wsl$ in the address bar and drilling into the docker-desktop-data distribution.
Understanding these storage locations helps when you’re diagnosing disk space issues or trying to back up or prune large images and volumes that your local development work has accumulated. For day-to-day development and deployment, you don’t usually need to touch these directories directly, but it’s good to know where they live.
If you hit general WSL-related errors, consulting the official WSL troubleshooting guides and Docker’s own docs on WSL integration is often the fastest route to a fix. Many known issues, especially around version mismatches and virtualization settings, have well-documented resolutions.
Putting all these pieces together – Compose on your VPS, optional GUIs like Plesk or Portainer, and a solid local setup with Docker Desktop and VS Code – gives you a flexible, repeatable workflow for developing, shipping and running containerized apps on remote servers without nasty surprises between environments.