Docker automates the deployment of applications inside software containers. It exploits the resource-isolation features of the Linux kernel to let independent "containers" coexist on the same system instance, avoiding the need to install a virtual machine.
First came the Operating System. It allowed software to share hardware. Despite the best efforts of designers, OSes did not sufficiently isolate users, applications and processes from each other. Trying to drop a new application into an OS was a total crapshoot, since the state of the OS could be anything, depending on what was installed or what hardware lived underneath.
Enter hardware virtualization. Using a hypervisor, the physical hardware could be presented as an abstraction on which you could install multiple independent OSes on the same machine. You could now drop a pre-configured OS image on top of a hypervisor without any worry. This process was pretty resource-intensive, despite the tons of cool tricks to share memory and resources.
Docker acts as a lightweight virtualization platform that gives you the benefit of OS-level virtualization without the overhead of running multiple, parallel OSes. It lets you drop ready-to-run "containers" into any environment that supports Docker. Docker provides isolation at the filesystem, network, and process level, and it makes application deployment and scaling predictable.
We can also set CPU and RAM limits (via cgroups) to make sure one container can't starve the entire host.
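As a sketch, those limits can be applied per container with flags such as `--memory` and `--cpus`; the values and container name here are arbitrary examples:

```shell
# Cap the container at 256 MB of RAM and half a CPU core (cgroup limits).
docker run -d --name limited-app --memory=256m --cpus=0.5 php:apache
```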
By the early 2000s, using APT, people wrote shell scripts to deploy packages from the internet in a repeatable way, on many machines. The shell script essentially became the documentation of how, precisely, to install packages and prepare a system for application code: code as specification.
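A minimal sketch of the idea: the script itself is the specification. `install_pkg` is a hypothetical stand-in for a real `apt-get install` call:

```shell
#!/bin/sh
# install_pkg is a placeholder; a real provisioning script would run
# `apt-get install -y "$1"` here.
install_pkg() {
    echo "installing $1"
}

# The package list doubles as documentation of what the machine needs.
for pkg in apache2 php libapache2-mod-php; do
    install_pkg "$pkg"
done
```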
By the mid-2000s Amazon had released AWS, which permitted writing software that automatically built the infrastructure you needed on AWS: Infrastructure as Code. By the late 2000s the DevOps movement had embraced this workflow with Chef and Puppet, both known as configuration management tools.
When I commit code, the build system runs the Dockerfile, which is essentially a script, and produces a single artifact: the image containing the system dependencies along with my application code. That image is later deployed to a production system.
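A hypothetical CI step along those lines; the registry name and the tag variable are placeholders, not part of any real pipeline:

```shell
# Build the image from the Dockerfile in the repo root, then push the
# resulting artifact to a registry for later deployment.
docker build -t registry.example.com/myapp:"${GIT_COMMIT:-dev}" .
docker push registry.example.com/myapp:"${GIT_COMMIT:-dev}"
```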
FROM php:8.0-apache
WORKDIR /var/www/html
COPY index.php index.php
COPY src/ src
EXPOSE 80
This Dockerfile takes index.php and src from our working directory and copies them into the Apache document root. You could now build the image and start a container from it. You’d see your site being served by Apache.
The PHP Docker images have the Apache document root at the default Debian location of /var/www/html.
The WORKDIR instruction in the Dockerfile means subsequent commands will be executed within the document root.
docker build -t my-php-site:latest .
docker run -d -p 8081:80 my-php-site:latest
Browse to http://localhost:8081 to test your script.
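You can also smoke-test from the terminal, assuming the container from the previous step is running:

```shell
# Fetch the page and print only the HTTP status; 200 means Apache is serving it.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8081/
```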
Build a docker-compose.yml file for your project, specifying which containers to create and how they communicate with one another.
After installing Docker on your machine, you can start a web server with one command. The following will download a fully functional Apache installation with the latest PHP version and map /path/to/your/php/files to the document root, which you can view at http://localhost:8080:
docker run -d --name my-php-webserver -p 8080:80 -v /path/to/your/php/files:/var/www/html/ php:apache
This will initialize and launch your container. The -d flag makes it run in the background. To stop and start it:
docker stop my-php-webserver
docker start my-php-webserver
The other parameters are not needed again.
Here is an example docker-compose.yml file for a PHP application using MySQL and phpMyAdmin:
version: '3'
services:
  web:
    build: .
    ports:
      - 80:80
    volumes:
      - .:/var/www/html
    depends_on:
      - db
  db:
    image: mysql:5.7
    volumes:
      - db-data:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: mydatabase
      MYSQL_USER: user
      MYSQL_PASSWORD: password
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - 8080:80
    environment:
      PMA_HOST: db
      PMA_USER: root
      PMA_PASSWORD: password
volumes:
  db-data:
In this example, we have defined three services: web, db, and phpmyadmin. The web service is built from the current directory and exposes port 80 on the host machine. The db service is based on the MySQL image and creates a volume for the MySQL data. The phpmyadmin service is based on the PHPMyAdmin image and exposes port 8080 on the host machine.
To start these services:
docker-compose up
Access your PHP application at http://localhost, and PHPMyAdmin at http://localhost:8080.
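To run the stack in the background and tear it down when you're done (note that `down` keeps the named `db-data` volume unless you also pass `-v`):

```shell
docker-compose up -d     # start all services detached
docker-compose down      # stop and remove containers and the default network
```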
FROM php:7
RUN apt-get update -y && apt-get install -y openssl zip unzip git
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN docker-php-ext-install pdo mbstring
WORKDIR /app
COPY . /app
RUN composer install
CMD php artisan serve --host=0.0.0.0 --port=8181
EXPOSE 8181
# create a Docker image
docker build -t my-image .
# a fully isolated image of our PHP environment
docker run -p 8181:8181 my-image
An app accessing multiple containers:
# mysql
docker run -d --name mysql dockerfile/mysql:latest
# rabbitmq
docker run -d --name rabbitmq dockerfile/rabbitmq:latest
# phantomjs - scriptable headless browser
docker run --name phantomjs -d -v `pwd`:/mnt/test --expose 25555 cmfatih/phantomjs:latest \
  /usr/bin/phantomjs /mnt/test/fetcher/phantomjs_fetcher.js 25555
# app
docker run -d -p 5000:5000 \
  --link mysql:mysql \
  --link rabbitmq:rabbitmq \
  --link phantomjs:phantomjs \
  --link scheduler:scheduler \
  githubusr/prjname:latest appname
Add your user to the docker group first, or else docker commands will fail with a permission error:
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create": dial unix /var/run/docker.sock: connect: permission denied.
sudo apt install docker-ce
sudo usermod -aG docker $(whoami)
# log out and back in (or run `newgrp docker`) so the group change takes effect
groups
docker run hello-world
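A quick way to confirm the daemon is reachable without sudo, assuming the new group membership is active:

```shell
# Succeeds if the Docker socket is accessible; fails with the
# permission-denied error above otherwise.
docker info >/dev/null && echo "docker is usable without sudo"
```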
docker run ubuntu /bin/echo 'Hello world'
docker run -i -t --rm ubuntu /bin/bash
docker run --name daemon -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
To list all containers:
docker ps -a
To see what a specific daemon container is doing right now:
docker logs -f daemon
Prints the log of the container named 'daemon'
To stop a daemon container:
docker stop {{container_name}}
To start a container:
docker start {{container_name}}
To remove a container:
docker rm {{container_name}}
To see downloaded images:
docker images
To get details about a running container:
docker inspect {{container_name}}
To run a container:
docker run --name some-app \
  --link some-mongo:mongo \
  -d application-that-uses-mongo
To build an image from a Dockerfile:
docker build -t {{image_name}} .
Remove all dangling volumes:
docker volume rm `docker volume ls -q -f dangling=true`
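On newer Docker versions (1.13+), the same cleanup is available as a prune subcommand:

```shell
# Removes all volumes not referenced by at least one container.
docker volume prune
```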
To copy files between host and container:
docker cp foo.txt mycontainer:/foo.txt
docker cp mycontainer:/foo.txt foo.txt
https://hub.docker.com/_/nginx/
docker run --name some-nginx \
  -v /some/content:/usr/share/nginx/html:ro \
  -d -p 8082:80 \
  nginx