How to use Cloud Explorer with Scality S3 server

I spent a few weeks searching for an open-source S3 server that I could run at home for testing. I first came across Minio, an open-source S3 server, but I could not get it to work with Cloud Explorer because it had issues resolving bucket names via DNS, which is a requirement when using the AWS SDK. I then read an article about Scality releasing an open-source S3 server that you can run inside a Docker container. I was able to get Scality up and running quickly with little effort. In this post, I will explain how I got the Scality S3 server set up and how to use it with Cloud Explorer.
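
To give a rough idea of the setup (a minimal sketch assuming the scality/s3server image on Docker Hub and its default port of 8000; check the image's documentation for the current name and default access keys):

# run the Scality S3 server in a container and expose the S3 API on port 8000
docker run -d --name s3server -p 8000:8000 scality/s3server

Cloud Explorer can then be pointed at that host and port as its S3 endpoint.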

Continue reading How to use Cloud Explorer with Scality S3 server

amazon, amazon s3, cloud explorer, docker, linux, open source, s3, scality

Running WordPress on Kubernetes

I recently started to check out Kubernetes and wanted to share with everyone how I got WordPress running on it as a three-tier application. I made the decision to learn Kubernetes because Docker Swarm was not working well for me. To start, I downloaded and installed Minikube on my laptop.

I then created three Docker images and pushed them to my Docker Hub registry.



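The specific images are covered in the full post, but as a rough sketch (the user and repository names below are hypothetical placeholders), building and pushing an image to Docker Hub looks like this:

# log in to Docker Hub, then build and push an image (names are placeholders)
docker login
docker build -t mydockerhubuser/wordpress-web .
docker push mydockerhubuser/wordpress-web
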
Continue reading Running WordPress on Kubernetes

cloud explorer, containers, docker, kubernetes

Forking Docker will lead to more fragmentation

If you have been keeping up with Docker lately, you may have come across my blog post about the sad state of Docker. In that post, I go over how the 1.12 release appeared interesting from all the marketing announcements and the constant copying and pasting of the same Docker content into blogs all over the world. However, many others and I expressed our opinions on Hacker News about how Docker failed to deliver a quality release. The New Stack then summarized the weekend's discussions in a blog post of its own and suggested that a fork of Docker may arise. Is a fork really the best answer? Let's take a look.

The nice thing about open source software is that anyone can take the software and modify it as needed or even create their own version of the software for redistribution. Software repositories like GitHub make it really easy for developers to fork a project and begin making their own changes and improvements. A recent example was the fork of OwnCloud into NextCloud. My problem with forking is that it leads to fragmentation. I personally like one or two ways of doing something well versus many different ways to partially achieve the same goal.

The container space is already growing rapidly in terms of building and orchestration. The biggest container format is of course Docker's, and Docker has Swarm for container orchestration. CoreOS also has its own container runtime called Rocket, which is starting to gain traction and uses Kubernetes for orchestration. There are many other companies sprouting up in the container management area with their own unique solutions. However, Kubernetes appears to be becoming the standard orchestration layer that many products now use. To help standardize containers, the Open Container Initiative (OCI) was formed to help define how containers work.

The OCI was created by members of CoreOS, Red Hat, Docker, and a few others on June 22, 2015, and gained support from companies like Apcera, Apprenda, and many more. Collaboration between companies for the greater good is terrific, and we need more of it. Docker made strides toward an OCI-compliant runtime in its v1.11 release. Progress is being made to standardize this space, but it takes time. Instead of forking Docker, the community should continue to raise its concerns constructively and wait a little longer for change to happen.

Creating more fragmentation will be counterproductive because people's attention will be split among projects. How will companies new to containers and microservices ever learn to adopt this great new way of doing things if they can never decide on what to use? Anyone can fork Docker, but we need to ask ourselves whether another container solution is really needed when we already have many to choose from. If the answer is yes, we must ask ourselves: who will maintain it? How long can the fork last? How much time will be wasted? Do the forkers have enough resources to make a quality project? How will they make their product secure and address vulnerabilities?

How about instead we stay positive and keep containing?

containers, docker, open source

The Sad State of Docker

I have always been a big fan of Docker. This is very visible if you regularly read this blog. However, I am very disappointed in how Docker handled the 1.12 release. I like to think of version 1.12 as a great proof of concept that should not have received the amount of attention it did. Let's dive deep into what I found wrong.

First, I do not think a company should market and promote exciting new features that have not been well tested. Every time Docker makes an announcement, the news spreads like a virus to blogs and news sites all over the globe. Tech blogs basically copy and paste the exact same procedure that Docker discussed into a new blog post as if they were creating original content. This cycle repeats over and over and becomes annoying because I am seeing the same story a million times. What I hate most about these redundant articles is that the features do not work as well as advertised.

I was really excited to hear about the new Swarm mode feature and wanted it to work as described, because that would mean I could one day easily make a Swarm cluster with my four Raspberry Pi's and have container orchestration, load balancing, automatic failover, multi-host networking, and mesh networking without any effort. Swarm in v1.12 is much easier to set up than its predecessor, and I wanted to put it in production at home (homeduction). To test Swarm, I set up a few virtual machines using docker-machine on my laptop, went through the Swarm creation process, and then began to run into issues when deploying my applications.

An important feature to have in a Swarm cluster is multi-host networking for containers. This allows containers to talk to each other on a virtual network spanning many hosts running the Docker engine, which matters when, for example, a web application container needs to connect to another container running MySQL. The problem I faced is that none of my containers could communicate across hosts. When it did work, the mesh networking would sometimes not route traffic properly to the host running my container. This meant none of my applications worked properly. I went to the Docker forums, and many people shared my pain.

It is not wise to explode the Internet and conventions with marketing material about exciting new features that do not work as presented. There are still many bugs in Swarm that need to be fixed before releasing it to the general public and having them beta test for you. What is the rush to release? Will it hurt that much to wait a few more weeks or months to do it right and have the product properly working and tested? Yes, we all know Docker is awesome and is trying to play catch-up with competitors such as Apcera and Kubernetes, but please take it slow and make Docker great again!

[Edit 8/31/2016]

Tweaked paragraphs to make it clearer that my testing was not done on the Raspberry Pis but with docker-machine on a laptop.

containers, docker

Moving from a single machine with Docker to a cluster of Pi’s

I decided to finally make use of my four Raspberry Pi model 3's and take on the challenge of moving all of my home services to them. Previously, I ran an x86 desktop as a server in my living room. The loud noise coming from the server sometimes made the room uncomfortable to be in. That noisy box is home to this website and many other applications such as Plex, Transmission, OpenVPN, Jenkins, Samba, and various Node.js projects, all running in Docker. Having all of those applications on a single box is a single point of failure and makes system administration harder when reboots are required.

Continue reading Moving from a single machine with Docker to a cluster of Pi’s

docker, raspberry pi

Running an SSH server in a container on Apcera

SSH is the Swiss Army knife of system administration and provides the easiest way to manage a system remotely. When running containers, there is typically some way to connect to a container's shell, either from a client that communicates through an API, as Docker does, or by using an SSH solution, which is how Apcera does it. Some applications that run in containers may require SSH access to communicate with other containers or services. For example, Hadoop is a popular cluster application that uses a distributed filesystem spread across many nodes that communicate with each other via SSH. Let's take a look at how to set up an SSH server running inside a capsule (a minimal OS container) on the Apcera Platform.
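
As a rough sketch of the in-capsule portion only (assuming an Ubuntu-based capsule with shell access; the Apcera-specific steps for creating the capsule and exposing port 22 are covered in the full post):

# inside the capsule, install and start the OpenSSH server
apt-get update && apt-get install -y openssh-server
service ssh start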

Continue reading Running an SSH server in a container on Apcera

apcera, containers, PaaS

Examining the Apcera Cloud Platform

I would like to take a break from my usual Docker blog posts and discuss the Apcera Cloud Platform. The Apcera Cloud Platform runs containerized workloads, such as Docker images or applications built from source code, in a clustered environment. For the past several weeks, I have been playing with Docker Swarm and researching how to put this blog into production on it. The migration has been very difficult because Swarm requires a lot of handholding and lacks the failover automation that I need. I began researching the Apcera Platform and tried out the community edition that users can try for free. The areas I focused on for my needs were ease of use and workload portability.

Continue reading Examining the Apcera Cloud Platform

apcera, containers, PaaS

Getting started with the many ways to Docker

This is a follow-up on how to use Docker after building a Swarm cluster. I think it is important for people to understand the different ways to create containers and choose the best one for their needs. This blog post will explain docker-compose, the Docker Engine, and how to do persistent storage.
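
As a quick taste of the persistent storage portion (a minimal sketch; the volume and container names here are just examples):

# create a named volume and mount it into a container so data survives container removal
docker volume create --name web_data
docker run -d --name web -v web_data:/usr/share/nginx/html -p 8080:80 nginx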

Continue reading Getting started with the many ways to Docker

containers, docker

Cloud Explorer is back with v7.2

Introducing Cloud Explorer 7.2!

Cloud Explorer is an open-source Amazon S3 client that works on any operating system. The program features both a graphical and a command-line interface. Today I released version 7.2 and hope that you give it a test drive. Feedback and use cases are always encouraged.


What’s new in this release?

To start, this release of Cloud Explorer was compiled with Java 1.8.0_72 and version 1.10.56 of the Amazon S3 SDK for Java. The major improvements in this release concern file synchronization, which was mostly rewritten. The effort helped reduce technical debt and improved consistency between the command-line and graphical versions of Cloud Explorer.


How do I get it?

Cloud Explorer v7.2 is available under the “Downloads” section of the project on GitHub. Simply click on “cloudExplorer-7.2.zip” and the download will begin. When the download is finished, extract the zip file and double-click on “CloudExplorer.jar”.
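
If you prefer the terminal, the same jar can also be launched from the command line:

# launch Cloud Explorer from a terminal
java -jar CloudExplorer.jar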


Where do we go from here?

I know it has been a while since Cloud Explorer has been touched. It is hard to handle a project all by yourself and keep innovating. I feel that with this release, Cloud Explorer has reached a stable point. I am always looking for new ideas and help from the community. If you are interested in contributing, please contact me or open an issue on the project's GitHub page.


amazon s3, cloud, cloud explorer, java

Using Docker Swarm in Production

[Introduction]

I have always been fascinated with Docker Swarm and how it can cluster multiple computers together to run containers. I mainly used Swarm via docker-machine with the VirtualBox provider for testing, and I felt it was time to try running it in production. This blog post will explain how to create a simple Swarm cluster and secure it with a firewall. Docker officially recommends that you enable TLS on each node, but I wanted to keep it simpler and use firewall rules to prevent unauthorized access.

[Setup]

Docker v1.10 has been installed on each of these machines running Ubuntu 15.10:

node_0 – The Swarm Master.
node_1 – A Swarm node.
node_2 – Another Swarm node.

[Installation]

1. Set up each node so that Docker listens on its own host IP address and does not manage the iptables firewall rules:

First, stop the Docker daemon so we can make configuration changes:

systemctl stop docker

Edit /etc/default/docker. Uncomment the DOCKER_OPTS line if needed and modify it as follows, substituting each node's own IP address:

DOCKER_OPTS="-H tcp://node_0_ip:2375 --iptables=false"

Start the Docker daemon again:

systemctl start docker

(Repeat this process for all the nodes)
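
As an optional sanity check (not part of the original write-up), confirm that each engine is reachable on its TCP socket before continuing, repeating for node_1_ip and node_2_ip:

docker -H tcp://node_0_ip:2375 info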

2. On the Swarm Master node, create a cluster token. Each Swarm client will need the token to form a cluster. The output of this command will be a long token that you will need in the next steps.

docker run swarm create

3. On the Swarm Master node, create a Swarm Manager using the token from step 2. The Swarm manager will listen on port 5000.

docker run -d -p 5000:2375 -t swarm manage token://6b11f566db288878e16e56f37c58599f

4. Run the following commands from the master node to join each node to the cluster using the token from step 2.

docker run -d swarm join --addr=node_0_ip:2375 token://6b11f566db288878e16e56f37c58599f
docker run -d swarm join --addr=node_1_ip:2375 token://6b11f566db288878e16e56f37c58599f
docker run -d swarm join --addr=node_2_ip:2375 token://6b11f566db288878e16e56f37c58599f

5. Since the Swarm manager is running on port 5000 on node_0, we need to point the Docker client (for example, on a laptop) at that host and port to use the cluster. The following command will show the status of the Swarm cluster.

docker -H tcp://node_0_ip:5000 ps
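
Any regular Docker command can be pointed at the manager the same way. For example, launching a test container (nginx here is just an illustrative image) and then listing it:

docker -H tcp://node_0_ip:5000 run -d --name web nginx
docker -H tcp://node_0_ip:5000 ps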

[Securing]

6. Finally, we need to secure the Swarm cluster with firewall rules so that only the nodes in the cluster can talk to each Docker engine. The following rules deny all incoming traffic by default and only allow SSH, the Swarm manager port on node_0, and Docker access from the other nodes.

Node_0:

ufw allow 22
ufw allow 5000
ufw default deny incoming
ufw allow from node_1_ip
ufw allow from node_2_ip
ufw enable

Node_1:

ufw allow 22
ufw default deny incoming
ufw allow from node_0_ip
ufw allow from node_2_ip
ufw enable

Node_2:

ufw allow 22
ufw default deny incoming
ufw allow from node_0_ip
ufw allow from node_1_ip
ufw enable

[Conclusion]

Now you should have a three-node Docker Swarm cluster that is locked down. If you need to expose an external port for a container, the firewall rules will need to be adjusted manually.
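
For example, if a container publishes a web port on one of the nodes, that port would need to be opened on that node:

# open port 80 on the node that ends up running the web container
ufw allow 80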


containers, docker
