Tag Archives: storage

What to do with a Raspberry Pi?

In my previous blog posts, I always talked about my setup but never showed a complete diagram. In the above image, you can see everything that I am running on my six-node Raspberry Pi cluster. All of the applications above are running in Docker containers, except for Gluster.

Continue reading What to do with a Raspberry Pi?


How to use Cloud Explorer with Minio

I recently updated Cloud Explorer to make it work with Minio. Minio is one of many open-source S3 servers available today for people to use on premises for their personal cloud storage needs. With the added support, Minio users can take advantage of Cloud Explorer's unique features such as performance testing, note taking, music playback, image viewing, and search.

Continue reading How to use Cloud Explorer with Minio


Thanks for the support in 2016

Most people will say that 2016 was a terrible year and that they can't wait for 2017. I agree that 2016 was not perfect for many people, but it was a great year for linux-toys. I made this blog with the goal of influencing people and igniting that creative spark that we all have inside. In this blog post, I will go over the website statistics and discuss a few of the blog entries that I thought were the most influential of the year.

Continue reading Thanks for the support in 2016

Using the Scality S3 Server in Homeduction

My previous post was about using Cloud Explorer with the Scality S3 server. After I published that post, I thought it would be informative to go one step further and explain how I use the S3 server in homeduction (applications run at home in production). My homeduction environment consists of four Raspberry Pis running Docker that power this WordPress blog and many other applications. My goal is to add an S3 server to store the images for this blog and anything else that I can come up with.

Continue reading Using the Scality S3 Server in Homeduction

Using LVM cache on Linux with a RAM disk

The Challenge

This is a follow-up to my article on using a USB drive as an LVM cache. I decided to test things further by using a RAM disk instead of a USB drive.


The Journey

1. Create a RAM disk:

modprobe brd rd_nr=1 rd_size=4585760 max_part=0  # rd_size is in KiB, so this is a ~4.4 GiB RAM disk
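To confirm the RAM disk showed up before layering LVM on top of it, something like the following should work (the device name matches the modprobe above):

# The brd module exposes the disk as /dev/ram0; size is reported in bytes
ls -l /dev/ram0
blockdev --getsize64 /dev/ram0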

2. Create the cache:

pvcreate /dev/ram0                           # turn the RAM disk into an LVM physical volume
vgextend vg /dev/ram0                        # add it to the existing volume group
lvcreate -L 300M -n cache_meta vg /dev/ram0  # metadata volume for the cache pool
lvcreate -L 4G -n cache_vol vg /dev/ram0     # data volume for the cache pool
lvconvert --type cache-pool --poolmetadata vg/cache_meta --cachemode=writeback vg/cache_vol -y
lvconvert --type cache --cachepool vg/cache_vol vg/docker-pool
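To verify that the cache pool is attached before benchmarking, lvs can report the segment type and backing devices (the volume names match the commands above):

# -a also lists the hidden cache-pool components
lvs -a -o name,size,segtype,devices vg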

3. Run the dd test again:

[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.89586 s, 553 MB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.79864 s, 583 MB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 0.922467 s, 1.1 GB/s
[root@tokyo /]# dd if=/dev/zero of=/tmp/1G bs=1M count=1000
1048576000 bytes (1.0 GB) copied, 1.33757 s, 784 MB/s

Average Speed: 736 MB/s
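One caveat worth noting: without a direct I/O flag, dd partly measures the Linux page cache rather than the LVM cache. A variant like this bypasses the page cache and gives a more conservative number:

# oflag=direct skips the page cache; conv=fdatasync would instead flush once at the end
dd if=/dev/zero of=/tmp/1G bs=1M count=1000 oflag=direct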


Conclusion

In conclusion, my average write speed using LVM caching with a RAM disk is 736 MB/s. With a USB thumb drive, my average speed was 411.25 MB/s, and with no cache at all it was 256 MB/s.



Betting the farm on Docker

The Challenge

I wanted to try out Docker in production to really understand it. I believe that to fully understand or master something, you must make it part of your life. Containers are the next buzzword in IT, and future employment opportunities are likely to prefer candidates with container experience. My current home environment consists of a single server running OpenVPN, email, Plex Media Server, Nginx, a torrent client, an IRC bouncer, and a Samba server. The challenge is to move each of these to Docker for easier deployment.

The Journey

I noticed that many Docker builds add multiple configuration files to the container upon creation. I decided to take a simpler approach and have the Dockerfile modify most of the configuration files with the sed utility and also generate the startup script. I think that in most cases, having a single file to keep track of and build your system from is much easier than editing multiple files. In short, the Dockerfile should be the focal point of the container creation process for simpler administration.
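As a minimal sketch of that approach (the Samba example, package name, and sed edit here are illustrative, not my exact files):

FROM centos:7
RUN yum -y install samba && \
    # Edit the stock config in place instead of shipping a prebuilt smb.conf
    sed -i 's/workgroup = .*/workgroup = HOME/' /etc/samba/smb.conf && \
    # Generate the startup script from within the Dockerfile
    printf '#!/bin/sh\nexec smbd -F --no-process-group\n' > /start.sh && \
    chmod +x /start.sh
CMD ["/start.sh"]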

Sometimes it is easier to upload prebuilt configuration files and additional support files for an application. For example, the Nginx configuration file is simpler to edit manually and have Docker put in the correct location upon creation. Finally, there is no way around copying existing SSL certificates for Nginx into the image from the Dockerfile.
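For that style, the relevant Dockerfile lines are just COPY instructions (the paths below are assumptions, not my actual layout):

FROM nginx
# Ship the hand-edited config and the existing certificates into the image
COPY nginx.conf /etc/nginx/nginx.conf
COPY ssl/ /etc/nginx/ssl/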

Creating an email server was the most difficult container to build because I had to modify configuration files for many components, such as SASL, Postfix, and Dovecot, as well as create the mail users and set up the aliases for the system. Also, Docker needed to be cleaned up often because it consumes large amounts of disk space for each container made. Dockerfiles with many commands took a very long time to execute, so testing out a few changes in a Dockerfile meant a long rebuild. Multiply that by the many containers I made, and the many typos, and you can see how my weekend disappeared.
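For the cleanup, a couple of commands go a long way (newer Docker releases also bundle this into docker system prune):

# Remove stopped containers, then dangling (untagged) intermediate images
docker rm $(docker ps -aq -f status=exited)
docker rmi $(docker images -q -f dangling=true)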

After my home production environment moved to Docker and was working well, I created a Bitbucket account to store all of my Dockerfiles and MySQL backups. There is a cron job inside the container that does a database dump to a shared external folder. If my system ever dies, I can set up the new system easily by cloning the Git repository and building the containers with a single command.
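The cron entry is essentially one line (the user, password, database name, and mount point below are placeholders):

# Nightly dump at 03:00 to a folder bind-mounted from the host; % must be escaped in crontab
0 3 * * * mysqldump -u backup -pPASSWORD wordpress > /backups/wordpress-$(date +\%F).sql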

Conclusion

In conclusion, Docker was hard to deploy initially, but it will save time in the future when disasters such as a system failure happen. A Dockerfile is basically a well-documented blueprint of a perfect application or environment. For continuity purposes in organizations, Dockerfiles can be shared, and administrators should easily be able to understand what needs to be done. Even if you do not like Docker, you can basically copy and paste the contents of a Dockerfile, with a little text removal, and build the perfect system.