Docker users, groups, permissions and running things

Running anything as root (anywhere) is bad practice and should be avoided.
Here is what we have been doing so far with our other Docker images:
build a base image, and add the user and set the UIDs/GIDs in that base image.

* Note: ADD Dockerfile /base.Dockerfile is a good way of tracking how the image was built, and you will always have the Dockerfile inside the image.
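A minimal base Dockerfile along these lines would do it (the image name portus-base and the user tango are from this setup; the Debian base image and the UID/GID of 1000 are my assumptions):

```dockerfile
# base.Dockerfile -- hypothetical sketch of the base image; tag it as portus-base.
FROM debian:stable-slim

# Create an unprivileged user with a fixed UID/GID, so file ownership
# is consistent across every container built from this base.
RUN groupadd --gid 1000 tango \
 && useradd --uid 1000 --gid 1000 --create-home tango

# Keep a copy of this Dockerfile inside the image for traceability.
ADD Dockerfile /base.Dockerfile
```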

Now every single Dockerfile you create should start with FROM portus-base, so every image will have the user tango.
You may still need root inside the image for some setup tasks.
Using gosu, you can change the user that the process runs as, so gosu lets you run the app as tango:
gosu tango nginx

You can also chown -R in your entrypoint script, because the $PROJECT_NAME variable is available in any container/image built from the new base.
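Putting the chown and the gosu step-down together, an entrypoint script could look like this sketch (assumes gosu is installed in the image, PROJECT_NAME is set by the base, and the /srv/$PROJECT_NAME volume path is made up for illustration):

```shell
#!/bin/sh
# entrypoint.sh -- hypothetical entrypoint for images built FROM portus-base.
set -e

# Fix ownership of the shared volume so the unprivileged user can write to it.
chown -R tango:tango "/srv/${PROJECT_NAME}"

# Drop root and exec the real command (e.g. "nginx") as tango.
exec gosu tango "$@"
```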


This way, all containers that share volumes will have sufficient permissions on the shared volumes.

Migrating Proxmox 3.4 to 4.0b2 with ZFS

I have just completed a migration from Proxmox 3.4 with ZFS on Linux installed to 4.0b2, and it was a fairly easy move if you prepare enough.
My ZFS pool is four local disks in a RAIDZ2.
My VMs, ISOs and backups all live on the ZFS pool tank.
The ZFS pool is mounted locally and was added to storage as the directory /tank.
My storage ID was zfstank. This is important!

Here is what you should do if you want a happy life:


  1. Shut down all VMs.
  2. Copy the contents of /etc/pve/nodes/{yourserver}/qemu-server to a safe place (these are the individual VM config files).
  3. Stop the cluster service: service pve-cluster stop
  4. Copy the contents of /var/lib/pve-cluster to a safe place.
  5. Stop all remaining PVE-related services from the terminal; this is to make sure we can export our ZFS pool.
  6. In the terminal, run zpool export tank.
  7. Shut down the server and disconnect your ZFS disks.
  8. Install Proxmox 4.0 on a new disk, keeping the server name the same.
  9. Shut down the server and reconnect your ZFS disks.
  10. Power on, and your ZFS pool should be magically imported under the same location (mine was /tank).
  11. If your pool is not magically imported, run zpool import. If you can see your ZFS storage, fantastic! Otherwise you will need to tinker some more.
  12. Add your storage using the Proxmox UI, and ensure that the path and ID are exactly the same as they were on your 3.4 Proxmox.
  13. Stop the cluster service: service pve-cluster stop
  14. Restore config.db and the other files you backed up to /var/lib/pve-cluster.
  15. Start the cluster service (service pve-cluster start) and you should have all your VMs available.
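The backup and export part of the steps above can be sketched as a short shell session (paths are from a default Proxmox install; the pool name tank is from my setup, and /root/pve-backup is a placeholder destination):

```shell
# Back up the per-VM configs and the cluster database before the reinstall.
mkdir -p /root/pve-backup
cp -a /etc/pve/nodes/$(hostname)/qemu-server /root/pve-backup/
service pve-cluster stop
cp -a /var/lib/pve-cluster /root/pve-backup/

# Once nothing holds the pool open, export it so the new install can import it.
zpool export tank
```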

Ensure that you back up your Proxmox configuration, including the database files.

IPMI tools, Linux tools to slow down fans on your supermicro server

I just found out that you can edit your server fan speed settings via IPMI tools on Linux.

So if you are keeping a server at home, this will make it quieter.

  1. install ipmitool
  2. save the current IPMI configs
  3. create a backup copy for fun
  4. make changes to the fan speeds
  5. commit the changes
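Those steps look roughly like this with ipmitool (the sensor name FAN1 and the RPM thresholds are illustrative, not Supermicro-specific recommendations; check your own board's sensor names and safe minimums first):

```shell
# 1. Install ipmitool (Debian/Ubuntu).
apt-get install ipmitool

# 2. + 3. Save the current sensor readings and thresholds, plus a backup copy.
ipmitool sensor list > sensors.txt
cp sensors.txt sensors.backup.txt

# 4. Lower the fan thresholds so the BMC stops spinning the fans up.
# Values are lower non-recoverable, lower critical, lower non-critical RPM.
ipmitool sensor thresh FAN1 lower 200 300 400
```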

NGINX dynamic fastcgi param based on host

Tech stack: Ubuntu/Debian, NGINX, PHP5-FPM (I will assume that you have these installed and that nginx is talking to php5-fpm).

Suppose you have a PHP application that needs to do different things based on a fastcgi_param variable, and you want to set that variable based on another variable in nginx (like the HTTP hostname).

The nginx http_map module is the answer, but it may not be immediately obvious how to use it.
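A minimal sketch of the idea (the variable name $site_env, the hostnames, and the socket path are made up for illustration; the map block must live at the http level of your nginx config):

```nginx
# Map the request's Host header to a custom value.
map $host $site_env {
    default         "production";
    dev.example.com "development";
}

server {
    listen 80;
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Expose the mapped value to PHP; read it via $_SERVER['SITE_ENV'].
        fastcgi_param SITE_ENV $site_env;
    }
}
```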



Simple Ansible playbook to install Puppet 3.6.2 on your Ubuntu 12.04 servers

This is a super simple way of installing a current version of Puppet on your Ubuntu servers.

You will need

All you need is SSH access with SSH keys, and Ansible installed. My thinking is that once you have SSH access, you should be able to do anything you please with the resources.

Why would you want to do this?

  • Puppet can be a pain to install (an up-to-date version, instead of the OS-packaged one)
  • You may not have a preseeded or kickstart ISO
  • You may have a previous, outdated installation and need to get compliant
  • With Ansible you will not need to repeat a series of steps, and you will know for sure that all the steps have been completed

Here I assume you have a recent version of Ansible installed with pip.

My Ansible hosts file looks like this (it can live in your home folder and can be called staging: ~/staging):
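The file contents did not survive in this copy; a minimal inventory along these lines would work (the group name proxmox-webservers is from this post, the hostnames are placeholders):

```ini
# ~/staging -- Ansible inventory file
[proxmox-webservers]
web1.example.com
web2.example.com
```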

My Ansible playbook looks like this; it should live next to the hosts file for convenience. I called this file puppet.yml:
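The playbook itself is also missing here; a sketch in the Ansible 1.x syntax of the era might look like this (the Puppet Labs release package URL is the historical one for Ubuntu 12.04 "precise"; the exact pinned package version string is an assumption):

```yaml
# puppet.yml -- hypothetical sketch: install Puppet 3.6.2 on Ubuntu 12.04.
- hosts: proxmox-webservers
  sudo: yes
  tasks:
    - name: Download the Puppet Labs apt release package
      get_url: url=https://apt.puppetlabs.com/puppetlabs-release-precise.deb
               dest=/tmp/puppetlabs-release-precise.deb

    - name: Install the release package (adds the Puppet Labs repo)
      command: dpkg -i /tmp/puppetlabs-release-precise.deb

    - name: Install Puppet pinned to 3.6.2
      apt: name=puppet=3.6.2-1puppetlabs1 state=present update_cache=yes
```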

This is how I ran my playbook against my proxmox-webservers :
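The command was lost in this copy; it would have been along these lines (using the inventory and playbook file names above):

```shell
ansible-playbook -i ~/staging puppet.yml -v
```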

The -v option makes the output more verbose, so you can see what is going on.





Proxmox + ZFS on Linux local storage Part 1

UPDATE 2015-04-21: after the zfsonlinux update from 0.6.3 to 0.6.4, my ZFS volume automounts. I tried getting the later versions of the PVE kernel to work with ZFS without any success. You are stuck with 2.6.32-26-pve.

CAVEAT: This is by no means stable, reliable or newbie friendly. June 12 2014: after a zfsonlinux release update, your ZFS pool is now not accessible.

Proxmox is a free, open source virtualization environment based on Debian. Proxmox has KVM and OpenVZ virtualization.

ZFS is the legendary Solaris filesystem and volume manager that really cares about keeping your data intact.

Running ZFS and Proxmox on the same box is not the best idea for production, but it is very convenient if you have a dev/lab setup.

This should be a very clear guide on how to set up ZFS on Proxmox.

I have this running on my Supermicro server.

MODEL: SuperServer 6026TT-GTRF
CPU: 1x Xeon X5620 quad-core 2.4GHz
RAM: 3x 4GB = 12GB
STORAGE: 4x 1TB, 1x 500GB

Only one server node is being used.
The virtual machines are relatively fast; I should really run some benchmarks and share the results.
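As a taste of the setup, creating the pool from the four 1TB disks looks like this (the pool name tank and the RAIDZ2 layout match my setup; the sdb..sde device names are placeholders, and in practice you should use /dev/disk/by-id paths):

```shell
# Create a RAIDZ2 pool from the four 1TB disks.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Check the pool health and the datasets/mount points.
zpool status tank
zfs list
```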



Make sense of server logs with Logstash, Elasticsearch and Kibana

I have read quite a few posts about Logstash, Kibana and Elasticsearch; the three together offer centralized logging and a brilliant interface, but a lot of the information was hazy.

Logstash, Elasticsearch and Kibana are three different projects that work seamlessly together to create amazing UI dashboards, so you can make sense of dense server logs.

Why do yet another “Getting Started with Logstash, Elasticsearch and Kibana” post?

I will try to explain why certain steps are important and what you can do to get more out of this setup.

I found most of the guides lacking, especially when it comes to Kibana dashboards, fixing configuration mistakes, and basic Elasticsearch functionality.

I will call Logstash, Elasticsearch and Kibana the LEK stack, because it is less typing and possibly less confusing.

