diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
new file mode 100644
index 0000000000000000000000000000000000000000..ca2a7c814b4ced9f426dc49cc4f543cd5fee6247
--- /dev/null
+++ b/.gitlab-ci.yml
@@ -0,0 +1,28 @@
+stages:
+  - build
+  - deploy
+
+Build:
+  stage: build
+  tags:
+    - default
+  script:
+    - docker-compose up -d
+    - sleep 20
+    - docker-compose exec --workdir=/mkdocs mkdocs mkdocs build
+    - docker-compose down
+  artifacts:
+    name: build
+    when: always
+    paths:
+      - site/
+
+Deploy:
+  stage: deploy
+  tags:
+    - ansible
+  variables:
+    GIT_STRATEGY: none
+  script:
+    - echo "Coming later"
+  cache: {}
diff --git a/docs/ansible/index.md b/docs/ansible/index.md
index d83bb0d4cbb0e72d92cc9b83e05ffce95c5d5507..76b2374e30d2c99887c3b315b20d295bfbca7eab 100644
--- a/docs/ansible/index.md
+++ b/docs/ansible/index.md
@@ -1,3 +1,259 @@
-# Ansible
+# Ansible Repository
 
-For full documentation visit [mkdocs.org](https://mkdocs.org).
+This repository is a collection of playbooks, roles, plugins and inventories for
+multiple clients that PARAGON supports. It is built such that multiple clients
+can be installed on a single control host. However, by default the setup
+script installs the framework for an individual customer, so that customers can
+use it for themselves too.
+
+## Installation
+
+Before you can check out the repositories from the server, you have to add your
+public SSH key to your user profile on the GitLab server. To do so, please go
+to https://gitlab.lakedrops.com/profile/keys and follow the instructions there.
+Please also note the link to the instructions on how to [generate a key](https://gitlab.lakedrops.com/help/ssh/README),
+which also describe how to prepare your local host to securely access the
+GitLab repositories.
+
+Once that's been completed, proceed with these steps:
+
+```
+# Install Ansible: http://docs.ansible.com/ansible/intro_installation.html
+# There you can find the relevant instructions for your local environment.
+# The following example is for Ubuntu (for others refer to the link above):
+sudo apt-get install software-properties-common python2.7 git
+sudo apt-add-repository ppa:ansible/ansible
+sudo apt-get update
+sudo apt-get install ansible
+
+# Patch Ansible, if required
+# For this, see the next chapter with advice on which patch is required for which Ansible version.
+
+# Create your Ansible home directory (e.g. /opt/ansible)
+mkdir /opt/ansible
+
+# Checkout the main repository
+git clone git@gitlab.lakedrops.com:ansible-playbooks/general.git /opt/ansible --recurse-submodules
+cd /opt/ansible
+
+# Optionally install other OS requirements, but only if the subsequent ansible-script.py setup-local task fails.
+# In that case, also pass the parameter --skip-os-tasks in the next step.
+# To find out the requirements, look into the playbooks/setup/ directory and open
+# the file matching your OS to see which components need to be installed.
+
+# Setup or update
+./ansible-script.py setup-local COMPANY [--username=REMOTEUSERNAME] [--skip-os-tasks]
+```
+
+## Patching
+
+From time to time there may be problems with an Ansible core update. If there
+is no other way to resolve them, i.e. by modifying our Ansible playbooks and roles,
+we may have to patch Ansible core. Here is a list of available patches and the
+Ansible versions to which each of them needs to be applied:
+
+| Patch File          | Ansible Versions | Comments                                                    |
+| :------------------ | :--------------- | :---------------------------------------------------------- |
+| none                |                  |                                                             |
+
+You can find the patches in the `files/patches` sub-directory of this repository.
+The Ansible core directory that needs to be patched depends on the operating
+system and might be something like `/usr/share/pyshared/ansible/`.
+
+So, you may have to execute these steps:
+
+```
+cd /usr/lib/python2.7/dist-packages/ansible
+patch -p1 < /opt/ansible/files/patches/FILENAME.patch
+```
+
+Again, the directories depend on your local installation.
+
+## Configuration
+
+The above installation configures Ansible such that everything is good to go,
+and you can call the `ansible-script.py setup-local` script again at any time to grab updates
+or restore settings if something breaks.
+
+When you use `ansible-script.py setup-local` for the first time, two new files are created
+that you can use as shortcuts in the future:
+
+- **/opt/ansible/update.sh** which will update your installation by pulling
+  all changed repositories and also running configuration of the current user
+- **/opt/ansible/config.sh** which will just update the configuration for the
+  current user
+
+Important: if your remote username differs from your local username, you
+should call those scripts (`ansible-script.py setup-local`, `update.sh`, `config.sh`) with the
+additional parameter `--username=[REMOTE USER NAME]`. This writes the
+username into `~/.ansible.cfg` for future use, so as long as that name
+remains the same, you do not have to use the parameter again.
+
+Additional settings make the usage of Ansible more convenient; they are
+described in detail below.
+
+### Ansible configuration
+
+You'll find a file `.ansible.cfg` in your home directory after the setup from
+above, and there are certain additional settings that can be useful:
+
+#### Working with a vault for automatic sudo password input
+
+When using Ansible with this repository you'll be asked for your remote sudo
+password every single time. To avoid that, you can store the password
+in a vault so that Ansible grabs it from there automatically.
+
+**Warning:** Only use that if you have full control over the Ansible control host
+because otherwise someone else could get access to your whole server farm.
+
+1. Create your vault password file
+   Create a file named `~/.ansible/vault.pwd` and edit it so that it
+   contains your local password for your Ansible vault.
+2. Configure your vault password file
+   To make sure Ansible uses your password file, insert the line
+   `vault_password_file = ~/.ansible/vault.pwd` into `.ansible.cfg` in
+   your home directory.
+3. Create your vault
+   Use the command `ansible-vault create ~/.ansible/secrets` and include
+   one line `ansible_sudo_pass: YOURSUDOPASS`. This uses your default
+   console editor, which you can configure with e.g. `export EDITOR=nano`
+   to use the nano editor. When you save the file, ansible-vault
+   encrypts it with the vault password contained in the vault.pwd file.
+4. (Optional) Edit your vault file later on
+   If you later want to edit your secrets, use `ansible-vault edit ~/.ansible/secrets`
+
+### AWS EC2: Boto configuration
+
+If you want to use the dynamic AWS EC2 inventory, you should provide your access
+keys in a file `/etc/boto.cfg` with the following content:
+
+```
+[Credentials]
+aws_access_key_id = <access key>
+aws_secret_access_key = <secret key>
+```
+
+Note: Provide the access key and secret key without the angle brackets.
+
+In a multi-company environment the configuration file should provide different
+sections for each company that has hosts in AWS EC2:
+
+```
+[profile COMPANY1]
+aws_access_key_id = <access key>
+aws_secret_access_key = <secret key>
+
+[profile COMPANY2]
+aws_access_key_id = <access key>
+aws_secret_access_key = <secret key>
+```
+
+Here you should replace `COMPANY#` with the lower case name of the relevant company.
+
+### JiffyBox configuration
+
+If you are using a JiffyBox inventory, you have to provide your API token in a
+file `/etc/jiffybox.cfg` with the following content:
+
+```
+[Credentials]
+api_token = <api token>
+```
+
+Note: Provide the API token without the angle brackets.
+
+In a multi-company environment the configuration file should provide different
+sections for each company that has hosts in JiffyBox:
+
+```
+[profile COMPANY1]
+api_token = <api token>
+
+[profile COMPANY2]
+api_token = <api token>
+```
+
+Here you should replace `COMPANY#` with the lower case name of the relevant company.
+
+### Linode configuration
+
+If you are using a Linode inventory, you have to provide your API key in a
+file `/etc/linode.cfg` with the following content:
+
+```
+[Credentials]
+api_key = <api key>
+```
+
+Note: Provide the API key without the angle brackets.
+
+In a multi-company environment the configuration file should provide different
+sections for each company that has hosts in Linode:
+
+```
+[profile COMPANY1]
+api_key = <api key>
+
+[profile COMPANY2]
+api_key = <api key>
+```
+
+Here you should replace `COMPANY#` with the lower case name of the relevant company.
+
+### Creating shortcuts for the scripts
+
+All the scripts in this repository are written so that they can be called
+from anywhere; you don't have to chdir into the repository directory first.
+
+For convenience, we recommend creating shortcuts in a directory which is
+part of your PATH environment variable. Examples:
+
+```
+cd /usr/local/bin
+sudo ln -s /opt/ansible/directory/ansible.py a
+sudo ln -s /opt/ansible/directory/ansible-playbook.py apb
+sudo ln -s /opt/ansible/directory/ansible-script.py ascr
+```
+
+Since version 1.2, the setup script creates those links for you by default.
+
+### Preparing access to existing hosts
+
+Ansible knows the hosts by name, and the company-specific naming convention should
+be reflected on each local host that wants to use Ansible to manage them. You'll
+find the hostnames in the file called `inventory` (if you have a static
+inventory); for dynamic inventories this is an executable file that you can
+call to list the known hosts on your console.
+
+You should make sure that your local host knows all your remote hosts by name
+and IP address. For this, add a new line for each of those hosts to your
+`/etc/hosts` file, starting with the IP address followed by a space and the
+hostname from the inventory file.
+
+You can also run `ansible-script.py hosts` and Ansible will update your
+local hosts file automatically.
+
+Next, as the final step before you can start using Ansible to access
+your hosts, make sure that you can access them via SSH. This repository
+is built with security at the forefront, and therefore access is only available
+through a PKI infrastructure. To configure your system for easy access, you
+should have a file `$HOME/.ssh/config` with content similar to the
+following:
+
+```
+StrictHostKeyChecking yes
+ForwardAgent no
+
+Host *
+  User [YOUR REMOTE USERNAME]
+  IdentityFile ~/.ssh/id_rsa
+```
+
+The above settings apply to all hosts; the definition of the remote username
+is only necessary if it differs from your local one. Please
+note that you should define the same username in `$HOME/.ansible.cfg` as well.
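+
+As a sketch, assuming the standard Ansible `remote_user` setting, the corresponding entry in `$HOME/.ansible.cfg` could look like this (with a hypothetical username):
+
+```
+[defaults]
+remote_user = YOUR_REMOTE_USERNAME
+```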
+
+## Where to go next?
+
+The best place to continue reading is the
+[Wiki](https://gitlab.lakedrops.com/ansible-playbooks/general/wikis/home).
diff --git a/docs/ansible/plugins/fluentd/index.md b/docs/ansible/plugins/fluentd/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..1c81d2d28f270ec4230dd25dfb3dbfb769ab827d
--- /dev/null
+++ b/docs/ansible/plugins/fluentd/index.md
@@ -0,0 +1,73 @@
+# Ansible FluentD Callback Plugin
+
+This repository provides a callback plugin that ships Ansible output via FluentD to an indexer, as configured in FluentD.
+
+### Ansible section
+
+Install [fluent-logger-python](https://github.com/fluent/fluent-logger-python)
+
+```
+pip install fluent-logger
+```
+
+Append the following to the `[defaults]` section of your `ansible.cfg`
+
+```
+callback_plugins   = <path_to_callback_plugins_folder>
+callback_whitelist = fluentd
+```
+
+Put the `fluentd` plugin from this git repository into the `path_to_callback_plugins_folder` defined above.
+
+This plugin makes use of the following environment variables:
+
+* `FLUENTD_SERVER`   (optional): defaults to localhost
+* `FLUENTD_PORT`     (optional): defaults to 24224
+* `FLUENTD_TYPE`     (optional): defaults to ansible
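+
+For example, to point the plugin at a remote FluentD server (hypothetical values), you could export these variables before running Ansible:
+
+```
+export FLUENTD_SERVER=fluentd.example.com
+export FLUENTD_PORT=24224
+export FLUENTD_TYPE=ansible
+```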
+
+### FluentD section
+
+Basic fluentd testing config
+
+```
+<source>
+  @type forward
+  port 24224
+</source>
+```
+
+Shipping logs to elasticsearch
+
+```
+<source>
+  @type forward
+  port 24224
+</source>
+
+<match app.ansible>
+  @type elasticsearch
+  logstash_format true
+  host 127.0.0.1
+  port 9200
+  include_tag_key true
+  tag_key @log_name
+  index_name ansible
+  type_name ansible
+  reconnect_on_error true
+</match>
+```
+
+### Elasticsearch
+
+This repository contains a file titled `ansible.json`. This template can be loaded into your elasticsearch cluster to provide a nice mapping for the ansible data.
+
+List available templates
+
+```
+curl -s -XGET localhost:9200/_template
+```
+Load the template
+
+```
+curl -s -XPUT 'http://localhost:9200/_template/ansible' -d@ansible.json
+```
diff --git a/docs/ansible/plugins/serverdensity/index.md b/docs/ansible/plugins/serverdensity/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..03e827fbb0a4a2681c0f7ee5bf9f1d7ab85f5264
--- /dev/null
+++ b/docs/ansible/plugins/serverdensity/index.md
@@ -0,0 +1,73 @@
+# Ansible plugin for Server Density
+
+This is an [Ansible] plugin to manage your Ansible inventory over at [Server Density]. It uses the [ServerDensity API] and the [Ansible API].
+
+## Features
+
+The following objects can be created and updated in your Server Density account from within Ansible:
+
+* Hosts/Devices with groups
+* Services with groups
+* Alerts for
+    * Devices
+    * Device Groups
+    * Services
+    * Service Groups
+* Notifications for alerts
+
+There are plugin parameters to define how the plugin will behave:
+
+* **api_token**: An API token from Server Density to authenticate yourself
+* **force** (optional, defaults to False): Whether an object should be updated if it already exists
+* **cache** (optional, defaults to None): Fully qualified filename for a cache of the Server Density data
+* **cleanup** (optional, defaults to False): Decides whether alerts available at Server Density but undefined in your Ansible inventory should be deleted
+* **readonly** (optional, defaults to False): If set to True, the plugin only reads the current settings from SD and stores them in a temporary file, doing nothing else. This is useful when you want to find out some variable names for alerts or similar things
+* **output** (optional, defaults to False): If provided, all settings currently set upstream at Server Density will be written in YAML to this file
+
+## Installation
+
+Download (or clone) the file `serverdensity.py` from the `action_plugins` directory and copy that into your custom action plugins directory which is defined in `/etc/ansible/ansible.cfg`. The default location for this is `/usr/share/ansible_plugins/action_plugins`
+
+## Usage
+
+This plugin can be used in playbooks or with the ansible script directly.
+
+### In Playbooks
+
+Simply include a task like this:
+
+```
+    - name: ServerDensity | Init SD plugin
+      local_action: serverdensity
+        api_token={{sd_api_token}}
+        cleanup=true
+        cache='/tmp/my_sd_cache'
+```
+
+You may also be interested in the [Server Density Role] that I've written, which additionally installs and configures the Server Density agent on your hosts and synchronizes your inventory with Server Density by utilizing this plugin.
+
+### From the ansible script
+
+Your whole inventory gets synchronized with Server Density simply by using this command:
+
+```
+ansible all -m serverdensity -a 'api_token=YOUR_SD_TOKEN' -vv
+```
+
+The final `-vv` parameter increases the level of output on the console; with this plugin you'll get some quite useful information on what's going on in detail.
+
+## Configuration
+
+The following variables are required in order to use this plugin, and you should define them somewhere in your inventory, e.g. in `group_vars/all`. Further configuration for device groups, services and alerts can be defined in variables too; they are fully documented in the [Wiki].
+
+### sd_url
+
+Defines the Server Density URL of your account, e.g. `'https://myaccount.serverdensity.io'`
+
+
+[Ansible]: http://www.ansible.com
+[Server Density]: https://www.serverdensity.com
+[ServerDensity API]: https://apidocs.serverdensity.com
+[Ansible API]: http://docs.ansible.com/index.html
+[Server Density Role]: https://github.com/jurgenhaas/ansible-role-serverdensity
+[Wiki]: https://github.com/jurgenhaas/ansible-plugin-serverdensity/wiki
diff --git a/docs/ansible/roles/apache/index.md b/docs/ansible/roles/apache/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..71348c50c6ad6af1465ca1767be8f7a6ac5025c8
--- /dev/null
+++ b/docs/ansible/roles/apache/index.md
@@ -0,0 +1,25 @@
+# Optimize configuration for Apache and PHP FPM
+
+Credit: [@sbuckpesch](https://medium.com/@sbuckpesch/apache2-and-php-fpm-performance-optimization-step-by-step-guide-1bfecf161534)
+
+Run `ps_mem.py` on your host and collect the values for the apache and php-fpm processes.
+
+Then, download the [apache_performance.xlsx](apache_performance.xlsx) spreadsheet, fill in the orange fields for your specific server, and configure the green calculated values in the Ansible inventory for your host:
+
+```
+apache_mpm_prefork:
+  startservers: 40
+  serverlimit: 11267
+  minspareservers: 5
+  maxspareservers: 10
+  maxrequestworkers: 11267
+  maxconnectionsperchild: 0
+
+php_fpm_max_children: 3338
+php_fpm_start_servers: 160
+php_fpm_min_spare_servers: 80
+php_fpm_max_spare_servers: 160
+php_fpm_max_requests: 20000
+```
+
+Then roll out the configuration to the host.
diff --git a/docs/ansible/roles/borgbackup/index.md b/docs/ansible/roles/borgbackup/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..9f8fe4d8a40f2a01bae30c0f867b88fa199bfa62
--- /dev/null
+++ b/docs/ansible/roles/borgbackup/index.md
@@ -0,0 +1,38 @@
+# Links
+
+## Borg Backup
+
+- [Home](https://www.borgbackup.org)
+- [Documentation](https://borgbackup.readthedocs.io/en/stable/index.html)
+- [Issue Queue](https://github.com/borgbackup/borg/issues)
+- [Hosted Backups](https://www.borgbase.com)
+- [Hosted Backup Docs](https://docs.borgbase.com)
+
+## Borgmatic
+
+- [Documentation](https://torsion.org/borgmatic/)
+- [Config Reference](https://torsion.org/borgmatic/docs/reference/configuration/)
+- [CLI Reference](https://torsion.org/borgmatic/docs/reference/command-line/)
+- [Issue Queue](https://projects.torsion.org/witten/borgmatic/issues/)
+
+# Best practices
+
+- Check repositories daily
+- Check archives and/or data rarely, e.g. monthly
+
+# Usage
+
+To list available backups, call
+
+```
+borgmatic -c /etc/borgmatic/config.yaml list
+```
+
+To restore files from a backup, call
+
+```
+mkdir /tmp/borg-default
+borgmatic -c /etc/borgmatic/config.yaml mount --mount-point /tmp/borg-default
+
+# Browse /tmp/borg-default and copy the files you need, then unmount:
+borgmatic umount --mount-point /tmp/borg-default
+```
diff --git a/docs/ansible/roles/composer/index.md b/docs/ansible/roles/composer/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..9f06792a36c7c25fa7dcf7607cda9cb3758f7dae
--- /dev/null
+++ b/docs/ansible/roles/composer/index.md
@@ -0,0 +1,70 @@
+# ansible-role-composer
+
+[![License](https://img.shields.io/badge/License-MIT%20License-blue.svg)](https://github.com/kosssi/ansible-role-composer/blob/master/LICENSE)
+[![Build Status](https://travis-ci.org/kosssi/ansible-role-composer.svg?branch=master)](https://travis-ci.org/kosssi/ansible-role-composer)
+
+Installs Composer, the PHP Dependency Manager.
+
+## Role Defaults Variables
+
+    composer_path: /usr/local/bin/composer
+    composer_update: true
+    composer_update_day: 20
+
+The path where composer will be installed and available to your system. Should be in your user's `$PATH` so you can run
+commands simply with `composer` instead of the full path.
+
+You can also set up a global composer home directory and make its bin directory available in the `$PATH` automatically by:
+
+    composer_home_path: /opt/composer
+    composer_home_owner: root
+    composer_home_group: root
+    composer_global_packages:
+      phpunit/phpunit: "@stable"
+
+## Auth.json
+
+### Github OAuth token
+
+If your project uses a lot of libraries from GitHub, you may see the following message during `composer install`:
+
+    Could not fetch `...`, enter your GitHub credentials to go over the API rate limit
+    A token will be created and stored in "~/.composer/auth.json", your password will never be stored
+    To revoke access to this token you can visit https://github.com/settings/applications
+
+So your `composer install` can get stuck.
+
+To prevent that, you must configure a GitHub OAuth token to go over the API rate limit. Visit https://github.com/settings/applications, generate a personal access token, and assign it to the `composer_github_oauth` variable.
+
+    composer_github_oauth: f03401aae1e276abb073f987c08a32410f462e73
+
+### HTTP Basic auth
+
+You can provide HTTP Basic auth credentials to any repository like this:
+
+```
+composer_http_basic:
+    repo.magento.com:
+        username: 52fe41da9d8caa70538244c10f367d0a
+        password: 238fe32d374a2573c4527bd45a7e6f54
+```
+
+## Example Playbook
+
+      roles:
+        - { role: kosssi.composer }
+
+## Tests
+
+If you have Vagrant, you can test this role:
+
+    cd tests
+    vagrant up
+    vagrant provision
+
+## Special thanks to contributors
+
+* [jnakatsui](https://github.com/jnakatsui)
+* [Yosh](https://github.com/yoshz)
+* [Johnny Robeson](https://github.com/jrobeson)
+* [Sebastian Krebs](https://github.com/KingCrunch)
diff --git a/docs/ansible/roles/discourse/index.md b/docs/ansible/roles/discourse/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..50daca8195610a7d7da75c9760ddb605eda4826e
--- /dev/null
+++ b/docs/ansible/roles/discourse/index.md
@@ -0,0 +1,40 @@
+# Discourse
+
+https://github.com/discourse/discourse/blob/master/docs/INSTALL-cloud.md
+https://meta.discourse.org/t/running-other-websites-on-the-same-machine-as-discourse/17247
+
+Log: /var/discourse/shared/standalone/log/rails/production.log
+
+```
+Usage: launcher COMMAND CONFIG [--skip-prereqs] [--docker-args STRING]
+Commands:
+    start:      Start/initialize a container
+    stop:       Stop a running container
+    restart:    Restart a container
+    destroy:    Stop and remove a container
+    enter:      Use nsenter to get a shell into a container
+    logs:       View the Docker logs for a container
+    bootstrap:  Bootstrap a container for the config based on a template
+    rebuild:    Rebuild a container (destroy old, bootstrap, start new)
+    cleanup:    Remove all containers that have stopped for > 24 hours
+
+Options:
+    --skip-prereqs             Don't check launcher prerequisites
+    --docker-args              Extra arguments to pass when running docker
+```
+
+Manually create admin:
+
+```
+cd /var/discourse
+./launcher enter app
+rake admin:create
+```
+
+Upgrade: http://172.17.0.1/admin/upgrade
+
+Drupal Integration:
+
+- https://www.drupal.org/project/discourse
+- https://www.drupal.org/project/discourse_sso
+- https://www.drupal.org/node/2880123#comment-12312794
diff --git a/docs/ansible/roles/elastalert/index.md b/docs/ansible/roles/elastalert/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..90fddec0d105fe00a9a689ee0c939c60df5f5bf8
--- /dev/null
+++ b/docs/ansible/roles/elastalert/index.md
@@ -0,0 +1,13 @@
+# ElastAlert
+
+- [GitHub](https://github.com/Yelp/elastalert)
+- [ReadTheDocs](https://elastalert.readthedocs.io/en/latest)
+- [Alerta Config](https://elastalert.readthedocs.io/en/latest/ruletypes.html#alerta)
+- [Alert format](https://alerta.readthedocs.io/en/latest/api/alert.html)
+- [Install as a service on Ubuntu](https://fabianlee.org/2017/04/17/elk-running-elastalert-as-a-service-on-ubuntu-14-04)
+
+# BitSensor ElastAlert Server
+
+- [Docker](https://hub.docker.com/r/bitsensor/elastalert/tags)
+- [GitHub Server](https://github.com/bitsensor/elastalert)
+- [GitHub Kibana](https://github.com/bitsensor/elastalert-kibana-plugin)
diff --git a/docs/ansible/roles/elasticsearch/index.md b/docs/ansible/roles/elasticsearch/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..77cb2f1fae31e0412b9c6266232b500831833e5d
--- /dev/null
+++ b/docs/ansible/roles/elasticsearch/index.md
@@ -0,0 +1,33 @@
+# Documentation
+
+- [Download](https://www.elastic.co/downloads/elasticsearch)
+- [API](https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html)
+- [Date Format](http://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html)
+- [Licensing](https://www.elastic.co/de/subscriptions)
+- [Download BEATS](https://www.elastic.co/de/downloads/beats)
+
+# Create basic users
+
+```
+bin/elasticsearch-setup-passwords auto -u http://localhost:9200
+```
+
+# Trouble-Shooting
+
+Get a list of indices:
+
+```
+curl "http://127.0.0.1:9200/_cat/indices?v"
+```
+
+If an index went read-only:
+
+```
+curl -X PUT http://127.0.0.1:9200/[INDEX]/_settings -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete":null}'
+```
+
+Or use the prepared script `elasticsearch-remove-readonly` for this. To reset the flag for all indices at once:
+
+```
+curl -X PUT http://localhost:9200/_all/_settings -H 'Content-Type: application/json' -d '{ "index.blocks.read_only_allow_delete": false }'
+```
diff --git a/docs/ansible/roles/fail2ban/index.md b/docs/ansible/roles/fail2ban/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..b01fcf85a8c9a360ce1a43c2510f2c4caf87d0f6
--- /dev/null
+++ b/docs/ansible/roles/fail2ban/index.md
@@ -0,0 +1,4 @@
+# Fail2Ban
+
+- GitHub: https://github.com/fail2ban/fail2ban
+- Homepage: http://www.fail2ban.org/wiki/index.php/Main_Page
diff --git a/docs/ansible/roles/fluentd/index.md b/docs/ansible/roles/fluentd/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..d57f1b223fe07162de635368b64e8d429903d90d
--- /dev/null
+++ b/docs/ansible/roles/fluentd/index.md
@@ -0,0 +1,21 @@
+# Documentation
+
+- http://www.fluentd.org
+- https://docs.fluentd.org
+- Plugins: http://www.fluentd.org/plugins
+- Time Format: http://ruby-doc.org/core-1.9.3/Time.html#method-i-strftime
+- Format Regex Editor: https://fluentular.herokuapp.com/
+- Changelog: https://support.treasuredata.com/hc/en-us/articles/360001479187-The-td-agent-ChangeLog
+
+# Preparing SSL Cert
+
+Create SSL-Cert once upfront in the inventory and use the passphrase similar to `{{ fluentd_cert_passphrase }}`:
+
+```
+openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem -days 3650 -subj '/CN={{ fluentd_host }}'
+```
+
+# Tutorials
+
+- https://sonnguyen.ws/centralize-docker-logs-with-fluentd-elasticsearch-and-kibana/
+- https://sonnguyen.ws/monitor-nginx-response-time-with-fluentd-kibana-and-elasticsearch/
diff --git a/docs/ansible/roles/gitlab/index.md b/docs/ansible/roles/gitlab/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..3622340d467226769c0718401979f11964efae5d
--- /dev/null
+++ b/docs/ansible/roles/gitlab/index.md
@@ -0,0 +1,53 @@
+Installs GitLab Community Edition
+
+Links:
+
+- GitLab Install on Ubuntu 12.04: https://about.gitlab.com/downloads
+    - sudo apt-get install curl openssh-server ca-certificates postfix
+    - curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | sudo bash
+    - sudo apt-get install gitlab-ce
+    - sudo nano /etc/gitlab/gitlab.rb
+    - sudo gitlab-ctl restart
+    - Configure Apache
+    - sudo gitlab-ctl reconfigure
+    - sudo service apache2 restart
+        * Username: root
+        * Password: 5iveL!fe
+- Trouble Shooting: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/README.md
+- Blog Install GitLab: http://paulshipley.id.au/blog/coding-tips/install-gitlab-on-ubuntu-14-04-using-apache2
+- GitLab / SSL / Apache: https://gitlab.com/gitlab-org/gitlab-recipes/blob/master/web-server/apache/gitlab-ssl.conf
+- GitLab other webserver: https://gitlab.com/gitlab-org/omnibus-gitlab/blob/master/doc/settings/nginx.md
+- Ansible Sample: https://gitlab.xarif.de/thomass/ansible_roles/tree/master/thomass.gitlab
+- CI Samples: http://doc.gitlab.com/ce/ci/yaml/README.html
+- Runner installation: https://gitlab.com/gitlab-org/gitlab-ci-multi-runner/tree/master
+
+Additional issues that helped to resolve the configuration:
+- https://gitlab.com/gitlab-org/gitlab-ce/issues/3262
+- https://gitlab.com/gitlab-org/gitlab-ce/issues/3262
+- https://gitlab.com/gitlab-org/gitlab-recipes/tree/master/web-server/apache
+
+Preparing target hosts:
+- Copy pdevop:/home/gitlab-runner/.ssh/id_rsa.pub to the authorized keys on the target host
+- Run ssh-keygen on the target and paste the public key into the deployment keys on GitLab
+
+Preparing gitlab-runner:
+- Configure Ansible with vault and password
+
+Preparing gitlab-lfs:
+- see http://doc.gitlab.com/ce/workflow/lfs/lfs_administration.html
+- gitlab_rails['lfs_enabled'] = false
+- gitlab_rails['lfs_storage_path'] = "/mnt/storage/lfs-objects"
+
+GitLab LFS client install and usage:
+- https://packagecloud.io/github/git-lfs/install
+- http://doc.gitlab.com/ce/workflow/lfs/manage_large_binaries_with_git_lfs.html
+
+Gitlab Kanban
+- http://kanban.leanlabs.io
+- https://gitlab.com/leanlabsio/kanban
+
+Tips & Tricks
+
+- Database Console: `sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql/ gitlabhq_production`
+- Mattermost CLI: `/opt/gitlab/embedded/service/mattermost/i18n` and then `sudo -u mattermost /opt/gitlab/embedded/bin/mattermost -config=/var/opt/gitlab/mattermost/config.json -help`
+- [Registry Garbage Collection](https://docs.gitlab.com/omnibus/maintenance/#removing-unused-layers-not-referenced-by-manifests)
diff --git a/docs/ansible/roles/haproxy/index.md b/docs/ansible/roles/haproxy/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..ea05a01180d88ba0815188f56eb6afe618ee496e
--- /dev/null
+++ b/docs/ansible/roles/haproxy/index.md
@@ -0,0 +1,45 @@
+# Documentation
+
+- https://www.haproxy.com/doc/aloha/7.0/haproxy/index.html
+- https://cbonte.github.io/haproxy-dconv/index.html
+- https://cbonte.github.io/haproxy-dconv/1.7/configuration.html
+
+# Instruction to prepare a certificate file
+
+For HaProxy to terminate SSL requests we require a single PEM file with all certificate components chained together.
+
+The sequence of those components is this:
+
+- Private Key, e.g. example.com.key.pem
+- Domain Certificate, e.g. example.com.crt.pem
+- Intermediate Certificate, e.g. example.com.ca.crt.pem
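+
+A minimal sketch of building the chained file, with hypothetical file names (the order matters):
+
+```
+cat example.com.key.pem example.com.crt.pem example.com.ca.crt.pem > example.com.pem
+```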
+
+# Watching statistics
+
+Create an SSH tunnel to the haproxy host's port 7000 and then go to `http://127.0.0.1:7000/haproxy_stats` to get live stats.
+
+# Talking to HaProxy Socket
+
+HaProxy can be controlled through a socket, and we provide a script called `hasocket` for that purpose. You can either call it from the proxy's console or run it through Ansible with this command:
+
+```
+a -a "hasocket 'help'" --limit=proxyserver
+```
+
+Useful commands might be:
+
+- "show info"
+  shows information such as the haproxy version, PID, current connections, session rates, tasks, etc.
+- "show stat"
+  prints the stats about all frontends and backends (connection statistics etc.) in CSV format
+- "show errors"
+  prints information about errors, if there are any
+- "show sess"
+  shows open sessions with the used backend/frontend, the source, etc.
+
+# Other information
+
+## Webinar on version 2.2
+
+- [Webinar Video](https://www.haproxy.com/vids/a-tour-of-haproxy-2-2)
+- [HaProxy Response Generator Sample](https://eleph.haproxy.com)
diff --git a/docs/ansible/roles/heartbeat/index.md b/docs/ansible/roles/heartbeat/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..e89e37d1f00cca8fad64b53118ed2e0e4117557c
--- /dev/null
+++ b/docs/ansible/roles/heartbeat/index.md
@@ -0,0 +1,7 @@
+## Resources
+
+- [Install](https://www.elastic.co/guide/en/beats/heartbeat/6.5/heartbeat-installation.html)
+- [Blogpost](https://www.elastic.co/blog/uptime-monitoring-with-heartbeat-and-the-elastic-stack)
+- [Forum](https://discuss.elastic.co/c/beats/heartbeat)
+- [GitHub](https://github.com/elastic/beats)
+- [Monitor Config](https://www.elastic.co/guide/en/beats/heartbeat/6.5/configuration-heartbeat-options.html)
diff --git a/docs/ansible/roles/jailkit/index.md b/docs/ansible/roles/jailkit/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..f35b66e713b845f4f995097ce69b5de09289f772
--- /dev/null
+++ b/docs/ansible/roles/jailkit/index.md
@@ -0,0 +1,7 @@
+# JailKit
+
+When updating the PHP version at a later stage, there are extra steps that need to be taken:
+
+- Update `/etc/jailkit/jk_init.ini`
+- Force updating all Jails: `jk_init --force -j /jails/[NAME]/ php`
+- Manually update `/jails/[NAME]/etc/alternatives/php`
diff --git a/docs/ansible/roles/keycloak/index.md b/docs/ansible/roles/keycloak/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..d81f713528ffb1a4e24f96036cdcd9ef2df671f1
--- /dev/null
+++ b/docs/ansible/roles/keycloak/index.md
@@ -0,0 +1,6 @@
+# Keycloak
+
+Links to configure Keycloak/Nextcloud integration:
+
+- https://stackoverflow.com/questions/48400812/sso-with-saml-keycloak-and-nextcloud
+- https://stackoverflow.com/questions/51011422/is-there-a-way-to-filter-avoid-duplicate-attribute-names-in-keycloak-saml-assert
diff --git a/docs/ansible/roles/letsencrypt/index.md b/docs/ansible/roles/letsencrypt/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..4c16056a637c3eb1569fd42b61ef9b76a083f2cd
--- /dev/null
+++ b/docs/ansible/roles/letsencrypt/index.md
@@ -0,0 +1,9 @@
+# LetsEncrypt
+
+- https://certbot.eff.org/docs/using.html#command-line
+- https://certbot.eff.org/docs/using.html#plugins
+- https://tools.ietf.org/html/draft-ietf-acme-acme-03#section-7.2
+
+## Tips
+
+- Use `certbot renew --force-renewal` to force renewal
diff --git a/docs/ansible/roles/mysql/index.md b/docs/ansible/roles/mysql/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..db651306ef9777ffa0d94ff04ae11887a49503a4
--- /dev/null
+++ b/docs/ansible/roles/mysql/index.md
@@ -0,0 +1,52 @@
+# MySQL
+
+Default values for MySQL configuration: https://dev.mysql.com/doc/refman/5.7/en/server-system-variables.html
+
+Use http://www.mysqlcalculator.com to calculate settings depending on your host configuration.
+
+# Configure Backups
+
+By default we install automatic MySQL backups pre-configured with rotating daily, weekly and monthly backups which will be stored in `/var/backups/mysql`. Those backups will be done at 2am every day by a cron task.
+
+In [defaults/main.yml](/defaults/main.yml) you'll find a variable `mysqlbackup` with all the default values defined; if you want to change any of them, copy the variable into your inventory and make the changes there.
+
+## Special note on excluding tables
+
+By default no tables are excluded. You can define a list of excluded tables in `mysqlbackup.exclude.table`, where each item in the list must be given in the format `dbname.tablename`; the tablename accepts wildcards, e.g. `mydb.cache*` to exclude all tables whose name starts with `cache`.
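+
+For example, to exclude all cache tables and the sessions table of a database called `mydb` (names are illustrative), your inventory copy of the variable would contain:
+
+```
+mysqlbackup:
+  exclude:
+    table:
+      - mydb.cache*
+      - mydb.sessions
+```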
+
+## Configuring Drupal databases
+
+The [Drupal role](https://gitlab.lakedrops.com/ansible-roles/drupal) defines databases for each domain in a variable named `drupal_settings.ITEM.domains.DOMAIN.db`. Inside this dictionary you can either turn off MySQL backup for that database completely by adding `backup: false`, or exclude certain tables by adding a list in `backup_exclude`, where table names support wildcards as well. Note that you don't have to provide the db name here, as it has already been defined before.
+
+Examples:
+
+```
+drupal_settings:
+  - ...
+    domains:
+      - ...
+        db:
+          ...
+          backup: false
+      - ...
+        db:
+          ...
+          backup_exclude:
+            - cache*
+            - access*
+```
+
+# Replication
+
+- [Replication Configuration](https://dev.mysql.com/doc/refman/5.7/en/replication.html)
+- [Replication SQL Statements](https://dev.mysql.com/doc/refman/5.7/en/sql-replication-statements.html)
+
+## Setting up a new replication
+
+Starting from [here](https://dev.mysql.com/doc/refman/5.7/en/replication-howto-masterstatus.html) there are basically these steps:
+
+- Create a DB dump: lock tables, remember bin log position, dump db (see [Choosing a Method for Data Snapshots](https://dev.mysql.com/doc/refman/5.7/en/replication-snapshot-method.html)) and copy to secondary host
+- Unlock the tables again
+- Configure secondary for new master with Ansible playbook `dans COMPANY mysqlsecondary --tags=changemaster --extra-vars="port=[PORT]" --extra-vars="binpos=[BINLOGPOS]"`
+- On the secondary, start the MySQL server and turn off replication (`stop slave`)
+- Import dump file and then start replication again (`start slave`)
diff --git a/docs/ansible/roles/nextcloud/index.md b/docs/ansible/roles/nextcloud/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..4fbf2fda890ecd73701fb021d386268a8bf5a49c
--- /dev/null
+++ b/docs/ansible/roles/nextcloud/index.md
@@ -0,0 +1,9 @@
+# Nextcloud
+
+## Setup SAML with Keycloak
+
+Follow [this guide](https://rmm.li/wiki/doku.php?id=linux_server_manuals:nextcloud_saml_authentication_against_keycloak) and pay attention to these:
+
+- To map existing users, make sure that the username in Keycloak and Nextcloud is the same
+- In Nextcloud disable the option that the NameID response needs to be encrypted
+- For monitoring, use the URL https://domain/index.php/login?direct=1
diff --git a/docs/ansible/roles/oracle/index.md b/docs/ansible/roles/oracle/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..366cd5586d20fd0abef5fbb270b11fd8af49f482
--- /dev/null
+++ b/docs/ansible/roles/oracle/index.md
@@ -0,0 +1,35 @@
+# Oracle
+
+- [Oracle XE](https://docs.oracle.com/en/database/oracle/oracle-database/18/xeinl/installation-guide.html)
+- [Docker Image](https://github.com/fuzziebrain/docker-oracle-xe)
+- [Oracle CI](https://www.oracle.com/database/technologies/appdev/sqlcl.html)
+    - Connect to DB: `sqlcl sys/Oracle18@localhost:32118/XE as sysdba`
+
+Compress and uncompress keeping owner and permissions:
+- `tar -cvpf file.tar folderToCompress`
+- `tar --same-owner -xvf file.tar`
+
+Open a shell in the container and run SQL*Plus:
+- `docker exec -it oracle-xe bash -c "source /home/oracle/.bashrc; bash"`
+- `$ORACLE_HOME/bin/sqlplus sys/Oracle18@localhost/XE as sysdba`
+
+Run the container (example name, port and volume paths):
+
+```
+docker run -d \
+  -p 32112:1521 \
+  --name=oracle-xe-2 \
+  --volume /opt/oracle/oracle-xe-2:/opt/oracle/oradata \
+  --volume /opt/oracle/scripts:/usr/local/sbin \
+  --volume /var/backups/oracle:/var/backups/oracle \
+  --network=oracle_network \
+  oracle-xe:18c
+```
+
+Additional resource-related flags noted for the same command:
+
+```
+--env ORA_RMAN_SGA_TARGET=4048M \
+--memory 5G \
+--memory-swap 1G \
+--cpus 4 \
+```
diff --git a/docs/ansible/roles/packetbeat/index.md b/docs/ansible/roles/packetbeat/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..227b2fd4baf0065460ce8e543c6ae075c97b2510
--- /dev/null
+++ b/docs/ansible/roles/packetbeat/index.md
@@ -0,0 +1,3 @@
+# Packetbeat
+
+https://www.elastic.co/guide/en/beats/packetbeat/7.4/packetbeat-getting-started.html
diff --git a/docs/ansible/roles/php/index.md b/docs/ansible/roles/php/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..83e4bd32b3c77aff5c019df2563d4d95c28abb14
--- /dev/null
+++ b/docs/ansible/roles/php/index.md
@@ -0,0 +1,21 @@
+# Updating PHP Version
+
+```
+# Example: from 7.1 to 7.3
+
+ascr role php
+
+# - Move FPM Conf from 7.1 to 7.3
+# - Remove www FPM Conf from 7.3
+# - Stop FPM 7.1
+# - Restart FPM 7.3
+
+
+a -m service -a "name=php7.1-fpm state=stopped enabled=no"
+a -m service -a "name=php7.3-fpm state=started enabled=yes"
+ascr jailkit-upgrade
+
+# On the Host
+update-alternatives --config php
+etckeeper commit -m "After PHP update to PHP 7.3"
+```
diff --git a/docs/ansible/roles/serverdensity/indey.md b/docs/ansible/roles/serverdensity/indey.md
new file mode 100644
index 0000000000000000000000000000000000000000..00bcda00d61acd81da5f91f32461105a772bcd16
--- /dev/null
+++ b/docs/ansible/roles/serverdensity/indey.md
@@ -0,0 +1,84 @@
+# Ansible role to install and configure Server Density Agent
+
+[Server Density] is a monitoring solution which requires a simple Python based agent and is highly configurable. This Ansible role installs and configures that agent and supports several options like plugin installation and inventory synchronisation with your Server Density dashboard.
+
+Currently this role is developed and tested for Ubuntu only and will be extended to other *nix systems later on.
+
+## Dependency
+
+The [Ansible-ServerDensity-Plugin] is required; installation instructions can be found in that project.
+
+## Configuration
+
+In defaults/main.yml you'll find a set of variables that you can re-define in your own inventory. Please do not overwrite the variables in defaults/main.yml; instead, define them in your inventory files, e.g. group_vars/all.
+
+### sd_url
+
+Default: ```''```
+
+Defines the Server Density URL of your account, e.g. ```'https://myaccount.serverdensity.io'```
+
+### sd_api_token
+
+Default: ```''```
+
+Defines your API token provided by Server Density. You get this token by going to the preferences of your SD account, where the "Security" tab lets you either grab one of the existing API tokens or create a new one.
+
+### sd_api_cache_file
+
+Default: ```no```
+
+Each time you're using the SD plugin, it reads the current settings for devices, services and alerts from your SD account through the Server Density API. As this usually takes a couple of seconds, you can shortcut the process by storing all those settings in a cache file of your own choice; the next time, those values get loaded from that cache file instead of through the API.
+
+The plugin makes sure that the cache gets reset if necessary, e.g. if the settings of your inventory have changed.
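+
+Example (the path is an arbitrary choice):
+
+```
+sd_api_cache_file: /var/cache/ansible/serverdensity.json
+```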
+
+### sd_agent_key
+
+Default: ```''```
+
+Do *not* redefine this variable; it is only there to avoid error messages in case the variable is missing. The agent key for each of your hosts gets assigned by Server Density, will be imported by the plugin and will be used for updating your device settings.
+
+### sd_logging_level
+
+Default: ```'info'```
+
+Defines the level of log information that the SD agent will produce at run time. Possible values are: debug, info, error, warn, warning, critical, fatal.
+
+### sd_plugins
+
+Default: ```[]```
+
+Defines a list of Server Density plugins that will be installed and updated by this role automatically.
+
+Example:
+
+```
+  sd_plugins:
+    updatestatus: UpdateStatus.py
+    redis: Redis.py
+```
+
+This variable is expected to be defined as a hash so that hash_behaviour=merge can be used; that way you can define it in several of your groups and the role will install the superset of all the groups a specific host belongs to.
+
+The key of each entry should be unique across your inventory and the value is a file name. Those files have to be present in a subdirectory called ```files/sd-plugins``` of your inventory directory.
+
+### sd_groups
+
+Default:
+
+```
+  apache: none
+  mysql: none
+  proxy: none
+```
+
+Certain sections in the agent configuration are only relevant for certain hosts. By providing valid inventory group names for those sections (currently apache, mysql, proxy), you make sure that the relevant parts of the agent configuration get written to all hosts in those groups.
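+
+For example, if your inventory defines groups named `webservers` and `dbservers` (illustrative names), the mapping could look like this:
+
+```
+  sd_groups:
+    apache: webservers
+    mysql: dbservers
+    proxy: none
+```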
+
+## Detailed configuration for inventory synchronisation
+
+By defining further variables in your inventory, you get full control over how your inventory gets synchronized with your Server Density dashboard including device groups, services and alerts.
+
+All those options are documented in the [Ansible-ServerDensity-Plugin].
+
+[Server Density]: https://www.serverdensity.com
+[Ansible-ServerDensity-Plugin]: https://github.com/jurgenhaas/ansible-plugin-serverdensity
diff --git a/docs/ansible/roles/spideroak/index.md b/docs/ansible/roles/spideroak/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..8e211791c2784341d8c3ee92a20fb57901bf7630
--- /dev/null
+++ b/docs/ansible/roles/spideroak/index.md
@@ -0,0 +1,5 @@
+# SpiderOak ONE
+
+- [Support](https://spideroak.support/hc/en-us)
+- [CLI](https://spideroak.support/hc/en-us/sections/115000565766-Command-Line)
+- [Advanced Techniques](https://spideroak.support/hc/en-us/sections/115000566446-Advanced-Techniques)
diff --git a/docs/ansible/roles/user-management/index.md b/docs/ansible/roles/user-management/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..108ea7bbe4167d89127cb4c1f1399f645badb966
--- /dev/null
+++ b/docs/ansible/roles/user-management/index.md
@@ -0,0 +1,10 @@
+ansible-user-management
+=======================
+
+Ansible role to manage user accounts on hosts
+
+This project was triggered by a thread in Google Groups (https://groups.google.com/forum/#!topic/ansible-project/chJu26GkPlw) and is in its early stages.
+
+Currently, this role looks into the /etc/passwd file and locks all users that are not present in a `users` hash in your Ansible inventory. All others get unlocked.
+
+Locking a user means locking the user's password (`usermod --lock USERNAME`) and removing `$HOME/.ssh/authorized_keys`, to make sure that those users can no longer get access with their private keys.
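+
+A rough sketch of that logic in shell terms (user names are stand-ins for the inventory's `users` hash; the actual role performs this through Ansible tasks and really runs `usermod --lock`):
+
+```
+# Stand-in for the `users` hash from the inventory
+allowed="alice bob"
+
+# Every login-capable account from /etc/passwd that is not listed is a lock candidate
+awk -F: '$7 !~ /(nologin|false)$/ {print $1}' /etc/passwd | while read -r user; do
+  case " $allowed " in
+    *" $user "*) ;;                  # listed in the inventory: leave unlocked
+    *) echo "would lock: $user" ;;   # role locks the password and removes authorized_keys
+  esac
+done
+```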
diff --git a/docs/ansible/roles/vpn/index.md b/docs/ansible/roles/vpn/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..0219b93965b15705e6367b8e1b2b77cba53421b8
--- /dev/null
+++ b/docs/ansible/roles/vpn/index.md
@@ -0,0 +1,6 @@
+# VPN with strongSwan
+
+Sources:
+
+- https://www.strongswan.org
+- https://raymii.org/s/tutorials/IPSEC_vpn_with_Ubuntu_16.04.html
diff --git a/docs/ansible/roles/zammad/index.md b/docs/ansible/roles/zammad/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..882eef2a506ec682c140abd96dcac512ba5681d0
--- /dev/null
+++ b/docs/ansible/roles/zammad/index.md
@@ -0,0 +1,5 @@
+# Zammad
+
+ATTENTION: Zammad requires ElasticSearch 5.6.x
+
+See https://docs.zammad.org/en/latest/install-elasticsearch.html
diff --git a/docs/ansible/wiki/attacks.md b/docs/ansible/wiki/attacks.md
new file mode 100644
index 0000000000000000000000000000000000000000..3519fcfd058b1b541de67a5ef0fa9bb9b20a6d29
--- /dev/null
+++ b/docs/ansible/wiki/attacks.md
@@ -0,0 +1,50 @@
+# Attack Vectors
+
+In the context of responsible risk assessment we have to think about how the server farm can be attacked from outside, what the consequences would be, and how we can protect ourselves from such attacks.
+
+In essence there are 2 possible scenarios:
+
+1. An attacker gets access to the inner part of the server farm
+2. An attacker manages to make our infrastructure unresponsive
+
+## Users
+
+This is always the weakest link in all IT environments. The danger of mistakes by regular users is fairly limited, and mitigation and recovery strategies are described in the [Disaster recovery](desaster-recovery) section.
+
+This is more difficult for DevOps or system administrators. Their possible mistakes carry a huge risk for the whole infrastructure, and this can only be mitigated by a few soft factors:
+
+- only grant an individual the permissions they really need
+- have a backup for every key admin, but still limit the number of admins to the absolute minimum
+- allow enough resources so that every development can be tested thoroughly
+- implement and live a culture where development always gets reviewed, to increase the chance of catching mistakes before they hit your environment
+
+## Social Engineering
+
+This is an external attack which targets the aforementioned users. Somehow an attacker wants to gain access to the system so that they can steal data and manipulate the application(s). Social engineering regular users is relatively unattractive, although even that can cause real damage: if an outsider gets access to a user account with editing permissions on the official site, they can easily damage the reputation of the site owner by publishing insulting or malicious content.
+ 
+More important is the behaviour of users with higher permissions, like DevOps and system administrators. These people have to be trained and trusted to behave responsibly and never do anything that would allow an attacker to get access to any one of the systems.
+
+Here is a list of guidelines that should be enforced for every user and application:
+
+- Use strong passwords
+- Never re-use any password: never use a single password on more than one account
+- Never share passwords or other credentials with anyone
+- Never click on a link in an email, not even from senders you know
+- Never download anything you haven't actively searched for yourself
+- [...list to be continued...]
+
+## Domain attacks (DNS or registrar or SSL)
+
+This is something that is becoming more and more popular. But it is a global problem, and there is nothing any individual site owner could do about it. However, let's quickly look into this:
+
+If the DNS records get attacked, either by hijacking them or by overloading the responsible DNS server, then the domain is worthless for the duration of that attack. The same applies if an attacker manages to get hold of the domain registration as such. Both of these issues are close to the topic discussed above under social engineering.
+
+The SSL attack is related but slightly different. It has to do with the laziness of some domain owners who carelessly order SSL certificates and then send them around by email. Every single instruction from a certificate authority contains a bold warning that the private key should never be given away. But what else happens when it is attached to an email? By that point it has effectively become public, and a man-in-the-middle can easily sniff into supposedly encrypted sessions and thereby gain access to the internal systems.
+
+## (D)DoS attacks
+
+Denial of service attacks, whether distributed or not, are a pretty powerful means to cause real trouble. While such attacks were carried out by kids in the past, that has changed: professional criminals who intend to earn money with them have taken over.
+
+As the owner of a single site or a small to medium sized farm, thinking about DoS protection is pretty simple: protecting hosts at the border of a data center is impossible! This is because the bandwidth attackers can afford these days is so high (we're talking hundreds of GB per second) that the infrastructure of any data center is simply unable to handle it with any of today's available mechanisms. It gets even worse with amplification attacks, where the attacker only requires low bandwidth but manages to amplify the effect by orders of magnitude.
+
+If denial of service attacks are a realistic threat to your business, you have to hide your entire infrastructure behind a shield that is as far away from your physical data center as possible. Providers like Akamai or Cloudflare offer such services, and they come with additional features that greatly improve the performance of every web application, but they also come at a price.
diff --git a/docs/ansible/wiki/backup/backup-google.md b/docs/ansible/wiki/backup/backup-google.md
new file mode 100644
index 0000000000000000000000000000000000000000..42f212cb52c5a02406396a142e918703ad8cefc2
--- /dev/null
+++ b/docs/ansible/wiki/backup/backup-google.md
@@ -0,0 +1,31 @@
+# Backup to Google Cloud
+
+If your inventory has the variable `gcloud_sync` defined, then Google Cloud backup tools will be installed and configured automatically.
+
+The configuration of the Google cloud backup is done through the inventory and an example configuration looks like this:
+
+```
+gcloud_sync:
+  - source: /var/backups/mysql
+    account: backup
+    bucket: mysql/hostname
+    hour: 3
+    minute: 10
+  - source: /jails/mysite/var/www/files/default
+    account: backup
+    bucket: files
+    hour: 4
+    minute: 10
+```
+
+In addition, you have to define your Google Cloud account and projects with another inventory variable:
+
+```
+gcloud:
+  backup:
+    projectid: id-from-google-cloud
+    authfile: Backups-123456789.json
+    account: abcdefghijh@developer.gserviceaccount.com
+```
+
+You'll find those credentials in your Google Cloud account.
diff --git a/docs/ansible/wiki/backup/backup-spideroak.md b/docs/ansible/wiki/backup/backup-spideroak.md
new file mode 100644
index 0000000000000000000000000000000000000000..a40cb42f9f6f45bc12c8ae378430ba256a2e5faf
--- /dev/null
+++ b/docs/ansible/wiki/backup/backup-spideroak.md
@@ -0,0 +1,15 @@
+# Backup to SpiderOak
+
+If your inventory has the variables `spideroak_username` and `spideroak_password` defined, then SpiderOak will be installed and configured automatically.
+
+The directories being included in the backup for SpiderOak:
+
+- all given directories in the variable `spideroak_include`
+- all `files` directories from all Drupal installations
+- the MySQL backups from `/var/backups/mysql`
+
+The SpiderOak client is running in the background, i.e. it detects new, changed or deleted files immediately and backs them up off-site as soon as possible. The way SpiderOak works has some huge advantages:
+
+- All data is encrypted locally before being uploaded to the cloud. Hence, nobody will ever be able to access that data unless they get hold of the account's credentials.
+- Changed files overwrite their previous version in the backup, but all older versions are accessible in the SpiderOak account as well.
+- Deleted files are moved to their own trash within the off-site backup so that you can restore deleted files as well, should that ever be necessary.
diff --git a/docs/ansible/wiki/backup/backup.md b/docs/ansible/wiki/backup/backup.md
new file mode 100644
index 0000000000000000000000000000000000000000..6947ef3038d4ed6b59793cecfb4270407913a820
--- /dev/null
+++ b/docs/ansible/wiki/backup/backup.md
@@ -0,0 +1,51 @@
+# Backing up data
+
+As described in the [introduction](backup/introduction), we focus on databases and customer files only when it comes to backups. So we have to maintain a list of directories containing customer files that should be backed up, and we have to produce regular database dumps that can then be backed up like customer files.
+
+## MySQL dumps
+
+Each host that has a MySQL server installed always comes with a pre-configured dump scenario that creates database dumps following the default plan:
+
+- Store the database dumps in `/var/backups/mysql`
+- Run the dump by cron (time and frequency configurable, see below)
+- Keep dumps as follows:
+  - Daily: keep the last 8 days
+  - Weekly: keep the Friday dump for the last 5 weeks
+  - Monthly: keep the first dump for the last 12 months
+
+The tool used for this is called [AutoMySqlBackup][1].
+
+In the inventory we can configure the variables `mysqlbackup` and `drupal_settings` to
+
+- exclude certain databases
+- exclude certain tables within specific databases
+
+The default configuration is this and runs at 2am every morning:
+
+```
+mysqlbackup:
+  active: true
+  cron:
+    month: '*'
+    day: '*'
+    weekday: '*'
+    hour: 2
+    minute: 0
+  exclude:
+    db:
+      - information_schema
+      - mysql
+      - performance_schema
+      - sys
+    table: []
+```
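+
+With these defaults, the resulting schedule corresponds to a crontab entry like this (the script path is an assumption and depends on how AutoMySqlBackup is installed):
+
+```
+0 2 * * * root /usr/sbin/automysqlbackup
+```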
+
+## Backing up off-site
+
+Now that we've identified and provided the directories and files that require off-site backup, we can do that with the following two tools and services:
+
+- [Backup with SpiderOak](backup/backup-spideroak)
+- [Backup with Google Cloud](backup/backup-google)
+
+It is possible to use either of these services or even both.
+
+[1]: http://sourceforge.net/projects/automysqlbackup
diff --git a/docs/ansible/wiki/backup/introduction.md b/docs/ansible/wiki/backup/introduction.md
new file mode 100644
index 0000000000000000000000000000000000000000..34f6049d604d9db8f93ff156c8ef8090c6db0f6a
--- /dev/null
+++ b/docs/ansible/wiki/backup/introduction.md
@@ -0,0 +1,31 @@
+# Introduction to Backup and Restore
+
+- [Introduction](backup/introduction)
+- [Backup](backup/backup)
+    - [Backup with SpiderOak](backup/backup-spideroak)
+    - [Backup with Google Cloud](backup/backup-google)
+- [Restore](backup/restore)
+    - [Restore from SpiderOak](backup/restore-spideroak)
+    - [Restore from Google Cloud](backup/restore-google)
+
+The architecture of the server farm follows the assumption that hosts and applications can always be rebuilt and therefore don't have to be backed up. Only the customer data (files and databases) cannot be rebuilt and therefore needs to be backed up.
+
+There is a first level of backup provided by each ISP that offers hosting facilities, and while those backups are available, a host can always be rebuilt, including the customer data, from those backups - often available as images.
+
+However, the strategy of putting all eggs into one basket is fairly dangerous. [As we saw recently][1], hosting companies can become insolvent, and if that happens without prior notice, not only are the live hosts inaccessible at that point, the same is the case for the backups: both are gone, forever.
+
+What we learned from this incident, and built into this server farm from day one, is the need for off-site backups. This chapter describes those backups, as well as how to restore the data from them when needed.
+
+To make sure we have the right things in mind, here is again a simple overview that determines what needs to be backed up. The different components of a live host are:
+
+ - **Hardware**: provided by the ISP and reproducible/replaceable
+ - **OS**: installed from third party sources
+ - **Services**: Apache, MySQL, PHP, modules, firewall, logging and many, many more. All of them are installed from third party sources
+ - **Applications**: e.g. Drupal, installed from drupal.org
+ - **Customer development**: installed from a Git repository which is hosted elsewhere
+ - **Database tables and records**: they need to be backed up
+ - **Customer files**: e.g. images, pdfs and other uploaded files. They also need to be backed up
+
+That means our backup strategy focuses on the latter two only: database content and customer files. Everything else is going to be available from other sources at any time.
+
+[1]: https://www.codeenigma.com/host/blog/aberdeencloud-what-happened
diff --git a/docs/ansible/wiki/backup/restore-google.md b/docs/ansible/wiki/backup/restore-google.md
new file mode 100644
index 0000000000000000000000000000000000000000..39d62ebaa28455129821ef918b2c1d176f7d291f
--- /dev/null
+++ b/docs/ansible/wiki/backup/restore-google.md
@@ -0,0 +1,11 @@
+# Restore from Google Cloud
+
+For each of the directories you want to restore, call the sequence of these commands:
+
+```
+sudo gcloud config set account abcdefghijh@developer.gserviceaccount.com
+sudo gcloud config set core/project id-from-google-cloud
+sudo gsutil -m cp -r gs://mysql/hostname/ /var/backups/mysql
+```
+
+where `abcdefghijh@developer.gserviceaccount.com`, `id-from-google-cloud`, `mysql/hostname` and `/var/backups/mysql` are example values taken from the configuration described in the [backup section](backup/backup-google).
diff --git a/docs/ansible/wiki/backup/restore-spideroak.md b/docs/ansible/wiki/backup/restore-spideroak.md
new file mode 100644
index 0000000000000000000000000000000000000000..776546e580f51194a65409904bee6c7ef45d5378
--- /dev/null
+++ b/docs/ansible/wiki/backup/restore-spideroak.md
@@ -0,0 +1,21 @@
+# Restore from SpiderOak
+
+First, you have to stop the SpiderOak service:
+
+```
+sudo service spideroak stop
+```
+
+For each of the directories you want to restore, call the sequence of these commands:
+
+```
+sudo SpiderOakONE --restore=/var/backups/mysql
+```
+
+where `/var/backups/mysql` is one of the directories to restore.
+
+Finally, please start the service again so that automatic backup is operating:
+
+```
+sudo service spideroak start
+```
diff --git a/docs/ansible/wiki/backup/restore.md b/docs/ansible/wiki/backup/restore.md
new file mode 100644
index 0000000000000000000000000000000000000000..1bef18dee60389a8fd61d387e06c909b2b2b8f78
--- /dev/null
+++ b/docs/ansible/wiki/backup/restore.md
@@ -0,0 +1,6 @@
+# Restore Data from Backups
+
+Should you ever have to rebuild a host, you will also have to restore data from backups. The host itself should be built with Ansible just like it was built originally; afterwards you pull the data from the backups. At that time, all the tools and credentials will already be available on that host, so you can simply call the respective commands for either of these off-site facilities:
+
+- [Restore from SpiderOak](backup/restore-spideroak)
+- [Restore from Google Cloud](backup/restore-google)
diff --git a/docs/ansible/wiki/configure-crontabs.md b/docs/ansible/wiki/configure-crontabs.md
new file mode 100644
index 0000000000000000000000000000000000000000..3911fa162679074996792a801f935089b083d468
--- /dev/null
+++ b/docs/ansible/wiki/configure-crontabs.md
@@ -0,0 +1,57 @@
+# Roles which are setting Crontabs
+
+## Common
+
+Tasks defined in *common.yml*
+
+Setup all crontabs from variables `cronjobs_host` and `cronjobs_group`.
+
+## Drupal
+
+Setup further crontabs defined by domains in _drupal\_settings_
+
+## Drush
+
+Tasks defined in *config.yml*
+
+Setup `drush core-cron` for all drush aliases, configurable through _drush\_cron\_core_
+
+Setup `drush translation refresh` and `drush translation update` for all drush aliases, configurable through _drush\_cron\_translation_
+
+## HAProxy
+
+Tasks defined in *configure.yml*
+
+Should be moved out to the customer inventory.
+
+## MySQL
+
+Tasks defined in *backup.yml*
+
+Setup automatic backup, configurable through _mysql\_cron\_backup_
+
+## Oracle
+
+Tasks defined in *configure.yml*
+
+Should be moved out to the customer inventory.
+
+## ownCloud
+
+Tasks defined in *main.yml*
+
+Setup cron, configurable through _owncloud\_cron\_core_
+
+## SVNServer
+
+Tasks defined in *main.yml*
+
+Setup sync cron between master and slave, configurable through _svnserver\_cron\_sync_
+
+# How to deploy Crontab definitions
+
+When you've made any changes to Cronjobs, simply call this to deploy them to your server farm:
+
+```
+apb farm --tags=cron
+```
diff --git a/docs/ansible/wiki/desaster-recovery.md b/docs/ansible/wiki/desaster-recovery.md
new file mode 100644
index 0000000000000000000000000000000000000000..e9b4209e02a82471d1d777156d7554f3f4b5b89b
--- /dev/null
+++ b/docs/ansible/wiki/desaster-recovery.md
@@ -0,0 +1,46 @@
+# Disaster Recovery
+
+This is something that a lot of traditional IT admins are looking for, because they always had to have a plan to keep their company's hardware, software and network up and running. That included a plan for how to react if one of the components failed or crashed.
+
+With cloud hosting, a lot of those paradigms changed and we should first look into the components that make up the infrastructure as such.
+
+## Components of the infrastructure
+
+- Hardware
+  - All the hardware is provided by the ISP
+  - It is neither owned nor exclusively utilised by us
+- Network
+  - DNS system
+  - Routing system
+- Services
+  - Firewall
+  - Apache
+  - PHP
+  - MySQL
+- Applications
+  - Developed and maintained e.g. by a Drupal agency 
+- Data
+  - Databases
+  - Customer files (images, pdfs, etc.)
+
+## What can go wrong?
+
+The first two components (hardware and network) are maintained by an ISP (AWS, Google, Linode, JiffyBox, etc.), and what we know as the host is no longer a bare metal box where something could break - rather, it is a virtual server inside a bigger infrastructure.
+
+All the ISPs we support in this server farm monitor the individual components like power supplies, hard drives, main boards, switches, routers, etc. on an ongoing basis, and they even replace them early enough that a hardware crash is hardly something we would have to consider. The whole network infrastructure, with the global DNS system and the routing, is such a critical component of the global net that we rely on the experts in those areas keeping it up and running for us - there is nothing we can do to reduce the risk of (temporary) failure.
+
+With regard to the third part, the services, their installation and configuration is the result of a fully automated process and no manual intervention is ever taken. A failure is therefore virtually impossible, and before we change services or their configuration, we always test that on staging hosts.
+  
+The application layer is provided by an agency, which usually runs a version control system. Of course, such an agency should never deploy fresh code to any live server without having done everything possible to make sure that the code doesn't break anything. If it still happens, the agency has to deploy a previously working version of their application.
+
+That brings us down to the fifth and last part, the customer data. Let's ask ourselves what could go wrong with that data: it can be overwritten or deleted unintentionally. This is only possible due to an error in the application provided by the agency or by user mistake. As all the customer data is subject to our [backup strategy](backup/introduction), such files can always be restored and the damage is limited or even avoided completely.
+
+## How to recover from one of those failures
+
+With all that in mind, it is hard to come up with scenarios that would lead to a disaster we had to recover from. In the very worst case there might be the following 3 scenarios:
+
+1. Loss or damage of customer files due to application error or user mistake: we recover from that by [restoring the data](backup/restore).
+2. Corrupted service configuration due to an error made by a DevOp: in this case we simply rebuild that host from scratch and then restore the customer files.
+3. Data center unavailable: if this is temporary, it should usually get back to normal operation much quicker than any action we could possibly take. However, if the data center is completely broken (technically or financially), then we have to switch to a different data center and build our hosts there from scratch as well.
+
+How do we build hosts from scratch? This is the same as [adding a new host](hosts/add) and is fully documented in that chapter.
diff --git a/docs/ansible/wiki/drupal-apache-update.md b/docs/ansible/wiki/drupal-apache-update.md
new file mode 100644
index 0000000000000000000000000000000000000000..3db04cb99a797cbda6652cb8a82d4b449a90234c
--- /dev/null
+++ b/docs/ansible/wiki/drupal-apache-update.md
@@ -0,0 +1,5 @@
+To update Drupal's Apache configuration - optionally with basic auth, letsencrypt, etc. - you should use the command
+
+```
+ascr apache-config --limit=[HOSTNAME] --site=[SITEID] --application=drupal
+```
diff --git a/docs/ansible/wiki/drupal-deployment.md b/docs/ansible/wiki/drupal-deployment.md
new file mode 100644
index 0000000000000000000000000000000000000000..7ae054c876a695592969103f7a5357378d392d14
--- /dev/null
+++ b/docs/ansible/wiki/drupal-deployment.md
@@ -0,0 +1,53 @@
+# Drupal deployment
+
+## Adding a new site
+
+All Drupal sites are defined in the inventory in a variable called `drupal_settings`. To add a new site, simply add its definition as a new item to that existing array and then run the full farm playbook on the respective host, followed by the user playbook:
+
+```
+apb farm --limit=[HOSTNAME]
+apb user --limit=[HOSTNAME] --tags=JailUserInit,SetPermissions
+```
+
+Running the full farm playbook is necessary because a new Drupal site impacts almost all parts of the system.
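+
+A new item in `drupal_settings` might look like this minimal sketch (a hypothetical example showing only `id`, `root` and `domains`; real definitions typically carry more keys):
+
+```
+drupal_settings:
+  - id: newsite
+    root: /var/www/newsite
+    domains:
+      - domain: www.newsite.example.com
+```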
+
+If you are using jails, then you can limit the Drupal deployment to one of the jails by adding `-e "limit_site=[DRUPAL-SITE-ID]"` to the commands above.
+
+If you are using HaProxy and/or Varnish as well, you may have to run their updates too:
+
+- HaProxy
+  - [Quick Update](haproxy-quick-update)
+- Varnish
+  - [Quick Update](varnish-quick-update)
+
+## Limiting access with Apache Basic Authentication
+
+If you want to limit access to a Drupal site you can utilize Apache's basic authentication control by simply adding a couple of variables to the `drupal_settings` in your inventory:
+
+```
+drupal_settings:
+  - id: ...
+    root: ...
+    domains:
+      - domain: www.example.com
+        ...
+        apache_auth:
+          type: Basic
+          name: The label for the login dialog
+          user: USERNAME
+          password: PASSWORD
+        ...
+```
+
+To deploy those changes, run
+
+```
+ascr role drupal --limit=[HOSTNAME] --tags=ApacheConfig
+```
+
+## Update a deployed Drupal site
+
+```
+ascr drupal --site=[DRUPAL-SITE-ID]
+```
diff --git a/docs/ansible/wiki/drush-fetch-aliases.md b/docs/ansible/wiki/drush-fetch-aliases.md
new file mode 100644
index 0000000000000000000000000000000000000000..903dd3f2470c974adf599c12c6634ae1b8ade078
--- /dev/null
+++ b/docs/ansible/wiki/drush-fetch-aliases.md
@@ -0,0 +1,9 @@
+Hosts that have drush installed will have comprehensive and always up-to-date drush alias files, and they are built such that they can be used locally as well, so that drush can access the remote hosts easily.
+
+To update your local host with the latest alias definitions, simply run this script:
+
+```
+ascr drush-aliases
+```
+
+You will then find all the drush alias files in `/etc/drush`, ready to be used right away.
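+
+Assuming one of those files defines an alias such as `@example.www` (a hypothetical name), you can then run drush commands against the remote site directly from your local host:
+
+```
+drush @example.www status
+```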
diff --git a/docs/ansible/wiki/elk/fluentd.md b/docs/ansible/wiki/elk/fluentd.md
new file mode 100644
index 0000000000000000000000000000000000000000..56f08ebcd7da4eb76ef4f2bb081849feaac3cd00
--- /dev/null
+++ b/docs/ansible/wiki/elk/fluentd.md
@@ -0,0 +1,19 @@
+# Collecting Data
+
+Log data is collected from all the log files in `/var/log` and all of its subdirectories, as well as from certain listeners that can be configured on each host in the server farm to collect additional data from applications without piping them through the system log facilities first.
+
+All of that collected data gets forwarded to a central log server within the server farm where it will be stored and indexed by ElasticSearch.
+
+The collection and forwarding is performed by [FluentD][1]. Each host has a FluentD client, and the central log server also has a FluentD server instance installed. Each client collects data from the system log files in `/var/log` and from optionally configured additional listeners. The data is queued and forwarded to the FluentD server. This forwarding is protected in the following ways:
+
+- Shared Key: a string only known to all the hosts in the server farm
+- SSL Certificate: used to encrypt the data in transit
+- Username and password: used to authenticate the communication; these credentials are only known to hosts within the server farm
+
+All the communication is done over port 24284.
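+
+As an illustration, a client-side forwarding configuration along those lines could look roughly like this (a hedged sketch based on the fluentd `forward` output plugin; hostnames, paths and credentials are placeholders, and the real configuration is generated by Ansible):
+
+```
+<match **>
+  @type forward
+  transport tls
+  tls_cert_path /etc/fluentd/certs/ca.crt
+  <security>
+    self_hostname client01.example.com
+    shared_key [SHARED-KEY]
+  </security>
+  <server>
+    host logserver.example.com
+    port 24284
+    username [USERNAME]
+    password [PASSWORD]
+  </server>
+</match>
+```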
+
+The FluentD server also queues all the received data and forwards it locally on to ElasticSearch, which is configured to only accept data from that FluentD instance and not from any other local or remote process.
+
+Depending on configuration, the data can be indexed in different indexes, which makes querying the data easier down the road. Also, data retention can be configured separately for each index, though this is not yet possible in this server farm.
+
+[1]: http://www.fluentd.org
diff --git a/docs/ansible/wiki/elk/introduction.md b/docs/ansible/wiki/elk/introduction.md
new file mode 100644
index 0000000000000000000000000000000000000000..02eb8f161d48eff5e51f0099baca03d2093b6315
--- /dev/null
+++ b/docs/ansible/wiki/elk/introduction.md
@@ -0,0 +1,10 @@
+# ElasticSearch
+
+- [Introduction](elk/introduction)
+- [Collecting Data](elk/fluentd)
+- [UI to view the data](elk/kibana)
+- [Alerts on Log Data](monitoring/alerts-elk)
+
+Log data from the OS and selected applications is generated in many different places on the server farm, and we aggregate all of that data in ElasticSearch so that we can analyse the systems whenever needed, but also to be able to raise alerts if something is going wrong.
+
+In this chapter we describe how that works and what can be done with all the data.
diff --git a/docs/ansible/wiki/elk/kibana.md b/docs/ansible/wiki/elk/kibana.md
new file mode 100644
index 0000000000000000000000000000000000000000..95a6ffbdd10d8cf884913fbb4de173aedf550a2f
--- /dev/null
+++ b/docs/ansible/wiki/elk/kibana.md
@@ -0,0 +1,17 @@
+# UI to view the data
+
+ElasticSearch, which stores all our log data, has a powerful query language to access all of that data and do whatever you want with it. This is used e.g. by [ElastAlert](monitoring/alerts-elk) to determine if any alerts have to be raised, and there are a lot of other tools around that use that very same query language, e.g. command line tools.
+
+However, this is a cumbersome process and you don't want to browse through your log data by typing long queries. A graphical interface is required for this, and that is available with [Kibana][1]. It can be accessed through the URL that is defined in the `kibana_domain` variable in your inventory, and access is controlled through username and password.
+
+Please go to the [Kibana documentation][2] to learn how best to use this powerful tool. These additional resources may be useful too:
+
+- [ElasticSearch Query Language][3] 
+- [Video on Kibana][4]
+- [Kibana Tutorial][5]
+
+[1]: https://www.elastic.co/products/kibana
+[2]: https://www.elastic.co/guide/en/kibana/current/index.html
+[3]: https://www.elastic.co/guide/en/elasticsearch/reference/current/query-dsl.html
+[4]: https://www.elastic.co/webinars/whats-new-in-kibana-4
+[5]: https://www.digitalocean.com/community/tutorials/how-to-use-kibana-dashboards-and-visualizations
diff --git a/docs/ansible/wiki/firewall.md b/docs/ansible/wiki/firewall.md
new file mode 100644
index 0000000000000000000000000000000000000000..97187b6c3aa47d4c5418e394cf26b7009c615363
--- /dev/null
+++ b/docs/ansible/wiki/firewall.md
@@ -0,0 +1,61 @@
+# Firewall
+
+## Overview of firewall protection
+
+Some of the protections are:
+
+- SYN cookie protection
+- disable proxy ARP
+- ignore faulty ICMP answers
+- ignore ICMP echo broadcasts
+- ICMP rate limit
+- set tcp-fin-timeout to 30
+- deny private networks
+- block specific IPs
+- limit concurrent TCP connections from one IP
+- defend against port scans
+- defend against ping of death
+- deny invalid packets
+
+In addition to that, we only allow a very limited number of open ports, both inbound and outbound; everything else is stealth.
+
+## Configuring the firewall
+
+Whenever you've changed any firewall configuration, deploy it by calling this:
+
+```
+ascr firewall-config
+```
+
+### Inbound and outbound port configuration
+
+The firewall configuration happens in the `common` role, and in its default variables all the inbound and outbound ports are defined with these values:
+
+```
+firewall_ports:
+  in:
+    - {type: 'TCP', number: '22', comment: 'ssh'}
+    - {type: 'TCP', number: '80', comment: 'http'}
+    - {type: 'TCP', number: '443', comment: 'https'}
+    - {type: 'UDP', number: '60000:61000', comment: 'mosh'}
+  out:
+    - {type: 'UDP', number: '53', comment: 'unknown'}
+    - {type: 'UDP', number: '67:68', comment: 'DHCP'}
+    - {type: 'UDP', number: '123', comment: 'unknown'}
+    - {type: 'UDP', number: '60000:61000', comment: 'mosh'}
+    - {type: 'TCP', number: '22', comment: 'ssh'}
+    - {type: 'TCP', number: '25', comment: 'smtp'}
+    - {type: 'TCP', number: '80', comment: 'http'}
+    - {type: 'TCP', number: '8080', comment: 'http'}
+    - {type: 'TCP', number: '110', comment: 'pop3'}
+    - {type: 'TCP', number: '143', comment: 'imap'}
+    - {type: 'TCP', number: '220', comment: 'imap'}
+    - {type: 'TCP', number: '443', comment: 'https'}
+    - {type: 'TCP', number: '465', comment: 'smtp'}
+    - {type: 'TCP', number: '587', comment: 'smtp'}
+    - {type: 'TCP', number: '993', comment: 'imap'}
+    - {type: 'TCP', number: '995', comment: 'pop3'}
+    - {type: 'TCP', number: '9418', comment: 'git'}
+```
+
+You can override these values in your own inventory.
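+
+For example, a hypothetical inventory override that opens one additional inbound port could look like this. Note that Ansible replaces list variables wholesale rather than merging them, so you have to repeat all default ports you still need:
+
+```
+firewall_ports:
+  in:
+    - {type: 'TCP', number: '22', comment: 'ssh'}
+    - {type: 'TCP', number: '80', comment: 'http'}
+    - {type: 'TCP', number: '443', comment: 'https'}
+    - {type: 'TCP', number: '8443', comment: 'custom service'}
+    - {type: 'UDP', number: '60000:61000', comment: 'mosh'}
+  out:
+    # repeat the default outbound ports from above here
+```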
diff --git a/docs/ansible/wiki/gitlab-ci-configuration.md b/docs/ansible/wiki/gitlab-ci-configuration.md
new file mode 100644
index 0000000000000000000000000000000000000000..c1b99097a24b1708f08daaea8b720dca92aa9cd4
--- /dev/null
+++ b/docs/ansible/wiki/gitlab-ci-configuration.md
@@ -0,0 +1,5 @@
+If all the [preparation](gitlab-ci-prerequisites) has been completed, you only require a single file named `.gitlab-ci.yml` in the root of your repository.
+
+All the possible configuration is [documented by GitLab-CI](http://docs.gitlab.com/ce/ci/yaml/README.html).
+
+To call Ansible for deployment with the settings given to the system administrator, you only have to call the script `apb deploy`. Everything else will be taken from the project configuration on the server.
diff --git a/docs/ansible/wiki/gitlab-ci-introduction.md b/docs/ansible/wiki/gitlab-ci-introduction.md
new file mode 100644
index 0000000000000000000000000000000000000000..213e0e07aad322e8b533676334e371d93dbc2652
--- /dev/null
+++ b/docs/ansible/wiki/gitlab-ci-introduction.md
@@ -0,0 +1,8 @@
+GitLab-CI is an automated system on this host which is part of GitLab and documented in full detail on [gitlab.com](http://docs.gitlab.com/ce/ci/).
+
+This is utilized on this server such that the system administrator has to allocate one of the available runners to the projects that want to make use of it. Please contact @jurgenhaas should you be interested.
+
+Continue reading:
+
+- [Pre-requisites](gitlab-ci-prerequisites)
+- [Configure a project](gitlab-ci-configuration)
diff --git a/docs/ansible/wiki/gitlab-ci-prerequisites.md b/docs/ansible/wiki/gitlab-ci-prerequisites.md
new file mode 100644
index 0000000000000000000000000000000000000000..b3dc14acc91189be6b97316ee3a4b04182a32a0a
--- /dev/null
+++ b/docs/ansible/wiki/gitlab-ci-prerequisites.md
@@ -0,0 +1,50 @@
+# Preparing the GitLab-CI server
+
+As GitLab-CI is part of GitLab core, there is no extra software that needs to be installed. However, a few configuration steps are required:
+
+- Create and configure runners
+  - Located in /etc/gitlab-runner
+  - [Documentation](http://docs.gitlab.com/ce/ci/runners/README.html)
+- Install and configure Ansible
+  - Prepare configuration for user gitlab-runner
+    - Create ~/.ansible/vault.pwd
+    - Create ~/.ansible/secrets with ansible_sudo_pass and potentially other variables
+  - Install Ansible in /opt/ansible and run ansible-script.py setup-local
+  - Run /opt/ansible/config.sh as gitlab-runner
+
+# Preparing remote hosts for Ansible deployment
+
+To allow the gitlab-runner to access your remote hosts when deploying code through Ansible, you have to prepare those remote hosts with these steps:
+
+- Create a user called gitlab-runner
+- Install the public key ```ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJDzOYp01MZTZxj6jY+S+Pv9uvpDlEQLl9uH0llmHUw5FsDgZ//ObYQtKvyMftykkejckWzSvAYsulV20h5+oDjQAcdaC5joZETAOP/5rCgYlV3Rd4lbKLNBSpWHFl4hmOD1cBqMrNtTZqIkfSayMBRn+tMK/6FseXEROjlose11JF+4WcjIzo41qKDQ/Y3GT7BG2kgAgfO0sWj2bxWSW4pdOuYIabQvS+EuC+g8OQsRUseSTsOJNgDTzh/loIjlKV3ZP8zRAqYq2XUPz2GvQ8qILUkaZvTU3CAdIzxY4rLF/iEgofNCi1EgscEzdHDfujRbG8BiMiH/3wt6UPK/Ql gitlab-runner@pdevop``` for that user as an authorized key
+- Add gitlab-runner to the sudo group
+- Set the password given by the system administrator for gitlab-runner
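+
+As root on the remote host, those steps could be scripted roughly as follows (a hedged sketch for Debian/Ubuntu-style hosts; the placeholder stands for the full public key quoted above):
+
+```shell
+# Hedged sketch, run as root; replace the placeholder with the full key above.
+useradd --create-home --shell /bin/bash gitlab-runner
+mkdir -p /home/gitlab-runner/.ssh
+echo '[PUBLIC-KEY-FROM-ABOVE]' >> /home/gitlab-runner/.ssh/authorized_keys
+chmod 700 /home/gitlab-runner/.ssh
+chmod 600 /home/gitlab-runner/.ssh/authorized_keys
+chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/.ssh
+usermod -aG sudo gitlab-runner
+```
+
+Finally, set the password given by the system administrator with `passwd gitlab-runner`.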
+
+# Preparing a GitLab project
+
+These steps need to be taken by the system administrator on the GitLab server:
+
+- Assign at least one runner to the project
+- Add the GitLab user `Ansible Deployment` as a member to the project with the developer role
+- Add the public key of the user gitlab-runner at the remote host to the GitLab user's `Ansible Deployment` profile
+- If the project wants to use Ansible for deployment, do this in addition:
+  - Find out CI_PROJECT_ID
+  - Create a file ansible-inventories/paragon/raw/master/files/gitlab-runner/[CI_PROJECT_ID].yml and define the necessary variables
+  - Test SSH access to the remote host
+
+## CI variable configuration
+
+To limit the scope of what the GitLab-CI process can actually perform with Ansible, define the project configuration file with these values:
+
+```
+playbook: deploy
+company: customer
+host: web01
+extras:
+  var1: value 1
+  var2: value 2
+tags: deploy
+```
+
+All the variables in `extras` (optional) will be forwarded to Ansible as `--extra-vars`, and so will the optional `tags`, which should be a comma-separated list of strings if multiple values are required.
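+
+With these settings, the CI job would end up invoking the deployment roughly like this (illustrative only; the exact call is assembled by the `apb deploy` wrapper from the project configuration on the server, and the option mapping shown here is an assumption):
+
+```
+apb deploy --limit=web01 --extra-vars="var1='value 1' var2='value 2'" --tags=deploy
+```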
diff --git a/docs/ansible/wiki/haproxy-custom-blacklists.md b/docs/ansible/wiki/haproxy-custom-blacklists.md
new file mode 100644
index 0000000000000000000000000000000000000000..6f439f620b17d9730cedc1aae71c3d7b3872b69f
--- /dev/null
+++ b/docs/ansible/wiki/haproxy-custom-blacklists.md
@@ -0,0 +1,22 @@
+This Ansible suite comes with some default blacklists. However, in some circumstances it may be required to quickly add a couple of entries to those blacklists and roll them out.
+
+For this you can create or edit the file `/etc/ansible/facts.d/blacklist.fact` with this JSON content:
+
+```json
+{
+  "agent": [],
+  "ip": [
+    "212.34.56.78",
+    "198.11.149.0/24"
+  ],
+  "referer": []
+}
+```
+
+So for each of the three supported blacklists you can provide an array of strings that will be looked up for each request.
+
+With the following command you can roll out those changes:
+
+```
+ascr haproxy-blacklists
+```
diff --git a/docs/ansible/wiki/haproxy-quick-update.md b/docs/ansible/wiki/haproxy-quick-update.md
new file mode 100644
index 0000000000000000000000000000000000000000..4541d71c163833058c482cdf807c1641a8d539e4
--- /dev/null
+++ b/docs/ansible/wiki/haproxy-quick-update.md
@@ -0,0 +1,19 @@
+If any changes have been made, e.g. new domains, changed aliases, or SSL being used for certain domains, you might be looking for a way to quickly update HaProxy to make those changes effective.
+
+With the following command, everything will be done within seconds:
+
+```
+ascr haproxy-config
+```
+
+or
+
+```
+ascr role haproxy --limit=proxyserver -r commonauth,commonconnect,common --tags=Config
+```
+
+Note: if you have introduced new SSL certificates or even new IP addresses, you should run `ascr role haproxy --limit=proxyserver` without any limitations so that network interfaces, firewall, etc. can all be configured as well.
+
+If you only added new domains to existing SSL certificates, it's sufficient to run `ascr haproxy-certs`.
+
+You probably also want to look into [Quick Update of Varnish](varnish-quick-update).
diff --git a/docs/ansible/wiki/hosts/add.md b/docs/ansible/wiki/hosts/add.md
new file mode 100644
index 0000000000000000000000000000000000000000..95b78491ed514d09e4a8b3a28c9e877884ea5e57
--- /dev/null
+++ b/docs/ansible/wiki/hosts/add.md
@@ -0,0 +1,26 @@
+# Adding a new host
+
+First of all, define all the variables for the new host in your inventory. Then, when working with static inventories, just add the details to the inventory file by adding the new hostname to all the groups that it belongs to.
+
+For dynamic inventories follow their specific instructions and then come back here to follow the next steps.
+
+## Adding a new host on dynamic inventories
+
+[JiffyBox](hosts/jiffybox)
+
+## Next steps
+
+```
+ascr inithost NEWHOSTNAME
+```
+
+This will ask for your password several times and processing will take a while. After that, your new host and its users are configured, but the role-based functionality is not yet deployed.
+
+Once that's completed, this script will automatically trigger more scripts to be executed:
+
+```
+ascr sanity upgrade
+ascr sanity reboot
+ascr role commonkeys
+ascr farm
+```
diff --git a/docs/ansible/wiki/hosts/jiffybox.md b/docs/ansible/wiki/hosts/jiffybox.md
new file mode 100644
index 0000000000000000000000000000000000000000..bcf51f0fda84ab4ee61fe3e95deea8dcaec7ee9e
--- /dev/null
+++ b/docs/ansible/wiki/hosts/jiffybox.md
@@ -0,0 +1,15 @@
+# Adding a new JiffyBox
+
+Log in to your JiffyBox account at https://admin.jiffybox.de and add a new host by giving it a unique name, selecting a tariff and one of the Ubuntu distributions, and defining a root password which is only going to be used for the initial setup. It is recommended to use the same password that you have configured as your personal password in the Ansible vault.
+
+When the host has been created, start it up in your JiffyBox control panel and as soon as it is available, go to your terminal, change to your Ansible directory and follow these steps:
+
+```
+cloud/jiffybox.py --list --refresh-cache
+cloud/jiffybox.py --host=NEWHOSTNAME --set-host-groups=GROUPS
+cloud/jiffybox.py --list --refresh-cache
+```
+
+Replace `NEWHOSTNAME` with the same name that you've defined in your JiffyBox account, and for `GROUPS` define a comma-separated list of valid group names, e.g. `webserver,webserver_drupal,dbserver,dbserver_mysql`.
+
+The final step should confirm that the new host exists and is assigned to the correct groups. If that's the case, proceed to the [next steps](hosts/add#next-steps).
diff --git a/docs/ansible/wiki/hosts/prevent-reboot.md b/docs/ansible/wiki/hosts/prevent-reboot.md
new file mode 100644
index 0000000000000000000000000000000000000000..7f510fac1e516394bc09a39b5ef68956cccceabb
--- /dev/null
+++ b/docs/ansible/wiki/hosts/prevent-reboot.md
@@ -0,0 +1,11 @@
+# Prevent the reboot of a host
+
+If you need to permanently or temporarily prevent a specific host from being rebooted you can achieve that by creating a file `/etc/ansible/facts.d/reboot.fact` with the following content:
+
+```json
+{"paused": true}
+```
+
+If a scheduled reboot is being executed and the reboot is paused by this setting, then the reboot will be skipped.
+
+To enable the scheduled reboot again, either remove that file or set the value to `false`.
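+
+For example, pausing reboots could be done from the shell like this (as root; the facts directory may not exist yet on a fresh host):
+
+```shell
+mkdir -p /etc/ansible/facts.d
+echo '{"paused": true}' > /etc/ansible/facts.d/reboot.fact
+```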
diff --git a/docs/ansible/wiki/index.md b/docs/ansible/wiki/index.md
new file mode 100644
index 0000000000000000000000000000000000000000..a426fc621cc32b8a5651116fb8164e75a7418a70
--- /dev/null
+++ b/docs/ansible/wiki/index.md
@@ -0,0 +1,148 @@
+This repository is a framework around Ansible to make daily usage simpler and more straightforward. In this wiki we describe the usage of its main parts and show use cases on how you can get the most out of it.
+
+The following instructions assume that you've created the shortcuts as described on the modules installation instructions on the front page. If not, please adjust the commands below accordingly.
+
+# Links to other pages
+
+- Inventories
+  - [Adding new inventory](inventory/add)
+  - [Scheduled Pipelines](inventory/pipelines)
+  - [ChatOps](inventory/chatops)
+- Hosts
+  - [Adding new hosts](hosts/add)
+      - [... on JiffyBox](hosts/jiffybox)
+  - [Prevent reboots](hosts/prevent-reboot)
+- System Configuration
+  - [Cron Tabs](configure-crontabs)
+  - [Firewall](firewall)
+  - [Add a new user](user-add-new)
+  - [Set user password](user-set-password)
+  - [LetsEncrypt SSL](letsencrypt)
+  - [Swapfile](swapfile)
+- Monitoring and Alerts
+  - [Introduction](monitoring/introduction)
+  - [Monitoring](monitoring/monitoring)
+  - [Alerts](monitoring/alerts)
+      - [Alerts on Log Data](monitoring/alerts-elk)
+      - [Alerts on live Host Data](monitoring/alerts-netdata)
+      - [Alerts on Availability](monitoring/alerts-uptime)
+- Backup and Restore
+  - [Introduction](backup/introduction)
+  - [Backup](backup/backup)
+      - [Backup with SpiderOak](backup/backup-spideroak)
+      - [Backup with Google Cloud](backup/backup-google)
+  - [Restore](backup/restore)
+      - [Restore from SpiderOak](backup/restore-spideroak)
+      - [Restore from Google Cloud](backup/restore-google)
+- GitLab CI
+  - [Introduction](gitlab-ci-introduction)
+  - [Pre-requisites](gitlab-ci-prerequisites)
+  - [Configure a project](gitlab-ci-configuration)
+- HaProxy
+  - [Quick Update](haproxy-quick-update)
+  - [Custom blacklists](haproxy-custom-blacklists)
+- Varnish
+  - [Quick Update](varnish-quick-update)
+- Drupal
+  - [Roll out new Drupal site](drupal-deployment)
+  - [Get Drush aliases to local host](drush-fetch-aliases)
+  - [Update Apache Config](drupal-apache-update)
+- ElasticSearch
+  - [Introduction](elk/introduction)
+  - [Collecting Data](elk/fluentd)
+  - [UI to view the data](elk/kibana)
+  - [Alerts on Log Data](monitoring/alerts-elk)
+- Risk Management
+  - [Disaster Recovery](desaster-recovery)
+  - [Attack Vectors](attacks)
+- Tips & Tricks
+  - [Signed Git Commits](tips/signed-git-commits)
+- [Other Resources](resources)
+
+# Using Ansible and accessing hosts
+
+There are admins and jail users that are all defined in the inventory, and they are all available on all hosts of the inventory, ready to go. Accessing the hosts is possible through SSH sessions only, and only when you have the private key matching the public key that got installed for your user on each of the hosts.
+
+As an admin you can also `sudo` into other user contexts, including the root user. The same thing happens when you are running Ansible playbooks or Ansible commands - they utilize SSH and switch to root via sudo for most of the work that needs to be done remotely.
+
+So, the first thing you should always do is to set your user password for each of the remote hosts:
+
+```
+ssh username@hostname
+
+passwd
+```
+
+You have to provide your current password first before you can set a new one. For all new users on every host, the initial password set by Ansible during setup is `My First Password`.
+
+# Using prepared scripts
+
+There is a framework in place that makes regular tasks really easy by preparing simple scripts that predefine all the command line arguments such that you only have to call the script to get the right things done.
+
+Those scripts are stored in the `scripts/` subdirectory and you call them with `ansible-script.py` or the shortcut `ascr`.
+
+To get a list of all available scripts, simply call `ascr list`. All of those scripts support the Python help functionality, so you can easily find out all available options for each script by calling `ascr SCRIPTNAME --help`.
+
+# Using Ansible Playbooks
+
+In general, Ansible playbooks get called by ```apb``` followed by the name of the playbook and optionally some additional parameters.
+
+Examples:
+
+1. Display a list of all host names and their IP addresses
+```apb list```
+
+2. Limit the above list to the webservers only
+```apb list --limit=webserver```
+
+3. Copy a MySQL database from one host to another
+```apb mysqlcopy --extra-vars="sourcehost=DBHOST1 targethost=DBHOST2 dbname=DBNAME"```
+
+4. Move content from Swap back to RAM on a specific host
+```apb swap2ram --limit=HOSTNAME```
+
+5. Enable XDebug on all Drupal servers
+```apb xdebug --limit=webserver_drupal --extra-vars="enable=1 port=9000"```
+
+6. Disable XDebug on all Drupal servers
+```apb xdebug --limit=webserver_drupal --extra-vars="enable=0"```
+
+More prepared use-cases will be described below in a separate chapter. Also, the official [Ansible documentation](http://docs.ansible.com/ansible/playbooks.html) is a great source for further reading.
+
+# Using Ansible Commands
+
+Sometimes you want to execute some commands on one or many remote hosts without writing a playbook for that as it is something you probably only want to execute once. This is possible by using the ```a``` command.
+
+Examples:
+
+1. Check the accessibility of all remote hosts (they should all respond with a "pong")
+```a -m ping```
+
+2. Read the setup from all remote hosts
+```a -m setup```
+
+3. Update all settings on your ServerDensity account
+```a -m serverdensity```
+
+4. Read the settings from your ServerDensity account (write output to sd.json)
+```a -m serverdensity -a "readonly=true output=sd.json"```
+
+More details about all the modules and options available can be found over at the official [Ansible documentation](http://docs.ansible.com/ansible/intro_adhoc.html)
+
+# Prepared Use-Cases
+
+## Regular maintenance
+
+On a daily basis you may want to call ```ascr sanity check```, which reaches out to all your hosts and provides information about available updates. If any updates are available, you can call ```ascr sanity upgrade``` and Ansible will update all your hosts, checking that everything is OK. Sometimes such updates require a reboot of the hosts, and in such cases that requirement will be displayed by Ansible. Then call ```ascr sanity reboot``` and Ansible will reboot only those hosts that require it.
+
+## Setting up a new host
+
+To set up a new host, call ```ascr inithost HOSTNAME [OPTIONS]``` and this will call the inithost playbook, preparing your local environment as well as the remote host with all the basic configuration. In detail:
+
+TBD
+
+## Configuring a host or your complete host farm
+
+The most powerful piece is the farm playbook. If you call ```apb farm```, Ansible will configure all your hosts in the inventory according to the roles and their definitions. You can also run that on selected hosts by calling ```apb farm --limit=HOSTNAME``` or for a group of hosts like ```apb farm --limit=webserver```.
+
+More to come.
diff --git a/docs/ansible/wiki/inventory/add.md b/docs/ansible/wiki/inventory/add.md
new file mode 100644
index 0000000000000000000000000000000000000000..392c7e0fd3d6ab91fe9a292bfdcd889d552aa158
--- /dev/null
+++ b/docs/ansible/wiki/inventory/add.md
@@ -0,0 +1,69 @@
+# Adding a new inventory
+
+When adding a new inventory, one or more [GitLab runners](ansible/roles/gitlab-runner) need to be installed and configured on one or more hosts within that host farm. Here are the step-by-step instructions on how to get this done:
+
+## Install gitlab-runner as a service
+
+```shell
+dans [INVENTORY] role gitlab-runner --limit=[HOST]
+```
+
+## Configure the host for gitlab-runner
+
+SSH into the host, switch to root with `sudo su` and follow these instructions:
+
+### Register the runner
+
+```shell
+gitlab-runner register
+```
+
+- Please enter the gitlab-ci coordinator URL: `https://gitlab.lakedrops.com/`
+- Please enter the gitlab-ci token for this runner: get the token from https://gitlab.lakedrops.com/ansible-inventories/[INVENTORY]/-/settings/ci_cd
+- Please enter the gitlab-ci description for this runner: `Inventory [inventory] [host]`
+- Please enter the gitlab-ci tags for this runner (comma separated): `ansible`
+- Please enter the executor: `docker`
+- Please enter the default Docker image (e.g. ruby:2.6): `registry.lakedrops.com/ansible-inventories/[inventory]:latest`
+
+Then edit the file `/etc/gitlab-runner/config.toml` and make sure that the runner config looks like this:
+
+```
+[[runners]]
+  name = "Inventory [inventory] [host]"
+  url = "https://gitlab.lakedrops.com/"
+  token = "[token]"
+  executor = "docker"
+  [runners.custom_build_dir]
+  [runners.cache]
+    [runners.cache.s3]
+    [runners.cache.gcs]
+  [runners.docker]
+    tls_verify = false
+    hostname = "Ansible-[inventory]-[host]"
+    image = "registry.lakedrops.com/ansible-inventories/[inventory]:latest"
+    privileged = true
+    disable_entrypoint_overwrite = false
+    oom_kill_disable = false
+    disable_cache = false
+    volumes = ["/home/gitlab-runner/.ssh/id_rsa:/root/.ssh/id_rsa","/home/gitlab-runner/.a/variables.yml:/root/.ansible/secrets","/var/log/ansible:/var/log/ansible"]
+    pull_policy = "always"
+    shm_size = 0
+```
+
+Then restart the runner with `gitlab-runner restart`.
+
+### Login to the registry
+
+The runner will be executed either as user `root`, `gitlab-runner` or any of the sudo users. It is recommended that you create access tokens for each of them by going to https://gitlab.lakedrops.com/profile/personal_access_tokens and creating tokens with the `read_registry` scope. Then call `docker login registry.lakedrops.com` and use username and token as credentials. They get stored in `~/.docker/config.json` for future use.
+
+### Create ~/.a/variables.yml
+
+```yaml
+ansible_sudo_pass: '[sudo password of gitlab-runner user on all hosts of the farm]'
+```
+
+### Create ~/.a/environment
+
+```ini
+ANSIBLE_REMOTE_USER=gitlab-runner
+```
diff --git a/docs/ansible/wiki/inventory/chatops.md b/docs/ansible/wiki/inventory/chatops.md
new file mode 100644
index 0000000000000000000000000000000000000000..3c5df1043365730d6d88431d10ba4e324316733f
--- /dev/null
+++ b/docs/ansible/wiki/inventory/chatops.md
@@ -0,0 +1,11 @@
+# Triggering pipelines through ChatOps
+
+When logging into [Mattermost](https://mattermost.lakedrops.com) and going to the private channel of one of the inventories, you can trigger ChatOps commands for that inventory by typing `/[inventory] help`. This will show you a list of all available commands.
+
+The first time you use one of those commands, the bot responds with the note **Hi there! Before I do anything for you, please connect your GitLab account.** Click the provided link and authorize your user account in the browser window that opens.
+
+To trigger a pipeline for your inventory, call `/[inventory] run [taskname] [arguments]`, where all the available **task names** can be found in this [GitLab CI Template](https://gitlab.lakedrops.com/gitlab-ci-cd/ansible-inventories/-/blob/master/.gitlab-ci-template.yml). Subsequent **arguments** are optional and can e.g. limit the scope of the pipeline to a specific host with `--limit=[hostname]`.
+
+After submitting the ChatOps command, the bot responds with **Sorry, this chat service is currently not supported by GitLab ChatOps.** - this message is misleading: it means that your submission was successful, but you won't see command output in Mattermost; you have to go to GitLab to see the pipeline output.
+
+In addition to all the prepared jobs in the GitLab CI template, you can effectively run any Ansible script by calling `/[inventory] run Ansible [script name] [arguments]`, e.g. `/[inventory] run Ansible haproxy-config` to run the re-configuration script for HaProxy.
diff --git a/docs/ansible/wiki/inventory/pipelines.md b/docs/ansible/wiki/inventory/pipelines.md
new file mode 100644
index 0000000000000000000000000000000000000000..50a82a82360d375a0b76e5e3c72668785d387547
--- /dev/null
+++ b/docs/ansible/wiki/inventory/pipelines.md
@@ -0,0 +1,9 @@
+# Scheduled pipelines
+
+Instead of cronjobs for individual maintenance tasks, we now use scheduled pipelines within GitLab in order to get
+
+- a transparent overview, which jobs are executed when
+- logging of all the task runs as job logs in https://gitlab.lakedrops.com/ansible-inventories/[inventory]/-/jobs
+- scalability, i.e. we can easily add more gitlab runner instances and get more performance if required
+
+Each inventory has its own list of scheduled pipelines, which can be reviewed at https://gitlab.lakedrops.com/ansible-inventories/[inventory]/pipeline_schedules - in that list you can also see when each pipeline ran last (with a link to the pipeline log) and when it will run next.
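+
+Purely as an illustration, a schedule-only job in an inventory's CI configuration could look like the sketch below. The job name, stage and script are hypothetical; the `rules` clause with `$CI_PIPELINE_SOURCE == "schedule"` is standard GitLab CI syntax to restrict a job to scheduled pipelines:
+
+```yaml
+# Hypothetical schedule-only job; the real jobs come from the
+# shared GitLab CI template for ansible-inventories.
+Backup:
+  stage: maintenance
+  rules:
+    - if: '$CI_PIPELINE_SOURCE == "schedule"'
+  script:
+    - ascr role borgbackup
+```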
diff --git a/docs/ansible/wiki/letsencrypt.md b/docs/ansible/wiki/letsencrypt.md
new file mode 100644
index 0000000000000000000000000000000000000000..c1c37fc15afee2549f2b3973d707e0ff45bb7c39
--- /dev/null
+++ b/docs/ansible/wiki/letsencrypt.md
@@ -0,0 +1,9 @@
+# LetsEncrypt SSL Certificates
+
+To regularly renew all certificates from LetsEncrypt on all hosts, execute this:
+
+```shell
+ascr role letsencrypt --tags=renew
+```
+
+If you're running services on a host that need to be paused while renewing, add them to a list in the `letsencrypt_pause_services` variable.
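+
+For example (the service names below are placeholders; use the actual service names on your host):
+
+```yaml
+# Hypothetical inventory entry: services stopped before and
+# started again after certificate renewal.
+letsencrypt_pause_services:
+  - varnish
+  - haproxy
+```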
diff --git a/docs/ansible/wiki/monitoring/alerts-elk.md b/docs/ansible/wiki/monitoring/alerts-elk.md
new file mode 100644
index 0000000000000000000000000000000000000000..d0e6da6d2162001571c55448a857e719f3971e77
--- /dev/null
+++ b/docs/ansible/wiki/monitoring/alerts-elk.md
@@ -0,0 +1,76 @@
+# ElasticSearch Alerts
+
+To define rules for ElastAlert you can configure a list called `elastalerts` in your inventory.
+
+To deploy those rules to your log server, simply run `ascr elastalert-rules`.
+
+Below you'll find the details on how to configure the alerts that matter to your server farm and application.
+
+## External Sources
+
+- [GitHub Project][4]
+- [Documentation][5]
+  - [Rule Config][3]
+  - [Rule Types][1]
+  - [Alert Types][2]
+  - [Filter][6]
+
+## Inventory Format
+
+The alert list in the inventory supports the following entries:
+
+- `key`: Unique key that is used as the file name, so don't use special characters.
+- `name`: Name of the rules which is used in alerts.
+- `description`: Describes the purpose of the rules.
+- `type`: The rules type, see [Rule Types][1]
+- `alert`: List of alert types, see [Alert Types][2]
+- Optional:
+  - `alert_subject`: Subject string to be used as email alert subject
+  - `alert_subject_args`: list of strings that can be used as placeholders in the subject line
+  - `alert_text`: Alert text for all alert channels
+  - `alert_text_args`: list of strings that can be used as placeholders in the alert text
+- `extra`: a list of additional parameters for the rules that depend on the selected rules type, see [Rule Config][3]
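+
+Putting this together, a hypothetical `elastalerts` entry could look like the following. All values are illustrative, not taken from a real inventory; the ElastAlert options under `extra` (`index`, `num_events`, `timeframe`, `filter`) are standard settings for the `frequency` rule type:
+
+```yaml
+elastalerts:
+  - key: php_errors
+    name: PHP error spike
+    description: Raise an alert when PHP errors accumulate in the logs.
+    type: frequency
+    alert:
+      - email
+    alert_subject: 'PHP errors on %s'
+    alert_subject_args:
+      - host
+    extra:
+      index: logstash-*
+      num_events: 10
+      timeframe:
+        minutes: 5
+      filter:
+        - query:
+            query_string:
+              query: 'level:"*error"'
+```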
+
+## Special case: Drupal monitoring
+
+The monitoring for Drupal is pre-configured and watches the syslog, the Apache error log and the Apache access log. The queries have these default values:
+
+### Drupal Syslog
+
+```
+@log_name:"syslog.local0.err" OR @log_name:"syslog.local0.crit" OR @log_name:"syslog.local0.alert" OR @log_name:"syslog.local0.emerg"
+```
+
+To test the query in Kibana, prefix it with `ident:drupal* AND ` and surround the original query with parentheses. Example:
+
+```
+ident:drupal* AND (@log_name:"syslog.local0.err" OR @log_name:"syslog.local0.crit" OR @log_name:"syslog.local0.alert" OR @log_name:"syslog.local0.emerg")
+```
+
+### Drupal Apache Log
+
+Default query for errors:
+
+```
+level:"*error"
+```
+
+Default query for access:
+
+```
+code:[500 TO 599]
+```
+
+To test the queries in Kibana, embed the two parts into `(@log_name:"apache.error.var.log.apache2.*-error.log" AND (ERROR-QUERY)) OR (@log_name:"apache.access.var.log.apache2.*-access.log" AND (ACCESS-QUERY))`, surrounding each part with parentheses. Example:
+
+```
+(@log_name:"apache.error.var.log.apache2.*-error.log" AND (level:"*error")) OR (@log_name:"apache.access.var.log.apache2.*-access.log" AND (code:[500 TO 599]))
+```
+
+
+[1]: http://elastalert.readthedocs.org/en/latest/ruletypes.html#rule-types
+[2]: http://elastalert.readthedocs.org/en/latest/ruletypes.html#alerts
+[3]: http://elastalert.readthedocs.org/en/latest/ruletypes.html#common-configuration-options
+[4]: https://github.com/Yelp/elastalert
+[5]: http://elastalert.readthedocs.org/en/latest/index.html
+[6]: http://elastalert.readthedocs.org/en/latest/recipes/writing_filters.html
diff --git a/docs/ansible/wiki/monitoring/alerts-netdata.md b/docs/ansible/wiki/monitoring/alerts-netdata.md
new file mode 100644
index 0000000000000000000000000000000000000000..54c43cf101c8278417cfc6d8ead3d0e119e033c2
--- /dev/null
+++ b/docs/ansible/wiki/monitoring/alerts-netdata.md
@@ -0,0 +1,24 @@
+# NetData Alerts
+
+NetData comes with a list of [preconfigured alerts][1] that we currently use as they come out of the box (not all of them apply to the server farm):
+
+- apache
+- cpu
+- disks
+- entropy
+- memcached
+- named
+- net
+- nginx
+- qos
+- ram
+- redis
+- squid
+- swap
+
+There is currently no way to adjust those alarms, but we plan to make them configurable once that becomes necessary; that will then follow the [NetData Health Monitoring][2] approach.
+
+Each alert is currently sent as an email to root@localhost and will be forwarded to the list of email addresses that are configured in the inventory.
+
+[1]: https://github.com/firehol/netdata/tree/master/conf.d/health.d
+[2]: https://github.com/firehol/netdata/wiki/health-monitoring
diff --git a/docs/ansible/wiki/monitoring/alerts-uptime.md b/docs/ansible/wiki/monitoring/alerts-uptime.md
new file mode 100644
index 0000000000000000000000000000000000000000..4fdb47e6f4dff03ad58ba1657f22f2746ab16e41
--- /dev/null
+++ b/docs/ansible/wiki/monitoring/alerts-uptime.md
@@ -0,0 +1,13 @@
+# Uptime Alerts
+
+[Uptime][1] is a fork of the original, no longer maintained project from [fzaninotto][2], and is being maintained and enhanced in its new home.
+
+An alert is pushed to Mattermost, where it gets picked up by the DevOps team, whenever one of the monitored services
+
+- does not respond,
+- responds with an HTTP code other than 200, or
+- does not match a given regular expression.
+
+[1]: https://gitlab.lakedrops.com/tools/uptime
+[2]: https://github.com/fzaninotto/uptime
diff --git a/docs/ansible/wiki/monitoring/alerts.md b/docs/ansible/wiki/monitoring/alerts.md
new file mode 100644
index 0000000000000000000000000000000000000000..8b8da3f72ce642a2e32b1de409dee682a442abf5
--- /dev/null
+++ b/docs/ansible/wiki/monitoring/alerts.md
@@ -0,0 +1,13 @@
+# Alerts
+
+All 3 layers of [monitoring](monitoring/monitoring) have their own setup for raising alerts:
+
+- System monitoring: [NetData Alerts](monitoring/alerts-netdata)
+- Log monitoring: [ElasticSearch Alerts](monitoring/alerts-elk)
+- Availability monitoring: [Uptime Alerts](monitoring/alerts-uptime)
+
+Those alerts are available in [Mattermost][1] and will be picked up by Paragon's DevOps team and handled appropriately.
+
+NB: there is currently one exception to this: NetData only sends emails to root on the local host, and we configure the recipient for these emails in the `system_mail` inventory variable, which contains a comma separated list of email addresses. As soon as NetData gets enhanced, those alerts will also go into Mattermost to ensure that we get all alerts in one central place.
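+
+For example (the addresses are placeholders):
+
+```yaml
+# Hypothetical inventory entry: comma separated recipients
+# for NetData's root@localhost emails.
+system_mail: 'admin@example.com,ops@example.com'
+```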
+
+[1]: https://mattermost.lakedrops.com/paragon
diff --git a/docs/ansible/wiki/monitoring/introduction.md b/docs/ansible/wiki/monitoring/introduction.md
new file mode 100644
index 0000000000000000000000000000000000000000..ff99325beb28ae2f60d3e62fbbc448f1ecb56fed
--- /dev/null
+++ b/docs/ansible/wiki/monitoring/introduction.md
@@ -0,0 +1,28 @@
+# Monitoring and Alerts
+
+- [Introduction](monitoring/introduction)
+- [Monitoring](monitoring/monitoring)
+- [Alerts](monitoring/alerts)
+    - [Alerts on Log Data](monitoring/alerts-elk)
+    - [Alerts on live Host Data](monitoring/alerts-netdata)
+    - [Alerts on Availability](monitoring/alerts-uptime)
+
+## Introduction
+
+The links above explain the various details about monitoring and alerts. This intro simply gives you an overview of where you can access the monitoring and alert logs.
+
+First of all, access to the various systems is controlled by GitLab, where you can find the list of companies you have access to in the [Hosts Group](https://gitlab.lakedrops.com/ansible-inventories/hosts) of the [Ansible Inventories](https://gitlab.lakedrops.com/ansible-inventories). Inside the hosts group there are groups for each company you have access to, and inside each company group you'll find a project for each host of that company. Those repositories contain the version-controlled `/etc` directory of that host, which helps to track configuration changes of each host while browsing the history. Those repositories are read-only and will be updated by each of the hosts automatically whenever there are changes to anything inside the etc directory.
+
+## Alert overview
+
+We are using [Alerta](https://gitlab.lakedrops.com/ansible/roles/alerta) to collect and display aggregated and de-duplicated alerts from the various sources described in the links above.
+
+You can get access to all alerts of all "your" companies in the [LakeDrops Alerta](https://alerta.lakedrops.com) dashboard. Log in to the dashboard by authenticating yourself at GitLab; you will be redirected automatically.
+
+## Activity log and chat
+
+All these systems are also integrated with the [LakeDrops Mattermost](https://mattermost.lakedrops.com/paragon) instance where you can also authenticate yourself with your GitLab credentials.
+
+## Getting access
+
+You think you should get access to another company in GitLab, Mattermost and Alerta? Just raise an issue in the inventory of that company and we'll take care of it.
diff --git a/docs/ansible/wiki/monitoring/monitoring.md b/docs/ansible/wiki/monitoring/monitoring.md
new file mode 100644
index 0000000000000000000000000000000000000000..157af041d0ce0521789064a6dcc781e87e538d88
--- /dev/null
+++ b/docs/ansible/wiki/monitoring/monitoring.md
@@ -0,0 +1,94 @@
+# Monitoring
+
+Monitoring the server farm happens on 3 different layers:
+
+- System monitoring
+- Log monitoring
+- Availability monitoring
+
+All these layers are fully automated and they will raise [alerts](monitoring/alerts) when required. However, looking into the monitoring data manually is also possible but limited to permitted users only.
+
+## System monitoring
+
+System monitoring looks into all system components such as
+
+- CPU
+- Load
+- RAM
+- Disks
+- Processes
+- Swap
+- Network Traffic
+- Entropy
+
+and others. We are using [NetData][1] for this which runs on each host individually.
+
+A dashboard is available at `http://hostname:19999` and access is limited to the IP address of Paragon's head office as this system does not yet support other means of authentication. This is subject to change in the future and we will then offer access to other users as well.
+
+## Log monitoring
+
+In the server farm, plenty of log files are being produced, such as
+
+- syslogs
+- Apache logs
+- PHP logs
+- Auth logs
+
+and many more, often even application specific logs, e.g. detailed information about each request that the proxy gateway receives and handles.
+
+All that log data is being collected by [FluentD][2], forwarded to [ElasticSearch][3] for indexing and automatically monitored by [ElastAlert][4]. Manual access to the log archive is available through [Kibana][5] at `https://logserver`, protected by username and password.
+
+## Availability monitoring
+
+Websites and other services accessible over TCP/IP are monitored from different hosts located all around the world running [Uptime][6], which checks whether each of them is accessible and delivers the expected response, e.g. an HTTP response code or certain content determined by regular expressions.
+
+Access to the [Uptime Dashboard][7] is permitted for Paragon DevOps only.
+ 
+Those checks are configured in the inventory and currently two different instances are supported:
+
+### Drupal domains
+
+Those are configured together with the Drupal settings, and the uptime check parameters look like this:
+
+```yaml
+drupal_settings:
+  - id: example
+    root: /web
+    ...
+    domains:
+      - domain: www.example.com
+        ...
+        uptime:
+          name: Label to identify this check instance
+          tags:
+            - sample
+          pollerParams:
+            match: /Text to expect on the site/
+            mattermost_hook: [URL of the webhook]
+```
+
+### Other services
+
+For domains or services that should be checked but are not Drupal sites, we can define extra check instances somewhere in the inventory like this:
+
+```yaml
+uptime_domains:
+  - url: status.linode.com
+    uptime:
+      name: Linode Status
+      path: history.rss
+      tags:
+        - linode
+        - tracker
+      pollerParams:
+        tracker_source: RSS
+        mattermost_hook: [URL of the webhook]
+```
+
+Any number of other services can be defined that way.
+
+[1]: https://github.com/firehol/netdata
+[2]: http://www.fluentd.org
+[3]: https://www.elastic.co/products/elasticsearch
+[4]: https://github.com/Yelp/elastalert
+[5]: https://www.elastic.co/products/kibana
+[6]: https://gitlab.lakedrops.com/tools/uptime
+[7]: https://mon1.paragon-es.de/dashboard
diff --git a/docs/ansible/wiki/openssl/csr.md b/docs/ansible/wiki/openssl/csr.md
new file mode 100644
index 0000000000000000000000000000000000000000..eee258a3d2158bbd6a394faaf20b43188eb43d18
--- /dev/null
+++ b/docs/ansible/wiki/openssl/csr.md
@@ -0,0 +1,15 @@
+# Creating a multi-domain CSR with OpenSSL
+
+Source: https://www.thomas-krenn.com/de/wiki/Openssl_Multi-Domain_CSR_erstellen
+
+Step 1: Create a config file
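+
+A minimal multi-domain config, e.g. `example.com.conf`, could look like the sketch below; the domain names are placeholders, and the extension section names (`dn`, `req_ext`, `alt_names`) are just conventional labels:
+
+```ini
+[req]
+default_bits = 2048
+prompt = no
+default_md = sha256
+distinguished_name = dn
+req_extensions = req_ext
+
+[dn]
+CN = example.com
+
+[req_ext]
+subjectAltName = @alt_names
+
+[alt_names]
+DNS.1 = example.com
+DNS.2 = www.example.com
+```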
+
+Step 2: Create private key
+
+```shell
+openssl genrsa -out example.com.key 2048
+```
+
+Step 3: Create CSR
+
+```shell
+openssl req -new -out example.com.csr -key example.com.key -config example.com.conf
+```
diff --git a/docs/ansible/wiki/resources.md b/docs/ansible/wiki/resources.md
new file mode 100644
index 0000000000000000000000000000000000000000..e6c07de8db6195c0c80aff0b2aa229d8c0359780
--- /dev/null
+++ b/docs/ansible/wiki/resources.md
@@ -0,0 +1,3 @@
+- Awesome Lists
+  - [Awesome Self Hosted](https://github.com/Kickball/awesome-selfhosted)
+  - [Awesome Sysadmin](https://github.com/n1trux/awesome-sysadmin#monitoring)
diff --git a/docs/ansible/wiki/swapfile.md b/docs/ansible/wiki/swapfile.md
new file mode 100644
index 0000000000000000000000000000000000000000..d3e086b36be56dc77c1898bfa61763352360f018
--- /dev/null
+++ b/docs/ansible/wiki/swapfile.md
@@ -0,0 +1,7 @@
+# Set or change the size of the swap file
+
+By default the swap file will be twice as big as the system memory but you can adjust the size by defining the variable `swap_space` for each host individually. After you've done that, deploy that change with the command
+
+```shell
+ascr role common --tags=swap --extra-vars="reset=true" --limit=[HOSTNAME]
+```
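+
+For example, in the host's variables (the exact value format shown here is an assumption; check the common role for the accepted unit):
+
+```yaml
+# Hypothetical host variable; "4G" is an example value.
+swap_space: 4G
+```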
diff --git a/docs/ansible/wiki/tips/signed-git-commits.md b/docs/ansible/wiki/tips/signed-git-commits.md
new file mode 100644
index 0000000000000000000000000000000000000000..f3dfb646029858366a8b01f99146bd96e9eb8fc2
--- /dev/null
+++ b/docs/ansible/wiki/tips/signed-git-commits.md
@@ -0,0 +1,22 @@
+Find your GPG key:
+
+```shell
+gpg --list-secret-keys
+# Find it here: sec 4096R/KEY-ID
+```
+
+Globally configure usage of that key and signing of commits:
+
+```shell
+git config --global user.signingkey KEY-ID
+git config --global commit.gpgsign true
+```
+
+Configure GPG for signing from within the IDE in `~/.gnupg/gpg.conf`:
+
+```
+no-tty
+use-agent
+```
+
+After that you may have to kill and re-start gpg-agent.
\ No newline at end of file
diff --git a/docs/ansible/wiki/user-add-new.md b/docs/ansible/wiki/user-add-new.md
new file mode 100644
index 0000000000000000000000000000000000000000..ce3fed811dd487990e9c6e6908c5262590b62845
--- /dev/null
+++ b/docs/ansible/wiki/user-add-new.md
@@ -0,0 +1,47 @@
+To add a user or admin to the hosts, go through these steps:
+
+### Username ###
+
+First things first, define a username for the new user and add it to either the `admins` or the `jailusers` list, depending on what the user will be used for. The username should be alphanumeric, all lower-case and ideally consist of the first and last name of the real user.
+
+The lists are defined in `[INVENTORY]/group_vars/all/system.yml`.
+
+### SSH Public Key ###
+
+Get the public key of the user and store it in `[INVENTORY]/files/keys/[USERNAME].d2s.pub`.
+
+### User Variables ###
+
+Create a file `[INVENTORY]/user_vars/[USERNAME].yml` and define all required user variables. Note: because a YAML file can never be empty, you have to define at least one variable, even if it is a dummy; otherwise you will see syntax errors when running the play.
+
+Supported variables:
+
+`env`: a dictionary with key/value pairs which will be set as environment variables for that user on the remote host.
+
+`groups`: a comma separated list of group names to which that user will be added on the remote host.
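+
+A hypothetical `[INVENTORY]/user_vars/[USERNAME].yml` might then look like this (all values are examples only):
+
+```yaml
+# Environment variables set for this user on the remote host.
+env:
+  EDITOR: vim
+  TZ: Europe/Berlin
+# Groups the user will be added to (comma separated).
+groups: docker,www-data
+```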
+
+### Running Play ###
+
+```shell
+apb user
+```
+
+### Quick Running Play ###
+
+This one is much faster than the previous one and can always be used to simply update permissions.
+
+```shell
+apb user --tags=SetPermissions
+```
+
+In addition, this will also reset the users, which removes permissions if required, whereas the previous one only appends to the already existing settings.
+
+```shell
+apb user --tags=SetPermissions,Reset
+```
+
+If you're using jails, you should call the following once to initialize all users in each jail:
+
+```shell
+apb user --tags=JailUserInit,SetPermissions
+```
diff --git a/docs/ansible/wiki/user-set-password.md b/docs/ansible/wiki/user-set-password.md
new file mode 100644
index 0000000000000000000000000000000000000000..d7ec17a2522b8456e72ea72c5332fb213dfcff34
--- /dev/null
+++ b/docs/ansible/wiki/user-set-password.md
@@ -0,0 +1,7 @@
+Setting the password for a user on some or all hosts (of course by an admin who still knows his or her own password) can be done with this playbook:
+
+```shell
+apb userpwd -e "username=[USERNAME] password=[PASSWORD]"
+```
+
+Simply replace `[USERNAME]` and `[PASSWORD]` with the respective values.
diff --git a/docs/ansible/wiki/varnish-quick-update.md b/docs/ansible/wiki/varnish-quick-update.md
new file mode 100644
index 0000000000000000000000000000000000000000..5c008c753b3808ab581d979d155185eeaffa0b22
--- /dev/null
+++ b/docs/ansible/wiki/varnish-quick-update.md
@@ -0,0 +1,15 @@
+If any changes have been made, e.g. new domains, changed aliases, or SSL being enabled for certain domains, you might be looking for a way to quickly update Varnish to make those changes effective.
+
+With the following command, everything will be done within seconds:
+
+```shell
+ascr varnish-config
+```
+
+or
+
+```shell
+ascr role varnish --limit=varnishserver --tags=Config
+```
+
+You probably also want to look into [Quick Update of HaProxy](haproxy-quick-update).
diff --git a/mkdocs.yml b/mkdocs.yml
index ad4cbabbedbf5443e3f0e8237dbe7445299c3a9a..342c10b70b001181e3dccd404428e6b9d8459662 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -4,6 +4,8 @@ theme:
 plugins:
   - search:
       lang: en
+  - git-revision-date-localized:
+      type: date
   - build_plantuml:
       render: "server"
       server: "http://www.plantuml.com/plantuml"
@@ -20,5 +22,34 @@ nav:
       - Home: composer/index.md
   - Ansible:
       - Home: ansible/index.md
+      - Wiki: ansible/wiki/index.md
+      - Plugins:
+          - Fluentd: ansible/plugins/fluentd/index.md
+          - ServerDensity: ansible/plugins/serverdensity/index.md
+      - Roles:
+          - Apache: ansible/roles/apache/index.md
+          - Borg Backup: ansible/roles/borgbackup/index.md
+          - Composer: ansible/roles/composer/index.md
+          - Discourse: ansible/roles/discourse/index.md
+          - ElastAlert: ansible/roles/elastalert/index.md
+          - Elasticsearch: ansible/roles/elasticsearch/index.md
+          - Fail2Ban: ansible/roles/fail2ban/index.md
+          - Fluentd: ansible/roles/fluentd/index.md
+          - GitLab: ansible/roles/gitlab/index.md
+          - HaProxy: ansible/roles/haproxy/index.md
+          - Heartbeat: ansible/roles/heartbeat/index.md
+          - JailKit: ansible/roles/jailkit/index.md
+          - Keycloak: ansible/roles/keycloak/index.md
+          - LetsEncrypt: ansible/roles/letsencrypt/index.md
+          - MySQL: ansible/roles/mysql/index.md
+          - Nextcloud: ansible/roles/nextcloud/index.md
+          - Oracle: ansible/roles/oracle/index.md
+          - Packetbeat: ansible/roles/packetbeat/index.md
+          - PHP: ansible/roles/php/index.md
+          - ServerDensity: ansible/roles/serverdensity/index.md
+          - SpiderOak ONE: ansible/roles/spideroak/index.md
+          - User Management: ansible/roles/user-management/index.md
+          - VPN: ansible/roles/vpn/index.md
+          - Zammad: ansible/roles/zammad/index.md
   - GitLab:
       - Home: gitlab/index.md