Category Archives: How To


Set up your own Linux-based router

What it's all about
short story ...
This post will help you configure a Linux PC so it can function as a router too.
long story ...
If, like me, you have a very low-power PC (a NAS equivalent) running all the time, you might prefer it to act as a router too. This way you'll be able to:
- use the full power of Linux to control the network traffic (especially the malicious connections)
- use the better-performing PC hardware (compared to that of a dedicated router) to deal with the network traffic
- have fun, because you're a Linux enthusiast :)
The setup explained below uses NetworkManager.service; if you use something else, the main difference will be in configuring the pppoe connection, while the other aspects should be the same or at least helpful for your setup.
But using your PC as a router doesn't mean you won't be able to use it for anything else. For example, I use my PC as a router while it also serves as a desktop PC, a server (for this blog, Transmission, ssh, nginx, etc.) and an HTPC (Plex-based).

What's to achieve
In the end you'll be able to:
- connect directly to the Internet using your PC router
- let Internet users directly access the websites running on your PC router
- when you have at least 2 ethernet cards, use one for Internet access and the other to set up a LAN
- with 2 ethernet cards, one could be connected to a dedicated wireless router; its wireless users can be treated as part of a LAN accessing the Internet through the PC router (the gateway for the dedicated router)
- secure your PC router against malicious Internet access
- set up other goodies, e.g. dnsmasq with or without DHCP, sshttp, Plex

How to do it
In order to achieve the above you'll have to do the following:
- secure the access to your PC router
- set up a pppoe connection in order to access the Internet
- share the Internet access
- set up dnsmasq (NetworkManager's plugin) in order to ... long story, I'll explain later
- set up a dedicated wireless router in order to have wireless Internet access when your PC router can't provide it by itself
- solve miscellaneous other issues, e.g. dealing with sshttp and/or Plex

Secure the access to your PC router
This is a vital step!
You should do this first, before your PC becomes accessible from all over the Internet.
You can do this by using the default firewall of your Linux distribution: for Ubuntu it's UFW (Uncomplicated Firewall), while for RedHat/CentOS/Fedora it's firewalld (check the firewall-cmd man page and usage examples here and here).
Before continuing, just check your open ports with the commands below.
List open ports using UFW:
sudo ufw status numbered
List open ports using firewalld:
firewall-cmd --get-active-zones
firewall-cmd --list-ports
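As a starting point, here's a sketch of a default-deny baseline with UFW; this is my suggestion, not part of the original setup, and it assumes ssh (port 22) is the only service you need open for now and a ufw version recent enough (≥ 0.35, as on Ubuntu 16.04) to support the comment keyword:

```shell
# deny everything incoming by default, allow outgoing
sudo ufw default deny incoming
sudo ufw default allow outgoing
# allow only the services you actually run, e.g. rate-limited ssh
sudo ufw limit 22/tcp comment 'rate-limited ssh'
# activate and verify
sudo ufw enable
sudo ufw status verbose
```

Later sections open further ports (e.g. 53 and 67 for dnsmasq) on top of this baseline.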

Set up a pppoe connection in order to access the Internet
Use your graphical NetworkManager connection editor (nm-connection-editor on Ubuntu) to create a DSL connection (e.g. named RDS). In the General tab check the options Automatically connect to this network when it is available and All users may connect to this network. In the DSL tab fill in the username and password given to you by your Internet provider. In the Ethernet tab leave MTU at automatic (it won't apply to the pppoe connection) and choose the card which will be used for Internet access (e.g. eth0). In the IPv6 Settings tab disable IPv6 if you don't have a reason to use it; if you intend to use it, then this post won't help you.

Check the pppoe setup
On Ubuntu you'll be able to see your configuration from the command line:
sudo cat /etc/NetworkManager/system-connections/RDS
or using the graphical NetworkManager applet (Connection Information menu).

With the ifconfig command you'll see a new network interface (e.g. ppp0) when the pppoe connection is active.
Using the command below:
nmcli connection show
you'll also see that the pppoe connection is bound to eth0 (the card you chose when creating the RDS connection).
With the command below:
nmcli device show
you'll see that eth0 has as IP4.GATEWAY the IP of your Internet provider.
Check the pppoe connection with these commands too:
ifconfig ppp0
netstat -i

The MTU configuration
When the MTU of your pppoe connection is not set correctly you'll experience web pages hanging or loading forever. 1500 is the maximum possible MTU and seems to be the default for ethernet devices. For pppoe connections the maximum MTU is 1492. Read more about this at http://www.dslreports.com/faq/695.

You'll have to manually edit the [ppp] section in /etc/NetworkManager/system-connections/RDS in order to add or change it:

[ppp]
mru=1492
mtu=1492
With mtu=1492, the commands below:
sudo ip route flush cache
ping -c 1 -M do -s 1464 8.8.8.8
should yield, among other output:
1 packets transmitted, 1 received, 0% packet loss
or an error similar to this:
ping: local error: Message too long, mtu=1492
1 packets transmitted, 0 received, +1 errors, 100% packet loss
If ping with the 1464 payload (1464 = 1492 - 28) yields an error, lower the value (e.g. subtract 10) and try again, and so on. Once you've found the maximum working value, add 28 to it, use the result in the [ppp] section of RDS, and restart the RDS connection (use the NetworkManager applet to disconnect, then reconnect).
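The subtract-and-retry probing can be automated with a small script; a sketch, assuming 8.8.8.8 is reachable and the pppoe link is up (the coarse step of 10 mirrors the manual procedure, so refine around the first hit by hand):

```shell
#!/bin/sh
# mtu-probe: find the largest non-fragmenting ICMP payload, then
# derive the MTU by adding the 28 bytes of IP + ICMP headers
TARGET=${1:-8.8.8.8}
size=1464   # 1492 - 28, the expected maximum for pppoe
while [ "$size" -gt 0 ]; do
    if ping -c 1 -M do -s "$size" "$TARGET" >/dev/null 2>&1; then
        echo "max payload: $size -> use mtu=$((size + 28))"
        exit 0
    fi
    size=$((size - 10))   # coarse step; refine by hand around the hit
done
echo "no working payload found" >&2
exit 1
```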

When an IP packet flows through e.g. eth1 (another ethernet card on your PC router) to ppp0, the TCP MSS must be clamped to fit the smaller pppoe MTU. This is accomplished with iptables or with the help of the firewall, e.g. UFW. After finding the proper MTU you'll have to put this in /etc/ufw/before.rules:

-A ufw-before-forward -p tcp -i eth1 -o ppp0 --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1452
-A ufw-before-forward -p tcp -i eth1 -o ppp1 --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1452

just before # ufw-not-local (though I guess it will work when left at the end too). Replace 1452 with your pppoe MTU minus 40 (the TCP/IP header overhead).
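If you'd rather not compute the MSS value by hand, iptables can derive it from the path MTU automatically; a sketch of the alternative rule (same file, same position — this is a standard TCPMSS option, not part of the original setup):

```shell
# alternative to a fixed --set-mss: clamp the MSS to the path MTU,
# so the correct value is derived from the outgoing interface's MTU
-A ufw-before-forward -p tcp -o ppp0 --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
```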

Why add 28, and why use 1464 in the first place? Check http://www.dslreports.com/faq/695.

There are other commands that show the MTU value:
ip ad
netstat -i
ifconfig ppp0 | grep MTU
but they don't give you a way to test a wrong MTU value (as ping does).
The MTU for eth0 (used by ppp0) should be 1500.

There's another way of testing an MTU value, with a more complicated setup that's impractical for pppoe connections but useful for LAN connections. It works like this: on another computer (PC2), using e.g. eth0 (192.168.0.1) and connected to your PC router's eth1, run the command below in order to watch received network packets:
sudo tcpdump -i eth0 --direction=in -n ip proto \\icmp
then from your PC router send network packets like this:
ping -c 1 -s 1472 -I eth1 192.168.0.1
ping -c 1 -s 1464 -I eth1 192.168.0.1
For each packet received on PC2 you'll get one line of console output, so when the ping payload (1464, 1472) is too large the packet gets fragmented and you'll see more than one line in PC2's console. Change the ping value until you reach the maximum one that still produces a single line in PC2's console. Then add 28 to that maximum value, and that will be the MTU for the connection from the PC router's eth1 to PC2's eth0.

I have no idea how to check the current MRU value, but it seems a good idea to set it to the same value as the MTU; please post a comment if you have a clue about it.

Share the Internet access
You'll have to enable packet forwarding by editing /etc/sysctl.conf:
net.ipv4.ip_forward=1
then activate the new sysctl configuration with:
sudo sysctl -p
Check current configuration with:
sysctl net.ipv4.ip_forward

You'll also have to configure your firewall to allow IP forwarding,
e.g. with UFW you'll have to edit /etc/default/ufw to have this:

DEFAULT_FORWARD_POLICY="ACCEPT"

In /etc/ufw/before.rules you'll need:

*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# when having no other *nat rules uncomment the line below:
# -F
-A POSTROUTING -o ppp0 -j MASQUERADE
-A POSTROUTING -o ppp1 -j MASQUERADE
COMMIT

At this point, if you have multiple ethernet cards, you'll be able to share the Internet connection through them. This means that a PC2 directly connected to the PC router's eth1 will be able to access the Internet, but only with a proper configuration:
- PC2 must have an IP in the same subnet as the PC router's eth1
- PC2 must have its gateway pointing to the PC router's eth1 IP
- PC2's DNS servers must be the same as those used by the PC router (check nmcli device show eth0 | grep '.DNS')
This setup is an annoying complication, mostly because of the DNS setup, which might change depending on the Internet provider. The following section solves this with the help of a DNS and DHCP server.
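The manual PC2 configuration can be sketched as commands to run on PC2; the addresses are hypothetical examples (PC router's eth1 at 192.168.0.1, PC2's interface eth0), and 8.8.8.8 is a placeholder for whatever DNS servers the nmcli check on the PC router reports:

```shell
# give PC2 an address in the same subnet as the router's eth1
sudo ip addr add 192.168.0.2/24 dev eth0
sudo ip link set eth0 up
# route everything through the PC router
sudo ip route add default via 192.168.0.1
# use the same DNS servers as the PC router (placeholder value)
echo 'nameserver 8.8.8.8' | sudo tee /etc/resolv.conf
```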

Internet connection sharing: the big picture
Let's suppose that your PC router has an additional network interface (e.g. eth1). You could connect to it:
a) another PC over a wired connection, when eth1 is wired-only
b) many other wireless devices, when eth1 is a wireless device
c) a dedicated wireless router (when eth1 is wired-only), in order to share the Internet connection with other wireless and wired devices
For case b you'll need to set up dnsmasq as a DNS and DHCP server. For cases a and c you won't really need the DHCP server, but it won't harm anything either.

Set up dnsmasq as a DNS and DHCP server
When using NetworkManager, dnsmasq is already used as a plugin; just check /etc/NetworkManager/NetworkManager.conf for something like dns=dnsmasq. You'll need to customize dnsmasq's configuration: create the file /etc/NetworkManager/dnsmasq.d/custom-dnsmasq.conf with the following content:

addn-hosts=/etc/hosts-dnsmasq.conf
local-ttl=3600
log-facility=/var/log/dnsmasq/dnsmasq.log
interface=eth1
except-interface=eth0
except-interface=ppp0
strict-order
all-servers
clear-on-reload
cache-size=5000
dhcp-range=192.168.0.2,192.168.0.254,255.255.255.0,192.168.0.255,1h
dhcp-option=option:router,192.168.0.1
dhcp-option-force=option:mtu,1500
dhcp-lease-max=1
log-dhcp
dhcp-leasefile=/var/log/dhcpd.leases.log

Make sure to create /var/log/dnsmasq/ (owned by root only) used for keeping dnsmasq.log.

Be sure to exclude, with except-interface, at least the pppoe connections (e.g. ppp0) and the network interfaces used by them (e.g. eth0). You can lower cache-size in case you want less RAM to be used. Regarding dhcp-range, I assume you have only one network interface available (e.g. eth1) besides the one used for the pppoe connection (e.g. eth0). So when something is connected to eth1 it will automatically get a proper IP (from the dhcp-range) and the DNS configuration. On your side, eth1 should have the IP 192.168.0.1 and no gateway or DNS configured.
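Once dnsmasq is up, you can verify both roles from a LAN client; a sketch, assuming dig (from the dnsutils package) is installed on the client and the client's interface is eth0:

```shell
# DNS: ask the PC router directly; an answer means it resolves for the LAN
dig @192.168.0.1 example.com +short
# DHCP: renew the lease and watch what the server hands out
sudo dhclient -v eth0
# on the PC router, leases end up in the configured lease file
cat /var/log/dhcpd.leases.log
```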

I don't know what one should do when having multiple network interfaces available; the problem is the dhcp-option=option:router,192.168.0.1, which should be different for every interface.

Sometimes you'll notice that the network won't start, with dnsmasq complaining that it can't bind port 53 on 192.168.0.1 (see the interface=eth1 option). This happens because sometimes eth1 (having the 192.168.0.1 IP) is activated after dnsmasq. The solution I found is to start with the "interface=eth1" option commented out; after eth1 is up, I uncomment it and kill dnsmasq, which is then restarted automatically by NetworkManager. On PC router shutdown (or eth1 down) I have to comment out the "interface=eth1" option again, and repeat the uncomment-and-kill-dnsmasq dance after restarting eth1.

For the uncommenting and dnsmasq killing part I use /etc/network/if-up.d/eth1-up:
#!/bin/sh -e
# eth1 post-up

# sudo cp -v /********/bin/config/eth1-up /etc/network/if-up.d/ && sudo chown -c root: /etc/network/if-up.d/eth1-up && sudo chmod -c 755 /etc/network/if-up.d/eth1-up

[ "$IFACE" = "eth1" ] || exit 0
[ "$PHASE" = "post-up" ] || exit 0
if [ -e /etc/NetworkManager/dnsmasq.d/custom-dnsmasq.conf ]; then
	if [ "`grep -P "^interface=eth1$" /etc/NetworkManager/dnsmasq.d/custom-dnsmasq.conf`" = "" ]; then
		echo "[$(date +"%d.%m.%Y %H:%M:%S") eth1-up] activating \"interface=eth1\" in custom-dnsmasq.conf" | tee -a /var/log/RDS.log
		sed -i s/"^#\s*interface=eth1$"/"interface=eth1"/ /etc/NetworkManager/dnsmasq.d/custom-dnsmasq.conf
		kill `pidof dnsmasq` 2>/dev/null
		if [ "$?" != "0" ]; then
			echo "[$(date +"%d.%m.%Y %H:%M:%S") eth1-up] couldn't find dnsmasq to kill" | tee -a /var/log/RDS.log
		else
			echo "[$(date +"%d.%m.%Y %H:%M:%S") eth1-up] killed dnsmasq (in order to restart it)" | tee -a /var/log/RDS.log
		fi
	else
		echo "[$(date +"%d.%m.%Y %H:%M:%S") eth1-up] custom-dnsmasq.conf already uses eth1" | tee -a /var/log/RDS.log
	fi
fi
For the commenting part I use /etc/network/if-post-down.d/eth1-post-down:
#!/bin/sh -e
# eth1 post-down

# sudo cp -v /********/bin/config/eth1-post-down /etc/network/if-post-down.d/ && sudo chown -c root: /etc/network/if-post-down.d/eth1-post-down && sudo chmod -c 755 /etc/network/if-post-down.d/eth1-post-down

[ "$IFACE" = "eth1" ] || exit 0
[ "$PHASE" = "post-down" ] || exit 0
if [ -e /etc/NetworkManager/dnsmasq.d/custom-dnsmasq.conf ]; then
	if [ "`grep -P "^interface=eth1$" /etc/NetworkManager/dnsmasq.d/custom-dnsmasq.conf`" = "" ]; then
		echo "[$(date +"%d.%m.%Y %H:%M:%S") eth1-post-down] \"interface=eth1\" already commented in custom-dnsmasq.conf" | tee -a /var/log/RDS.log
	else
		echo "[$(date +"%d.%m.%Y %H:%M:%S") eth1-post-down] commenting \"interface=eth1\" in custom-dnsmasq.conf" | tee -a /var/log/RDS.log
		sed -i s/"^interface=eth1$"/"# interface=eth1"/ /etc/NetworkManager/dnsmasq.d/custom-dnsmasq.conf
	fi
fi
I noticed anyway that when shutting down the PC router, the eth1-post-down script above doesn't run, so I also use /etc/systemd/system/NetworkManager.service.d/network-manager-override.conf (note that the interface appears here under its predictable name, enp1s0; use your own interface's name):
# sudo cp -v bin/systemd-services/network-manager-override.conf /etc/systemd/system/NetworkManager.service.d/ && sudo chown root: /etc/systemd/system/NetworkManager.service.d/network-manager-override.conf && sudo chmod 664 /etc/systemd/system/NetworkManager.service.d/network-manager-override.conf && sudo systemctl daemon-reload
[Service]
ExecStartPre=/bin/sed -i s/"^interface=enp1s0$"/"# interface=enp1s0"/ /etc/NetworkManager/dnsmasq.d/custom-dnsmasq.conf
ExecStopPost=/bin/sed -i s/"^interface=enp1s0$"/"# interface=enp1s0"/ /etc/NetworkManager/dnsmasq.d/custom-dnsmasq.conf
You'll also have to open the DNS (53) and DHCP (67) ports only on eth1:

sudo ufw allow in on eth1 to any port 53 comment 'allow DNS access from LAN'
sudo ufw allow in on eth1 to any port 67 comment 'allow DHCP access from LAN'

Useful commands:
sudo kill -s USR1 `pidof dnsmasq` -> generates dnsmasq statistics in /var/log/dnsmasq/dnsmasq.log
tailf /var/log/dnsmasq/dnsmasq.log
tailf /var/log/RDS.log
tailf /var/log/dhcpd.leases.log
journalctl -fu NetworkManager
grep -P "interface=eth1$" /etc/NetworkManager/dnsmasq.d/custom-dnsmasq.conf
to be continued ...

iptables

iptables processing steps (original image link)



Redirect eth0:3240 to 127.0.0.1:32400
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -a | grep 'net.ipv4.ip_forward'
sysctl net.ipv4.ip_forward -> this reads the value
sudo sysctl -w net.ipv4.conf.eth0.route_localnet=1
sudo sysctl -a | grep 'net.ipv4.conf.eth0.route_localnet'
# you'll need the rule below when using ufw
sudo ufw allow to 127.0.0.1 port 32400

Suppose we have a server whose eth0 has the IP 192.168.1.31.

Set this iptables rule on the server:
sudo iptables -t nat -I PREROUTING -p tcp -i eth0 --dport 3240 -j DNAT --to-destination 127.0.0.1:32400
or using the ip for eth0:
sudo iptables -t nat -I PREROUTING -p tcp -d 192.168.1.31 --dport 3240 -j DNAT --to-destination 127.0.0.1:32400
so that this command works on a client computer (but not on the server):
curl -kLD - http://192.168.1.31:3240/web/index.html

Set only this iptables rule on the server:
sudo iptables -t nat -I OUTPUT -p tcp -o lo --dport 3240 -j REDIRECT --to-ports 32400
so that these curl commands work on the server:
curl -kLD - http://127.0.0.1:3240/web/index.html 
curl -kLD - http://192.168.1.31:3240/web/index.html

View and delete rules
sudo iptables -t nat --line-number -L -v
sudo iptables -t nat -D PREROUTING 1 -> deletes rule 1 from PREROUTING
sudo iptables -t nat -D OUTPUT 1 -> deletes rule 1 from OUTPUT

Linux media conversion

sudo apt install libav-tools
webm to mp4
http://askubuntu.com/questions/323944/convert-webm-to-other-formats
ffmpeg -i "Jurjak - Bucuresti.webm" -qscale 0 "Jurjak - Bucuresti.mp4"
ffmpeg -fflags +genpts -i "Jurjak - Bucuresti.webm" -r 24 "Jurjak - Bucuresti1.mp4" -> change to 24 FPS
ffmpeg -i "Jurjak - Bucuresti.webm" -vf scale=-1:720 "Jurjak - Bucuresti1.mp4" -> change to 720p
mp4 to mp3
ffmpeg -i "song-name.mp4" -b:a 192K -vn "song-name.mp3"
mkv to mp3
ffmpeg -i "song-name.mkv" -b:a 192K -vn "song-name.mp3"
webm to mp3
ffmpeg -i "song-name.webm" -b:a 192K -vn "song-name.mp3"
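To convert a whole directory at once, the per-file commands above can be wrapped in a loop; a sketch using the same -b:a 192K audio settings:

```shell
# convert every .webm in the current directory to .mp3
for f in *.webm; do
    [ -e "$f" ] || continue          # no matches: the glob stays literal
    ffmpeg -i "$f" -b:a 192K -vn "${f%.webm}.mp3"
done
```

The `${f%.webm}` expansion strips the extension, so "song-name.webm" becomes "song-name.mp3"; swap the two extensions in the loop for the mp4/mkv cases.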

Linux youtube downloader errors

First, let's see the youtube-dl error:
youtube-dl https://www.youtube.com/playlist?list=PLEmVsSEEP5HDTSik5ZSyOWz0qsS1tPos_
[youtube:playlist] PLEmVsSEEP5HDTSik5ZSyOWz0qsS1tPos_: Downloading webpage
[download] Downloading playlist: cantece pt copii in germana
[youtube:playlist] playlist cantece pt copii in germana: Downloading 6 videos
[download] Downloading video 1 of 6
[youtube] dtZ7U7csvcw: Downloading webpage
[youtube] dtZ7U7csvcw: Downloading video info webpage
[youtube] dtZ7U7csvcw: Extracting video information
[youtube] dtZ7U7csvcw: Downloading MPD manifest
Traceback (most recent call last):
  File "/usr/bin/youtube-dl", line 6, in <module>
    youtube_dl.main()
  File "/usr/lib/python2.7/dist-packages/youtube_dl/__init__.py", line 444, in main
    _real_main(argv)
  File "/usr/lib/python2.7/dist-packages/youtube_dl/__init__.py", line 434, in _real_main
    retcode = ydl.download(all_urls)
  File "/usr/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 1791, in download
    url, force_generic_extractor=self.params.get('force_generic_extractor', False))
  File "/usr/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 705, in extract_info
    return self.process_ie_result(ie_result, download, extra_info)
  File "/usr/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 866, in process_ie_result
    extra_info=extra)
  File "/usr/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 758, in process_ie_result
    extra_info=extra_info)
  File "/usr/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py", line 694, in extract_info
    ie_result = ie.extract(url)
  File "/usr/lib/python2.7/dist-packages/youtube_dl/extractor/common.py", line 357, in extract
    return self._real_extract(url)
  File "/usr/lib/python2.7/dist-packages/youtube_dl/extractor/youtube.py", line 1671, in _real_extract
    formats_dict=self._formats):
  File "/usr/lib/python2.7/dist-packages/youtube_dl/extractor/common.py", line 1547, in _extract_mpd_formats
    compat_etree_fromstring(mpd.encode('utf-8')), mpd_id, mpd_base_url,
  File "/usr/lib/python2.7/dist-packages/youtube_dl/compat.py", line 2526, in compat_etree_fromstring
    doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory)))
  File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1476, in __init__
    "No module named expat; use SimpleXMLTreeBuilder instead"
ImportError: No module named expat; use SimpleXMLTreeBuilder instead

some checks
Search for pyexpat*.so:
ll /usr/lib/python2.7/lib-dynload/pyexpat*
-rw-r--r-- 1 root root 68K Nov 19 11:35 /usr/lib/python2.7/lib-dynload/pyexpat.x86_64-linux-gnu.so
Check its dependencies:
ldd /usr/lib/python2.7/lib-dynload/pyexpat.x86_64-linux-gnu.so
	linux-vdso.so.1 =>  (0x00007fff059fd000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fac3b8c7000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fac3b4fe000)
	libexpat.so.1 => /u01/app/oracle/product/12.1.0/dbhome_1/lib/libexpat.so.1 (0x00007fac3b2da000)
	/lib64/ld-linux-x86-64.so.2 (0x000055c542d57000)
When strange dependencies are listed (e.g. libexpat.so.1 from Oracle) try to fix them.
Check the LD_LIBRARY_PATH value:
echo $LD_LIBRARY_PATH
/u01/app/oracle/product/12.1.0/dbhome_1/lib:/lib:/usr/lib:/usr/lib64

possible solution
Solution (for me this will probably break Oracle):
unset LD_LIBRARY_PATH
ldd /usr/lib/python2.7/lib-dynload/pyexpat.x86_64-linux-gnu.so
	linux-vdso.so.1 =>  (0x00007ffd1ff13000)
	libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fe242eba000)
	libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe242af1000)
	libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007fe2428c7000)
	/lib64/ld-linux-x86-64.so.2 (0x0000558368063000)
And now youtube-dl works again!
Of course, you'll have to set LD_LIBRARY_PATH when running Oracle.
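Rather than toggling LD_LIBRARY_PATH by hand, a wrapper can scope it to Oracle invocations only; a sketch (the library path is the one from the ldd output above, and oracle-run is a hypothetical script name):

```shell
#!/bin/sh
# oracle-run: run a command with Oracle's library path prepended,
# leaving the rest of the session clean so pyexpat finds the system libexpat
ORACLE_LIB=/u01/app/oracle/product/12.1.0/dbhome_1/lib
LD_LIBRARY_PATH=$ORACLE_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH} exec "$@"
```

Usage would be e.g. `oracle-run sqlplus`, while youtube-dl and everything else keeps the clean environment.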

X server and related managers

See also
# nice explanation about the entire startx workflow
http://unix.stackexchange.com/questions/243195/what-desktop-environment-does-startx-run-and-how-can-i-change-it
# explanation about sessions
http://askubuntu.com/questions/62833/how-do-i-change-the-default-session-for-when-using-auto-logins

# list available desktop environments
ls -l /usr/share/xsessions
# show current login manager
cat /etc/X11/default-display-manager
# see also lightdm-greeter from Alternatives Configurator:
ls -l /usr/share/xgreeters
# how to restore Unity login greeter
cat /etc/lightdm/lightdm.conf
[SeatDefaults]
autologin-user=
allow-guest=false
greeter-session=unity-greeter -> add this line

# check the available session managers with
update-alternatives --list x-session-manager
# or get a more verbose description indicating which one is default with
update-alternatives --display x-session-manager
# shows the link to the default session manager
ls -l /etc/alternatives/x-session-manager
# change the default session manager by running
update-alternatives --config x-session-manager

# list available window managers
update-alternatives --list x-window-manager
# shows the link to the default window manager
ls -l /etc/alternatives/x-window-manager

# http://askubuntu.com/questions/62833/how-do-i-change-the-default-session-for-when-using-auto-logins
# list of available session types
ls -l /usr/share/xsessions
# see also ~/.dmrc for the current default selected session type
cat ~/.dmrc
# see also user defaults with
cat /var/lib/AccountsService/users/$USER

q: what to put in .xsession (e.g. for xrdp)?
a: pick from update-alternatives --list x-session-manager
warn: some of them won't work (something related to 3D graphics)
worked for me: xfce4-session, lxsession, mate-session, startlxde, openbox-session
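Putting the q/a together, a minimal ~/.xsession could look like this (xfce4-session is just one of the managers listed above as working; substitute whichever one `update-alternatives --list x-session-manager` gives you):

```shell
#!/bin/sh
# ~/.xsession - picked up by xrdp (and startx) to choose the session
# xfce4-session is one of the session managers that works over xrdp
exec xfce4-session
```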

# Zorin OS theme
# http://www.noobslab.com/2015/09/do-you-like-windows-10-look-but-love.html
# https://launchpad.net/~noobslab/+archive/ubuntu/themes?field.series_filter=xenial
sudo add-apt-repository ppa:noobslab/themes
sudo apt-get update
sudo apt-get install windos-10-themes

# 9 Great XFCE Themes
# Ambiance theme for XFCE (with xfwm4)
Current XFCE theme:
grep -nr ThemeName .config/xfce4
When Settings -> Appearance doesn't open, try running it from the command line:
xfce4-appearance-settings

# change xfce desktop icon background/shadow
# see /usr/share/doc/xfdesktop4/README
# my ~/.gtkrc-2.0.mine
style "xfdesktop-icon-view" {
    XfdesktopIconView::label-alpha = 1

    fg[NORMAL] = "#ffffff"
    fg[SELECTED] = "#ffffff"
    fg[ACTIVE] = "#ffff00"
}
widget_class "*XfdesktopIconView*" style "xfdesktop-icon-view"

Plex Transcoding with low cost slow CPU

I have Ubuntu 16.04.1 LTS on this low-power SoC board, an ASRock N3150DC-ITX with the N3150 CPU:
http://ark.intel.com/products/87258/Intel-Celeron-Processor-N3150-2M-Cache-up-to-2_08-GHz

According to https://support.plex.tv/hc/en-us/articles/201774043-What-kind-of-CPU-do-I-need-for-my-Server- (see The Guideline) I quote:
Very roughly speaking, for a single full-transcode of a video, the following PassMark scores are a good guideline for a requirement: 1080p/10Mbps: 2000 PassMark 720p/4Mbps: 1500 PassMark
I found my CPU in one of the charts Plex points to: http://cpubenchmark.net/midlow_range_cpus.html When you click a CPU's link in the chart it takes you to http://cpubenchmark.net/cpu.php?cpu=Intel+Celeron+N3150+%40+1.60GHz&id=2546 from where I quote:
Description: Socket: FCBGA1170 Clockspeed: 1.6 GHz Turbo Speed: 2.1 GHz No of Cores: 4 Max TDP: 6 W Average CPU Mark 1693
With only a 1693 mark you'd say there's no way this lazy CPU can transcode an HEVC ... but there is! You'll have to mount a RAM directory in /etc/fstab, e.g.:

tmpfs /var/plex-transcoding-temporary-dir tmpfs defaults,relatime,mode=1777,size=99G

This line will mount up to 99 GB of your RAM (in practice far less than 99 GB) at /var/plex-transcoding-temporary-dir, which you then have to configure as Plex's transcoder temporary directory. I have 16 GB of RAM, but while transcoding a 1080p HEVC I only need less than 2 GB, while also keeping in RAM my Ubuntu 16.04 desktop with mysql, sickrage, couchpotato, transmission, nginx and others. Plex uses a maximum transcoding cache of 100 MB, so I guess it won't use more than 100 MB of your RAM for transcoding. Plex won't transcode a movie larger than your tmpfs RAM directory size, so I declare 99 GB just to be sure it can transcode any possible movie.

My transcoding options:
Transcoder quality: automatic
Transcoder temporary directory: /var/plex-transcoding-temporary-dir
Background transcoding x264 preset: faster
Maximum simultaneous video transcode: 1

Amazing, isn't it?
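The fstab line only takes effect at boot; to create and mount the tmpfs immediately, a sketch using the same path and options as above:

```shell
# create the mount point and mount the tmpfs without rebooting
sudo mkdir -p /var/plex-transcoding-temporary-dir
sudo mount -t tmpfs -o defaults,relatime,mode=1777,size=99G tmpfs /var/plex-transcoding-temporary-dir
# verify the mount and its size limit
df -h /var/plex-transcoding-temporary-dir
```

Note that tmpfs only consumes RAM as files are written, so size=99G is a ceiling, not an allocation.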

Yahoo certificate error

ERROR (ubuntu + chromium)
https://forums.yahoo.net/t5/Errors/NET-ERR-CERTIFICATE-TRANSPARENCY-REQUIRED-on-chromium-53/m-p/141922/highlight/true#M36857
ERR_CERTIFICATE_TRANSPARENCY_REQUIRED
You cannot visit mail.yahoo.com right now because the website uses HSTS.
SOLUTION (no logic here, but working as of 16 Nov 2016)
https://www.quora.com/How-do-you-fix-the-privacy-error-in-Chrome-Your-connection-is-not-private
On the page displaying this error, click anywhere on the page where the click won't trigger any action (just to focus the page). Type badidea and voila! You'll be automatically redirected to the desired page. This sometimes works with other sites too.

Boost or at least unlock the website’s performance

### configure /etc/sysctl.conf:
# Uncomment the next line to enable TCP/IP SYN cookies
# See http://lwn.net/Articles/277146/
# Note: This may impact IPv6 TCP sessions too
net.ipv4.tcp_syncookies=0
# https://linux.die.net/man/5/proc
# https://www.kernel.org/doc/Documentation/sysctl/fs.txt
fs.file-max = 6815744
# https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
# https://linux.die.net/man/7/tcp
# The maximum number of queued connection requests which have still not received an acknowledgement from the connecting client.
net.ipv4.tcp_max_syn_backlog = 65535
# https://www.kernel.org/doc/Documentation/sysctl/net.txt
# Maximum number  of  packets,  queued  on  the  INPUT  side, when the interface receives packets faster than kernel can process them.
# for 1G NIC:
net.core.netdev_max_backlog = 3000

### limits (/etc/security/limits.conf)
# see current user limits:
ulimit -a
# see process limits (e.g. pid 1660):
cat /proc/1660/limits

### systemd service
# configure http server's "max number of open files" limit (soft and hard):
[Service]
LimitNOFILE=65536

### debug your website's performance:
tailf apps/log/nginx-error.log
# watch for "too many open files" problem:
tailf /var/log/syslog | grep SNMP
# watch the main log for other possible problems:
tailf /var/log/syslog
# your application server logs:
tailf ~/apps/opt/apache-tomcat-7.0.64/logs/catalina.out
# see (roughly) how many sockets are open:
watch --interval=1 'netstat -tuna | wc -l'
# or using lsof to count the list of open files:
watch --interval=1 'lsof | wc -l'
# use apache benchmarking tool:
ab -c 1000 -n 10000 -s 80 -H 'Accept-Encoding: gzip' -qd https://yourhost/yourwebsite > nginx1k-10k-ssl.txt
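To find where the server saturates, it helps to sweep the concurrency level instead of picking one; a sketch wrapping the ab invocation above (yourhost/yourwebsite stays a placeholder):

```shell
# run ab at increasing concurrency and keep one report per level
for c in 100 500 1000 2000 4000; do
    ab -c "$c" -n $((c * 10)) -s 80 -H 'Accept-Encoding: gzip' -qd \
        "https://yourhost/yourwebsite" > "bench-c${c}.txt"
done
# compare throughput across the levels
grep -H "Requests per second" bench-c*.txt
```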

### Conclusion (jetty as application server):
http with nginx (+gzip) in front of jetty is 44% slower compared to direct jetty access.
https with nginx (+gzip) in front of jetty is 2x faster compared to direct jetty access.

### test with tomcat with Tomcat Native Library:
# https://tomcat.apache.org/native-doc/
# text/plain:
curl -i http://127.0.0.1:8080/exifweb/app/json/appconfig/testRAMString
# text/plain:
curl -i http://127.0.0.1:8080/exifweb/app/json/appconfig/testRAMStringDeferred
# application/json:
curl -i http://127.0.0.1:8080/exifweb/app/json/appconfig/testRAMObjectToJson
# application/json, get all ORDER BY sql:
curl -i http://127.0.0.1:8080/exifweb/app/json/appconfig/testRAMObjectToJsonDeferred
# application/json, get all ORDER BY sql:
curl -i http://127.0.0.1:8080/exifweb/app/json/appconfig/testGetNoCacheableOrderedAppConfigs
# application/json, search by indexed string column sql:
curl -i http://127.0.0.1:8080/exifweb/app/json/appconfig/testGetNoCacheableAppConfigByName

rm -v adr*.txt tom*.txt ng-tom*.txt ngs-tom*.txt ngs-gz*.txt
grep -P "Failed|Requests|Document Length|Request rate|Reply status" adr*.txt tom*.txt ng-tom*.txt ngs-tom*.txt ngs-gz*.txt
# -H 'Accept-Encoding: gzip'

## tomcat:
# RAM text/plain
ab -c 3500 -n 35000 -s 360 -qdr http://127.0.0.1:8080/exifweb/app/json/appconfig/testRAMString > tom-testRAMString-3,5k.txt
# RAM text/plain deferred
ab -c 2300 -n 23000 -s 360 -qdr http://127.0.0.1:8080/exifweb/app/json/appconfig/testRAMStringDeferred > tom-testRAMStringDeferred-2,3k.txt

# RAM application/json
ab -c 3000 -n 30000 -s 360 -qdr http://127.0.0.1:8080/exifweb/app/json/appconfig/testRAMObjectToJson > tom-testRAMObjectToJson-3k.txt
# RAM application/json deferred
ab -c 1900 -n 19000 -s 360 -qdr http://127.0.0.1:8080/exifweb/app/json/appconfig/testRAMObjectToJsonDeferred > tom-testRAMObjectToJsonDeferred-1,9k.txt

# sql: get all ORDER BY
ab -c 675 -n 6750 -s 360 -qdr http://127.0.0.1:8080/exifweb/app/json/appconfig/testGetNoCacheableOrderedAppConfigs > tom-testGetNoCacheableOrderedAppConfigs-675.txt

# sql: search by indexed string column
ab -c 800 -n 8000 -s 360 -qdr http://127.0.0.1:8080/exifweb/app/json/appconfig/testGetNoCacheableAppConfigByName > tom-testGetNoCacheableAppConfigByName-800.txt

## nginx -> tomcat:
# RAM text/plain
ab -c 2250 -n 22500 -s 360 -qdr http://127.0.0.1/photos/app/json/appconfig/testRAMString > ng-tom-testRAMString-2,25k.txt
# RAM text/plain deferred
ab -c 1400 -n 14000 -s 360 -qdr http://127.0.0.1/photos/app/json/appconfig/testRAMStringDeferred > ng-tom-testRAMStringDeferred-1,4k.txt

# RAM application/json
ab -c 1975 -n 19750 -s 360 -qdr http://127.0.0.1/photos/app/json/appconfig/testRAMObjectToJson > ng-tom-testRAMObjectToJson-1,975k.txt
# RAM application/json deferred
ab -c 1450 -n 14500 -s 360 -qdr http://127.0.0.1/photos/app/json/appconfig/testRAMObjectToJsonDeferred > ng-tom-testRAMObjectToJsonDeferred-1,45k.txt

# sql: get all ORDER BY
ab -c 625 -n 6250 -s 360 -qdr http://127.0.0.1/photos/app/json/appconfig/testGetNoCacheableOrderedAppConfigs > ng-tom-testGetNoCacheableOrderedAppConfigs-625.txt

# sql: search by indexed string column
ab -c 710 -n 7100 -s 360 -qdr http://127.0.0.1/photos/app/json/appconfig/testGetNoCacheableAppConfigByName > ng-tom-testGetNoCacheableAppConfigByName-710.txt

## tomcat (ssl):
# RAM text/plain
ab -c 90 -n 900 -s 360 -qdr https://127.0.0.1:8443/exifweb/app/json/appconfig/testRAMString > toms-testRAMString-90.txt

# RAM application/json
ab -c 90 -n 900 -s 360 -qdr https://127.0.0.1:8443/exifweb/app/json/appconfig/testRAMObjectToJson > toms-testRAMObjectToJson-90.txt

# sql: get all ORDER BY
ab -c 90 -n 900 -s 360 -qdr https://127.0.0.1:8443/exifweb/app/json/appconfig/testGetNoCacheableOrderedAppConfigs > toms-testGetNoCacheableOrderedAppConfigs-90.txt

# sql: search by indexed string column
ab -c 90 -n 900 -s 360 -qdr https://127.0.0.1:8443/exifweb/app/json/appconfig/testGetNoCacheableAppConfigByName > toms-testGetNoCacheableAppConfigByName-90.txt

## nginx -> tomcat (ssl):
# RAM text/plain
ab -c 550 -n 5500 -s 360 -qdr https://127.0.0.1/photos/app/json/appconfig/testRAMString > ngs-tom-testRAMString-550.txt

# RAM application/json
ab -c 550 -n 5500 -s 360 -qdr https://127.0.0.1/photos/app/json/appconfig/testRAMObjectToJson > ngs-tom-testRAMObjectToJson-550.txt

# sql: get all ORDER BY
ab -c 410 -n 4100 -s 360 -qdr https://127.0.0.1/photos/app/json/appconfig/testGetNoCacheableOrderedAppConfigs > ngs-tom-testGetNoCacheableOrderedAppConfigs-410.txt

# sql: search by indexed string column
ab -c 450 -n 4500 -s 360 -qdr https://127.0.0.1/photos/app/json/appconfig/testGetNoCacheableAppConfigByName > ngs-tom-testGetNoCacheableAppConfigByName-450.txt

## nginx (gzip) -> tomcat (ssl):
# RAM text/plain
ab -c 560 -n 5600 -s 360 -qdr -H 'Accept-Encoding: gzip' https://127.0.0.1/photos/app/json/appconfig/testRAMString > ngs-gz-tom-testRAMString-560.txt

# RAM application/json
ab -c 560 -n 5600 -s 360 -qdr -H 'Accept-Encoding: gzip' https://127.0.0.1/photos/app/json/appconfig/testRAMObjectToJson > ngs-gz-tom-testRAMObjectToJson-560.txt

# sql: get all ORDER BY
ab -c 405 -n 4050 -s 360 -qdr -H 'Accept-Encoding: gzip' https://127.0.0.1/photos/app/json/appconfig/testGetNoCacheableOrderedAppConfigs > ngs-gz-tom-testGetNoCacheableOrderedAppConfigs-405.txt

# sql: search by indexed string column
ab -c 445 -n 4450 -s 360 -qdr -H 'Accept-Encoding: gzip' https://127.0.0.1/photos/app/json/appconfig/testGetNoCacheableAppConfigByName > ngs-gz-tom-testGetNoCacheableAppConfigByName-445.txt

## nginx
ab -c 625 -n 6250 -s 360 -qdr https://127.0.0.1/public/mysqld.sh > ngs-625.txt
ab -c 4600 -n 40000 -s 360 -qdr http://127.0.0.1/public/mysqld.sh > ngs-4600.txt
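To compare all the runs above, here is a small sketch that pulls the "Requests per second" figure out of every ab result file and sorts by throughput. It assumes GNU grep (the `-P` flag is already used elsewhere in these notes) and creates a sample file so it can be run anywhere; for real use just run the loop in your results directory:

```shell
# demo dir with one sample ab result; in real use, cd to your results dir
mkdir -p /tmp/ab-results && cd /tmp/ab-results
printf 'Requests per second:    1234.56 [#/sec] (mean)\n' > demo.txt
# print throughput + file name for every result, best first
for f in *.txt; do
	rps=$(grep -oP 'Requests per second:\s+\K[0-9.]+' "$f")
	printf '%10s  %s\n' "$rps" "$f"
done | sort -rn
```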

gitweb on apache

# projects web page will be: https://192.168.1.8/gitweb/
# Create a git project (e.g. testproject.git):
# mkdir -p /opt/GITRepositories/testproject.git
# cd /opt/GITRepositories/testproject.git
# git init --bare --shared
# cp -v /opt/GITRepositories/testproject.git/hooks/post-update.sample /opt/GITRepositories/testproject.git/hooks/post-update
# now https://192.168.1.8/testproject.git is ready for cloning:
# git clone https://192.168.1.8/testproject.git
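The same creation steps, runnable as-is against a throwaway path, to confirm the bare repo and the post-update hook end up where expected (paths here are scratch, not the /opt ones above):

```shell
mkdir -p /tmp/gitdemo/testproject.git && cd /tmp/gitdemo/testproject.git
git init --bare --shared
# enable the sample hook that refreshes info/refs after each push
cp -v hooks/post-update.sample hooks/post-update
git rev-parse --is-bare-repository
```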

# cat /etc/httpd/conf.d/git.conf
SetEnv GIT_PROJECT_ROOT /opt/GITRepositories
SetEnv GIT_HTTP_EXPORT_ALL

<LocationMatch "^/[^/]+\.git(/.*)">
	AuthType Basic
	AuthName "Git Access"
	AuthUserFile "/opt/GITRepositories/committers.txt"
	Require valid-user
	# Require group committers
</LocationMatch>

AliasMatch ^/([^/]+\.git)/(objects/[0-9a-f]{2}/[0-9a-f]{38})$			/opt/GITRepositories/$1/$2
AliasMatch ^/([^/]+\.git)/(objects/pack/pack-[0-9a-f]{40}\.(pack|idx))$	/opt/GITRepositories/$1/$2
ScriptAliasMatch \
		"(?x)^/([^/]+\.git/(HEAD | \
						info/refs | \
						objects/(info/[^/]+ | \
								[0-9a-f]{2}/[0-9a-f]{38} | \
								pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
						git-(upload|receive)-pack))$" \
		/usr/libexec/git-core/git-http-backend/$1

# ScriptAlias /gitweb	/var/www/git/gitweb.cgi
Alias /gitweb /var/www/git
<Directory /var/www/git>
	AuthType Basic
	AuthName "Git Access"
	AuthUserFile "/opt/GITRepositories/committers.txt"
	Require valid-user

	Options +ExecCGI
	AddHandler cgi-script .cgi
	DirectoryIndex gitweb.cgi
</Directory>

# grep -i -P "^[^#]" /etc/gitweb.conf 
$projects_list_description_width = "50";
$projectroot = "/opt/GITRepositories";
$home_link_str = "projects";
$base_url = "https://192.168.1.8/gitweb/";
@git_base_url_list = qw(https://192.168.1.8);
see also Basic authentication password creation
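The committers.txt referenced above is a standard htpasswd file; `htpasswd -c committers.txt gigi` creates it interactively, or you can generate an entry non-interactively with openssl (apr1 is Apache's MD5 scheme; user name, password and path below are placeholders):

```shell
# append an apr1 (Apache MD5) entry; real path is /opt/GITRepositories/committers.txt
printf 'gigi:%s\n' "$(openssl passwd -apr1 'secret')" >> /tmp/committers.txt
grep '^gigi:\$apr1\$' /tmp/committers.txt
```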

ssh, http and https multiplexing

This is about how to have the ssh and http(s) server share the same port (e.g. 80 or 443 port).
This is really cool :).

# Used sources:
# http://yalis.fr/cms/index.php/post/2014/02/22/Multiplex-SSH-and-HTTPS-on-a-single-port
# http://blog.cppse.nl/apache-proxytunnel-ssh-tunnel
# http://serverfault.com/questions/355271/ssh-over-https-with-proxytunnel-and-nginx
# http://tyy.host-ed.me/pluxml/article4/port-443-for-https-ssh-and-ssh-over-ssl-and-more
# http://ipset.netfilter.org/iptables.man.html
# http://ipset.netfilter.org/iptables-extensions.man.html
# http://man7.org/linux/man-pages/man8/ip-rule.8.html
# http://lartc.org/howto/lartc.netfilter.html
# http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml?search=Unassigned
# https://cloud.githubusercontent.com/assets/2137369/15272097/77d1c09e-1a37-11e6-97ef-d9767035fc3e.png
# http://www.adminsehow.com/2011/09/iptables-packet-traverse-map/

### begin sshttp setup 1
# https://github.com/stealth/sshttp
# Below are the preparations for this setup:
# sshttpd listens on 80 for ssh and http connections. It forwards to ssh:1022 and nginx:880.
# These will work:
ssh -p 1022 gigi@127.0.0.1		-> access tried from within 192.168.1.31 host
ssh -p 1022 gigi@192.168.1.31	-> access tried from within 192.168.1.31 host
ssh -p 80 gigi@adrhc.go.ro		-> access tried from within 192.168.1.31 host or from internet
http://127.0.0.1/public/		-> access tried from within 192.168.1.31 host
http://127.0.0.1:880/public/	-> access tried from within 192.168.1.31 host
http://192.168.1.31:880/public/	-> access tried from within 192.168.1.31 host
http://192.168.1.31/public/		-> access tried from 192.168.1.31's LAN
http://adrhc.go.ro/public/		-> access tried from within 192.168.1.31 host or from internet
# These won't work:
ssh -p 1022 gigi@adrhc.go.ro	-> access tried from within 192.168.1.31 host or from internet
http://192.168.1.31/public/		-> access tried from within 192.168.1.31 host
http://adrhc.go.ro:880/public/	-> access tried from within 192.168.1.31 host or from internet

# load the conntrack modules now and at boot (via /etc/modules):
modprobe nf_conntrack_ipv4
modprobe nf_conntrack
echo "nf_conntrack" >> /etc/modules
echo "nf_conntrack_ipv4" >> /etc/modules

# in /etc/ssh/sshd_config make sure to have:
# Port 1022
# Banner /etc/sshd-banner.txt 
# Makefile uses the content of /etc/sshd-banner.txt, e.g.:
# SSH_BANNER=-DSSH_BANNER=\"adrhc\'s\ SSH\ server\"
cat /etc/sshd-banner.txt
adrhc's SSH server

# configure nf-setup, e.g. for sshttpd.service below should be:
DEV="eth0"
SSH_PORT=1022
# HTTP_PORT=1443
HTTP_PORT=880
# you could also add this at the top of nf-setup so the rules aren't applied twice:
if [ "`iptables -t mangle -L | grep -v -P "^ufw-" | grep -P "^DIVERT.+tcp spt:$HTTP_PORT"`" != "" ]; then
	echo "sshttp netfilter rules already applied ..."
	exit 0
fi
echo "applying sshttp netfilter rules ..."

# for nginx or apache take care of address binding not to overlap with sshttpd.service, e.g.:
#    server {
#        listen	127.0.0.1:80;
#        listen	127.0.0.1:880;
#        # listen 192.168.1.31:80; -> used/bound by sshttpd.service below
#        listen	192.168.1.31:880;

# install the systemd sshttpd.service defined below:
sudo chown root: /etc/systemd/system/sshttpd.service && sudo chmod 664 /etc/systemd/system/sshttpd.service && sudo systemctl daemon-reload; cp -v $HOME/compile/sshttp/nf-setup $HOME/apps/bin

# systemd sshttpd.service:
[Unit]
# see https://github.com/stealth/sshttp
Description=SSH/HTTP(S) multiplexer
# for any address binding conflict that occurs between ufw, ssh, nginx and sshttp I want ufw, ssh and nginx to win against sshttp
After=network.target
# sudo iptables -L | grep -v -P "^ufw-" | grep -P "1022|1443|880|DIVERT|DROP|ssh"
# sudo iptables -t mangle -L | grep -v -P "^ufw-" | grep -P "1022|1443|880|DIVERT|DROP|ssh"
[Service]
Type=forking
RuntimeDirectory=sshttpd
ExecStartPre=-/bin/chown nobody: /run/sshttpd
ExecStartPre=-/home/gigi/apps/bin/nf-setup
Restart=on-failure
RestartSec=3
TimeoutStartSec=5
TimeoutStopSec=5
# using 443 for sshttpd:
# ssh -p 443 gigi@adrhc.go.ro
# wget --no-check-certificate https://adrhc.go.ro/
# ExecStart=/home/gigi/apps/bin/sshttpd -n 4 -S 1022 -H 1443 -L 443 -l 192.168.1.31 -U nobody -R /run/sshttpd
# using 80 for sshttpd:
# ssh -p 80 gigi@adrhc.go.ro
# wget http://adrhc.go.ro/public
ExecStart=/home/gigi/apps/bin/sshttpd -n 4 -S 1022 -H 880 -L 80 -l 192.168.1.31 -U nobody -R /run/sshttpd
[Install]
WantedBy=multi-user.target

### begin sshttp setup 2 (read sshttp setup 1 first)
# Below are the preparations for this setup:
# sshttpd listens on 444 for ssh and https connections. 
# sshttpd forwards to ssh:1022 or stunnel:1443.
# stunnel:1443 forwards to nginx:127.0.0.1:1080 or ssh:127.0.0.1:22 based on sni.
# the original remote client's ip is accessible (only for https but not ssh) with $realip_remote_addr (http://nginx.org/en/docs/http/ngx_http_realip_module.html#real_ip_header)

# Issue: any redirect (301 or 302) used in the server 127.0.0.1:1080 defined below will set Location header to http instead of https
# - see sshttp setup 3 for a solution 
# - see https://forum.nginx.org/read.php?2,269623,269647#msg-269647 (listen proxy_protocol and rewrite redirect scheme) for a better? solution:
src/http/ngx_http_header_filter_module.c:
#if (NGX_HTTP_SSL)
    if (c->ssl || port == 443) {
        *b->last++ = 's';
    }
#endif

# Transmission remote GUI won't work, but its web page will still work.
# ERROR (while using Transmission remote GUI):
	2016/09/19 15:03:42 [error] 5562#0: *2431 broken header: ">:azX���g��^}q�/���A��Rp(���n3��0�,�(�$��
	����kjih9876�����2�.�*�&���=5��/�+�'�#��	����g@?>3210����EDCB�1�-�)�%���</�A���
	�                                                                                      ��
	�g
		127.0.0.1
	" while reading PROXY protocol, client: 127.0.0.1, server: 127.0.0.1:443
	NӾHu|���4|�sf��Q�j$������0�,�(�$��432 broken header: ">:LM2V
	����kjih9876�����2�.�*�&���=5��/�+�'�#��	����g@?>3210����EDCB�1�-�)�%���</�A���
	�                                                                                      ��
	�g
		127.0.0.1
	" while reading PROXY protocol, client: 127.0.0.1, server: 127.0.0.1:443

# in systemd sshttpd.service change to:
# router: 443 -> 444 -> also make sure ufw allows 444
# ssh -p 443 gigi@adrhc.go.ro
# wget --no-check-certificate https://adrhc.go.ro/
ExecStart=/********/apps/bin/sshttpd -n 4 -S 1022 -H 1443 -L 444 -l 192.168.1.31 -U nobody -R /run/sshttpd

# in nginx add this "magic" server:
server {
	listen 127.0.0.1:1080 default_server proxy_protocol;
	include xhttpd_1080_proxy.conf;
	port_in_redirect off;
	# change also fastcgi_params! (see below)
	... your stuff ...
}

# xhttpd_1080_proxy.conf:
set_real_ip_from 192.168.1.0/24;
set_real_ip_from 127.0.0.0/8;
# set_real_ip_from ::1/32; -> doesn't work for me
real_ip_header proxy_protocol;
set $real_internet_https "on";
set $real_internet_port "443";

# in fastcgi_params have (besides your stuff):
# This special fastcgi_params must be used only by "magic server" (127.0.0.1:1080)!
fastcgi_param HTTPS $real_internet_https if_not_empty;
fastcgi_param SERVER_PORT $real_internet_port if_not_empty;

# stunnel.conf for server side
# sudo killall stunnel; sleep 1; sudo bin/stunnel etc/stunnel/stunnel.conf
pid = /run/stunnel.pid
debug = 4
output = /********/apps/log/stunnel.log
options = NO_SSLv2
compression = deflate
cert = /********/apps/etc/nginx/certs/adrhc.go.ro-server-pub.pem
key = /********/apps/etc/nginx/certs/adrhc.go.ro-server-priv-no-pwd.pem
[tls]
accept = 192.168.1.31:1443
connect = 127.0.0.1:1080
protocol = proxy
[ssh]
sni = tls:tti.go.ro
connect = 127.0.0.1:22
renegotiation = no
debug = 5
cert = /********/apps/etc/nginx/certs/adrhc.go.ro-server-pub.pem
key = /********/apps/etc/nginx/certs/adrhc.go.ro-server-priv-no-pwd.pem
[www on any]
sni = tls:*
connect = 127.0.0.1:1080
protocol = proxy

# stunnel.conf for client side
# killall stunnel; sleep 1; stunnel ****stunnel.conf && tailf ****stunnel.log
# ssh -p 1194 gigi@localhost
pid = /****************/temp/stunnel.pid
debug = 4
output = /****************/****stunnel.log
options = NO_SSLv2
[tti.go.ro]
# Set sTunnel to be in client mode (defaults to server)
client = yes  
# Port to locally connect to
accept = 127.0.0.1:1194  
# Remote server for sTunnel to connect to
connect = adrhc.go.ro:443
sni = tti.go.ro
verify = 2
CAfile = /****************/****Temp/Zyxel/adrhc.go.ro-server-pub.pem
# checkHost = certificate's CN field (see "Rejected by CERT at" in stunnel.log for learning CN)
checkHost = adrhc.go.ro
# CAfile = /****************/****Temp/Zyxel/adr-pub.pem
# checkHost = adr
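With the client-side stunnel above listening on 127.0.0.1:1194, a matching ~/.ssh/config entry saves retyping the port each time (the Host alias below is made up):

```
# ssh through the stunnel client; afterwards just: ssh adrhc-tunnel
Host adrhc-tunnel
	HostName 127.0.0.1
	Port 1194
	User gigi
```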

### begin sshttp setup 3 (read sshttp setup 2 first)
# any redirect (301 or 302) used in the server 127.0.0.1:1080 defined above will go to the https server
# Issue: the original remote client's ip is not accessible (https or ssh)

# you'll need the https nginx configuration for your site listening at least on 127.0.0.1:443
# you no longer need the "magic" server defined above
# How this works:
# browser/stunnel-client using ssl -> sshttpd:443 -> stunnel[tls to http] using ssl -> stunnel[http to https]

# stunnel.conf for server side
# sudo killall stunnel; sleep 1; sudo bin/stunnel etc/stunnel/stunnel.conf
pid = /run/stunnel.pid
debug = 4
output = /********/apps/log/stunnel.log
options = NO_SSLv2
compression = deflate
cert = /********/apps/etc/nginx/certs/adrhc.go.ro-server-pub.pem
key = /********/apps/etc/nginx/certs/adrhc.go.ro-server-priv-no-pwd.pem
[tls]
accept = 192.168.1.31:1443
connect = 127.0.0.1:1081
protocol = proxy
[ssh]
sni = tls:tti.go.ro
connect = 127.0.0.1:22
renegotiation = no
debug = 5
cert = /********/apps/etc/nginx/certs/adrhc.go.ro-server-pub.pem
key = /********/apps/etc/nginx/certs/adrhc.go.ro-server-priv-no-pwd.pem
[tls to http]
sni = tls:*
connect = 127.0.0.1:1081
# connect = 127.0.0.1:1080
# protocol = proxy
[http to https]
accept = 127.0.0.1:1081
connect = 127.0.0.1:443
client = yes

### begin sslh setup
# https://github.com/yrutschle/sslh
# Here I use ssh:1021 instead of ssh:1022.
sudo apt-get install sslh

sudo useradd -d /nonexistent -M -s /bin/false sslh
# according to https://github.com/yrutschle/sslh#capabilities-support I need:
sudo setcap cap_net_bind_service,cap_net_admin+pe /usr/sbin/sslh-select
sudo getcap -rv /usr/sbin/sslh-select

cat /etc/default/sslh
RUN=yes
DAEMON=/usr/sbin/sslh-select
# with --transparent, the local ip (127.0.0.1) can't be used as the http target:
DAEMON_OPTS="--transparent --timeout 1 --numeric --user sslh --listen 192.168.1.31:334 --ssh 192.168.1.31:1021 --http 192.168.1.31:80 --pidfile /var/run/sslh/sslh.pid"
# without --transparent, the local ip works too:
# DAEMON_OPTS="--timeout 1 --numeric --user sslh --listen 192.168.1.31:334 --ssh 192.168.1.31:1021 --http 127.0.0.1:80 --pidfile /var/run/sslh/sslh.pid"

cat /etc/systemd/system/sslh.service.d/custom.conf 
# cp -v $HOME/bin/systemd-services/sslh-setup.sh $HOME/apps/bin
[Service]
ExecStartPre=-/********/apps/bin/sslh-setup.sh
ExecStart=
ExecStart=/usr/sbin/sslh-select --foreground $DAEMON_OPTS
SuccessExitStatus=15

cat sslh-setup.sh
#!/bin/sh
if [ "`iptables -t mangle -L | grep -P "^SSLH\s.+\sspt:1021"`" != "" ]; then
	echo "SSLH netfilter rules already applied ..."
	exit 0
fi
iptables -t mangle -N SSLH
iptables -t mangle -A OUTPUT --protocol tcp --out-interface eth0 --sport 1021 --jump SSLH
iptables -t mangle -A OUTPUT --protocol tcp --out-interface eth0 --sport 80 --jump SSLH
iptables -t mangle -A SSLH --jump MARK --set-mark 0x1
iptables -t mangle -A SSLH --jump ACCEPT
ip rule add fwmark 0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100

sudo systemctl daemon-reload
sudo systemctl enable sslh
sudo systemctl start sslh

Ubuntu and Oracle

# see also https://wiki.centos.org/HowTos/Oracle12onCentos7
# see also https://adrhc.go.ro/wordpress/centos-and-oracle/

# Follow this (works with Ubuntu 16.04 too):
# http://www.techienote.com/install-oracle-12c-on-ubuntu/

# systemd oracle.service (working when only one db is automatically started with /etc/oratab)
[Unit]
Description=Oracle 12c
After=local-fs.target
Wants=local-fs.target

[Service]
Type=forking

User=oracle
Group=oinstall

RuntimeDirectory=oracle
PIDFile=/run/oracle/oracle.pid

Restart=on-failure
RestartSec=3

TimeoutSec=0

ExecStart=/u01/app/oracle/product/12.1.0/dbhome_1/bin/dbstart /u01/app/oracle/product/12.1.0/dbhome_1
ExecStop=/u01/app/oracle/product/12.1.0/dbhome_1/bin/dbshut /u01/app/oracle/product/12.1.0/dbhome_1

[Install]
WantedBy=multi-user.target

# modify dbstart and dbshut in order to create /run/oracle/oracle.pid needed by oracle.service
# /u01/app/oracle/product/12.1.0/dbhome_1/bin/dbstart
startinst() {
...
      if [ $? -eq 0 ] ; then
        echo "" 
		OS_PID=$(sqlplus -S / AS SYSDBA <<EOF
select SPID from v\$process where PNAME = 'PMON';
EOF
)
		OS_PID=$(echo "$OS_PID" | /bin/grep -P "\d+")
        echo "$0: ${INST} \"${ORACLE_SID}\" warm started (PID $OS_PID)." 
		if [ -d /run/oracle ]; then
			echo "$OS_PID" > /run/oracle/oracle.pid
			echo "created /run/oracle/oracle.pid"
		fi
      else
        $LOGMSG "" 
        $LOGMSG "Error: ${INST} \"${ORACLE_SID}\" NOT started." 
      fi
# /u01/app/oracle/product/12.1.0/dbhome_1/bin/dbshut
  if test $? -eq 0 ; then
	if [ -f /run/oracle/oracle.pid ]; then
		# see /u01/app/oracle/product/12.1.0/dbhome_1/bin/dbstart
		rm -fv /run/oracle/oracle.pid
	fi
    echo "${INST} \"${ORACLE_SID}\" shut down."
  else
    echo "${INST} \"${ORACLE_SID}\" not shut down."
  fi

Docker

# pull and run an image
docker run hello-world
docker run -itP centos cat /etc/redhat-release
# lists all the images on your local system
docker images --help
docker images
# show all containers on the system
docker ps --help
docker ps -a
docker ps --no-trunc -a
# log into the Docker Hub
docker login --username=yourhubusername
# removing containers
docker ps -a
docker rm --help
docker rm 6075298d5896
# modify an image
docker run -itP -v "$HOME/KIT":/adrhc/KIT -v /home/adrk/certs/:/adrhc/certs centos /bin/bash
cd root
yum -y update
yum install -y wget
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum localinstall -y epel-release-latest-7.noarch.rpm
yum -y update
yum install -y nano mlocate zip unzip iftop htop net-tools openssh-clients openssh-server which sysvinit-tools psmisc less man-db openssl davfs2 fuse
# configure sshd
sed -i s/"#Port 22"/"Port 322"/ /etc/ssh/sshd_config
# if you want to change the port on a SELinux system, you have to tell SELinux about this change:
semanage port -a -t ssh_port_t -p tcp 322
# solving ERROR: Could not load host key: /etc/ssh/ssh_host_rsa_key
/usr/bin/ssh-keygen -A
/usr/sbin/sshd
netstat -tulpn
CTRL+D
# committing d513e8dff620 container to a new named adrhc/centos7:v2 image:
docker commit -m "CentOS + epel" -a "adrhc" d513e8dff620 adrhc/centos7:v2
# or commit using the container's name (gloomy_goldstine) to a new named adrhc/centos7:v2 image:
docker commit -m "CentOS + epel" -a "adrhc" gloomy_goldstine adrhc/centos7:v2
# or commit last created container to a new named adrhc/centos7:v2 image:
docker commit -m "CentOS + epel" -a "adrhc" `docker ps -lq` adrhc/centos7:v2
# push an image to Docker Hub (see it at https://cloud.docker.com/_/repository/list)
docker push adrhc/centos7
# run the above commited image:
docker run -itP -v "$HOME/KIT":/adrhc/KIT -v /home/adrk/certs/:/adrhc/certs adrhc/centos7:v2 /bin/bash -> will create the container 3a63cfee66f4
# renaming 3a63cfee66f4 container created above
docker ps -a | grep 3a63cfee66f4
docker rename 3a63cfee66f4 my_centos7
# or rename last created container:
docker rename `docker ps -lq` my_centos7
# re-running the container 3a63cfee66f4
docker start 3a63cfee66f4
docker start my_centos7
# connecting to/bringing to front the running container
docker attach 3a63cfee66f4
docker attach my_centos7
# detach (see https://groups.google.com/forum/#!msg/docker-user/nWXAnyLP9-M/kbv-FZpF4rUJ)
docker run -i -t → can be detached with ^P^Q and reattached with docker attach
docker run -i → cannot be detached with ^P^Q; will disrupt stdin
docker run → cannot be detached with ^P^Q; can SIGKILL client; can reattach with docker attach
# stopping a running container
docker stop my_centos7
# get the IP address of the running my_centos7 container
docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_centos7
# remove container
docker rm 3a63cfee66f4
# or by name
docker rm my_centos7
# remove image
docker images; docker ps -a
docker rmi 143d6907480f
docker rmi -f 143d6907480f -> removes related containers too
# connect using ssh to the container named my_centos7
# make sure the container exposes desired ports (https://docs.docker.com/engine/reference/commandline/run/)
ssh -p 322 root@`docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_centos7`

Systemd and systemctl

# see https://www.freedesktop.org/wiki/Software/systemd/TipsAndTricks/

systemctl                           -> shows all active units
systemctl list-units --type=service -> shows all active services (--all to see loaded but inactive services too)
systemctl list-units --type=swap	-> shows swap unit configurations

# show cgroup tree
systemd-cgls

# services started by multi-user.target
systemctl show -p "Wants" multi-user.target | less

###################
# svnserve.service:
#
# sudo chown ************ bin/svnserve.service && sudo chmod 664 bin/svnserve.service && cp -v bin/svnserve.service /etc/systemd/system/ && sudo chown root: /etc/systemd/system/svnserve.service && sudo chmod 664 /etc/systemd/system/svnserve.service && sudo systemctl daemon-reload
# or
# sudo chown ************ bin/svnserve.service
# sudo chmod 664 bin/svnserve.service
# cp -v bin/svnserve.service /etc/systemd/system/
# sudo chown root: /etc/systemd/system/svnserve.service
# sudo chmod 664 /etc/systemd/system/svnserve.service
# sudo systemctl daemon-reload

# sudo systemctl status svnserve
# sudo systemctl enable svnserve
# sudo systemctl start svnserve
# journalctl -xe
# journalctl -xf

[Unit]
Description=Server for the 'svn' repository access method

# see https://www.freedesktop.org/software/systemd/man/systemd.unit.html#
# see https://www.freedesktop.org/software/systemd/man/systemd.special.html#
After=local-fs.target
Wants=local-fs.target

[Service]
Type=forking

User****
Group*********

RuntimeDirectory=svnserve
RuntimeDirectoryMode=750

PIDFile=/run/svnserve/svnserve.pid

# RuntimeDirectory is doing this:
# User****
# Group*********
# PIDFile=/run/svnserve/svnserve.pid
# PermissionsStartOnly=true
# ExecStartPre=/bin/mkdir /run/svnserve
# ExecStartPre=/bin/chown ************ /run/svnserve

KillMode=process
Restart=on-failure
RestartSec=3

# A shorthand for configuring both TimeoutStartSec= and TimeoutStopSec= to the specified value.
TimeoutSec=5

# -r root, --root=root
#	Sets  the  virtual  root for repositories served by svnserve.  
#	The pathname in URLs provided by the client will be interpreted 
#	relative to this root, and will not be allowed to escape this root.
ExecStart=/usr/bin/svnserve -d -r /mnt/1TB/DataWin_to_sync/SVNRepoLinux --log-file /********/apps/log/svnserve.log --pid-file /run/svnserve/svnserve.pid

[Install]
WantedBy=multi-user.target

######################
# couchpotato.service:
#
[Unit]
Description=CouchPotato
After=local-fs.target
Wants=local-fs.target

[Service]
Type=simple
User****
Group*********
RuntimeDirectory=couchpotato
RuntimeDirectoryMode=750
PIDFile=/run/couchpotato/couchpotato.pid
# KillMode=process
# Restart=on-failure
# RestartSec=3
TimeoutStartSec=60
TimeoutStopSec=15

# couchpotato.service: Supervising process 22342 which is not our child. We'll most likely not notice when it exits
#
# ExecStart=/usr/bin/python /********/apps/opt/couchpotato/CouchPotato.py --config_file /********/apps/etc/couchpotato.conf --daemon --data_dir /********/apps/opt/couchpotato_data --pid_file /run/couchpotato/couchpotato.pid

ExecStart=/usr/bin/python /********/apps/opt/couchpotato/CouchPotato.py --config_file /********/apps/etc/couchpotato.conf --data_dir /********/apps/opt/couchpotato_data --quiet

[Install]
WantedBy=multi-user.target

###################
# anytermd.service:
#
[Unit]
Description=A Terminal Anywhere
After=local-fs.target
Wants=local-fs.target

[Service]
Type=forking
PIDFile=/run/anytermd.pid

KillMode=process
Restart=on-failure
RestartSec=3
TimeoutSec=5

# anytermd.service: Main process exited, code=exited, status=1/FAILURE
#
# ExecStart=/********/apps/bin/anytermd --command "/bin/login -p adr" --port 23456 --user root --local-only
# ExecStart=/********/apps/bin/anytermd --command "/bin/login -p adr" --port 23456 --user root --foreground --local-only
SuccessExitStatus=1
ExecStart=/********/apps/bin/anytermd --command "/bin/login -p adr" --port 23456 --user root --local-only

[Install]
WantedBy=multi-user.target

################
# tomcat.service
#
[Unit]
Description=Apache Tomcat
After=local-fs.target network.target
Wants=local-fs.target

[Service]
Type=forking
WorkingDirectory=~
RuntimeDirectory=tomcat
RuntimeDirectoryMode=750
PIDFile=/run/tomcat/tomcat.pid
User****
Group*********
Environment=CATALINA_HOME=/********/apps/opt/apache-tomcat-7.0.64
Environment=CATALINA_PID=/run/tomcat/tomcat.pid
PermissionsStartOnly=true
ExecStartPre=/bin/rm -vf /********/apps/opt/apache-tomcat-7.0.64/logs/*
ExecStartPre=/bin/rm -vf /********/exifweb*.log

## KillMode=process
# Restart=on-failure
# RestartSec=3
TimeoutStartSec=60
TimeoutStopSec=15

# ExecStart=/bin/sh -c 'CATALINA_PID=/run/tomcat.pid /********/apps/opt/apache-tomcat-7.0.64/bin/startup.sh'
ExecStart=/********/apps/opt/apache-tomcat-7.0.64/bin/startup.sh
ExecStop=/********/apps/opt/apache-tomcat-7.0.64/bin/shutdown.sh

[Install]
WantedBy=multi-user.target

Oracle and systemd

# http://docs.oracle.com/database/121/index.htm
# https://172.16.148.137:5500/em/login
# start db:
[oracle@redhat7 ~]$ sqlplus / AS SYSDBA
STARTUP
# start TNS listener
[oracle@redhat7 ~]$ lsnrctl start
# TNS listener status
[oracle@redhat7 ~]$ lsnrctl status
cat /u01/app/oracle/product/12.1.0/dbhome_1/network/admin/listener.ora
# stop TNS listener
[oracle@redhat7 ~]$ lsnrctl stop
# TNS listener status
ps -ef | grep [t]nslsnr
# oracle + listener start
# for this you need the appropriate environment variables in /home/oracle/.bash_profile e.g.:
TMPDIR=$TMP; export TMPDIR
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/12.1.0/dbhome_1; export ORACLE_HOME
ORACLE_SID=orcl; export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH; export PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/lib64; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH
# then allow dbstart in /etc/oratab to start a specific instance:
[root@redhat7 ~]# sed -i s/"N$"/"Y"/ /etc/oratab
[oracle@redhat7 ~]$ $ORACLE_HOME/bin/dbstart $ORACLE_HOME
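An /etc/oratab entry has the form SID:ORACLE_HOME:Y|N; the sed above flips that last field to Y so dbstart picks the instance up. A harmless dry run on a scratch copy:

```shell
# scratch copy; the real file is /etc/oratab
printf 'orcl:/u01/app/oracle/product/12.1.0/dbhome_1:N\n' > /tmp/oratab
sed -i s/"N$"/"Y"/ /tmp/oratab
cat /tmp/oratab
# orcl:/u01/app/oracle/product/12.1.0/dbhome_1:Y
```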
# oracle status:
ps -ef | grep [o]racle
ps -ef | grep [o]ra_
# oracle status from sqlplus:
[oracle@redhat7 ~]$ sqlplus / AS SYSDBA
select INSTANCE_NAME, STARTUP_TIME, STATUS, ARCHIVER, "THREAD#", LOGINS, INSTANCE_ROLE, ACTIVE_STATE from v$instance;
select SPID as "Process ID" , STID as "Thread ID", PNAME as "Process Name", EXECUTION_TYPE as "Process Type" from v$process order by 4,1,2;
# stop db:
[oracle@redhat7 ~]$ sqlplus / AS SYSDBA
SHUTDOWN IMMEDIATE
# or allow dbstart in /etc/oratab to start a specific instance (stop db + listener):
# see http://docs.oracle.com/database/121/UNXAR/strt_stp.htm#UNXAR417
# see https://oracle-base.com/articles/linux/automating-database-startup-and-shutdown-on-linux
[root@redhat7 ~]# sed -i s/"N$"/"Y"/ /etc/oratab
[oracle@redhat7 ~]$ $ORACLE_HOME/bin/dbshut $ORACLE_HOME
tailf $ORACLE_HOME/startup.log
# list options existing in a response file:
grep -v -P "^#|=$" KIT/Oracle/db.rsp | grep -v "^$"
# Log messages are written to:
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log
/u01/app/oracle/diag/rdbms/orcl/orcl/alert/log.xml
find /u01 -name log.xml

# when after normal shutdown oracle processes are still running do a shutdown abort from sqlplus

# show oracle pid (for systemd):
[oracle@redhat7 ~]$ output=$(sqlplus -S / AS SYSDBA <<EOF
select SPID from v\$process where PNAME = 'PMON';
EOF
);echo "$output" | grep -P "\d+"
# or show this way:
ps -e | grep [p]mon_orcl | awk '{print $1;}'
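The [p] in grep [p]mon_orcl keeps grep from matching its own command line in the ps output; a self-contained illustration with faked ps -e output:

```shell
# fake two ps -e lines; only the pmon one survives the filter
printf '1234 ?        00:00:01 ora_pmon_orcl\n5678 ?        00:00:00 bash\n' \
	| grep '[p]mon_orcl' | awk '{print $1;}'
# prints: 1234
```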

# change $ORACLE_HOME/bin/dbstart in order to show PMON's PID:
	OS_PID=$(sqlplus -S / AS SYSDBA <<EOF
select SPID from v\$process where PNAME = 'PMON';
EOF
)
	OS_PID=$(echo "$OS_PID" | grep -P "\d+")
	echo "$0: ${INST} \"${ORACLE_SID}\" warm started (PID $OS_PID)."
	if [ -d /run/oracle ]; then
		echo "$OS_PID" > /run/oracle/oracle.pid
		echo "created /run/oracle/oracle.pid"
	fi
# or change like this:
	OS_PID=$(ps -e | grep [p]mon_orcl | awk '{print $1;}')
	echo "$0: ${INST} \"${ORACLE_SID}\" warm started (PID $OS_PID)."
	if [ -d /run/oracle ]; then
		echo "$OS_PID" > /run/oracle/oracle.pid
		echo "created /run/oracle/oracle.pid"
	fi
# in order to use dbstart and dbshut with systemd you'll have to also change them 
# so that awk, cat, cut, grep, touch, chmod commands 
# are used with their absolute path (e.g. /bin/cat)

# systemd oracle.service
[Unit]
Description=Oracle 12c
After=local-fs.target
Wants=local-fs.target
[Service]
Type=forking
User=oracle
Group=oinstall
RuntimeDirectory=oracle
PIDFile=/run/oracle/oracle.pid
Restart=on-failure
RestartSec=3
TimeoutSec=0
ExecStart=/u01/app/oracle/product/12.1.0/dbhome_1/bin/dbstart /u01/app/oracle/product/12.1.0/dbhome_1
ExecStop=/u01/app/oracle/product/12.1.0/dbhome_1/bin/dbshut /u01/app/oracle/product/12.1.0/dbhome_1
[Install]
WantedBy=multi-user.target

# all users of the database visible to the current user
select username from all_users order by username;

# tables for schema/user
SELECT DISTINCT OBJECT_NAME FROM ALL_OBJECTS WHERE OBJECT_TYPE = 'TABLE' AND OWNER = 'HR';

# change user's password:
ALTER USER sys IDENTIFIED BY "Yxx32145";
ALTER USER system IDENTIFIED BY "Yxx32145";
sqlplus system/Yxx32145 AS SYSDBA

# Enterprise Manager
# http://www.dba-oracle.com/t_11g_connect_sysdba_ora_01031.htm
sys - allowed to connect AS SYSDBA
system - not allowed to connect AS SYSDBA