gitweb on apache

# projects web page will be:
# Create a git project (e.g. testproject.git):
# mkdir -p /opt/GITRepositories/testproject.git
# cd /opt/GITRepositories/testproject.git
# git init --bare --shared
# cp -v /opt/GITRepositories/testproject.git/hooks/post-update.sample /opt/GITRepositories/testproject.git/hooks/post-update
# now is ready for cloning:
# git clone
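The flow above can be rehearsed locally with throwaway paths (everything under /tmp/gitdemo is hypothetical; over Apache the clone URL would be the server's http address instead of the filesystem path):

```shell
#!/bin/sh
# Sketch: bare repo + dumb-HTTP hook + clone, using throwaway /tmp paths.
set -e
REPO=/tmp/gitdemo/testproject.git   # stands in for /opt/GITRepositories/testproject.git
rm -rf /tmp/gitdemo
mkdir -p "$REPO"
git init --bare --shared "$REPO"
# post-update runs git update-server-info, needed for cloning over dumb HTTP
cp "$REPO/hooks/post-update.sample" "$REPO/hooks/post-update"
git clone "$REPO" /tmp/gitdemo/clone
```

With Apache in front, the same repository would be cloned through the git.conf below, authenticating against committers.txt.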

# cat /etc/httpd/conf.d/git.conf
SetEnv GIT_PROJECT_ROOT /opt/GITRepositories

<LocationMatch "^/[^/]+\.git(/.*)">
	AuthType Basic
	AuthName "Git Access"
	AuthUserFile "/opt/GITRepositories/committers.txt"
	Require valid-user
	# Require group committers
</LocationMatch>

AliasMatch ^/([^/]+\.git)/(objects/[0-9a-f]{2}/[0-9a-f]{38})$			/opt/GITRepositories/$1/$2
AliasMatch ^/([^/]+\.git)/(objects/pack/pack-[0-9a-f]{40}\.(pack|idx))$	/opt/GITRepositories/$1/$2
ScriptAliasMatch \
		"(?x)^/([^/]+\.git/(HEAD | \
						info/refs | \
						objects/(info/[^/]+ | \
								[0-9a-f]{2}/[0-9a-f]{38} | \
								pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
						git-(upload|receive)-pack))$" \

# ScriptAlias /gitweb	/var/www/git/gitweb.cgi
Alias /gitweb /var/www/git
<Directory /var/www/git>
	AuthType Basic
	AuthName "Git Access"
	AuthUserFile "/opt/GITRepositories/committers.txt"
	Require valid-user

	Options +ExecCGI
	AddHandler cgi-script .cgi
	DirectoryIndex gitweb.cgi
</Directory>

# grep -i -P "^[^#]" /etc/gitweb.conf 
$projects_list_description_width = "50";
$projectroot = "/opt/GITRepositories";
$home_link_str = "projects";
$base_url = "";
@git_base_url_list = qw(
see also Basic authentication password creation
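The committers.txt referenced by AuthUserFile needs htpasswd-style entries. A minimal sketch, assuming only openssl is available (user "gigi" and password "secret" are placeholders; with apache2-utils installed you'd use htpasswd -c instead):

```shell
#!/bin/sh
# Sketch: write one AuthUserFile entry in Apache's apr1 (MD5) format.
# Fixed salt only so the demo is reproducible; htpasswd picks a random one.
set -e
HASH=$(openssl passwd -apr1 -salt abcdefgh secret)
echo "gigi:$HASH" > /tmp/committers.txt
cat /tmp/committers.txt
```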

Angular 2


angular 2 project seed


webpack lazy loading
not working for me



Migrating from 2.x to 3.0


basics javascript


initialize a node.js project (this creates package.json)
npm init

sass vs less

Immediately-Invoked Function Expression (IIFE)

basics webpack
sudo npm list -g --depth=0
sudo npm install --verbose webpack -g
webpack --progress --colors -d
webpack --progress --colors -d --config webpack.config.js
npm search angular-in-memory-web-api
install dev dependencies:
npm install --dev -> deprecated, use the following below
npm install --only=dev
	gigi@gigi-desktop:~/Projects/semaphore-ng2-webpack$ node start
		throw err;
	Error: Cannot find module '/********/Projects/semaphore-ng2-webpack/start'
		at Function.Module._resolveFilename (module.js:326:15)
		at Function.Module._load (module.js:277:25)
		at Function.Module.runMain (module.js:442:10)
		at startup (node.js:136:18)
		at node.js:966:3
	it's not "node start" but "npm start"
list dependencies for typings@1.4.0:
sudo npm list typings@1.4.0
npm config list
npm config ls -l
npm config ls -l | grep cache
How to view the dependency tree of a given npm module?
sudo npm install -g npm-remote-ls
npm-remote-ls typings
just view a package
npm view typings@1.4.0
here npm start is equivalent to:
npm install webpack-dev-server@1.14.1 --only=dev
find . -name webpack-dev-server.js
./node_modules/webpack-dev-server/bin/webpack-dev-server.js --progress --display-errors-details --inline --colors --config ./webpack/

npm install tslint@3.13.0 --dev
tslint --version
npm run lint

debug with WebStorm

webpack multiple entry points
build with (this one uses
rm js/* 2>/dev/null; node build.js
or install globally webpack@1.13.1:
sudo npm install webpack@1.13.1 -g
then change webpack.config.js:
// var CommonsChunkPlugin = require("../../lib/optimize/CommonsChunkPlugin");
var webpack = require('webpack');
const CommonsChunkPlugin = webpack.optimize.CommonsChunkPlugin;
then build with:
rm js/* 2>/dev/null; webpack
If you change "commons" to e.g. "adunate" in webpack.config.js, you'll also need to change it in pageA.html, pageAB.html,

git clone
git checkout material2
install global dependencies:
sudo npm install --global webpack webpack-dev-server karma-cli protractor typescript rimraf
install project dependencies (package.json):
npm install --verbose

Promise and the catch keyword are highlighted in red
go to Languages & Frameworks -> JavaScript and select ECMAScript 6

html5 semantic elements and their usage

Angular 2 lazy loading with Webpack
angular2-router-loader is now angular-router-loader
See for a working example.

angular2-webpack-starter + bootstrap
see also "'use:' reverted back to 'loader:'" in webpack.test.js
npm install --save-dev bootstrap
npm install --save-dev image-webpack-loader
add to vendor.browser.ts before "if ('production' === ENV)"
// bootstrap 3.3.7:
import 'bootstrap/dist/css/bootstrap';
import 'bootstrap/dist/css/bootstrap-theme';
change webpack.common.js with this:
resolve: {
	extensions: [..., '.css'],
and this:
	test: /\.css$/,
	use: ['to-string-loader', 'css-loader'],
	include: /src\//,
	exclude: /node_modules\//
// for bootstrap
	test: /\.css$/,
	loader: ['style-loader', 'css-loader'],
	exclude: /src\//,
	include: /node_modules\/bootstrap/
and this:
// for bootstrap
	test: /\.(jpe?g|gif|svg)$/i,
	loaders: [
		// image-webpack-loader chained with the file-loader (equivalent to: file?...!image-webpack?...)
{test: /\.png$/, loader: "url-loader?limit=100"},
{test: /\.eot(\?v=\d+\.\d+\.\d+)?$/, loader: "url-loader?limit=1000&mimetype=application/"},
{test: /\.(woff|woff2)$/, loader: "url-loader?&limit=1000&mimetype=application/font-woff"},
{test: /\.ttf(\?v=\d+\.\d+\.\d+)?$/, loader: "url-loader?limit=1000&mimetype=application/octet-stream"}

AfterContentInit, ContentChildren

difference between @ContentChildren and @ViewChildren
viewProviders vs providers
AfterViewInit, ViewChild, ViewChildren, ContentChild

Fine grained change detection with Angular 2
DoCheck, KeyValueDiffers

ChangeDetectionStrategy: OnPush vs Default

How to use IterableDiffers -> see app.component.ts
ngDoCheck() {
	const changes = this.differ.diff(this.heroes);
	if (changes) {
		console.log('new change'); // for splitting up changes
		changes.forEachAddedItem(r => console.log('added ', r));
		changes.forEachRemovedItem(r => console.log('removed ', r));
		changes.forEachMovedItem(r => console.log('moved ', r));
	}
}

Pipes and Internationalization API

Animation -> click on ease function for cubic-bezier code
transition(':leave', [
  animate('2s cubic-bezier(0.755, 0.05, 0.855, 0.06)', style({
	opacity: 0,
	transform: 'translateY(100%)'
  }))
])

Build error
[at-loader] Checking finished with 4 errors
[at-loader] src/app/gui/components/validation-alerts.component.ts:6:14 
    Cannot find namespace 'jasmine'. 

[at-loader] src/app/gui/components/validation-alerts.component.ts:32:23 
    Cannot find name '$'. 

[at-loader] src/app/gui/components/validation-alerts.component.ts:34:14 
    Cannot find name '$'. 

[at-loader] src/app/gui/components/validation-alerts.component.ts:158:26 
    Cannot find name '$'. 
search the appropriate tsconfig.json (or all of them - tsconfig*.json) file and add:
  "compilerOptions": {
    "types": [
      "jasmine",
      "jquery"
    ]
  }

Important details
  • By default, the router re-uses a component instance when it re-navigates to the same component type without visiting a different component first.
  • When subscribing to an observable in a component, you almost always arrange to unsubscribe when the component is destroyed. There are a few exceptional observables where this is not necessary. The ActivatedRoute observables are among the exceptions. The ActivatedRoute and its observables are insulated from the Router itself. The Router destroys a routed component when it is no longer needed and the injected ActivatedRoute dies with it. Feel free to unsubscribe anyway. It is harmless and never a bad practice. See also route.snapshot
  • Use route parameters to specify a required parameter value within the route URL

Ufw (uncomplicated firewall)


important files

Uncomplicated Firewall
sudo ufw show added
sudo ufw status verbose
sudo ufw show listening
sudo ufw limit ssh
sudo ufw allow 80
sudo ufw allow 443
sudo ufw allow 32400
sudo ufw allow in from
sudo ufw allow in on eth1 to port 3389 proto tcp comment 'allow RDP access from LAN'
sudo ufw allow from to any proto gre comment 'allow VPN with MarchenGarten'
sudo ufw allow from to any port 3389 proto tcp
sudo ufw allow in on enp1s0 to any port 8083
# sudo ufw delete limit 1443
# sudo ufw delete 11 -> removes rule with order number 11

transmission firewall with peer-port-random-on-start = false
grep port /********/.config/transmission-daemon/settings.json
sed -i s/"peer-port-random-on-start\": true"/"peer-port-random-on-start\": false"/ /********/.config/transmission-daemon/settings.json
peerport="`grep peer-port\\" /********/.config/transmission-daemon/settings.json | awk '{sub(/,/,\"\",$2); print $2;}'`"
sudo ufw allow $peerport
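The extraction pipeline above can be checked against a throwaway settings.json (the values below are made up):

```shell
#!/bin/sh
# Sketch: pull "peer-port" out of a transmission settings.json copy.
set -e
cat > /tmp/settings.json <<'EOF'
{
    "peer-port": 51413,
    "peer-port-random-on-start": false
}
EOF
peerport=$(grep '"peer-port"' /tmp/settings.json | awk '{sub(/,/,"",$2); print $2;}')
echo "$peerport"    # prints 51413; a real run would then do: sudo ufw allow "$peerport"
```

Note the pattern requires the closing quote right after peer-port, so it won't also match "peer-port-random-on-start".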

transmission firewall with peer-port-random-on-start = true
sed -i s/"peer-port-random-on-start\": false"/"peer-port-random-on-start\": true"/ /********/.config/transmission-daemon/settings.json
grep peer-port-random-low /********/.config/transmission-daemon/settings.json
grep peer-port-random-high /********/.config/transmission-daemon/settings.json
# sudo ufw allow proto udp to any port 49152:65535
# sudo ufw allow proto tcp to any port 49152:65535
sudo ufw allow 49152:65535/tcp
sudo ufw allow 49152:65535/udp

show ufw logs
tailf /var/log/kern.log | grep "\[UFW BLOCK\]"
tailf /var/log/syslog | grep "\[UFW BLOCK\]"

enable at startup
# in theory this alone should be enough, but in practice it isn't:
sudo sed -i s/"ENABLED=no"/"ENABLED=yes"/ /etc/ufw/ufw.conf
# so also add this to /etc/rc.local before the "exit 0" line:
if ! ufw enable; then 
	echo "Can't start ufw!"
else
	echo "UFW started!"
fi

# Set to yes to apply rules to support IPv6 (no means only IPv6 on loopback
# accepted). You will need to 'disable' and then 'enable' the firewall for
# the changes to take effect.
sudo sed -i s/"IPV6=yes"/"IPV6=no"/ /etc/default/ufw

Configuring port forwarding (add rules to /etc/ufw/before.rules)
# see also
sudo sed -i s/"#net\/ipv4\/ip_forward"/"net\/ipv4\/ip_forward"/ /etc/ufw/sysctl.conf

turn off ipv6 autoconfiguration
sudo sed -i s/"#net\/ipv6\/conf\/default\/autoconf=0"/"net\/ipv6\/conf\/default\/autoconf=0"/ /etc/ufw/sysctl.conf
sudo sed -i s/"#net\/ipv6\/conf\/all\/autoconf=0"/"net\/ipv6\/conf\/all\/autoconf=0"/ /etc/ufw/sysctl.conf
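Before running sed -i toggles like the ones above on the real /etc/ufw/sysctl.conf, the substitution can be rehearsed on a scratch copy (comma delimiters avoid escaping all the slashes; the file path is a throwaway):

```shell
#!/bin/sh
# Sketch: verify the uncomment-toggle on a scratch copy first.
set -e
conf=/tmp/sysctl.conf.test
printf '%s\n' '#net/ipv6/conf/all/autoconf=0' '#net/ipv6/conf/default/autoconf=0' > "$conf"
sed -i 's,^#\(net/ipv6/conf/[a-z]*/autoconf=0\),\1,' "$conf"
cat "$conf"
```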

configuration status
grep -nr 'ENABLED' /etc/ufw/ufw.conf
grep -nr -P "DEFAULT_FORWARD_POLICY|IPV6=" /etc/default/ufw
grep -nr -P "net\/ipv4\/ip_forward|net\/ipv6\/conf\/default\/autoconf|net\/ipv6\/conf\/all\/autoconf" /etc/ufw/sysctl.conf

deny access to an ip
sudo ufw deny from

limit access to an ip
sudo ufw insert 1 limit from comment 'uri abuser limited to anywhere'
sudo ufw insert 1 limit in proto tcp from to port 80,443,49152:65535 comment 'tcp abuser limited to on 80,443,49152:65535'
sudo ufw insert 1 limit in proto udp from to port 80,443,49152:65535 comment 'udp abuser limited to on 80,443,49152:65535'

Redirect from to ( is on eth0 interface)

# Locally generated packets do not pass through the PREROUTING chain!
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -a | grep 'net.ipv4.ip_forward'
sudo sysctl -w net.ipv4.conf.eth0.route_localnet=1
sudo sysctl -a | grep 'net.ipv4.conf.eth0.route_localnet'

# It seems that you could configure the above in /etc/ufw/sysctl.conf too though I haven't tested it.
/etc/default/ufw should have DEFAULT_FORWARD_POLICY="ACCEPT"

# in /etc/ufw/before.rules before filter section:
# -A = append last
# -I = insert first
# sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
# -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
# sudo iptables -t nat -I PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination
-I PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination

# you'll also need the rule below:
sudo ufw allow to port 3000
# otherwise external users won't be allowed on port 80 and you'll see logs like this:
[UFW BLOCK] IN=eth0 OUT= MAC=3c:11:11:f0:21:11:00:11:0f:09:00:04:08:00 SRC= DST= LEN=60 TOS=0x00 PREC=0x00 TTL=63 ID=54696 DF PROTO=TCP SPT=9194 DPT=3000 WINDOW=26280 RES=0x00 SYN URGP=0

sudo ufw disable && sudo ufw enable

APT (Advanced Package Tool)

search packages by name using REGEX
apt-cache search libapr
apt-cache search 'php.*sql'
apt-cache search apache.\*perl
apt-cache search elvis\|vim
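The escaping in the searches above matters because the shell would otherwise eat `|` and `*`. The same patterns can be exercised with plain grep over a made-up package list (hypothetical names):

```shell
#!/bin/sh
# Sketch: the apt-cache search regexes, run with grep over a fake package list.
set -e
cat > /tmp/pkgs.txt <<'EOF'
php7.0-mysql
libapache2-mod-perl2
vim
elvis-tiny
EOF
grep 'php.*sql' /tmp/pkgs.txt       # matches php7.0-mysql
grep 'apache.*perl' /tmp/pkgs.txt   # matches libapache2-mod-perl2
grep 'elvis\|vim' /tmp/pkgs.txt     # basic-regex alternation, like apt-cache search elvis\|vim
```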

list the contents of a (not-installed) package
apt-file list mysql-client-5.1

showing package information
apt-cache showpkg libconfig-dev

check a package status
dpkg --get-selections | grep apache2

find which package contains a file
use also
apt-file update
apt-file -i search --regex /knemo-modem-transmit-receive.svg$ -> doesn't work for this specific file
apt-file -i search knemo-modem-transmit-receive.svg -> doesn't work for this specific file
apt-file search fusil/fusil-ogg123 -> this works, so the failures above are probably because the repository those files were installed from is now missing
dpkg --search knemo-modem-transmit-receive.svg -> this one does work (when the owning package is installed)
dpkg -S 'doc/*sql'

show package summary & contents
dpkg -l mongodb-compass
dpkg -L kibana
dpkg-query -L kibana

install deb file with automatic dependency resolution
sudo apt-get install ./Downloads/skypeforlinux-64.deb

install all packages you need to compile $PACKAGENAME
apt-get build-dep $PACKAGENAME

list repositories
grep -rh ^deb /etc/apt/sources.list /etc/apt/sources.list.d/

remove repository
sudo add-apt-repository --remove ppa:whatever/ppa

ssh, http and https multiplexing

This is about making the ssh and http(s) servers share the same port (e.g. 80 or 443).
This is really cool :).

# Used sources:

### begin sshttp setup 1
# Below are the preparations for this setup:
# sshttpd listens on 80 for ssh and http connections. It forwards to ssh:1022 and nginx:880.
# These will work:
ssh -p 1022 gigi@		-> access tried from within host
ssh -p 1022 gigi@	-> access tried from within host
ssh -p 80		-> access tried from within host or from internet		-> access tried from within host	-> access tried from within host	-> access tried from within host		-> access tried from's LAN		-> access tried from within host or from internet
# This won't work:
ssh -p 1022	-> access tried from within host or from internet		-> access tried from within host	-> access tried from within host or from internet

# /etc/modules
modprobe nf_conntrack_ipv4
modprobe nf_conntrack
echo "nf_conntrack" >> /etc/modules
echo "nf_conntrack_ipv4" >> /etc/modules

# in /etc/ssh/sshd_config make sure to have:
# Port 1022
# Banner /etc/sshd-banner.txt 
# Makefile uses the content of /etc/sshd-banner.txt, e.g.:
# SSH_BANNER=-DSSH_BANNER=\"adrhc\'s\ SSH\ server\"
cat /etc/sshd-banner.txt
adrhc's SSH server

# configure nf-setup, e.g. for the sshttpd.service below it should have:
# HTTP_PORT=1443
# you can also add this so the netfilter rules aren't re-applied if already present:
if [ "`iptables -t mangle -L | grep -v -P "^ufw-" | grep -P "^DIVERT.+tcp spt:$HTTP_PORT"`" != "" ]; then
	echo "sshttp netfilter rules already applied ..."
	exit 0
fi
echo "applying sshttp netfilter rules ..."
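The idempotency guard above can be exercised without root by grepping a captured `iptables -t mangle -L` listing; the DIVERT line below is a hypothetical sample, and plain grep stands in for the grep -P of the original:

```shell
#!/bin/sh
# Sketch: same guard logic as nf-setup, run against saved iptables output.
set -e
HTTP_PORT=1443
cat > /tmp/mangle.txt <<'EOF'
DIVERT     tcp  --  anywhere  anywhere  tcp spt:1443
EOF
if grep -q "^DIVERT.*tcp spt:$HTTP_PORT" /tmp/mangle.txt; then
	echo "sshttp netfilter rules already applied ..."
else
	echo "applying sshttp netfilter rules ..."
fi
# prints "sshttp netfilter rules already applied ..."
```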

# for nginx or apache take care of address binding not to overlap with sshttpd.service, e.g.:
#    server {
#        listen;
#        listen;
#        # listen; -> used/bound by sshttpd.service below
#        listen;

# install the systemd sshttpd.service defined below:
sudo chown root: /etc/systemd/system/sshttpd.service && sudo chmod 664 /etc/systemd/system/sshttpd.service && sudo systemctl daemon-reload; cp -v $HOME/compile/sshttp/nf-setup $HOME/apps/bin

# systemd sshttpd.service:
# see
Description=SSH/HTTP(S) multiplexer
# for any address binding conflict that occurs between ufw, ssh, nginx and sshttp I want ufw, ssh and nginx to win against sshttp
# sudo iptables -L | grep -v -P "^ufw-" | grep -P "1022|1443|880|DIVERT|DROP|ssh"
# sudo iptables -t mangle -L | grep -v -P "^ufw-" | grep -P "1022|1443|880|DIVERT|DROP|ssh"
ExecStartPre=-/bin/chown nobody: /run/sshttpd
# using 443 for sshttpd:
# ssh -p 443
# wget --no-check-certificate
# ExecStart=/home/gigi/apps/bin/sshttpd -n 4 -S 1022 -H 1443 -L 443 -l -U nobody -R /run/sshttpd
# using 80 for sshttpd:
# ssh -p 80
# wget
ExecStart=/home/gigi/apps/bin/sshttpd -n 4 -S 1022 -H 880 -L 80 -l -U nobody -R /run/sshttpd

### begin sshttp setup 2 (read sshttp setup 1 first)
# Below are the preparations for this setup:
# sshttpd listens on 444 for ssh and https connections. 
# sshttpd forwards to ssh:1022 or stunnel:1443.
# stunnel:1443 forwards to nginx: or ssh: based on sni.
# the original remote client's ip is accessible (only for https but not ssh) with $realip_remote_addr (

# Issue: any redirect (301 or 302) used in the server defined below will set Location header to http instead of https
# - see sshttp setup 3 for a solution 
# - see,269623,269647#msg-269647 (listen proxy_protocol and rewrite redirect scheme) for a better? solution:
if (c->ssl || port == 443) { 
*b->last++ ='s'; 

# Transmission remote GUI won't work, but the web page still will.
# ERROR (while using Transmission remote GUI):
	2016/09/19 15:03:42 [error] 5562#0: *2431 broken header: ">:azX���g��^}q�/���A��Rp(���n3��0�,�(�$��
	����kjih9876�����2�.�*�&���=5��/�+�'�#��	����g@?>3210����EDCB�1�-�)�%���</�A���
	�                                                                                      ��
	" while reading PROXY protocol, client:, server:
	NӾHu|���4|�sf��Q�j$������0�,�(�$��432 broken header: ">:LM2V
	����kjih9876�����2�.�*�&���=5��/�+�'�#��	����g@?>3210����EDCB�1�-�)�%���</�A���
	�                                                                                      ��
	" while reading PROXY protocol, client:, server:

# in systemd sshttpd.service change to:
# router: 443 -> 444 -> also make sure ufw allows 444
# ssh -p 443
# wget --no-check-certificate
ExecStart=/********/apps/bin/sshttpd -n 4 -S 1022 -H 1443 -L 444 -l -U nobody -R /run/sshttpd

# in nginx add this "magic" server:
server {
	listen default_server proxy_protocol;
	include xhttpd_1080_proxy.conf;
	port_in_redirect off;
	# change also fastcgi_params! (see below)
	... your stuff ...

# xhttpd_1080_proxy.conf:
# set_real_ip_from ::1/32; -> doesn't work for me
real_ip_header proxy_protocol;
set $real_internet_https "on";
set $real_internet_port "443";

# in fastcgi_params have (besides your stuff):
# This special fastcgi_params must be used only by "magic server" (!
fastcgi_param HTTPS $real_internet_https if_not_empty;
fastcgi_param SERVER_PORT $real_internet_port if_not_empty;

# stunnel.conf for server side
# sudo killall stunnel; sleep 1; sudo bin/stunnel etc/stunnel/stunnel.conf
pid = /run/
debug = 4
output = /********/apps/log/stunnel.log
options = NO_SSLv2
compression = deflate
cert = /********/apps/etc/nginx/certs/
key = /********/apps/etc/nginx/certs/
accept =
connect =
protocol = proxy
sni =
connect =
renegotiation = no
debug = 5
cert = /********/apps/etc/nginx/certs/
key = /********/apps/etc/nginx/certs/
[www on any]
sni = tls:*
connect =
protocol = proxy

# stunnel.conf for client side
# killall stunnel; sleep 1; stunnel ****stunnel.conf && tailf ****stunnel.log
# ssh -p 1194 gigi@localhost
pid = /****************/temp/
debug = 4
output = /****************/****stunnel.log
options = NO_SSLv2
# Set sTunnel to be in client mode (defaults to server)
client = yes  
# Port to locally connect to
accept =  
# Remote server for sTunnel to connect to
connect =
sni =
verify = 2
CAfile = /****************/****Temp/Zyxel/
# checkHost = certificate's CN field (see "Rejected by CERT at" in stunnel.log for learning CN)
checkHost =
# CAfile = /****************/****Temp/Zyxel/adr-pub.pem
# checkHost = adr

### begin sshttp setup 3 (read sshttp setup 2 first)
# any redirect (301 or 302) used in the server defined above will go to the https server
# Issue: the original remote client's ip is not accessible (https or ssh)

# you'll need the https nginx configuration for your site listening at least on
# you no longer need the "magic" server defined above
# How this works:
# browser/stunnel-client using ssl -> sshttpd:443 -> stunnel[tls to http] using ssl -> stunnel[http to https]

# stunnel.conf for server side
# sudo killall stunnel; sleep 1; sudo bin/stunnel etc/stunnel/stunnel.conf
pid = /run/
debug = 4
output = /********/apps/log/stunnel.log
options = NO_SSLv2
compression = deflate
cert = /********/apps/etc/nginx/certs/
key = /********/apps/etc/nginx/certs/
accept =
connect =
protocol = proxy
sni =
connect =
renegotiation = no
debug = 5
cert = /********/apps/etc/nginx/certs/
key = /********/apps/etc/nginx/certs/
[tls to http]
sni = tls:*
connect =
# connect =
# protocol = proxy
[http to https]
accept =
connect =
client = yes

### begin sslh setup
# Here I use ssh:1021 instead of ssh:1022.
sudo apt-get install sslh

sudo useradd -d /nonexistent -M -s /bin/false sslh
# according to I need:
sudo setcap cap_net_bind_service,cap_net_admin+pe /usr/sbin/sslh-select
sudo getcap -rv /usr/sbin/sslh-select

cat /etc/default/sslh
# with --transparent, connections from the local ip are not accepted:
DAEMON_OPTS="--transparent --timeout 1 --numeric --user sslh --listen --ssh --http --pidfile /var/run/sslh/"
# without --transparent, the local ip is accepted too:
# DAEMON_OPTS="--transparent --timeout 1 --numeric --user sslh --listen --ssh --http --pidfile /var/run/sslh/"

cat /etc/systemd/system/sslh.service.d/custom.conf 
# cp -v $HOME/bin/systemd-services/ $HOME/apps/bin
ExecStart=/usr/sbin/sslh-select --foreground $DAEMON_OPTS

if [ "`sudo iptables -t mangle -L | grep -P "^SSLH\s.+\sspt:1021"`" != "" ]; then
	echo "SSLH netfilter rules already applied ..."
	exit 0
fi
iptables -t mangle -N SSLH
iptables -t mangle -A OUTPUT --protocol tcp --out-interface eth0 --sport 1021 --jump SSLH
iptables -t mangle -A OUTPUT --protocol tcp --out-interface eth0 --sport 80 --jump SSLH
iptables -t mangle -A SSLH --jump MARK --set-mark 0x1
iptables -t mangle -A SSLH --jump ACCEPT
ip rule add fwmark 0x1 lookup 100
ip route add local dev lo table 100

sudo systemctl daemon-reload
sudo systemctl enable sslh
sudo systemctl start sslh