Every system administrator uses SSH daily to connect to remote systems and perform their daily tasks: most of the time these consist of typing statements on the terminal, copying files to and from the remote system, or running remote commands. But SSH is much more than this: it not only provides additional facilities such as agent forwarding, port forwarding and X11 forwarding, but it also has a subsystem mechanism that can be exploited to provide SSH-secured services such as SFTP.

The goal of the "OpenSSH Tutorial - The Ultimate SSH Guide To Understand It" post is to tell you what historically drove us to SSH, describe the protocol suite in detail and provide a thorough tutorial on using all of these facilities.

SSH is a huge topic: thoroughly explaining both the server and the client side would require much more than a single post - actually, even the server side alone would deserve several posts. For this reason this post shows only the minimum server-side settings required to enable the features that are thoroughly described from the client side. In addition, some parts of this post are a little redundant, but it was the only way I found to clearly explain how things work from both the client and the server perspective.

This post is based on Red Hat Enterprise Linux 9, but the same concepts apply to most Linux distributions.

Since I'm publishing on Valentine's Day, and as often happens I am away for work, I want to make a dedication:

To my beloved wife, who supported (and put up with) me and my profession, which has often looked much more like a mission, throughout all these years. Thank you darling.

Remote Connections Overview

It is certainly wise to spend a few words on the remote connection protocols that were most used before SSH superseded them, and to give an overview of the SSH protocols themselves.

Telnet

The Teletype Network Protocol (Telnet) is an old (1969) protocol and its related client/server application.
Originally designed to work over the Network Control Protocol (NCP), it was later ported to TCP using the well-known TCP port 23. It is probably one of the first attempts to develop a terminal that connects to systems remotely. Described in RFC 15 (and extended in RFC 855 later on), it is one of the very first IETF standards.
Nowadays the telnet client is mostly used to connect to other plain-text services (HTTP, SMTP, POP3, IMAP) to check connectivity, and sometimes also to interactively type statements when troubleshooting.
Anyway, mind that it is a very old and insecure (unencrypted) protocol, and its use must be avoided unless strictly necessary.
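
For those legitimate troubleshooting cases, a quick plain-text connectivity check looks like the following sketch (the hostname is just an example):

telnet smtp.example.com 25

if the port is reachable you get the service banner, and you can then type protocol statements (for example SMTP's "EHLO") interactively.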

There do exist extensions to Telnet that provide Transport Layer Security (TLS) encryption and also Simple Authentication and Security Layer (SASL) authentication, but they are not supported by most Telnet implementations. One remarkable use case is the IBM 5250 terminal emulation: since SSH does not implement it, some IBM systems reserve TCP port 992 for secure Telnet and provide a custom TN5250/TN3270 Telnet service.

The R-* Remote utilities

The Berkeley r-commands are a suite of computer programs aimed at enabling login or remote command execution from a UNIX client to a UNIX server. They were developed in 1982 by the Computer Systems Research Group at Berkeley and were incorporated into BSD UNIX.
The best-known r-commands are:

  • rcp (remote copy)
  • rexec (remote execution)
  • rlogin (remote login)
  • rsh (remote shell)
  • rstat and ruptime (remote status and uptime)
  • rwho (remote who)

Since they were based on an early implementation of TCP/IP, they gradually showed their security weaknesses, most notably:

  • they do not require the user to specify a password - they authenticate at host level, checking the host IP address as defined in the "/etc/hosts.equiv" and ".rhosts" configuration files. The only notable exception is rlogin, which asks for the user password if host authentication fails.
  • they communicate over an unencrypted channel - which means that the user password can easily be stolen by the bad guys

For these reasons, in 1995 the Secure SHell (SSH) protocols and applications, initially written by Tatu Ylönen, supplanted them, along with the telnet application.

The Secure SHell (SSH) Protocols And Applications

The Secure SHell (SSH) has been designed as a secure replacement for Telnet and all the BSD "r-commands" - the word "shell" might be misleading: the Secure SHell is not actually a shell like Bash or the C shell: it is an entire protocol suite along with the applications that implement it.

The most used applications and subsystems are:

the sshd daemon

the service that provides SSH protocol

the slogin client

the client aimed at replacing rlogin

the ssh client

the replacement of rsh and telnet

the scp client

the client used to copy files to and from a remote system using the SSH protocol - it replaces rcp

the s_client client

a secure client - strictly speaking shipped with OpenSSL rather than OpenSSH - that can be used to check connectivity or troubleshoot SSL/TLS services such as HTTPS.

the sftp-subsystem

a subsystem of the SSH server that implements SFTP – not to be confused with FTPS!

the sftp client

the Secure FTP client

SSH addresses the following security issues:

  • eavesdropping: the entire conversation is encrypted, so there's no risk of having passwords or data stolen
  • session hijacking: an attacker cannot take over an existing connection, because he would not be able to correctly generate the integrity checksums

The protocols use the registered TCP port 22 and their specifications distinguish between two major versions:

  • SSH1 
  • SSH2 (brings several interesting features, such as the support for validating keys using a certificate authority)

Everything is managed by a single daemon (sshd) that listens for incoming SSH connections.

The SSH security model leverages public/private key pairs: for example it uses asymmetric keys to:

  • verify the host identity
  • set up the encrypted communication channel
  • optionally authenticate the connecting user

The first two features are achieved by using SSHd host keys.

SSH supports the following authentication mechanisms:

rhosts

this is the least secure authentication mechanism, disabled by default, implemented to mimic the behaviour of the BSD r-commands SSH aims to replace

rhosts with RSA authentication of the client host

it is an improved version of rhosts: first the server checks the identity of the client using public key cryptography; if that succeeds, it continues the same way as the simple rhosts authentication mechanism. Mind that this mechanism identifies just the connecting client host, not the user

RSA authentication of the user

the client sends the public key of the user to the server: if it is listed among the authorized ones for the account the user wants to connect as, the server responds with a challenge to let the client prove that it also knows the user's private key

Kerberos v5 authentication and TIS authentication server

authentication is delegated to Kerberos v5 or to a TIS authentication server

Passwords

the last and fallback authentication method of SSH: it simply prompts for the user's password.
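
Server side, most of these mechanisms can be switched on and off with dedicated sshd_config directives - the following is just a sketch of commonly used toggles (the values shown here are examples, not necessarily the Red Hat defaults):

# /etc/ssh/sshd_config - authentication related directives (example values)
PubkeyAuthentication yes
PasswordAuthentication yes
HostbasedAuthentication no
KerberosAuthentication no
GSSAPIAuthentication yes
PermitRootLogin prohibit-password

as always, check with "man sshd_config" which directives your OpenSSH version actually supports, and reload sshd after changing them.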

SSHd - An Overview Of The Server Daemon

Before going on, it is necessary to explain some very basic concepts of the server daemon, otherwise there is a risk of misunderstanding some client concepts.

Host Keys

SSH makes a twofold use of the host key:

  • it enables the client to securely identify the server by checking the fingerprint of the host key or, if working in "rhost" mode, it enables the server to identify the client host
  • it is used to securely exchange the symmetric key that encrypts the traffic during the session

This means that you must always remember and do what's explained in the following warning box:

Whenever you clone a machine – either a VM or bare-metal – you must delete these keys (they are regenerated automatically when the SSH server starts): if you don't, you'll end up with many hosts using the same key pairs, and so having the same fingerprint. It's as if you were hijacking the identity of your own servers – not such a good idea from a security perspective.

An SSH server has more than one host key, so that it can provide the client with a compatible one offering the highest degree of security: on Red Hat systems the host key files are stored beneath the "/etc/ssh" directory.

Let's list them by typing:

ls -1 /etc/ssh/*key*

the output is as follows:

/etc/ssh/ssh_host_ecdsa_key
/etc/ssh/ssh_host_ecdsa_key.pub
/etc/ssh/ssh_host_ed25519_key
/etc/ssh/ssh_host_ed25519_key.pub
/etc/ssh/ssh_host_rsa_key
/etc/ssh/ssh_host_rsa_key.pub

We can easily display the fingerprint of each public key by typing:

find /etc/ssh -name "*key*.pub" -exec ssh-keygen -lf {} \;

on my system the output is as follows:

2048 b6:b8:0e:0f:14:cb:1c:09:49:b6:ef:0d:3b:85:20:cd   (RSA)
256 1c:05:d9:c4:be:54:ed:4e:d3:42:4f:51:2b:4f:a9:22   (ECDSA)
256 b2:0e:f7:8e:8f:27:e8:c2:b5:db:78:90:de:2d:b0:d8   (ED25519)

from the format of the output you can tell the above are MD5 fingerprints - you may prefer to print their SHA256 fingerprints:

ssh-keygen -l -E sha256 -f /etc/ssh/ssh_host_ecdsa_key.pub

mind that the -E option is not available in all OpenSSH versions, so you may miss it.
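
if you want to print the SHA256 fingerprints of all the host keys at once, you can combine the two previous commands - a quick sketch:

find /etc/ssh -name "*key*.pub" -exec ssh-keygen -l -E sha256 -f {} \;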

Authentication Mechanisms

As already explained, SSH supports several authentication mechanisms: do not blindly assume that you can configure all of them - they are available only if they were enabled at compile time. Of course most Linux distributions ship SSH compiled with the most commonly used authentication mechanisms, but it is certainly useful to know how to verify which ones are available.

We can list the shared libraries sshd is linked to and filter the output displaying only the ones related to authentication as follows:

ldd /usr/sbin/sshd |egrep "(pam|ldap|krb5|gssapi|sasl)" | sort

on my system the output is as follows:

	libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f0b13675000)
	libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f0b1338e000)
	libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f0b110e2000)
	libldap-2.4.so.2 => /lib64/libldap-2.4.so.2 (0x00007f0b14557000)
	libpam.so.0 => /lib64/libpam.so.0 (0x00007f0b151c7000)
	libsasl2.so.3 => /lib64/libsasl2.so.3 (0x00007f0b12717000)

As you can see it can directly support several mechanisms.

On Red Hat systems the most important one is PAM: just read the warning in "/etc/ssh/sshd_config" – the sshd configuration file:

sudo grep UsePAM /etc/ssh/sshd_config

the output is as follows:

# WARNING: 'UsePAM no' is not supported in Red Hat Enterprise Linux and may cause several
UsePAM yes

So bear this in mind:

Red Hat is PAM-centric: never ever disable PAM.

PAM - An Overview

Pluggable Authentication Modules, often simply PAM, is a library that provides a set of authentication modules that spare developers from having to implement authentication and authorization mechanisms by themselves. Since the OpenSSH shipped by Red Hat is linked with the PAM library, you can enable and configure PAM modules as you wish.

PAM is a beast of its own: thoroughly explaining it is of course outside the scope of this post.

The path to the PAM configuration file used by sshd on Red Hat systems is "/etc/pam.d/sshd": although it is just one file, mind that a PAM configuration file very often includes the contents of other PAM configuration files.

Let's look at the "include" keyword on the sshd PAM configuration file:

grep include /etc/pam.d/sshd

this is the output on my system:

auth       include      postlogin
account    include      password-auth
password   include      password-auth
session    include      password-auth
session    include      postlogin

As you see, each of the four PAM module interface types ("auth", "account", "password", "session") loads settings from external files.

We can get the list of unique files as follows:

grep include /etc/pam.d/sshd |sed 's/^.*include[ ]*//'|sort -u

the output is as follows:

password-auth
postlogin

Now let's quickly see and discuss how the "password-auth" file is configured:

#%PAM-1.0
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 1000 quiet_success
auth        required      pam_deny.so

account     required      pam_unix.so
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 1000 quiet
account     required      pam_permit.so

password    requisite     pam_pwquality.so try_first_pass local_users_only retry=3 authtok_type=
password    sufficient    pam_unix.so sha512 shadow nullok try_first_pass use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
-session     optional      pam_systemd.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so

Without going too deep here, roughly put, this is what it does, grouped by module interface type:

Authentication module:

  • it sets the variables defined in the "/etc/security/pam_env.conf" file (line 4: "pam_env" module)
  • it authenticates users using "/etc/passwd" and "/etc/shadow" as security realms (line 5: "pam_unix" module). The default behavior of this module is to deny access to users who do not have a password set in these files: here this behavior is altered using the "nullok" option (mind that SSH anyway prevents logins with accounts that have no password set, unless you put "PermitEmptyPasswords yes" into the "/etc/ssh/sshd_config" file). By default this module prompts the user for the password, but here this behavior is altered by "try_first_pass": this means that if you put another module before it in the stack, "pam_unix" first tries to re-use the password that has already been typed for that module, and prompts for a password only if no previous module requested one. Since the entry is marked as "sufficient", if the module succeeds the evaluation of the "auth" module interface immediately returns success, otherwise it proceeds with the next modules in the stack, holding the failed state so that the final outcome is failure anyway
  • the "pam_succeed_if" module (line 6) is used to check whether the uid is greater than or equal to 1000 ("uid >= 1000" option): the default behavior of the module is to log both failure (unmatched condition) and success (matched condition), but here it is altered using the "quiet_success" option, so only users with a uid lower than 1000 are logged. Mind that a uid lower than 1000 means a system user (and the "root" user is among them): if the flow reached this point, it means that the previous module has failed, so the aim of this setting is to log system users that failed the password check
  • the "pam_deny" module (line 7) is then used to return a failure.

Authorization module:

  • it begins by using the "pam_unix" module again (line 9), but this time to check and apply the authorization settings stored in both the "/etc/passwd" and "/etc/shadow" files. Since the module is marked as "required", in case of failure the failed state is set, but the process continues until every module of the account stack is processed, reporting the failed state to the PAM library only at the end of the process
  • the "pam_localuser" module (line 10) makes sure that the user is defined in the "/etc/passwd" file: since it is marked as "sufficient", if the user is local the next modules of the account interface type are skipped
  • if the process has come this far, it means that the user is not a local one: this time "pam_succeed_if" (line 11) succeeds if the uid is lower than 1000 (system account): in this case the module immediately returns success without logging anything
  • the last line (line 12) of the account interface type is reached only by non-local users that are not system accounts: the "pam_permit" module is used to always return success (authentication has already succeeded as the outcome of the "auth" module interface)

Session module:

  • line 18: the "pam_keyinit" module revokes from the kernel keyring any still valid key that could have been set in a previous session
  • line 19: "pam_limits" enforces the policy defined in the "/etc/security/limits.conf" file and in any ".conf" file contained in the "/etc/security/limits.d" directory
  • line 20: "pam_systemd" registers the session in the systemd user manager
  • line 21: here "pam_succeed_if" is used to implement a rule that applies only to cron jobs ("service in crond") to prevent them from failing if the user account is expired
  • line 22: the "pam_unix" module is exploited here for accounting purposes.

I intentionally skipped the settings of the password module interface: they are used when performing a password change, so they are not strictly related to the login process and are off-topic for this post.

SSH Subsystems

A rather undocumented part of SSH is the "subsystem": subsystems are a convenient way to bind an SSH connection to an application - the most famous and used subsystem is certainly the SFTP subsystem, but SSH also enables you to define additional custom subsystems as your needs dictate.
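
A custom subsystem is declared with the same "Subsystem" directive used for SFTP: the following sketch binds a hypothetical wrapper script (the "backup" name and the path are made up for illustration) to a subsystem that a client could then request with "ssh -s ftp.carcano.ch backup":

# /etc/ssh/sshd_config - hypothetical custom subsystem
Subsystem       backup  /usr/local/bin/backup-wrapper.sh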

The SFTP Subsystem

The SFTP subsystem is configured by the following directive of the sshd configuration file:

Subsystem       sftp    /usr/libexec/openssh/sftp-server 

It is enabled by default; mind that recent versions of the scp command line utility rely on it too.

Never disable the SFTP subsystem: SCP relies on it!

You can connect to a server using SFTP using the "sftp" command line utility.

For example, to connect to the "ftp.carcano.ch" host as "joker" user type:

sftp joker@ftp.carcano.ch

now change directory to the root of the filesystem of the "ftp.carcano.ch" host:

sftp> cd /
sftp> ls

the output is as follows:

/afs         /bin         /boot        /dev
/etc         /home        /lib         /lib64
/lost+found  /media       /mnt         /opt
/proc        /root        /run         /sbin
/srv         /sys         /tmp         /usr
/vagrant     /var 

this means that the connected user can browse the whole filesystem of the host, of course in accordance with the permissions of files and directories.

Chrooted SFTP

Although this might be fine for some users, it may lead to unpleasant situations with less "trusted" users: a little mistake while setting permissions is enough to let anybody with valid credentials access information (maybe even sensitive information) they were not supposed to get, or even overwrite files.

For this reason the SFTP default configuration is not suitable when dealing with untrusted users: if you want to grant SFTP access to untrusted users you must set up a chrooted configuration: in such a setup, once users connect they find themselves in a chroot jail they cannot escape from.

The very first thing to do to accomplish this setup is to create the chroot path:

umask 0022
mkdir -p /srv/sftp/home

since the user will see only the contents of the chroot jail, it is wise to copy the timezone file, so that logs show the right time.

mkdir /srv/sftp/etc
cp /usr/share/zoneinfo/Europe/Zurich /srv/sftp/etc/localtime

then we need to define a group of users that will be forced to use the chrooted SFTP subsystem when connecting.

In this example this group is called "sftponly":

groupadd sftponly

the last step is to configure a group-based matching rule that binds members of the "sftponly" group to the sftp subsystem, forces chrooting and disables potentially harmful features such as TCP forwarding, X11 forwarding and the allocation of a TTY (more on these features later on).

This is accomplished by adding the following snippet to the end of the "/etc/ssh/sshd_config" file:

Match Group sftponly
    ChrootDirectory /srv/sftp
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
    PermitTTY no

of course, restart sshd to apply the changes:

systemctl restart sshd

We are ready to have a go: in this example we use the "joker" user, so create it and add it to this group:

useradd -d /home/joker -M -g sftponly -s /bin/false joker
passwd joker

mind that the root directory for the members of the "sftponly" group is not "/", but "/srv/sftp": this means that we must create the "/home/joker" directory (the home of the "joker" user) beneath the "/srv/sftp" directory:

mkdir /srv/sftp/home/joker
chown joker:sftponly /srv/sftp/home/joker

Now let's connect to the system as the "joker" user:

sftp joker@ftp.carcano.ch

once connected, let's put the "/usr/share/doc/openssh/README" file:

sftp> put /usr/share/doc/openssh/README

the output is as follows

Uploading /usr/share/doc/openssh/README to /home/joker/README
README                                      100% 2134     1.5MB/s   00:00  

now that we verified the user can write to its home directory, let's list the contents of the root directory ("/"):

sftp> ls /

the output must be as follows:

etc   home

as expected there are only the "/etc" and "/home" directories, so this is not the actual root of the filesystem of the server.

Disconnect from the host:

sftp> exit

Anyway joker is stubborn, so he wants to try to get a shell:

ssh joker@ftp.carcano.ch

the output must be as follows:

joker@ftp.carcano.ch's password: 
PTY allocation request failed on channel 0
This service allows sftp connections only

as you see, we successfully put joker into a chroot jail: he can spread panic everywhere he wants, ... but only within his jail, managing only his own files.

How Sessions Work Under The Hood

It is certainly worth providing an overview of how sessions work under the hood too.

Before going on it is worth knowing the path to the sshd main configuration file: "/etc/ssh/sshd_config".

As soon as a client attempts a connection, two different sshd processes are spawned. In this example the user connects as "marco" - we can enumerate the sshd processes by typing:

ps ax -o pid,user,ppid,pgid,sid,cmd|head -n 1;ps ax -o pid,user,ppid,pgid,sid,cmd |grep "[s]shd: marco"

the output is as follows:

   PID USER       PPID   PGID    SID CMD
  3675 root       3288   3675   3675 sshd: marco [priv]
  3683 marco      3675   3675   3675 sshd: marco@pts/0

please note they belong to the same session (SID 3675).

The first (PID 3675) is spawned at connect time and actually creates the session used by SSH itself (so it is the session leader). It is used to handle the authentication – it runs as root so as to have access to protected files such as "/etc/shadow", needed to authenticate local users.

The second one (PID 3683), spawned by the first one after a successful login, is connected to the pseudo-tty multiplexer ("/dev/ptmx") - we can easily verify it:

sudo lsof -p 3683 |grep ptmx

the output is as follows:

[sudo] password for marco: 
sshd    3683 marco    9u   CHR                5,2      0t0     1137 /dev/ptmx
sshd    3683 marco   13u   CHR                5,2      0t0     1137 /dev/ptmx
sshd    3683 marco   14u   CHR                5,2      0t0     1137 /dev/ptmx

The second process also sets up things such as the pseudo-tty slave, and creates local UNIX sockets for agent forwarding (we'll see this specific topic later on):

ls -al /dev/pts/0

the output is as follows:

crw--w----. 1 marco tty 136, 0 26 apr 23.12 /dev/pts/0

We can display the process tree by typing:

pstree -n -p |grep 3683

the output is as follows:

           `-sshd(3288)---sshd(3675)---sshd(3683)---bash(3684)-+-pstree(4149)

As you can see this last ssh process (3683) spawns the BASH shell (3684).

Let's have a closer look at this too:

ps j | head -n 2

the output is as follows:

  PPID    PID   PGID    SID TTY       TPGID STAT   UID   TIME COMMAND
  3683   3684   3684   3684 pts/0      4502 Ss    1000   0:00 -bash

as we expected, BASH is actually using pts/0.

SSH Deep Dive

After being acquainted with the server part of the SSH suite, we are ready to have a go with the clients.

Before going on you may find it useful to read Cryptography Quick Guide – Understand Symmetric And Asymmetric Cryptography And How To Use It With Openssl: that post provides a good explanation of cryptography, including a detailed explanation of the Diffie-Hellman algorithm.

Encrypted Connection Setup

Connection Initialization

  • The client initializes the connection, sending the server a message with the SSH protocol version it is going to use and the software name and version of the client.
  • The server replies, sending a message with the SSH protocol version it is going to use and the software name and version of the server.

Cryptographic Algorithms Negotiation

The client initializes the Key Exchange (KEX INIT), sending an ordered-by-preference list of the algorithms it supports. The algorithm types are:

  • Host key algorithms: asymmetric public key algorithms used for digital signatures or digital certificates. Examples are RSA, Elliptic Curve, ...
  • Symmetric encryption algorithms: they are used to encrypt messages. Examples are ChaCha20-Poly1305, AES-256-GCM, ...
  • Message Authentication Code (MAC) algorithms: examples are HMAC-SHA2-256, UMAC-64-ETM
  • Compression algorithms: Zlib and such

The server replies with its own ordered-by-preference lists of cryptographic algorithms.

If there is a match for each of the lists, the connection continues, otherwise it is hung up.
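
You can list which algorithms your client supports for each of these types with the "-Q" option, and you can watch the actual negotiation by raising the verbosity of a connection - a quick sketch (the grep pattern matches the debug lines printed by recent OpenSSH versions, so the exact output may vary):

ssh -Q kex
ssh -Q cipher
ssh -Q mac
ssh -vv jump-ci-up2a001.mgmt.carcano.local 2>&1 | grep "kex:"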

Key Exchange (KEX)

The purpose of this phase is to enable both parties to generate the secret symmetric keys that will be used to encrypt the connection, and to exchange them over a secure channel encrypted with a shared ephemeral secret symmetric key generated on the fly by each of the parties on their own - it's not magic: it's math, ... and Diffie-Hellman.

In this phase both client and server use the agreed Diffie-Hellman algorithm (for example Elliptic Curve Diffie-Hellman - ECDH):

  • the client generates an ephemeral asymmetric key-pair and sends the public part to the server
  • the server
    • generates an ephemeral asymmetric key-pair
    • uses the received ephemeral public key and the freshly generated ephemeral asymmetric key-pair to derive the shared ephemeral symmetric secret used only during the Key Exchange
    • replies to the client, sending its own public key (the host key), the ephemeral public key it has just generated on its side, and a key exchange hash computed over several values. Please mind that the shared ephemeral symmetric secret is among the values used to compute the hash. The hash is then signed with the host private key
  • the client, once it receives the message, must verify the received hash: it already has every value necessary for the computation except the shared ephemeral symmetric secret generated server side - here Diffie-Hellman comes into play again, enabling the client to derive the same secret that was generated server side. Now the client has everything it needs to compute the hash on its own and check whether it matches the hash received from the server. If they don't match, the connection is hung up.
  • The last step is the check of the host key, which is described in the next paragraph.

From this point onward, each party generates three symmetric keys:

  • one key for the encryption
  • one key used as the initialization vector (IV)
  • one key for checking message integrity

and exchanges them using the shared ephemeral symmetric secret previously generated.

Mind that these keys have an expiration, so they are periodically regenerated and exchanged.

From this point onward, the connection is encrypted.
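
Client side you can tune how often this re-keying happens with the "RekeyLimit" option (it also exists as an sshd_config directive): the following sketch, added to "~/.ssh/config", asks for a re-key after 1 GB of transferred data or one hour, whichever comes first:

Host *
    RekeyLimit 1G 1h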

Host Keys Check

At the end of the connection setup, by default each SSH client of the suite (ssh, sftp, scp, ...) verifies the identity of the SSH server: if the server provides a certificate, the client can verify it using the PKI framework, checking whether it has been signed by a trusted Certification Authority; otherwise it checks a database. The last resort database is the "~/.ssh/known_hosts" file: its content is a list of trusted FQDN (or IP) / fingerprint pairs.

The client searches the file using the FQDN or IP as the lookup key: if the entry of the remote server is found, then the fingerprint provided by the server is checked to verify whether it matches the one in the file.

A special case is of course the first time you connect to a remote server: its FQDN or IP is obviously not in the "~/.ssh/known_hosts" file yet, so the client prompts the user, asking whether to trust the fingerprint provided by the server:

The authenticity of host 'jump-ci-up2a001.mgmt.carcano.local (192.168.254.253)' can't be established.
ECDSA key fingerprint is MD5:1c:05:d9:c4:be:54:ed:4e:d3:42:4f:51:2b:4f:a9:22.
Are you sure you want to continue connecting (yes/no)?

If the user types "yes", then the fingerprint is considered trusted, and a record with the FQDN (or IP) along with the fingerprint is added to the "~/.ssh/known_hosts" text file. From this point on, subsequent connections to that server won't bother the user about verifying its identity again.
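
You can check whether a given host is already present in your "~/.ssh/known_hosts" file, and display its stored entry, with the "-F" option of ssh-keygen - for example:

ssh-keygen -F jump-ci-up2a001.mgmt.carcano.local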

As you certainly guessed, the first connection is the most critical moment, since you are deciding whether to mark a fingerprint as trusted: at this moment you are vulnerable to a "man in the middle" attack. For this reason, when providing access information to users, besides the usual information such as the server FQDN, security best practices require providing also the list of fingerprints, so that the connecting user can know whether the fingerprint presented by the server is the right one.

If you trust your DNS servers (for example if you are working inside a local environment isolated from the Internet with only local DNS servers, or even better if you configured DNSSEC), you can avoid manually setting the trust at each first connection by using the following option:

VerifyHostKeyDNS=yes

When the option is set to "yes", the client looks up the fingerprints in the SSHFP DNS records of the server you are connecting to.

You can add this option to the "~/.ssh/config" file, so to have it always applied without having to provide it each time in the command line.
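
The SSHFP records themselves are generated server side with the "-r" option of ssh-keygen and must then be published in the DNS zone of the server - a minimal sketch, assuming you run it on the SSH server itself:

ssh-keygen -r jump-ci-up2a001.mgmt.carcano.local -f /etc/ssh/ssh_host_ed25519_key.pub

run it once per host key file: each run prints the SSHFP resource records for that key, ready to be pasted into the zone file.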

The Clients

SSH Client

The most used client of the SSH suite is certainly the "ssh" command line tool: it connects to a remote host using the SSH protocol. To launch it, just type "ssh" followed by the name of the host you want to connect to.

For example, to connect to the "jump-ci-up2a001.mgmt.carcano.local" host:

ssh jump-ci-up2a001.mgmt.carcano.local
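
The same client can also run a single remote command and exit, or connect as a different user on a non-default port - a couple of quick sketches (the user and the port are only examples):

ssh jump-ci-up2a001.mgmt.carcano.local uptime
ssh -p 2222 mcarcano@jump-ci-up2a001.mgmt.carcano.local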

SFTP - Secure FTP

As we saw, SFTP emulates the FTP protocol over SSH. Besides the confidentiality layer provided by SSH, it is much handier to set up and manage compared to FTPS (FTP over SSL): FTPS works exactly the same way as plain FTP - this means that, unless working in ACTIVE mode, when the FTPS server is behind a firewall you either need a firewall with an FTPS helper, or you need to allow incoming connections to the whole range of ports used by the data channel, dynamically negotiated between the server and the client each time. On the contrary, SFTP uses the same channel used by SSH, so on your firewall you just need to open port TCP/22.

SFTP implements the same set of commands used by FTP, so listing them here would be off topic.

To launch it, just type "sftp" followed by the name of the host you want to connect to.

For example:

sftp joker@ftp.carcano.ch
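
sftp can also be used non-interactively, reading its commands from a batch file (or from standard input with "-b -"); batch mode requires a non-interactive authentication method, such as the key based one described later on. A minimal sketch:

echo "get /home/joker/README /tmp/README" | sftp -b - joker@ftp.carcano.ch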

An SFTP server can be regular or chrooted: you can tell which by listing the contents of the root directory ("/") of the remote host:

sftp> ls /

if the output is as follows:

/afs         /bin         /boot        /dev
/etc         /home        /lib         /lib64
/lost+found  /media       /mnt         /opt
/proc        /root        /run         /sbin
/srv         /sys         /tmp         /usr
/vagrant     /var 

this means that the connected user can browse the whole filesystem of the host, and so it is a "regular" SFTP.

If instead the output is a subset of the previous one, such as:

etc   home

it means that the user is confined within a chroot jail, and so can see only a subset of the remote filesystem - this is the most secure kind of setup.

Secure Copy - SCP

As we saw, the SSH suite has been specifically developed to supersede dangerous utilities such as "rcp": the SSH equivalent command is "scp".

If you can connect to a server using ssh, but you get disconnected when using scp, it is very likely that the system administrator disabled the SFTP subsystem: recent versions of scp indeed rely on it.

The syntax of the command is really simple:

scp source destination

within the source or destination path you can also specify the FQDN of remote hosts as needed.

For example, to copy the local file "/usr/share/doc/openssh/README" to the "/tmp" of the "jump-ci-up2a001.mgmt.carcano.local" remote host:

scp /usr/share/doc/openssh/README jump-ci-up2a001.mgmt.carcano.local:/tmp

you can of course do the opposite, copying the file "/usr/share/doc/openssh/README" from the "jump-ci-up2a001.mgmt.carcano.local" remote host to the "/tmp" directory of the local system:

scp jump-ci-up2a001.mgmt.carcano.local:/usr/share/doc/openssh/README /tmp

It is worth mentioning that some scp implementations (though not the OpenSSH one) provide a "-n" option that executes a dry-run, showing what would be done without actually copying anything.

Mind that if necessary - for example when scripting - you can get rid of the scp statistics printed while running by providing the "-q" option.

You can also recurse across sub-directories by providing the "-r" option.

Be very careful when copying directory trees using the "-r" option:  scp will copy links (both symlinks and hard links) as files and even worse circular directory links cause infinite loops. In such a scenario it is more convenient to rely on piping over ssh using a command like the following one:

ssh mcarcano@jump-ci-up2a001.mgmt.carcano.local 'tar zcf - foodirectory' > foodirectory.tar.gz

other useful command line switches are:

  • "-p", which preserves modification times, access times and modes of the copied files
  • "-u", which removes the source files after copying - mind that this switch, like "-n", is not part of OpenSSH's scp, only of some other implementations

if your scp implementation supports "-u", you can create a "smv" alias as follows:

alias smv="scp -u"

as for globbing, remember that expansion is attempted first by the local shell, and only then by the remote side.

This means that this won't work:

scp jump-ci-up2a001.mgmt.carcano.local:/home/marco/*.txt /tmp

unless you escape it as follows:

scp jump-ci-up2a001.mgmt.carcano.local:/home/marco/\*.txt /tmp

Advanced Topics

Key based user authentication

As previously mentioned, public key authentication is one of the mechanisms supported by OpenSSH: this authentication mechanism works pretty similarly to the host key checking algorithm, adding the steps necessary to check whether the supplied public key is actually authorized to log in as the specified user.

The steps are as follows:

  1. the client sends the server the key ID of the public key that is going to be used for authenticating
  2. the server looks up the authorized_keys file of the account the user wants to connect as ("~/.ssh/authorized_keys"), searching for a public key with a matching key ID.
    If it is not found, the login is denied and the connection gets closed by the server; otherwise the server generates a random value, encrypts it using the public key and sends the encrypted message to the client.
  3. the client is able to decrypt the received message only if it actually has the private key related to the public key the server used for encryption: the client combines the decrypted value with the shared session key that is in use to encrypt the communication, and calculates the MD5 hash of the result: this MD5 hash is then sent back to the server
  4. since the server already has both the shared session key and the original value it previously sent encrypted to the client, it performs the same operation: it combines them and calculates the MD5 hash of the result. When it receives the MD5 hash calculated by the client, the two MD5 hashes must match: if they don't, the login is denied and the connection is closed, otherwise the login succeeds.

If the supplied public key is not among the list of the authorized ones, the authentication process continues using the mechanism specified by PAM stack.

Mind that the authorized_keys file is just the simplest and so the most broadly used mechanism to provide the list of public keys that can be used to log in as the specified user. Anyway, despite its simplicity, it is cumbersome to maintain. An example of a corporate level solution may be to rely on an LDAP server: in such a setup the SSH server looks up the public key of the logging-in user on the LDAP server and automatically fetches it; it is then up to the RBAC rules (which can be retrieved from the LDAP server too) to restrict or grant the user access to the server. FreeIPA (the upstream project of Red Hat Identity Management) works exactly this way: it provides facilities that enable it to centrally authorize SSH access using public keys with per-host and per-user granularity. For more information on this specific topic, please refer to the FreeIPA documentation.
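
On Red Hat systems with SSSD, the usual way to plug such a central lookup into sshd is the "AuthorizedKeysCommand" directive - a sketch, assuming SSSD is already enrolled with FreeIPA or an LDAP server:

# /etc/ssh/sshd_config - fetch public keys through SSSD
AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
AuthorizedKeysCommandUser nobody

sshd runs the command as the specified user and treats its output as if it were the content of the authorized_keys file.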

Setting up public key authentication for a user is very simple - first, generate the key pair you want to use:

ssh-keygen -b 2048 -C "Marco Carcano's personal keypair"

when prompted, type the passphrase you want to use to encrypt the private key.
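
If all the servers you deal with run a reasonably recent OpenSSH, you may prefer a more modern Ed25519 key pair instead (the rest of this example keeps using the RSA one):

ssh-keygen -t ed25519 -C "Marco Carcano's personal keypair"

the resulting files are "~/.ssh/id_ed25519" and "~/.ssh/id_ed25519.pub".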

The outcome is the creation of two files - let's list them:

ls -l ~/.ssh

the output is as follows:

total 12
-rw-------. 1 marco marco 1675 Apr 27 18:13 id_rsa
-rw-r--r--. 1 marco marco  402 Apr 27 18:13 id_rsa.pub

These files are:

~/.ssh/id_rsa

the private key

~/.ssh/id_rsa.pub

the public key

Never ever create unencrypted private keys: mind that anybody who can get a copy of the private key can impersonate its owner, so leaving the key unencrypted is a huge security risk!
Private means private – you must never ever give this key to anybody.

ssh-keygen has a lot of options – I strongly suggest you have a quick look at them.

Now that we have a key pair we must authorize the public key - in this example we rely on the "~/.ssh/authorized_keys", so we need to add the public key to that file on the remote SSH server.

The easiest way is to exploit the ssh-copy-id command line utility as follows:

ssh-copy-id mcarcano@jump-ci-up2a001.mgmt.carcano.local

in order to add the key to the file, we must log in first, so we have to type the password of the user (not the passphrase of the private key): if the authentication succeeds, the key gets added to the authorized keys list of that user, and we are now authorized to log in as the "mcarcano" user without having to supply its password.

The default setting specifies "~/.ssh/authorized_keys" as the path of the authorized keys file. The default value has been set this way to enable users to autonomously alter their own authorized_keys file, but in some scenarios this is not a desirable option. In that case, you can change it to whatever you need by setting the "AuthorizedKeysFile" directive in the "/etc/ssh/sshd_config" file.
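
For example, a sketch of a centrally managed, root-owned location (the "%u" token is expanded by sshd to the username of the connecting user - the path itself is just an example):

# /etc/ssh/sshd_config - authorized keys kept out of the users' homes
AuthorizedKeysFile /etc/ssh/authorized_keys/%u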

When dealing with authorized_keys files managed by the users themselves, it is mandatory to set the permissions of the containing directory as follows:

ls -dl ~/.ssh

the output is:

drwx------. 2 mcarcano mcarcano 25 24 apr 18.54 /home/mcarcano/.ssh

it must be readable only by the owner.

The same care applies to the "authorized_keys" file: it must not be writable by anybody but its owner:

ls -al ~/.ssh/authorized_keys

the output is as follows:

-rw-r--r--. 1 mcarcano mcarcano 176 24 apr 18.54 /home/mcarcano/.ssh/authorized_keys
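
If the permissions ended up looser than that (for example group- or world-writable), a quick way to fix them is the following sketch:

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys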

Mind that the SELinux context matters too. If you are sure that you set everything properly, but SSH still refuses the key used by your client, a quick command that often fixes things is restorecon - for example:

restorecon -R /home/mcarcano/.ssh

In some of the early releases of Red Hat Enterprise Linux 6 you must issue this command right after creating the ".ssh" directory and/or "authorized_keys" file.

Now that we have added our public key to the authorized keys list, we can connect to the remote server using the related private key (the default path of the file is "~/.ssh/id_rsa", but you can of course specify a different one using the "-i" option):

ssh mcarcano@jump-ci-up2a001.mgmt.carcano.local

The prompt we get this time is for the passphrase necessary to unlock (decrypt) the private key we are about to use to decrypt the challenge the server sent us.

ssh-agent

When working interactively, the password entering step is really annoying, especially if you have to continuously jump to different servers.

Luckily, you can rely on the SSH agent daemon to automatically provide the unlocked key when necessary, sparing you from manually typing the unlocking password each time.

There are actually two ways of launching it: with a sub-shell or within the current shell – honestly, I don't like the sub-shell method, since if the agent crashes or gets killed your shell goes away with it. That's why I'm showing you only the single-shell way.

Just launch the "ssh-agent" evaluating its output, so that the shell sets the "SSH_AUTH_SOCK" and "SSH_AGENT_PID environment variables:

eval $(ssh-agent)

When launched this way, the "ssh-agent" survives when you close your shell: the pro is that you can easily reconnect to it if for any reason you get disconnected; the con is that, when you are done, you must remember to kill it by typing "ssh-agent -k" before exiting: it guesses the process to kill from the value of the "SSH_AGENT_PID" variable.

The "SSH_AUTH_SOCK" variable contains the path to the UNIX domain socket to connect to the agent: the ssh client reads this variable to know if there's a running agent and how to forward requests to it.

Whenever the SSH server requests key-based authentication, the SSH client forwards the request to the ssh-agent process through a local UNIX socket. Doing things this way, the client never sees the key – this kind of design quite recalls the one used by the PKCS#11 token framework, where keys are only seen and directly managed by the hardware token; ssh-agent supports PKCS#11 devices, by the way.

Let's see the current value of the "SSH_AUTH_SOCK" variable:

echo ${SSH_AUTH_SOCK}

As I told you, if for any reason you get disconnected, once reconnected you are not required to launch "ssh-agent" again: you can simply set the SSH_AUTH_SOCK variable to the same value as before.

So, for example:

export SSH_AUTH_SOCK=/tmp/ssh-CXe4JvAG7Ee5/agent.3595

Note that socket security is managed by the operating system itself:

ls -l /tmp/ssh-CXe4JvAG7Ee5/agent.3595

the output is as follows:

srw-------. 1 mcarcano mcarcano 0 Apr 28 18:27 /tmp/ssh-CXe4JvAG7Ee5/agent.3595

Mind that "ssh-agent" can handle more than just one key - key management is performed by "ssh-add" command line utility:

ssh-add /path/to/private/key

add the key to the agent

ssh-add -l

list managed keys

ssh-add -d /path/to/private/key

remove the key from the agent

ssh-add -x

lock the agent

ssh-add -X

unlock the agent

now let's add a key to the agent and set it to expire after one hour (-t 3600):

ssh-add /home/mcarcano/.ssh/id_rsa -t 3600

of course it prompts you for the decrypting passphrase.

Mind that ssh-agent keeps your decrypted keys in memory – a good piece of advice with ssh-add is to supply the -t parameter too, or, even more conveniently, to specify the -t parameter when launching ssh-agent: by doing so, every key you add gets this expiration timeout.

Now try to connect to the remote server: you should now be able to log in without being asked for passwords:

ssh mcarcano@jump-ci-up2a001.mgmt.carcano.local

A very handy feature of ssh-agent is that it can even be forwarded. This means that the remote server you are connected to creates a local UNIX socket bound, through the SSH connection, to the ssh-agent of your workstation. If you launch an ssh client from the remote server towards other machines, key-based authentication requests will be piped across that local UNIX socket to the sshd service of the remote server, and then back to the local ssh-agent on your workstation.

This feature is managed on the ssh server by the AllowAgentForwarding parameter (in the /etc/ssh/sshd_config file) and is on by default.

Forwarding the ssh-agent is really handy when connections to hosts are granted only when coming from a trusted bastion host: by requesting agent forwarding when connecting to the bastion host, you don't have to bother with key requests when connecting from the bastion to the other hosts.

For example:

eval $(ssh-agent)

then add the key to the agent and connect to the bastion, requesting agent forwarding with the -A option:

ssh-add /home/mcarcano/.ssh/id_rsa -t 3600
ssh -A mcarcano@jump-ci-up2a001.mgmt.carcano.local

let's see the PID of the sshd instance we are using:

ps x |grep [s]shd

the output is as follows:

  4248 ?        S      0:00 sshd: marco@pts/0

now let's see the UNIX socket used to forward agent requests:

sudo lsof -p 4248 |grep agent.4248

the output is as follows:

sshd    4248 marco   14u  unix 0x00000000c6569d08      0t0    72491 /tmp/ssh-46EAaXGHLX/agent.4248 type=STREAM

well, I think by now you have got how it works.

SSH Chaining

Agent forwarding, along with SSH chaining, is dramatically useful when working in security-conscious environments where compliance and firewall rules grant SSH connections to servers only from a bastion host.
This means that (as the best practices say) you should SSH from your workstation to the bastion host, and then SSH from there to each host you want to administer.

Forwarding the agent is risky: if the bastion host gets compromised there's a concrete risk of having your keys abused – there are documented cases of this. Be careful if you really need to use it.
A better approach is chaining the SSH connections by using ProxyCommand or ProxyJump: they do not need agent forwarding on the bastion host, so setting "AllowAgentForwarding no" in /etc/ssh/sshd_config on the bastion host is certainly wise. ProxyJump relies on SSH tunneling.

By using the proxy jump option (available since OpenSSH 7.3) you can connect to a host passing through the bastion host directly from the workstation.

For example:

ssh -J jump-ci-up2a001.mgmt.carcano.local www-ci-ut1a001.test.carcano.local

since in such an environment you should always proxy through jump-ci-up2a001.mgmt.carcano.local, you can configure the ssh client to automatically add the ProxyJump option: simply add the following entries to your ".ssh/config" file:

Host *.test.carcano.local
    ProxyJump jump-ci-up2a001.mgmt.carcano.local

now you can simply type:

ssh www-ci-ut1a001.test.carcano.local

and the ProxyJump option is automatically added, so your connection passes through the bastion host.
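
Mind that ProxyJump also accepts a comma-separated list of hosts, in case you have to traverse more than one bastion - a sketch with made-up host names:

ssh -J bastion1.mgmt.carcano.local,bastion2.mgmt.carcano.local www-ci-ut1a001.test.carcano.local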

If you are working with older SSH versions, you can fall-back to ProxyCommand – you can add the following snippet to .ssh/config and achieve the same outcome:

Host *.test.carcano.local
    ProxyCommand ssh -W %h:%p jump-ci-up2a001.mgmt.carcano.local

If your SSH version is so old that it does not have the -W option, you can still use ProxyCommand, but you need to have netcat installed on the bastion host (and this will probably disappoint your security team).

However, if you cannot avoid this, install the netcat utility (the "nc" command) on the bastion host:

yum install -y nc

then add the following entry to your ".ssh/config" file:

Host  jump-ci-up2a001.mgmt.carcano.local
    ForwardAgent yes

Host *.test.carcano.local
    ProxyCommand ssh -q jump-ci-up2a001.mgmt.carcano.local nc %h %p

I showed it only for the sake of completeness, but you should really avoid using this last configuration.

SSH Tunneling

The agent is not the only thing SSH can forward: it can forward connections too. This is achieved by using the SSH connection as a tunnel.

Tunnels are connections with two endpoints: packets that enter the SSH tunnel are encapsulated inside SSH and transported to the other end of the tunnel, which is either the host running the SSH client software or the SSH server, depending on the direction of the packets.

When packets reach the end of the tunnel, they get forwarded to the real destination.

There are two kinds of forwarding:

Local ( -L option)

the tunnel goes from local (the SSH client) to the SSH server, that forwards (proxies) traffic to the final destination, either onto the SSH server host itself, or onto a remote host that must be reachable by the SSH server

Remote ( -R option)

the tunnel goes from remote (the SSH server) to the SSH client, that forwards (proxies) traffic to the final destination, either onto the SSH client itself, or onto a remote host that must be reachable by the SSH client

Mind that only privileged users (root) can open ports below 1024: if the user requests a forwarding that listens on a TCP port below 1024 without having the privilege to create such sockets, the tunneling request will be denied.

Let's see some use cases that can be sorted out with SSH tunnels.

Add A Confidentiality Layer

In this example the "mail.legacyisp.ch" IMAP server is very outdated and old, and does not provide an SSL protected IMAP endpoint: we can add a confidentiality layer and use it to securely transport the traffic from our machine to the SSH endpoint.

Just type a statement like the following one:

ssh -L 10143:localhost:143 mail.legacyisp.ch

then configure the mail client on our workstation to use localhost port 10143 as the IMAP server.

Connect To A Private Host

Another very typical use case is exploiting a remote Internet-facing SSH service as if it were an SSL VPN concentrator, having it connect us to a device that is hosted on a private network.

In this example, from the local LAN we want to connect to the "lp001.printers.local" IPP printer that is on the LAN of the remote office in Lugano. In that office, there's an Internet-facing SSH service reachable at the "lugano.carcano.ch" Internet host.

The command to type is:

ssh -L 10631:lp001.printers.local:631 lugano.carcano.ch

then you can configure the IPP printer on your workstation, specifying "localhost" port 10631 as the printer's host address.

In this example we set up a tunnel from the local machine to a device in a remote office, ... but since we are bold we can also do more. For example, we can connect to a remote RDP server ("rdp001.lugano.local") in the office in Lugano, and at the same time enable it to access a web server ("wiki.zurich.local") in our local office in Zurich.

Just type the following statement:

ssh -L 3390:rdp001.lugano.local:3389 -R 8080:wiki.zurich.local:80 lugano.carcano.ch

By opening an RDP connection to "localhost:3390", it is now possible to reach the "rdp001.lugano.local" RDP service, since the SSH tunnel forwards the connection to it. Once logged into the RDP server, we must add an entry to its hosts file so that "wiki.zurich.local" resolves to the IP address that the SSH server has on the LAN of the Lugano office.

After doing this, we can simply launch a browser and open the "http://wiki.zurich.local:8080" URL.

It is a little bit crazy, isn't it? Well, if you understood this example too, rest assured you got everything about SSH tunnels!

X Window Forwarding

You can of course use an SSH tunnel to forward X Window traffic: although this can be achieved with any regular SSH tunnel, SSH provides a very handy feature, called X Forwarding, that besides automatically creating an SSH tunnel for the X traffic, also takes care of exporting the DISPLAY variable pointing to the tunnel's edge on the remote server, and of generating a personal XAuthority credentials file.
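
In practice you just add the "-X" option (or "-Y" for trusted forwarding) to the ssh command line; once logged in, the DISPLAY variable points to the remote end of the tunnel - a quick sketch:

ssh -X mcarcano@jump-ci-up2a001.mgmt.carcano.local
echo ${DISPLAY}

the DISPLAY value typically looks like "localhost:10.0"; server side, X forwarding must be permitted by the "X11Forwarding" sshd_config directive.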

Understanding the details of X Forwarding requires being skilled in the X Window System's XAuth authentication, which of course requires X Window skills. Since I don't want to give a minimalistic or trivial explanation, please read X Window Tutorial - X Display Server HowTo And Cheatsheet: besides explaining how to use SSH's X forwarding, after a quick but very detailed overview of the X Window System, the post shows everything in action, starting from a very minimalistic X Display Server and experimenting with Host Auth and XAuthority, gradually adding components: first the Motif Window Manager, then building and installing the UNIX Common Desktop Environment and the Xfce Desktop Environment. A must for every Linux professional!

Tips And Tricks

These are a few tips and tricks of fancy SSH use. If you have hints to add, ... just tell me and I'll be happy to add them here.

Piping

The SSH client supports piping a stream to it: this means that you can exploit an SSH connection to have the stream piped to the other side of the connection, and of course redirect it to a file. This is very useful when dealing with database engines for example, since it enables you to dump and back up to a server different from the one running the database engine, without needing to install special software on it.

Dump A Database To A Remote Host

In the following example, we use the "mysqldump" command line utility to dump the "fancydatabase" database: instead of saving it into a file on the local host, the stream is piped first to the gzip command line utility to be compressed, and then to the ssh command line utility that connects to the "backup.carcano.local" server: once connected, the stream is read using the cat command line utility and redirected to the "/srv/backups/fancydatabase_dump.sql.gz" file.

mysqldump -h localhost --opt fancydatabase -uroot -p | gzip -c | ssh backup.carcano.local 'cat > /srv/backups/fancydatabase_dump.sql.gz'

another example, this time using innobackupex to take a full backup of a MySQL Galera node:

innobackupex --defaults-extra-file=my.cnf --galera-info --stream=tar ./ | pigz | ssh galera@backup.carcano.local "tar xizvf - -C /srv/backups/galera/full"

as you can see, the backup stream is generated as a tar archive, piped to pigz to be compressed and then piped to the backup node using SSH: on that node, the tar command line utility is used to restore the piped stream into the "/srv/backups/galera/full" directory.

Otherwise, if you already have a full backup and you want to do an incremental backup:

innobackupex --defaults-extra-file=my.cnf --incremental --galera-info --incremental-basedir=/temp/user --stream=xbstream ./ 2>/dev/null | ssh galera@backup.carcano.local "xbstream -x -C /srv/backups/galera/incremental"

this time the backup stream is generated as xbstream and piped to the backup node using SSH: on that node, the xbstream command line utility is used to restore the piped stream into the "/srv/backups/galera/incremental" directory.

Rsync to a non-SSH enabled account

Let's make a more complex example: you need to rsync to a remote server as a user that is not authorized to connect using SSH. If you can connect as a different user, and this user has been granted sudo, you can work around this as follows:

/usr/bin/rsync -e "/usr/bin/sudo -u ${DESTUSER} ssh" --log-file ${LOGFILE} -a ${SRCDIR}/  ${DESTSERVER}:${DESTDIR}

Live Capture Of Traffic On A Remote Host using Wireshark

A very nice and handy trick is launching the tcpdump command line utility on a remote host and piping the stream of the captured traffic back to the client host, where Wireshark is launched to immediately analyze it.

ssh fancyhost.carcano.local sudo tcpdump -U -s0 'tcp port 3000' -i any -w - | wireshark -k -i -

Mind that the above command requires that the user used to connect has been granted passwordless sudo.

Caveats And Pitfalls

Here I collected a very few caveats and pitfalls that can be tricky for newbies.

Remote Host Fingerprint Mismatch

If you know that you have already connected to an SSH server and you get a message like this, you should stop connecting and investigate:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:NiWALx9MuS3xLtWYiG7g1UWZcyRK4/EcEuRcaqYp1CE.
Please contact your system administrator.

It is quite self-explanatory: the fingerprint of the SSH server is different from the one expected.

Take into account that, besides someone doing nasty things, this can also happen for some valid reasons, such as:

  • the remote server software has been replaced by another one
  • the remote server has been restored from a template
  • outdated fingerprint on SSHFP DNS records
  • ...
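
Once you have verified out-of-band that the change is legitimate, you can drop the stale entry from your known_hosts file and accept the new fingerprint at the next connection - a quick sketch:

ssh-keygen -R jump-ci-up2a001.mgmt.carcano.local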

TCP Wrapper

On several distributions (and on older Red Hat Enterprise Linux releases) OpenSSH is linked to the TCP wrapper library: you can verify whether yours is by typing:

ldd /usr/sbin/sshd |grep libwrap

if it is, you get something like the following:

libwrap.so.0 => /lib64/libwrap.so.0 (0x00007fbee6301000)

Some administrators configure TCP wrappers to permit SSH connections only from specific hosts: this means that your connection may be rejected even if you added all the firewall exceptions necessary to get there. The typical symptom is a connection immediately reset before prompting for the password. In this scenario, it is very likely that the culprit is the TCP wrapper, so you must configure it, adding your client to the list of hosts allowed to connect to SSH.
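
TCP wrappers are configured through the "/etc/hosts.allow" and "/etc/hosts.deny" files - the following is just a sketch (the network is an example) that denies SSH to everybody except a trusted subnet:

# /etc/hosts.allow
sshd: 192.168.254.0/255.255.255.0

# /etc/hosts.deny
sshd: ALL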

Firewall With Stateful SSH Helpers And Host key Checking

But you know that things are not always as they appear: I want to share a really unpleasant problem I had in the past that, despite seeming related to TCP wrappers, was instead related to mismatching host fingerprints: from my perspective the symptom was simply getting the connection reset by a remote SSH server that had previously worked (it was not managed by us, it was one of our partner's remote servers). The culprit couldn't be the TCP wrapper, since they had configured it to allow SSH access from everywhere.

The only hint I had was that the partner had replaced the SSH server software with something new. The problem was one of our corporate firewalls, which was configured to proxy SSH connections and did not recognise the new host keys of the partner's SSH server: because of the mismatch, it was dropping the connections.

Some firewalls perform stateful inspection even on SSH connections – this means that they transparently (that is, in a hidden way - what a beautiful example of doublethink, Orwell docet) proxy connections. So they are actually performing a man-in-the-middle between you and the remote servers. If the firewall is configured to verify the fingerprint of the server and this changes, the firewall considers it wrong, and so any connection attempt results in the firewall refusing to continue the proxied connection, resetting the SSH client. When you are working in large companies you are often unaware of the firewall configurations, so a problem like this can be quite tricky to discover. The trick anyway is asking the remote party to provide you out-of-band the fingerprints of their host keys: if they don't match the ones your client sees, it means there's someone else in the middle.

Footnotes

And with this last pearl of wisdom - I mean Orwell's doublethink - this post dedicated to SSH ends: I tried to explain everything you are likely to need to know to have a good understanding of it, and tried to make the post engaging by adding some details about its history and how it works under the hood. We use SSH every day, so we must be very confident when using it: just being able to connect to a remote host is not enough for a professional.

Writing a post like this takes a lot of hours. I'm doing it for the sole pleasure of sharing knowledge and thoughts, but all of this does not come for free: it is a time-consuming volunteering task. This blog is not affiliated with anybody, does not show advertisements nor sell visitors' data. The only goal of this blog is to make ideas flow. So please, if you liked this post, spend a little of your time to share it on LinkedIn or Twitter using the buttons below: seeing that posts are actually read is the only way I have to understand if I'm really sharing thoughts or if I'm just wasting time and I'd better give up.

 
