Hardware-Assisted Virtualization Technology

Hardware-based virtualization technology (specifically Intel VT or AMD-V) improves the fundamental flexibility and robustness of traditional software-based virtualization solutions by accelerating key functions of the virtualized platform. This efficiency benefits the IT, embedded-developer, and intelligent-systems communities.
With hardware-based virtualization technology, the processor provides new instructions to control virtualization that software-based virtualization platforms lack. With them, the controlling software (the VMM, or Virtual Machine Monitor) can be simpler, improving performance compared to software-based solutions by:

  • Speeding up the transfer of platform control between the guest operating systems (OSs) and the virtual machine manager (VMM)/hypervisor
  • Enabling the VMM to uniquely assign I/O devices to guest OSs
  • Optimizing the network for virtualization with adapter-based acceleration

 

Processors with Virtualization Technology include an extra instruction set known as Virtual Machine Extensions, or VMX. VMX brings 10 new virtualization-specific instructions to the CPU: VMPTRLD, VMPTRST, VMCLEAR, VMREAD, VMWRITE, VMCALL, VMLAUNCH, VMRESUME, VMXOFF, and VMXON.
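On Linux, a quick way to see whether the processor advertises these extensions is to look for the relevant CPU flags; this check is Linux-specific, and vmx / svm are the Intel VT-x and AMD-V flag names respectively:

```shell
# Count logical CPUs advertising hardware virtualization support.
# "vmx" = Intel VT-x, "svm" = AMD-V. Prints 0 if neither flag is exposed
# (for example inside a VM that hides them), hence the "|| true".
grep -cE 'vmx|svm' /proc/cpuinfo || true
```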

There are two modes to run under virtualization:
1. VMX root operation
2. VMX non-root operation.

Usually, only the virtualization controlling software (VMM) runs under root operation, while operating systems running on top of the virtual machines run under non-root operation.

To enter virtualization mode, the software executes the VMXON instruction and then calls the VMM software. The VMM enters each virtual machine for the first time using the VMLAUNCH instruction and re-enters it after a VM exit using the VMRESUME instruction; control returns to the VMM automatically whenever a VM exit occurs. If the VMM wants to shut down and leave virtualization mode, it executes the VMXOFF instruction.

More recent processors have an extension called EPT (Extended Page Tables), which allows each guest to have its own page table to keep track of memory addresses. Without this extension, every guest update to its page tables forces an exit to the VMM, which must maintain the translations in software (shadow page tables); this exiting-and-returning cycle reduces performance.
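On a Linux host running KVM you can check whether EPT is actually in use; the sysfs path below assumes the kvm_intel module (AMD's equivalent feature, reported by RVI below, lives under kvm_amd):

```shell
# Prints Y (or 1) when KVM is using EPT; degrades gracefully when the
# kvm_intel module isn't loaded on this machine.
cat /sys/module/kvm_intel/parameters/ept 2>/dev/null \
  || echo "kvm_intel not loaded"
```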

Intel VT
Intel VT performs the virtualization tasks described above in hardware, such as memory address translation, which reduces the overhead and footprint of virtualization software and improves its performance. In fact, Intel developed a complete set of hardware-based virtualization features designed to improve performance and security for virtualized applications.

Server virtualization with Intel VT
Get enhanced server virtualization performance in the data center using platforms based on Intel® Xeon® processors with Intel VT, and achieve faster VM boot times with Intel® Virtualization Technology FlexPriority and more flexible live migrations with Intel® Virtualization Technology FlexMigration (Intel® VT FlexMigration).

The Intel® Xeon® processor E5 family enables superior virtualization performance and a flexible, efficient, and secure data center that is fully equipped for the cloud.

The Intel® Xeon® processor 6500 series delivers intelligent and scalable performance optimized for efficient data center virtualization.

The Intel® Xeon® processor E7 family features flexible virtualization that automatically adapts to the diverse needs of a virtualized environment with built-in hardware assists.

AMD-V
With revolutionary architecture featuring up to 16 cores, AMD Opteron processors are built to support more VMs per server for greater consolidation—which can translate into lower server acquisition costs, operational expense, power consumption and data center floor space.
AMD Virtualization (AMD-V) technology is a set of on-chip features that help make better use of virtualization resources and improve their performance.

  • Virtualization Extensions to the x86 Instruction Set: enable software to more efficiently create VMs so that multiple operating systems and their applications can run simultaneously on the same computer
  • Tagged TLB: hardware features that facilitate efficient switching between VMs for better application responsiveness
  • Rapid Virtualization Indexing (RVI): helps accelerate the performance of many virtualized applications by enabling hardware-based VM memory management
  • AMD-V Extended Migration: helps virtualization software with live migrations of VMs between all available AMD Opteron processor generations
  • I/O Virtualization: enables direct device access by a VM, bypassing the hypervisor, for improved application performance and improved isolation of VMs for increased integrity and security


Configure SSH for Productivity

Multiple Connections

OpenSSH has a feature which makes it much snappier to get another terminal on a server you’re already connected to.

To enable connection sharing, edit (or create) your personal SSH config, which is stored in the file ~/.ssh/config, and add these lines:

ControlMaster auto
ControlPath /tmp/ssh_mux_%h_%p_%r

Then exit any existing SSH connections, and make a new connection to a server. Now, in a second window, SSH to that same server. The second terminal prompt should appear almost instantaneously, and if you were prompted for a password on the first connection, you won’t be on the second.

One issue with connection sharing is that if a connection is abnormally terminated, the ControlPath file sometimes doesn’t get deleted. When you reconnect, OpenSSH spots the leftover file, realizes that it isn’t current, ignores it, and makes a non-shared connection instead. A warning message like this is displayed:

ControlSocket /tmp/ssh_mux_dev_22_smylers already exists, disabling multiplexing

Removing the stale ControlPath file solves this problem.
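You can also manage the master connection without touching the socket file directly, via ssh’s -O option (“dev” below is a placeholder for whatever host you connect to):

```shell
# Ask the master connection for its status, or tell it to shut down.
# Both commands fail harmlessly (hence "|| true") if no master is running.
ssh -O check dev || true   # reports whether the shared connection is up
ssh -O exit dev || true    # asks the master to exit and remove its socket
```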

 

Copying Files

Shared connections aren’t just a boon with multiple terminal windows; they also make copying files to and from remote servers a breeze. If you SSH to a server and then use the scp command to copy a file to it, scp will make use of your existing SSH connection ‒ and in Bash you can even get Tab filename completion on remote files, with the Bash Completion package. Connections are also shared with rsync, git, and any other command which uses SSH for its connection.

 

Repeated Connections

If you find yourself making multiple consecutive connections to the same server (you do something on a server, log out, and then a little later connect to it again), enable persistent connections. Adding one more line to your config will ease your life.

ControlPersist 4h

That will cause connections to linger for 4 hours (or whatever duration you choose) after you log out, and you can get back to the remote server within that time without re-authenticating. Again, it really speeds up copying multiple files; a series of git push or scp commands doesn’t require authenticating with the server each time. ControlPersist requires OpenSSH 5.6 or newer.
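You can check which version you have with ssh -V (the version string goes to standard error):

```shell
# Prints something like "OpenSSH_9.6p1 OpenSSL 3.0.13 ..." on stderr.
ssh -V 2>&1
```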

 

Passwords Are Not the Only Way

You can use SSH keys to log in to remote servers instead of typing passwords. With keys you do get prompted for a pass phrase, but this happens only once per boot of your computer, rather than on every connection. With OpenSSH, generate yourself a private key with:

$ ssh-keygen

and follow the prompts. Do provide a pass phrase, so your private key is encrypted on disk. Then you need to copy the public part of your key to servers you wish to connect to. If your system has ssh-copy-id then it’s as simple as:

$ ssh-copy-id smylers@compo.example.org

Otherwise you need to do it manually:

  1. Find the public key. The output of ssh-keygen should say where this is, probably ~/.ssh/id_rsa.pub.
  2. On each of your remote servers insert the contents of that file into ~/.ssh/authorized_keys.
  3. Make sure that only your user can write to both the directory and file.

Something like this should work:

$ < ~/.ssh/id_rsa.pub ssh cloud.example.org 'mkdir -p .ssh; cat >> .ssh/authorized_keys; chmod go-w .ssh .ssh/authorized_keys'

Then you can SSH to servers, copy files, and commit code all without being hassled for passwords.
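If you want to experiment with key generation in a script or a test environment first, ssh-keygen also runs non-interactively; the temporary directory and the empty pass phrase below are only appropriate for throwaway test keys, never for real ones:

```shell
# Generate a disposable ed25519 key pair without any prompts.
# -N '' sets an empty pass phrase: acceptable only for throwaway test keys.
tmpdir=$(mktemp -d)
ssh-keygen -t ed25519 -f "$tmpdir/id_ed25519" -N '' -q
ls "$tmpdir"
```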

 

Avoid Typing Full Hostnames

It’s tedious to have to type out full hostnames for servers. Typically a group of servers (a cluster, say) has hostnames which are subdomains of a particular domain name. For example you might have these servers:

  • www1.example.com
  • www2.example.com
  • mail.example.com
  • intranet.internal.example.com
  • backup.internal.example.com
  • dev.internal.example.com

Your network may be set up so that short names, such as intranet can be used to refer to them. If not, you may be able to do this yourself even without the co-operation of your local network admins. Exactly how to do this depends on your OS. Here’s what worked for me on a recent Ubuntu installation: editing /etc/dhcp/dhclient.conf, adding a line like this:

prepend domain-search "internal.example.com", "example.com";

and restarting networking:

$ sudo restart network-manager

The exact file to tweak and the command for restarting networking seem to change with alarming frequency across OS upgrades, so you may need to do something slightly different.

 

Hostname Aliases

You can also define hostname aliases in your SSH config, though this can involve listing each hostname. For example:

Host dev
  HostName dev.internal.example.com

You can use wildcards to group similar hostnames, using %h in the fully qualified domain name:

Host dev intranet backup
  HostName %h.internal.example.com

Host www* mail
  HostName %h.example.com

 

Skip Typing Usernames

If your username on a remote server is different from your local username, specify this in your SSH config as well:

Host www* mail
  HostName %h.example.com
  User fifa

Now even though my local username is smylers, I can just do:

$ ssh www2

and SSH will connect to the fifa account on the server.

 

Onward Connections

Sometimes it’s useful to connect from one remote server to another, particularly to transfer files between them without having to make a local copy and do the transfer in two stages, such as:

www1 $ scp -pr templates www2:$PWD

Even if you have your public key installed on both servers, this will still prompt for a password by default: the connection is starting from the first remote server, which doesn’t have your private key to authenticate against the public key on the second server. At this point, use agent forwarding, with this line in your .ssh/config:

ForwardAgent yes

Then your local SSH agent (which has prompted for your pass phrase and decoded the private key) is forwarded to the first server and can be used when making onward connections to other servers. Note you should only use agent forwarding if you trust the sys-admins of the intermediate server.
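Since ForwardAgent yes on its own applies to every host, it’s safer to scope it to the servers you actually trust and hop through; the host names here are just examples:

Host www1 www2 dev
  ForwardAgent yes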

 

Resilient Connections

It can be irritating when a network blip terminates your SSH connections. OpenSSH can be told to ride out short outages; putting something like this in your SSH config seems to work quite well:

TCPKeepAlive no
ServerAliveInterval 60
ServerAliveCountMax 10

If the network disappears your connection will hang, but if it then reappears within 10 minutes (ServerAliveInterval × ServerAliveCountMax, here 60 seconds × 10) it will resume working.

 

Restarting Connections

Sometimes your connection will completely end, for example if you suspend your computer overnight or take it somewhere without internet access. When you have connectivity again the connection needs to be restarted. AutoSSH can spot when connections have failed, and automatically restart them; it doesn’t do this if a connection was closed by user request. AutoSSH works as a drop-in replacement for ssh. It requires ServerAliveInterval and ServerAliveCountMax to be set in your SSH config, and an environment variable in your shell config:

export AUTOSSH_PORT=0

Then you can type autossh instead of ssh to make a connection that will restart on failure. If you want this for all your connections you can avoid the extra typing by making AutoSSH be your ssh command. For example if you have a ~/bin/ directory in your path (and before the system-wide directories) you can do:

$ ln -s /usr/bin/autossh ~/bin/ssh
$ hash -r

Now simply typing ssh will give you AutoSSH behaviour. If you’re using a Debian-based system, including Ubuntu, you should probably instead link to this file, just in case you ever wish to use ssh’s -M option:

$ ln -s /usr/lib/autossh/autossh ~/bin/ssh
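If you’d rather not shadow the ssh binary on disk at all, a shell function in your ~/.bashrc achieves the same effect (a sketch, assuming a Bash-like shell and that autossh is installed):

```shell
# Interactive "ssh" invocations now go through autossh; anything that
# calls the real binary by full path is unaffected.
ssh() { command autossh "$@"; }
```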

 

 

Persistent Remote Processes

Sometimes you wish for a remote process to continue running even if the SSH connection is closed, and then to reconnect to the process later over another SSH connection. This could be to set off a task which will take a long time to run and which you’d like to log out of and check back on later (a remote build, test run, and so on). If you’re somebody who prefers to have a separate window or tab for each shell, then it makes sense to do that for remote shells as well. In that case Dtach may be of use; it provides the persistent detached-process feature from Screen, and only that feature. You can use it like this:

$ dtach -A tmp/mutt.dtach mutt

The first time you run that it will start up a new mutt process. If your connection dies (type Enter ~. to cause that to happen) Mutt will keep running. Reconnect to the server and run the above command a second time; it will spot that it’s already running, and switch to it. If you were partway through replying to an e-mail, you’ll be restored to precisely that point.