
Draft Forbes Group Website (Build by Nikola). The official site is hosted at:

https://labs.wsu.edu/forbes

License: GPL3
ubuntu2004
Kernel: Python 3 (system-wide)


Some notes about installing Linux on a Dell minitower with a GPU and user policy. Note: this file is evolving into the following collaborative set of notes on CoCalc:

Swan Policies

Disk

  • /home/${USER}: Your home folder. Keep the size of your home directory minimal (<2GB). I would like to implement automatic backups of these (not yet implemented), so important information should be kept here, but no working or temporary data.

  • /data/users/${USER}: Your personal user space on the main hard-drive. Keep this as small as possible. I would also like to keep this backed up (not yet implemented).

  • /data2/users/${USER}: Your personal user space on the large external hard-drive. This will not be backed up.

In addition there is shared space which should be accessible to everyone in the students group.

  • /data2/shared/: Shared space (not backed up).

To find out which directories are taking up space, the following command is useful:

du -sh * | sort -h
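The -h flag makes sort order human-readable sizes numerically (so 2.0K sorts before 512M, which sorts before 1.1G); a quick illustration with made-up entries:

```shell
# sort -h orders human-readable sizes numerically (K < M < G),
# which a plain lexicographic sort would not.
printf '512M\t.conda\n2.0K\tnotes\n1.1G\tdata\n' | sort -h
# -> 2.0K  notes
#    512M  .conda
#    1.1G  data
```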

Please symlink your ~/.conda directory to /data2 so that when you create environments, you don't overuse space:

mkdir -p /data2/users/${USER}
mv ~/.conda /data2/users/${USER}/       # If you already have ~/.conda
mkdir -p /data2/users/${USER}/.conda    # If you do not have ~/.conda
ln -s /data2/users/${USER}/.conda ~/
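The same move-and-symlink pattern can be rehearsed in a scratch directory first (the paths below are stand-ins for ~ and /data2/users/${USER}):

```shell
# Rehearse the ~/.conda relocation in a throwaway directory.
scratch=$(mktemp -d)
home="$scratch/home"
data="$scratch/data2/users/me"
mkdir -p "$home/.conda" "$data"

mv "$home/.conda" "$data/"            # move the existing directory
ln -s "$data/.conda" "$home/.conda"   # leave a symlink in its place

touch "$home/.conda/environments.txt" # writes "at home" land on the big drive
ls "$data/.conda"                     # -> environments.txt
```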

Install

To install, I used a USB:

  1. Download the Lubuntu ISO

  2. Check for corruption (18.04 Release notes):

    md5 ~/Downloads/lubuntu-18.04-alternate-i386.iso
  3. Make a bootable USB drive. (From my Mac I used Disk Utility to erase the drive,

Hardware Details

  • Dell Precision T1700 Minitower

  • Intel Xeon CPU E3-1241 v3 @ 3.50GHz Quad Core processor with Hyperthreading

!ssh swan cat /proc/cpuinfo
NBPORT=18888: For more information run 'j --help'
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 60
model name      : Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz
stepping        : 3
microcode       : 0x27
cpu MHz         : 2370.570
cache size      : 8192 KB
physical id     : 0
siblings        : 8
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt dtherm ida arat pln pts md_clear flush_l1d
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds
bogomips        : 6984.30
clflush size    : 64
cache_alignment : 64
address sizes   : 39 bits physical, 48 bits virtual
power management:
[processors 1-7 repeat the same entry, differing only in core id, apicid, and instantaneous cpu MHz]
!ssh swan lsb_release -a
NBPORT=18888: For more information run 'j --help'
Distributor ID: Ubuntu
Description:    Ubuntu 18.10
Release:        18.10
Codename:       cosmic
No LSB modules are available.
!ssh swan nvidia-smi -q
NBPORT=18888: For more information run 'j --help'
==============NVSMI LOG==============
Timestamp                 : Sat Jun 22 09:15:10 2019
Driver Version            : 418.67
CUDA Version              : 10.1
Attached GPUs             : 1
GPU 00000000:01:00.0
    Product Name          : Quadro K2200
    Product Brand         : Quadro
    Serial Number         : 0421315001285
    GPU UUID              : GPU-f351701e-ae28-2cfc-a715-a7d24e386218
    VBIOS Version         : 82.07.5A.00.01
    GPU Link Info
        PCIe Generation   : 2 (max 2)
        Link Width        : 16x (max 16x)
    Fan Speed             : 27 %
    Performance State     : P0
    FB Memory Usage
        Total             : 4042 MiB
        Used              : 1 MiB
        Free              : 4041 MiB
    Compute Mode          : Default
    Temperature
        GPU Current Temp  : 45 C
        GPU Shutdown Temp : 101 C
        GPU Slowdown Temp : 96 C
    Power Readings
        Power Draw        : 2.85 W
        Power Limit       : 39.50 W
    Clocks
        Graphics          : 1045 MHz
        SM                : 1045 MHz
        Memory            : 2505 MHz
        Video             : 940 MHz
    Processes             : None
[remaining fields (ECC counters, utilization, throttle reasons, etc.) omitted; all N/A, zero, or inactive]

OS

Decide which version of Linux you want.

Distribution: The first choice is which Linux distribution you want to use. I chose Ubuntu since it is the most popular and has lots of support. Another good option might be openSUSE.

Flavour: The next choice is the flavour, which mostly affects the GUI. Since I run a (mostly headless) server, I chose to start with Lubuntu, which I initially installed from a thumb-drive. This is fairly lightweight and minimal.

Release: The next choice is which version of Ubuntu to upgrade to. As discussed in "How do I decide what version of Ubuntu to install", you should basically choose either the highest release or the highest LTS release listed on the Official Ubuntu Releases site. The LTS versions have Long Term Support, and hence require fewer release upgrades. Another consideration might be which versions are supported by CUDA, which would probably favour an LTS release.

After a few upgrades (18.04.2 LTS -> 18.10) I have the following:

!ssh swan "uname -m && cat /etc/*release"
NBPORT=18888: For more information run 'j --help'
x86_64
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.10
DISTRIB_CODENAME=cosmic
DISTRIB_DESCRIPTION="Ubuntu 18.10"
NAME="Ubuntu"
VERSION="18.10 (Cosmic Cuttlefish)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.10"
VERSION_ID="18.10"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=cosmic
UBUNTU_CODENAME=cosmic
!ssh swan "uname -m && cat /etc/*release"
NBPORT=18888: For more information run 'j --help'
x86_64
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=18.04
DISTRIB_CODENAME=bionic
DISTRIB_DESCRIPTION="Ubuntu 18.04.2 LTS"
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic

There were a few things I had to do:

  • Manually configure the network interface (static IP in the department).

  • Install an ssh daemon:

sudo apt-get install openssh-server

Configuration

I added some modifications to the default bash initialization files. These apply to all users and add the following features (see the Config Files section for the exact implementation).

  • Alias source to source_if_exists, which does not fail if the file does not exist.

  • Allow for tracing of init files via touch ~/.trace.

  • Provide some useful bash completions (tab completion of various commands).

  • Set up the modules system (see the Modules section).

  • Add the system conda environments for the user and provide the alias j for starting Jupyter notebooks with appropriate port forwarding.

  • Use etckeeper to keep track of configuration:

    sudo apt-get update
    sudo apt-get install mercurial etckeeper   # The Mercurial package is named "mercurial", not "hg"
    sudo etckeeper init
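The source_if_exists helper mentioned above is not reproduced here; a minimal sketch of such a function (the actual implementation in the config files may differ) could be:

```shell
# Hypothetical sketch of source_if_exists: source a file only if it
# exists, and succeed (exit status 0) either way.
source_if_exists () {
    # shellcheck disable=SC1090
    [ -f "$1" ] && . "$1"
    return 0
}

source_if_exists /no/such/file && echo "no error"   # -> no error
cfg=$(mktemp)
echo 'GREETING=hello' > "$cfg"
source_if_exists "$cfg"
echo "$GREETING"                                    # -> hello
```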

Software

For (un)installing software use apt, apt-get, or aptitude:

  • aptitude: Needs to be installed, but provides more support and some more user-friendly features.

  • apt: A nicer interface that is apparently recommended over apt-get, but maybe can't do all the little things apt-get can.

  • apt-get: The low-level tool. I heard that apt-get apparently has better dependency resolution than apt, but I cannot find this reference now.

These all use dpkg under the hood to do the actual installation. dpkg is also useful if you want to see what is installed (see below). The list of sources is in /etc/apt/sources.list, which may need to be updated.

sudo apt update
sudo apt install aptitude
sudo aptitude update
sudo aptitude upgrade
# Package list as a bash array so the per-package comments do not break
# line continuations:
pkgs=(
    fail2ban            # Limits failed logins - 10min wait after 3 fails
    wget
    subversion git
    #mercurial          # (installed via pip below)
    duply               # Backup
    python3 python3-pip
    etckeeper           # Version control /etc
    bzr
    environment-modules
    myrepos
    libfftw3-dev swig
    gfortran libopenmpi-dev openmpi-common openmpi-bin   # MCTDH-X
    gsl-bin
    pandoc
    emacs
    gnuplot
    smbclient           # To connect to remote drives using smb.  I use for backup
    apache2 npm
    ffmpeg mencoder     # To make movies and animations
    xvfb                # Virtual frame buffer for X11.  Required for headless Mayavi
    ocl-icd-opencl-dev  # For OpenCL
    apt-rdepends        # For visualizing software dependencies
    iperf3              # Profiling network behaviour
    libcrypto++6        # Needed by the bbcp tool
    tigervnc-standalone-server tigervnc-common
    uidmap dbus-user-session   # Needed for docker rootless
    awscli
)
sudo aptitude install -y "${pkgs[@]}"

# The following are for OneDrive Free Client github.com/skilion/onedrive
sudo aptitude install -y libcurl4-openssl-dev libsqlite3-dev
sudo snap install --classic dmd && sudo snap install --classic dub

# Evolve extension for system mercurial.  As of June 2020 Ubuntu does not
# support Mercurial with python 3, so we install this manually.
#sudo -H /usr/bin/pip3 install --upgrade mercurial hg-evolve hg-git  # DOES NOT WORK YET...
sudo aptitude install -y python2 python2-dev
curl https://bootstrap.pypa.io/get-pip.py --output get-pip.py
sudo /usr/bin/python2 get-pip.py
rm get-pip.py
sudo -H /usr/local/bin/pip2 install --upgrade mercurial==5.2 hg-evolve hg-git dulwich==0.19.16

I created a user admin for managing software (which is installed in /data). This directory is owned by admin.

sudo adduser admin
sudo mkdir /data
sudo chown admin /data

Now I install conda as follows:

su admin
mkdir -p /data/src/
wget -P /data/src/ http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh
bash /data/src/Miniconda-latest-Linux-x86_64.sh -p /data/apps/conda -b -f
export PATH="/data/apps/conda/bin:$PATH"

I install Mercurial in this root conda environment so I can also install a few other tools with pip:

su admin
export PATH="/data/apps/conda/bin:$PATH"
conda install mercurial
conda update -y --all
pip install hg-git python-hglib
hg clone https://bitbucket.org/marmoute/mutable-history -u stable /data/src/hg/mutable-history
pip install /data/src/hg/mutable-history

Now create the environments:

su admin
export PATH="/data/apps/conda/bin:$PATH"
conda env create mforbes/work
module load cuda   # See below - install cuda and the module files first
for _e in work; do
    . activate $_e
    pip install pycuda scikit-cuda
done

Apt Repositories

Modules

I manage the software with modules. In particular, I provide the conda modulefile.

Note: Ubuntu also has update-alternatives; see https://askubuntu.com/a/26518

Users and Groups

To list groups:

$ groups
mforbes adm cdrom sudo dip plugdev lpadmin sambashare students

Details about who belongs to which group can be found in the file /etc/group.
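Each /etc/group line has four colon-separated fields, the last of which lists the members. For example (the group, GID, and users here are made up):

```shell
# Each /etc/group line is name:password:GID:member,member,...
line="students:x:1001:alice,bob"
echo "$line" | cut -d: -f1   # group name -> students
echo "$line" | cut -d: -f3   # GID        -> 1001
echo "$line" | cut -d: -f4   # members    -> alice,bob
```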

To create new users:

  • sudo useradd -m <name>: The -m option creates their home directory.

  • Give the user a unique port in /etc/nbports for them to use when running jupyter.
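The format of /etc/nbports is not shown in these notes; assuming one "username port" pair per line (a guess — the real format may differ), a small lookup helper could be sketched like this, with a temp file standing in for /etc/nbports:

```shell
# Hypothetical helper for looking up a user's Jupyter port, assuming
# /etc/nbports holds one "username port" pair per line.
nbports=$(mktemp)                 # stand-in for /etc/nbports
printf 'alice 18888\nbob 18889\n' > "$nbports"

nbport () { awk -v u="$1" '$1 == u {print $2}' "$nbports"; }

nbport bob   # -> 18889
```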

To create a new group:

  • sudo addgroup students

To add users to a group:

  • sudo usermod -a -G students <name>

To make a shared directory for a group:

sudo mkdir /data/shared
sudo mkdir /data2/shared
sudo chown :students -R /data/shared
sudo chmod a-rwx,g+rwsX -R /data/shared
sudo chown :students -R /data2/shared
sudo chmod a-rwx,g+rwsX -R /data2/shared

Similarly, to make user folders, but which are not shared by default:

sudo mkdir /data/users
sudo mkdir /data2/users
sudo chown :students -R /data/users
sudo chmod a-rwx,g+rwx -R /data/users
sudo chown :students -R /data2/users
sudo chmod a-rwx,g+rwx -R /data2/users

Notes: Users need to log out and log back in for these permissions to become effective. The +X sets execute permission for directories but not files. The +s (setgid) causes new files and directories to be created with the group of the containing directory.
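The effect of a-rwx,g+rwsX on the mode bits can be checked on a scratch directory without sudo (changing the group itself would still need privileges):

```shell
# Demonstrate a-rwx,g+rwsX: group gets rwx plus setgid ("s"), everyone
# else gets nothing; X grants execute because this is a directory.
d=$(mktemp -d)/shared
mkdir -p "$d"
chmod a-rwx,g+rwsX "$d"
stat -c %A "$d"   # -> d---rws---
```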

Personal Configuration

mkdir -p ~/work/mmfbb
mkdir ~/current
hg clone ssh://hg@bitbucket.org/mforbes/configurations ~/work/mmfbb/configurations
export PATH="$(cd ~/work/mmfbb/configurations/scripts && pwd):$PATH"
cd ~/work/mmfbb/configurations/common && mmf_initial_setup -v
cd ~/work/mmfbb/configurations/machines/generic && mmf_initial_setup -v
cd ~/work/mmfbb/configurations/machines/linux/common && mmf_initial_setup -v
cd ~/work/mmfbb/configurations/personal/mmf && mmf_initial_setup -v
ln -s /data/apps/conda ~/.anaconda

# Think about this: it should be done at a system level.
#hg clone ssh://hg@bitbucket.org/mforbes/mmfhg ~/work/mmfbb/mmfhg
#cd ~/work/mmfbb/mmfhg && make install

Update Kernel

The recommended way to upgrade your release (and with it the kernel) is:

do-release-upgrade

During the upgrade, I was asked to choose between LightDM and SDDM; I chose the former.

Difficulties

Apparently, upgrading your release is difficult. The recommended do-release-upgrade command only works in certain cases, e.g. while the target release is still supported.

If you run into difficulties, here is a more systematic way to proceed. I was having difficulty because my release was no longer supported. I followed the discussion here to upgrade from 18.10 to 20.04 LTS:

$ sudo do-release-upgrade
Checking for a new Ubuntu release
Your Ubuntu release is not supported anymore.
For upgrade information, please visit:
http://www.ubuntu.com/releaseendoflife
...
An upgrade from 'cosmic' to 'eoan' is not supported with this tool.

Note: eoan is 19.10, which I don't want. See Ubuntu Releases for the name you want. I want 20.04 LTS, which is codenamed focal (Focal Fossa).

  1. (optional) Backup:

$ screen -S Backup
$ sudo rsync -vaxhWE --no-compress --progress --delete --ignore-errors / /mnt/data2/swan_backup/13May2020
  2. Find out which version you have installed:

$ lsb_release -a
...
Description:    Ubuntu 18.10
Release:        18.10
Codename:       cosmic
  3. Update /etc/apt/sources.list to include the correct sources:

deb http://archive.ubuntu.com/ubuntu focal main restricted universe
deb http://archive.ubuntu.com/ubuntu focal-security main restricted universe
deb http://archive.ubuntu.com/ubuntu focal-backports main restricted universe
deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe
  4. (optional) Clean up any unused or obsolete kernels on /boot (I started with a small boot partition... there is not space for more than a couple of images):

$ cd /boot
$ uname -a
Linux swan 4.18.0-25-generic #26-Ubuntu SMP Mon Jun 24 09:32:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
$ ls -la
...
$ rm ...   # Remove anything that is not 4.18.0-25*
  5. Update your current OS:

$ conda deactivate   # Don't use my custom python kernels.
$ sudo apt-get update
...
$ sudo apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
  linux-generic linux-headers-generic linux-image-generic
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
$ sudo apt-get dist-upgrade

The last command gave some errors. I had to do the following:

  • sudo vi /etc/mercurial/hgrc.d/mmfhg.rc: Remove any extensions that were causing problems:

    [ui]
    #ignore.mmfhg = ${MMFHG}/hgignore

    [extensions]
    #evolve =
    #hggit =
  • sudo rm /etc/apt/apt.conf.d/50unattended-upgrades.ucftmp: See here.

  • Preparing to unpack .../at_3.1.23-1ubuntu1_amd64.deb ... Failed to reload daemon: Access denied. See here and here.

    $ service atd stop
    ...
    polkit-agent-helper-1: needs to be setuid root
    Error: Incorrect permissions on /usr/lib/policykit-1/polkit-agent-helper-1 (needs to be setuid root)
    $ sudo chmod 5755 /usr/lib/policykit-1/polkit-agent-helper-1
    $ service atd stop
    $ systemctl daemon-reexec
  • WARNING: PV /dev/sdb5 in VG lubuntu-vg is using an old PV header, modify the VG to update.:

    $ sudo vgck --updatemetadata lubuntu-vg
  • Error 24 : Write error : cannot write compressed block: See here. I again ran out of space on /boot so had to remove old images (that were somehow regenerated):

    $ sudo rm -i /boot/*22*

After this, I did a cleanup:

$ sudo apt autoremove

This removed some important packages, so I reinstalled them afterwards. Then I rebooted.

$ sudo shutdown -r now

After this was done, I was still getting an incorrect MOTD and had to remove the cached file:

$ sudo rm /var/lib/ubuntu-release-upgrader/release-upgrade-available

Old

If you run into difficulties, here is a more systematic way to proceed. I was having difficulty because my release was not supported:

$ sudo do-release-upgrade
Checking for a new Ubuntu release
Your Ubuntu release is not supported anymore.
For upgrade information, please visit:
http://www.ubuntu.com/releaseendoflife
  1. First find out which version you have installed:

$ lsb_release -a
...
Description:    Ubuntu 18.10
Release:        18.10
Codename:       cosmic
  2. Update /etc/apt/sources.list to include the correct sources:

deb http://old-releases.ubuntu.com/ubuntu/ cosmic main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ cosmic-updates main restricted universe multiverse
deb http://old-releases.ubuntu.com/ubuntu/ cosmic-security main restricted universe multiverse
  3. Update your current OS:

$ sudo apt-get update
...
$ sudo apt-get upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages have been kept back:
  linux-generic linux-headers-generic linux-image-generic
0 upgraded, 0 newly installed, 0 to remove and 3 not upgraded.
$ sudo apt-get upgrade linux-generic linux-headers-generic linux-image-generic
...
$ sudo apt autoremove

Old

To upgrade, I used the notes "2 Ways to Upgrade From Ubuntu 18.04 To 18.10":

sudo apt update && sudo apt dist-upgrade
sudo vi /etc/update-manager/release-upgrades   # Change Prompt=lts to Prompt=normal
sudo apt install update-manager-core
do-release-upgrade

Note: My /boot partition is small, so I needed to remove old kernels first. Be sure to uninstall the packages rather than just deleting the files.

See also:

dd

CUDA

The simplest solution is to use Conda to install the appropriate toolkit:

conda install cupy

This will bring in the version of CUDA best supported by CuPy, which does not always support the latest toolkit.

If you need to install it for your operating system, then the next easiest might be

sudo apt-get install nvidia-visual-profiler nvidia-cuda-toolkit

but this might install an older version of the toolkit.

You can add the NVIDIA repository with:

sudo add-apt-repository ppa:graphics-drivers/ppa

Another option is to try to get the driver directly from Nvidia, but I have found some conflicts with this. Here is my attempt:

I installed the CUDA toolkit as directed on the CUDA Website. Note: you must choose a version compatible with your kernel, corresponding to the table listed there:

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.10
Release:        18.10
Codename:       cosmic
$ uname -a
Linux swan 4.18.0-22-generic #23-Ubuntu SMP Tue Jun 4 20:22:46 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

This means I have Ubuntu 18.10 with kernel 4.18.0. Follow the instructions on the CUDA Toolkit Download page to figure out which version you should get:

# As of 12 May 2022

# Optional: remove old versions
sudo apt-get purge "^cuda" "^nvidia" "^libnvidia" "^libcuda"
sudo apt-get autoremove

# Optional: update packages
sudo apt-get update
sudo apt-get upgrade

# Current instructions for 20.04 LTS
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/3bf863cc.pub
# The following adds lines to /etc/apt/sources.list
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get -y install cuda

# Restart as suggested
#sudo shutdown -r now

Note: please follow our restart policy:

Old Instructions:

wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1810/x86_64/cuda-repo-ubuntu1810_10.1.168-1_amd64.deb

# Optional: remove old versions
sudo apt-get purge "^cuda" "^nvidia" "^libnvidia" "^libcuda"
sudo apt-get autoremove

# Optional: update packages
sudo apt-get update
sudo apt-get upgrade

sudo dpkg -i cuda-repo-ubuntu1810_10.1.168-1_amd64.deb
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1810/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda

# Restart as suggested
sudo shutdown -r now

Older still

Note: CUDA does not work with this by default since it requires gcc-4.9. See the instructions here:

  • http://askubuntu.com/questions/693145/installing-cuda-7-5-toolkit-on-ubuntu-15-10

Here is what I did:

sudo mkdir -p /opt/compiler_cuda
cd /opt/compiler_cuda
sudo ln -s /usr/bin/gcc-4.9 gcc
sudo ln -s /usr/bin/g++-4.9 g++
sudo ln -s /opt/compiler_cuda/gcc cc
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-5 60 --slave /usr/bin/g++ g++ /usr/bin/g++-5
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.9 50 --slave /usr/bin/g++ g++ /usr/bin/g++-4.9
sudo sh cuda_7.5.18_linux.run --silent --toolkit --override
sudo sh cuda_7.5.18_linux.run --silent --samples --override

Even Older

wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1410/x86_64/cuda-repo-ubuntu1410_7.0-28_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1410_7.0-28_amd64.deb
sudo apt-get update
sudo apt-get install cuda

The correct version of GCC etc. were already installed.

Now install the various python tools:

su admin
module load cuda   # See below - install cuda and the module files first
for _e in work2 work3; do
    . activate $_e
    pip install --upgrade ipdb pycuda scikit-cuda
done

Now you can check which version you have:

$ dpkg -s cuda
Package: cuda
Status: install ok installed
Priority: optional
Section: multiverse/devel
Installed-Size: 25
Maintainer: cudatools <cudatools@nvidia.com>
Architecture: amd64
Version: 10.1.168-1
Depends: cuda-10-1 (>= 10.1.168)
Description: CUDA meta-package
 Meta-package containing all the available packages required for native CUDA
 development. Contains the toolkit, samples, driver and documentation.

Daemons

The service wrapper allows you to inspect and control services or daemons such as Docker (docker) or your HTTP server (apache2). Such services add entries to /etc/init.d.

sudo service --status-all

Most Linux services are now managed with systemd via systemctl. To see which services are enabled:

systemctl list-unit-files | grep enabled
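The same pipeline can be made quantitative; this sketch counts the enabled unit files (on a host without systemctl it simply reports 0):

```shell
# Count unit files matching "enabled"; fall back to 0 where systemctl is unavailable.
count=$(systemctl list-unit-files 2>/dev/null | grep -c enabled) || count=0
echo "enabled unit files: ${count}"
```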

Config Files

The user's default shell is specified in /etc/passwd and can be set with

sudo usermod --shell /bin/bash mforbes

The default is /bin/bash but /bin/sh might be useful for users such as docker which are intended to run services. To provide a sane initial environment, I define the following initialization files which go in

/etc/profile /etc/bash.bashrc

These then source files in /etc/profile.d/. One feature of these files is that they look for a .trace file in the user's home directory and show the startup sequence.

$ touch ~/.trace

Upon login, I now see:

$ ssh swan
Welcome to Ubuntu 18.10 (GNU/Linux 4.18.0-22-generic x86_64)
...
Welcome to Swan.

* To debug your startup configuration:
    $ touch ~/.trace    # rm ~/.trace when done
* To run jupyter use the j function:
    $ j MyNotebook.ipynb
* For detailed instructions run
    $ j --help          # Show port and instructions
* Remember to activate the appropriate conda environment such as
    $ conda activate cugpe2

Last login: Sat Mar 21 14:59:58 2020 from 98.146.197.72
Tracing all sourced files...
/etc/profile
'/etc/bash.bashrc'
/etc/profile.d/modules.sh
'/etc/profile.d/modules.sh'
'/usr/share/modules/init/bash'
/usr/share/modules/init/bash_completion
'/usr/share/modules/init/bash_completion'
NBPORT=18888: For more information run 'j --help'
'/data/apps/conda/etc/profile.d/conda.sh'
'/usr/share/bash-completion/bash_completion'
...
'/etc/profile.d/01-locale-fix.sh'
...
'/home/mforbes/.bash_login'
...

Bash

!mkdir -p _generated/linux_config/etc
%%file _generated/linux_config/etc/helper_functions
# Bash Helper Functions; -*-Shell-script-*-
# dest = /etc/helper_functions
# Keep this as the 2nd line for mmf_init_setup

# This file defines useful helper functions for adding elements to
# path-like variables without duplication.

shopt -s extglob

function source_if_exists() {
    # Source file iff it exists.
    test -f "$1" && . $*
}

if [ -a ~/.trace ]; then
    # If there is a file ~/.trace, then we overload the source and . functions to
    # print the files that are sourced.
    declare -f source > /dev/null || {
        TRACE_INDENT=""
        echo "Tracing all sourced files..."
        echo "${BASH_ARGV[0]}"
        function source() {
            echo "$TRACE_INDENT$1"
            TRACE_INDENT="${TRACE_INDENT}  "
            source_if_exists $*
            TRACE_INDENT="${TRACE_INDENT:2}"
        }
        # Slightly different definition here - use . if you want an error reported
        # if the file does not exist.
        function .() {
            echo "$TRACE_INDENT'$*'"
            TRACE_INDENT="${TRACE_INDENT}  "
            builtin . $*
            TRACE_INDENT="${TRACE_INDENT:2}"
        }
    }
else
    alias source=source_if_exists
fi

function _prepend_path () {
    # Prepends path $2 to $1
    local DIRTY_PATH=$2:${!1}        # Simple path for checking colons
    local TMP_PATH=:${!1%:}:         # Add leading and trailing colons
    TMP_PATH=:$2${TMP_PATH//:$2:/:}  # Do not duplicate
    TMP_PATH=${TMP_PATH//+(:)/:}     # Remove duplicate colons
    if [ -n "${DIRTY_PATH/#:*/}" ]; then
        TMP_PATH=${TMP_PATH#:}       # Remove leading colon
    fi
    if [ -n "${DIRTY_PATH/%*:/}" ]; then
        TMP_PATH=${TMP_PATH%:}       # Remove trailing colon
    fi
    export $1="$TMP_PATH"
}

function _append_path () {
    # Appends path $2 to $1
    local DIRTY_PATH=${!1}:$2        # Simple path for checking colons
    local TMP_PATH=:${!1%:}:         # Add leading and trailing colons
    TMP_PATH=${TMP_PATH//:$2:/:}$2:  # Do not duplicate
    TMP_PATH=${TMP_PATH//+(:)/:}     # Remove duplicate colons
    if [ -n "${DIRTY_PATH/#:*/}" ]; then
        TMP_PATH=${TMP_PATH#:}       # Remove leading colon
    fi
    if [ -n "${DIRTY_PATH/%*:/}" ]; then
        TMP_PATH=${TMP_PATH%:}       # Remove trailing colon
    fi
    export $1="$TMP_PATH"
}

function clean_path () {
    # Cleans up the specified path
    local DIRTY_PATH=${!1}           # Simple path for checking colons
    local IFS=:                      # Use : as word separator for paths
    local reversed_path
    local clean_path
    for dir in ${!1};                # Reverse the path so that resultant
    do                               # path list is in correct order
        reversed_path=$dir:$reversed_path
    done
    for dir in ${reversed_path};     # Clean the path
    do
        _prepend_path clean_path $dir
    done
    clean_path=:$clean_path:
    clean_path=${clean_path//+(:)/:} # Remove duplicate colons
    if [ -n "${DIRTY_PATH/#:*/}" ]; then
        clean_path=${clean_path#:}   # Remove leading colon
    fi
    if [ -n "${DIRTY_PATH/%*:/}" ]; then
        clean_path=${clean_path%:}   # Remove trailing colon
    fi
    export $1="$clean_path"
}

function _contains () {
    local pattern=":$1:"
    local target=$2
    case $pattern in
        *:$target:* ) return 1;;
        * ) return 0;;
    esac
}

function _clean_path () {
    # Cleans up the specified path
    echo "Cleaning..."
    export $1=`echo ${!1} | tr ":" "\n" | uniq | tr "\n" ":"`
}

function _clean_path () {
    # Cleans up the specified path
    echo "Cleaning..."
    local DIRTY_PATH=${!1}
    local IFS=:                      # Use : as word separator for paths
    local clean_path
    for dir in ${!1};                # Reverse the path so that resultant
    do                               # path list is in correct order
        reversed_path=$dir:$reversed_path
    done
    for dir in ${reversed_path}; do
        _contains "${clean_path}" "${dir}" && \
            clean_path="${clean_path}:${dir}"
    done
    export $1="$clean_path"
}

function clean_path () {
    local new_path=`/usr/bin/python3 -E -c \
'path="'"${!1}"'"
while "::" in path:
    path = path.replace("::",":")
clean=[]
for dir in path.split(":"):
    if dir not in clean:
        clean.append(dir)
clean = ":".join(clean)
if path.startswith(":"):
    clean = ":" + clean
if path.endswith(":"):
    clean = clean + ":"
print(clean)'`
    export $1="${new_path}"
}

function prepend_path () {
    # Prepends path $2 to $1 and cleans
    export $1="$2:${!1}"
}

function append_path () {
    # Appends path $2 to $1 and cleans
    export $1="${!1}:$2"
}

function prepend_and_clean_path () {
    # Prepends path $2 to $1 and cleans
    export $1="$2:${!1}"
    clean_path $1
}

function append_and_clean_path () {
    # Appends path $2 to $1 and cleans
    export $1="${!1}:$2"
    clean_path $1
}
Overwriting _generated/linux_config/etc/helper_functions
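The essential idea behind the clean_path helpers can be sketched in a few self-contained lines of bash (the function and variable names here are illustrative, not part of the installed file): deduplicate the colon-separated entries while preserving first-occurrence order.

```shell
# Minimal sketch of colon-separated path deduplication (bash).
clean_path_demo() {
    # Deduplicate entries of the variable named by $1, keeping the
    # first occurrence of each directory.
    local IFS=: dir result=
    for dir in ${!1}; do
        case ":$result:" in
            *":$dir:"*) ;;                       # already present; skip
            *) result="${result:+$result:}$dir" ;;
        esac
    done
    export $1="$result"
}

DEMO_PATH="/opt/bin:/usr/bin:/opt/bin:/usr/local/bin:/usr/bin"
clean_path_demo DEMO_PATH
echo "$DEMO_PATH"   # /opt/bin:/usr/bin:/usr/local/bin
```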
%%file _generated/linux_config/etc/profile
# /etc/profile:
#dest = /etc/profile
# system-wide .profile file for the Bourne shell (sh(1)) and Bourne compatible shells
# (bash(1), ksh(1), ash(1), ...).

if [ "$BASH" ] && [ "$BASH" != "/bin/sh" ]; then
    . /etc/helper_functions
fi

if [ "$PS1" ]; then
    if [ "$BASH" ] && [ "$BASH" != "/bin/sh" ]; then
        # The file bash.bashrc already sets the default PS1.
        # PS1='\h:\w\$ '
        if [ -f /etc/bash.bashrc ]; then
            . /etc/bash.bashrc
        fi
    else
        if [ "`id -u`" -eq 0 ]; then
            PS1='# '
        else
            PS1='$ '
        fi
    fi
fi

# The default umask is now handled by pam_umask.
# See pam_umask(8) and /etc/login.defs.

if [ -d /etc/profile.d ]; then
    for i in /etc/profile.d/*.sh; do
        if [ -r $i ]; then
            . $i
        fi
    done
    unset i
fi
Writing _generated/linux_config/etc/profile
%%file _generated/linux_config/etc/bash.bashrc
# System-wide .bashrc file for interactive bash(1) shells.
#dest = /etc/bash.bashrc

# The following is needed so that the source alias (which is aliased
# to source_if_exists) works in non-login shells.  See:
# http://stackoverflow.com/a/1615973/1088938
shopt -s expand_aliases

. /etc/helper_functions

# Setup Modules
# The files in /etc/profile.d/modules.sh do not get sourced for non-login
# shells, but the module command is often needed by these, so we include
# it explicitly here.
source /etc/profile.d/modules.sh

# Add my mercurial goodies for everyone.
export MMFHG=${MMFHG-"/data/apps/repositories/mmfhg"}

# Set the NBPORT variable for the user from /etc/nbports
export NBPORT=$(sed -n -e "s/${USER}: //p" /etc/nbports)
if [ -z "$NBPORT" ]; then
    # https://unix.stackexchange.com/a/132524/37813
    export NBPORT=$(python - <<END
import socket
s = socket.socket()
s.bind(("", 0))
print(s.getsockname()[1])
s.close()
END
)
    >&2 echo "Warning: ${USER} not in /etc/nbports. Using random port ${NBPORT}"
fi

>&2 echo "NBPORT=${NBPORT}: For more information run 'j --help'"

j_help_message="\
Usage: j Notebook.ipynb

This will run the notebook with --port ${NBPORT}.  To view this on your
laptop, use ssh to forward this port.  I.e. add this to your
'~/.ssh/config' file on your laptop:

    Host swannb
      HostName swan.physics.wsu.edu
      User ${USER}
      ForwardAgent yes
      LocalForward 10001 localhost:10001
      LocalForward 10002 localhost:10002
      LocalForward 10003 localhost:10003
      LocalForward ${NBPORT} localhost:${NBPORT}
      # The following is for snakeviz
      LocalForward 8080 localhost:8080

If you subsequently connect with 'ssh swannb', then you can view the
notebook on your laptop at

    http://localhost:${NBPORT}/tree

(but use the link specified by Jupyter as it contains a required
security token.)

Conda Environments
==================
Please create your own conda environments for your work from an
environment.yml file for reproducible computing.  These environments
will be stored in ~/.conda/envs/ and will appear as special Kernels
when Jupyter notebooks are run.  Here is a sample environment.mpl3.yml
file that creates an environment with matplotlib and Python 3:

    cat > environment.mpl3.yml <<EOF
    name: mpl3
    channels:
      - defaults
    dependencies:
      - python=3
      - matplotlib
    EOF

This can be installed with:

    conda env create --file environment.mpl3.yml

Now you can activate it (and should see it as a valid Kernel):

    conda activate mpl3

This is stored in the following directory:

    ~/.conda/envs/mpl3

Simply delete this to remove the environment:

    rm -rf ~/.conda/envs/mpl3

For more examples see
https://bitbucket.org/mforbes/configurations/src/trunk/anaconda/
"

# Function to open jupyter notebooks, reading local config files
# if they exist.
function j {
    if [ "$1" = "-h" -o "$1" = "--help" ]; then
        echo "${j_help_message}"
        return
    elif [ -f './jupyter_notebook_config.py' ]; then
        CONFIG_FLAG="--config=./jupyter_notebook_config.py"
    elif [ -f "$(hg root)/jupyter_notebook_config.py" ]; then
        CONFIG_FLAG="--config=$(hg root)/jupyter_notebook_config.py"
    else
        CONFIG_FLAG=""
    fi
    echo "conda activate jupyter"
    conda activate jupyter
    echo "jupyter notebook ${CONFIG_FLAG} --port ${NBPORT} $*"
    jupyter notebook "${CONFIG_FLAG}" --port ${NBPORT} "$*"
    conda deactivate
}

# To enable the settings / commands in this file for login shells as well,
# this file has to be sourced in /etc/profile.

# If not running interactively, don't do anything
[ -z "$PS1" ] && return

# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize

# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
    debian_chroot=$(cat /etc/debian_chroot)
fi

# set a fancy prompt (non-color, overwrite the one in /etc/profile)
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '

# Commented out, don't overwrite xterm -T "title" -n "icontitle" by default.
# If this is an xterm set the title to user@host:dir
#case "$TERM" in
#xterm*|rxvt*)
#    PROMPT_COMMAND='echo -ne "\033]0;${USER}@${HOSTNAME}: ${PWD}\007"'
#    ;;
#*)
#    ;;
#esac

# enable bash completion in interactive shells
if ! shopt -oq posix; then
    if [ -f /usr/share/bash-completion/bash_completion ]; then
        . /usr/share/bash-completion/bash_completion
    elif [ -f /etc/bash_completion ]; then
        . /etc/bash_completion
    fi
fi

# sudo hint
if [ ! -e "$HOME/.sudo_as_admin_successful" ] && [ ! -e "$HOME/.hushlogin" ] ; then
    case " $(groups) " in *\ admin\ *|*\ sudo\ *)
    if [ -x /usr/bin/sudo ]; then
        cat <<EOF
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

EOF
    fi
    esac
fi

# if the command-not-found package is installed, use it
if [ -x /usr/lib/command-not-found -o -x /usr/share/command-not-found/command-not-found ]; then
    function command_not_found_handle {
        # check because c-n-f could've been removed in the meantime
        if [ -x /usr/lib/command-not-found ]; then
            /usr/lib/command-not-found -- "$1"
            return $?
        elif [ -x /usr/share/command-not-found/command-not-found ]; then
            /usr/share/command-not-found/command-not-found -- "$1"
            return $?
        else
            printf "%s: command not found\n" "$1" >&2
            return 127
        fi
    }
fi

# Commandline History
export INPUTRC="${INPUTRC:-/etc/inputrc}"
Writing _generated/linux_config/etc/bash.bashrc

Modules

!mkdir -p _generated/linux_config/usr/share/modules/modulefiles
%%file _generated/linux_config/usr/share/modules/modulefiles/conda
#%Module1.0
#dest = /usr/share/modules/modulefiles/conda
proc ModulesHelp { } {
    puts stderr "\tAdds Conda directory to the PATH (includes hg)."
}
module-whatis "Adds Conda to the PATH (includes hg)."

# Add to back too so that it is always available.  We use this to provide
# Mercurial for example which we install in the default environment
prepend-path PATH /data/apps/conda/bin
append-path PATH /data/apps/conda/bin/.
prepend-path DYLD_LIBRARY_PATH /data/apps/conda/lib
Writing _generated/linux_config/usr/share/modules/modulefiles/conda
%%file _generated/linux_config/usr/share/modules/modulefiles/cuda
#%Module1.0
#dest = /usr/share/modules/modulefiles/cuda
proc ModulesHelp { } {
    puts stderr "\tAdds CUDA directory to the PATH, CPATH etc."
}
module-whatis "Adds CUDA to your environment variables"

prepend-path PATH /usr/local/cuda/bin
prepend-path LD_LIBRARY_PATH /usr/local/cuda/lib64
prepend-path CPATH /usr/local/cuda/include
Writing _generated/linux_config/usr/share/modules/modulefiles/cuda

The module command is set up by /etc/profile.d/modules.sh, which is not executed for non-login shells. This can be a problem, so we explicitly source it in /etc/bash.bashrc.

Mercurial

I provide some mercurial goodies with my mmfhg package which I install as follows:

sudo su admin
mkdir -p /data/apps/repositories/
hg clone https://mforbes@bitbucket.org/mforbes/mmfhg
cd mmfhg
make install
sudo ln -s /data/apps/repositories/mmfhg/hgrc /etc/mercurial/hgrc.d/mmfhg.rc

Python, PIP etc.

!mkdir -p _generated/linux_config/etc
%%file _generated/linux_config/etc/pip.conf
# Global PIP configuration
#dest = /etc/pip.conf

[global]
find-links = https://bitbucket.org/mforbes/mypi/
Writing _generated/linux_config/etc/pip.conf

Conda Environments

I install a base conda environment as the admin user for everyone to use. This is a Python 3 environment with the following packages installed:

  • mercurial: Now supports Python 3 (as of version 5.2)

  • git-annex: Support for archiving large files.

  • nbstripout: Cleans notebooks. Will be replaced with jupytext soon, which needs to be installed in the jupyter environment.

  • argcomplete: Conda tab completion. Add the following to your .bashrc file:

    eval "$(register-python-argcomplete conda)"
  • conda-devenv: Allows including environment.yml files.

  • conda-tree: Allows you to visualize dependencies.

  • conda-verify, conda-build, anaconda-client: Building conda recipes and uploading them to anaconda cloud.

conda deactivate   # Make sure you are in the base environment
conda install python=3 mercurial git-annex \
    nbstripout \
    conda-devenv conda-tree \
    conda-verify conda-build anaconda-client
pip install --upgrade hg-evolve hg-git   # Not yet available on conda for python=3?

Here are some custom project-specific environments that required more recent versions of packages than the generic environments listed above.

# environment.cugpe2.yml
name: cugpe2
su admin
export PATH="/data/apps/conda/bin:$PATH"
conda create -y -n cugpe2 python=2
conda install -y -n cugpe2 numba \
    jupyter \
    scipy \
    cudatoolkit \
    numexpr \
    matplotlib \
    bokeh \
    sympy \
    theano \
    docutils
. activate cugpe2
pip install ipdb \
    line_profiler \
    memory_profiler \
    snakeviz \
    uncertainties \
    pyfftw
module load cuda   # See below - install cuda and the module files first
pip install pycuda \
    scikit-cuda \
    nbopen
# Conda does not load the correct pexpect on Mac OS X, so do this
# https://github.com/ipython/ipython/issues/9065
# https://github.com/conda/conda/issues/2010
pip install -U pexpect

Web Server

To host a website I followed the instructions on this page:

To start I just installed apache:

sudo apt-get install apache2

As pointed out in the comments, if you need the full LAMP stack, the configuration process can be simplified:

sudo apt-get update
sudo apt-get install tasksel
sudo tasksel install lamp-server

This points the server to /var/www/html. I then created a personal space:

sudo mkdir /var/www/html/forbes
sudo chown mforbes /var/www/html/forbes
ln -s /var/www/html/forbes ~/www-forbes

The server can be restarted using service:

sudo service apache2 restart

Apache Configuration

The configuration files for Apache are in the following location:

/etc/apache2/
|-- apache2.conf
|   `-- ports.conf
|-- mods-enabled
|   |-- *.load
|   `-- *.conf
|-- conf-enabled
|   `-- *.conf
`-- sites-enabled
    |-- 000-default.conf
    |-- 000-default-le-ssl.conf
    `-- *.conf
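For orientation, a minimal site definition has roughly the following shape (the filename and ServerName here are illustrative, not from this host); it lives in sites-available and is symlinked into sites-enabled:

```apache
# /etc/apache2/sites-available/example.conf (illustrative sketch)
<VirtualHost *:80>
    ServerName example.org
    DocumentRoot /var/www/html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```

Enable it with sudo a2ensite example and then reload apache2.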

Let's Encrypt and Certbot

Let's Encrypt uses Certbot as a tool for enabling free site certification. This requires using a supported Ubuntu release.

sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository universe
#sudo add-apt-repository ppa:certbot/certbot
sudo apt update
sudo apt install certbot python3-certbot-apache
sudo certbot --apache
#sudo apt-add-repository -r ppa:certbot/certbot

The commented-out commands were needed earlier (pre Ubuntu 19.10) but should no longer be needed. Without removing this repo, I ran into the following error during apt update:

E: The repository 'http://ppa.launchpad.net/certbot/certbot/ubuntu focal Release' does not have a Release file.

Once the certificates are created, we might like to make them accessible to other applications like murmurd. To do this, we add the appropriate users (which run the services) to the ssl-cert group and change the permissions of the certificates. I did this following the suggestion here by modifying /etc/letsencrypt/cli.ini.

#/etc/letsencrypt/cli.ini
...
post-hook = chmod 0640 /etc/letsencrypt/archive/*/privkey*.pem && chmod g+rx /etc/letsencrypt/live /etc/letsencrypt/archive && chown -R root:ssl-cert /etc/letsencrypt/live /etc/letsencrypt/archive

To add the user running murmurd to this group:

sudo usermod -a -G ssl-cert mumble-server

To find the user, look in sudo cat /etc/passwd.

References

I would like to be able to host files that can be served by other sites like https://viewer.pyvista.org. This requires enabling cross-origin resource sharing (CORS) which can be done by:

  1. Make sure that mod_headers is loaded:

    sudo a2enmod headers
  2. Adding the directive Header set Access-Control-Allow-Origin "*" to the appropriate directories in /etc/apache2/mods-enabled/headers.conf or the corresponding /var/www/html/Public/.htaccess file (the former is recommended):

    # /etc/apache2/mods-enabled/headers.conf
    <Directory "/var/www/html/Public">
        Header set Access-Control-Allow-Origin "*"
    </Directory>
  3. Checking the configuration and restarting the service:

    sudo apachectl -t
    sudo service apache2 reload

This allows the following to work for example:

https://viewer.pyvista.org/?fileURL=https://swan.physics.wsu.edu/Public/papers/Mossman_2021/fig1.vtkjs

%%HTML <iframe src="https://viewer.pyvista.org/?fileURL=https://swan.physics.wsu.edu/Public/papers/Mossman_2021/fig1.vtkjs"></iframe>

Version Control Hosting

With BitBucket "sunsetting" their support for Mercurial, we needed to find a new option. Here we explore options for self-hosting.

Kallithea (incomplete)

Kallithea needs a database, so we make a directory for this in /data/kallithea and store it there.

sudo apt-get install npm
su admin
sudo apt-get install build-essential git python-pip python-virtualenv libffi-dev python-dev
hg clone https://kallithea-scm.org/repos/kallithea
cd kallithea
conda create -n kallithea python=2
conda activate kallithea
pip install --upgrade .
mkdir -p "${CONDA_PREFIX}/etc/conda/activate.d"
echo "export LC_ALL='C.UTF-8'" >> "${CONDA_PREFIX}/etc/conda/activate.d/env_vars.sh"
su admin
conda activate kallithea
mkdir /data/kallithea
cd /data/kallithea
kallithea-cli config-create my.ini
kallithea-cli db-create -c my.ini
#Created with root path `/data/kallithea`, user `mforbes` etc.
kallithea-cli front-end-build
gearbox serve -c my.ini

This runs Kallithea on port 5000, which users could access with ssh tunnelling.
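A sketch of the tunnel a user would add on their laptop (the Host alias is illustrative):

```
# ~/.ssh/config on your laptop (alias name is illustrative)
Host swan-kallithea
    HostName swan.physics.wsu.edu
    LocalForward 5000 localhost:5000
```

After connecting with ssh swan-kallithea, browse to http://localhost:5000.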

Another alternative is Heptapod. This is a mercurial interface to a friendly fork of GitLab CE intended to ultimately bring mercurial support to GitLab. This can be installed with Docker as discussed below.

As of 1 July 2020: This is my primary alternative to Bitbucket hosted as discussed below. We will probably ultimately host this on an AWS instance.

As of 14 March 2020:

Docker

Several packages (such as Heptapod and CoCalc) require a rather complete system, so are easiest to install using Docker containers. Here we discuss how to set these up. We are using Rootless mode which seems to work well and prevents the need for providing docker with root access.

Note: Be sure to completely purge any previous root-enabled version of Docker before proceeding.

ssh swan
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install uidmap
sudo apt-get purge docker docker.io   # Remove old root-version of docker.
sudo apt-get autoremove --purge
sudo useradd -m docker
sudo usermod -aG sudo docker   # Enable sudo for docker.
sudo su docker
chsh -s /bin/bash
bash
curl -fsSL https://get.docker.com/rootless | sh

The following allows the docker user's services to keep running after logout and to start at login:

sudo loginctl enable-linger docker
systemctl --user start docker

Add the appropriate environment variables to ~docker/.bashrc:

...
# Docker install (rootless)
export PATH=/home/docker/bin:$PATH
export DOCKER_HOST=unix:///run/user/1017/docker.sock
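The 1017 in that socket path is just the docker account's numeric UID; if you adapt this for another account, the value can be derived rather than hard-coded (a sketch):

```shell
# Derive the rootless-Docker socket path from the current user's UID
# instead of hard-coding the number.
uid=$(id -u)
export DOCKER_HOST="unix:///run/user/${uid}/docker.sock"
echo "$DOCKER_HOST"
```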

Docker Cheatsheet

Here are some useful commands:

  • docker pull: Pulls an image.

  • docker create: Creates a container from an image.

  • docker start: Starts running a container.

  • docker stop: Stops a running container.

  • docker attach: Attach to a running container.

  • docker ps -a: List all containers (both running and stopped).

  • docker images: List all images.

  • docker rm: Remove a container.

  • docker rmi: Remove an image.

  • docker inspect: Lots of information about a container.

  • docker exec -it <name> /bin/bash: Connect to the specified container and run bash (like ssh-ing into the VM).

These appear in documentation, but I do not use them:

  • docker run: This is equivalent to docker create + docker start + docker attach. This can only be executed once. After the container is created, one cannot use subsequent calls to run to change, for example, port assignments. It is probably most useful for short foreground processes in conjunction with the --rm option.

Issues: I originally had a bunch of errors because of interference with the previously installed docker version (not rootless). These went away once I did sudo apt-get purge docker docker.io.

  • Aborting because rootful Docker is running and accessible. Set FORCE_ROOTLESS_INSTALL=1 to ignore.

  • Failed to start docker.service: Unit docker.socket failed to load: No such file or directory.

So I stopped the root docker service (from a previous install) and removed this file:

sudo service docker stop
sudo systemctl disable docker
sudo apt-get remove docker
#sudo rm /var/run/docker.sock /lib/systemd/system/docker.service

After resolving these issues, I was having the following issue when trying to run the server with systemctl:

$ systemctl --user start docker
$ docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///tmp/docker-1017/docker.sock. Is the docker daemon running?.
See 'docker run --help'.

Heptapod

Heptapod is a service providing Mercurial access to GitLab CE. When running the public server, we host it here:

Here we describe how to run Heptapod in a Docker container. This is a service based on GitLab CE that provides a backend with issue tracking etc. for Mercurial. As above, I have created a docker user account on swan. First I login to this, then make some directories for the server data in /data2/docker/heptapod. Then I pull the docker image.

ssh docker@swan                              # Login to swan as docker
echo 'export GITLAB_HOME="${HOME}/srv/heptapod"' >> ~/.bashrc
sudo mkdir -p /data2/docker                  # Make the data directory for docker
sudo chown docker /data2/docker              # Change owner...
sudo chmod a-wrx,u+rwxs /data2/docker        # ...and give appropriate permissions
mkdir -p /data2/docker/heptapod              # Now create the heptapod directory.
sudo ln -s /data2/docker/heptapod /srv/      # Link it to /srv/heptapod...
ln -s /data2/docker ~docker/srv              # ...and to the docker home directory.

Now we pull the heptapod image and start a couple of containers:

  • heptapod-local: Only listens on local ports. To use this, users must login with ssh and forward ports appropriately so they can connect (see below).

  • heptapod-public: Listens on public ports. This exposes Heptapod to the world, which may be a security risk. We do this to allow "weak" collaborators access, or to enable transferring repositories from Bitbucket.

docker pull octobus/heptapod
docker create \
    --name heptapod-local \
    --restart always \
    --hostname localhost \
    --publish 127.0.0.1:11080:80 \
    --publish 127.0.0.1:11443:443 \
    --publish 127.0.0.1:11022:22 \
    --volume ${GITLAB_HOME}/config:/etc/gitlab \
    --volume ${GITLAB_HOME}/logs:/var/log/gitlab \
    --volume ${GITLAB_HOME}/data:/var/opt/gitlab \
    octobus/heptapod
docker create \
    --name heptapod-public \
    --restart always \
    --hostname swan.physics.wsu.edu \
    --publish 11080:80 \
    --publish 11443:443 \
    --publish 11022:22 \
    --volume ${GITLAB_HOME}/config:/etc/gitlab \
    --volume ${GITLAB_HOME}/logs:/var/log/gitlab \
    --volume ${GITLAB_HOME}/data:/var/opt/gitlab \
    octobus/heptapod

Now we can run whichever one we want:

docker start heptapod-local # Use this in general. #docker start heptapod-public # Use this when needed.

Once started, I initialized a mercurial repository in the configuration directory so I can keep track of configuration changes:

cd ~/srv/heptapod
# For some reason, the following file is given rw permission only
# for the user in the docker image, so we can't back it up...
sudo chgrp docker config/heptapod.hgrc
sudo chmod g+rw config/heptapod.hgrc
hg init
cat > .hgignore <<EOF
syntax: glob
data/
logs/
EOF
hg add
hg com -m "Initial commit"

Debugging

Look at the current logs with the following:

docker exec -it heptapod-public gitlab-ctl tail

Heptapod Backup (incomplete)

docker exec -t heptapod-public gitlab-backup create

This puts a backup file inside the container, which appears on the host via the mapped data volume:

  • /var/opt/gitlab/backups/1593678341_2020_07_02_12.10.11_gitlab_backup.tar

  • ~/srv/data/backups/1593678341_2020_07_02_12.10.11_gitlab_backup.tar

Optionally, the backup program can upload this to a remote storage location.

Another option is to back up the repositories. I use rclone to copy these from my docker account on swan to my Google Drive, using a remote called gwsu_backups whose root_folder_id corresponds to the folder My Drive/backups/RClone/swan/repo_backup.

screen -S RCloneBackup
rclone sync -Pl repo_backups gwsu_backups:
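For reference, such a remote lives in ~/.config/rclone/rclone.conf and has roughly this shape (the root_folder_id below is a placeholder, not the real value):

```
# ~/.config/rclone/rclone.conf (sketch; root_folder_id is a placeholder)
[gwsu_backups]
type = drive
scope = drive
root_folder_id = <folder-id>
```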

HTTP Redirect

Note: SSL does not yet work with non-standard ports... so I am using HTTP only. I have randomly chosen ports 11080, 11443 and 11022 for HTTP, HTTPS, and SSH access. These are not very memorable, so it would be nice to redirect https://swan.physics.wsu.edu/heptapod to https://swan.physics.wsu.edu:11443. To do this, we simply add a Redirect /heptapod https://swan.physics.wsu.edu:11443/ statement to one of the Apache config files:

#/etc/apache2/sites-enabled/000-default.conf
<VirtualHost *:80>
  ...
  Redirect /heptapod http://swan.physics.wsu.edu:11080/
  Redirect /discourse http://swan.physics.wsu.edu:10080/
</VirtualHost>
#/etc/apache2/sites-enabled/000-default-le-ssl.conf
<VirtualHost *:443>
  ...
  Redirect /heptapod https://swan.physics.wsu.edu:11443/
  Redirect /discourse https://swan.physics.wsu.edu:10443/
</VirtualHost>

Don't forget to restart the server:

sudo service apache2 restart

Bitbucket Import

  1. Enable OAuth2 integration on Bitbucket. I used my public settings http://swan.physics.wsu.edu/heptapod/users/sign_in.

Note: I had a redirect_uri issue because I was using my alias http://swan.physics.wsu.edu/heptapod/users/auth, but in my configuration I used the http://swan.physics.wsu.edu:9080/users/auth form. If you look at the URL sent, it includes the redirect_uri, which must match.

  2. Edit the /etc/gitlab/gitlab.rb file on the server. Since we mapped /etc/gitlab to ~docker/srv/heptapod/config, we can edit it there without connecting.

vi ~docker/srv/heptapod/config/gitlab.rb
#/etc/gitlab/gitlab.rb
gitlab_rails['omniauth_enabled'] = true
...
gitlab_rails['omniauth_providers'] = [
  {
    "name" => "bitbucket",
    "app_id" => "Rc...",
    "app_secret" => "hA...",
    "url" => "https://bitbucket.org/",
  }
]
  3. Start the public server, or reconfigure GitLab:

ssh docker@swan docker start heptapod-public

or

docker exec -it heptapod-public gitlab-ctl reconfigure
  4. Register for an account on our Heptapod instance.

  5. Login.

  6. Import new project from Bitbucket Cloud.

References

Issues

  • Some imports are broken.

  • Cloning links are incorrect: http://swan.physics.wsu.edu/mforbes/mmfutils mmfutils_heptapod. Probably need to update the hostname to include the port and/or the /heptapod alias.

  • Cloning from http://swan.physics.wsu.edu:9080/mforbes/mmfutils mmfutils_heptapod does not work.

  • Cloning from ssh://git@swan.physics.wsu.edu:9022/mforbes/mmfutils works on swan but not from outside.

  • Cloning from ssh://git@localhost:9022/mforbes/mmfutils works with SSH tunnel.

Discourse

ssh docker@swan
mkdir repositories
cd repositories
git clone https://github.com/discourse/discourse_docker.git
cd discourse_docker

Edit the generated containers/app.yml file. I am trying to use Gmail with a custom alias [email protected] which I registered under my Gmail account settings.

DISCOURSE_DEVELOPER_EMAILS: '[email protected]'
DISCOURSE_SMTP_ADDRESS: in-v3.mailjet.com
DISCOURSE_SMTP_USER_NAME: ******
DISCOURSE_SMTP_PASSWORD: *********

## The Docker container is stateless; all data is stored in /shared
volumes:
  - volume:
      host: /home/docker/srv/discourse/shared/standalone
      guest: /shared
  - volume:
      host: /home/docker/srv/discourse/shared/standalone/log/var-log
      guest: /var/log

...

## If you want to set the 'From' email address for your first registration, uncomment and change:
## After getting the first signup email, re-comment the line. It only needs to run once.
- exec: rails r "SiteSetting.notification_email='[email protected]'"

Mailjet: I managed to

Notes:

  • I could not use [email protected] as the SMTP user here since I cannot login to gmail with this.

  • Gmail did not work: probably have to use an App password since two-factor authentication is enabled.

  • I had to use an absolute path for the host /home/docker: ~ did not work.

  • I also tried using Mailjet with - exec: rails r "SiteSetting.notification_email='[email protected]'" but this did not seem to activate either (I was expecting Mailjet to send me an activation email to make sure...)

After editing these, I was able to continue once I made these directories:

rm -rf ~/srv/discourse
mkdir ~/srv/discourse
./launcher rebuild app

Not Working: Discourse is running, but not able to send emails.

HTTP Redirect

I have randomly chosen ports 10080 and 10443 for HTTP and HTTPS access. These are not very memorable, so it would be nice to redirect https://swan.physics.wsu.edu/discourse to https://swan.physics.wsu.edu:10443. To do this, we simply add a Redirect /discourse https://swan.physics.wsu.edu:10443/ statement to one of the Apache config files:

#/etc/apache2/sites-enabled/000-default.conf
<VirtualHost *:443>
    ...
    Redirect /heptapod https://swan.physics.wsu.edu:11443/
    Redirect /discourse https://swan.physics.wsu.edu:10443/
</VirtualHost>

Don't forget to restart the server:

sudo service apache2 restart

CoCalc

CoCalc can also be installed with docker. I created the images with the following file:

#!/bin/bash
# initialize_cocalc.sh
docker create \
  --name cocalc-local \
  --restart always \
  --hostname localhost \
  --publish 127.0.0.1:9443:443 \
  --volume ~/srv/cocalc:/projects \
  sagemathinc/cocalc
docker create \
  --name cocalc-public \
  --restart always \
  --hostname localhost \
  --publish 9443:443 \
  --volume ~/srv/cocalc:/projects \
  sagemathinc/cocalc

These listen on port 9443. Note: you must connect with https://localhost:9443, not with HTTP.
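A quick way to check that the container is up and listening (a sketch; it assumes the container names and port above, and that curl is installed):

```shell
# Show the cocalc containers and their published ports.
docker ps --filter name=cocalc

# Fetch just the headers; -k skips verification of the self-signed certificate.
curl -k -I https://localhost:9443
```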

Issues

New project stuck on "Loading..."

Nextcloud (incomplete)

Open source replacement for Google Cloud etc. There is a docker image.

Incomplete because this needs MySQL etc. and I don't want to figure this out yet.

ssh swandocker
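As a placeholder until the database question is settled, the official image can be run standalone (a minimal sketch using the image's built-in SQLite fallback; the host port 8080 and the volume name are arbitrary choices):

```shell
# Run Nextcloud with its data in a named volume; without an external
# database configured it falls back to SQLite.
docker run -d --name nextcloud \
    --restart always \
    -p 8080:80 \
    -v nextcloud_data:/var/www/html \
    nextcloud
```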

AWS Command Line

Related to docker: if you need to build images for deployment on AWS, you will need the aws-cli:

su admin
sudo apt-get update
sudo aptitude install awscli

Disk Usage

To see how much disk space we have, use df:

!ssh swan df -h
NBPORT=18888: For more information run 'j --help'
Filesystem                    Size  Used Avail Use% Mounted on
udev                          7.8G     0  7.8G   0% /dev
tmpfs                         1.6G  1.3M  1.6G   1% /run
/dev/mapper/lubuntu--vg-root  219G  200G  7.7G  97% /
tmpfs                         7.9G  8.0K  7.9G   1% /dev/shm
tmpfs                         5.0M  4.0K  5.0M   1% /run/lock
tmpfs                         7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/sda1                     236M  133M   91M  60% /boot
tmpfs                         1.6G     0  1.6G   0% /run/user/106
tmpfs                         1.6G     0  1.6G   0% /run/user/1000

To see where you are using disk space, use du:

!ssh swan 'du -sh ~/* | sort -h | tail'
NBPORT=18888: For more information run 'j --help'
4.7M  Outbox
 24M  QuantumTurbulence.key
 38M  DNP_2018
 43M  paper.tgz
132M  Skype.7.43.241.app
651M  tmp
1.1G  smcbec
1.4G  current
2.6G  _trash
 18G  work

Partition Scheme

There are some important partitions and issues related to choice of partitions.

  • /boot: This is where the kernel lives. I originally made it 256MB, but then ran into issues when upgrading the kernel because I did not have enough space to download the new kernel while keeping the old kernel. I recommend using 512MB or 1GB if you have space instead so you can keep a few backup kernels. See What is the recommended size for a linux boot partition? for a discussion.

  • /: This is the root partition for the OS. It is where all of the operating system files get installed.

  • /swap: Ubuntu recommends that you include a swap partition that matches your RAM, but it seems that this recommendation is for systems that need to hibernate. For a desktop, swap files might be better since they can grow.
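A swap file along these lines can be created as follows (a sketch; the 16G size is an assumption matching the LVM swap volume listed later in these notes):

```shell
# Allocate a 16 GB file, restrict access, format it as swap, and enable it.
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

# Make the swap file permanent across reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```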

Installing a New Drive

I installed a new internal hard drive and decided on the following partition scheme:

  • /boot: 1GB. I intend to use this to try out OS upgrades and so that this drive can be used as a bootable backup.

  • /mnt/hdclone: 256GB. Intended to be a backup clone of the internal hard drive with the OS and home directories.

  • /mnt/data2: Remaining data partition.

To do this I first ran parted and created the partitions. Then ran mkfs.ext4 to format the partitions:

  1. Identify the appropriate disk

$ sudo fdisk -l
Disk /dev/sda: 238.5 GiB, 256060514304 bytes, 500118192 sectors
...
Device     Boot  Start       End   Sectors   Size Id Type
/dev/sda1  *      2048    499711    497664   243M 83 Linux
/dev/sda2       501758 500117503 499615746 238.2G  5 Extended
/dev/sda5       501760 500117503 499615744 238.2G 8e Linux LVM
...
Disk /dev/sdb: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
...

Here we see the two internal drives: the original 256G drive /dev/sda and the new 6TB drive /dev/sdb. (Other drives are also listed but have been omitted.)

  2. Create the partitions

$ sudo parted /dev/sdb
GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
(parted) unit GB
(parted) mkpart boot ext4 0 1       # Boot partition at start
(parted) set 1 boot on              # Set the boot flag
(parted) mkpart hdclone ext4 1 257  # Backup for /dev/sda
(parted) mkpart data ext4 257 -0    # Remaining partition
(parted) print                      # Check
Model: ATA WDC WD60EFRX-68L (scsi)
Disk /dev/sdb: 6001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number  Start   End     Size    File system  Name     Flags
 1      0.00GB  1.00GB  1.00GB  ext4         boot     boot, esp
 2      1.00GB  257GB   256GB   ext4         hdclone
 3      257GB   6001GB  5744GB  ext4         data

(parted) quit
Information: You may need to update /etc/fstab.

$ sudo fdisk -l
...
Disk /dev/sdb: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
...
Device         Start         End     Sectors   Size Type
/dev/sdb1       2048     1953791     1951744   953M EFI System
/dev/sdb2    1953792   501952511   499998720 238.4G Linux filesystem
/dev/sdb3  501952512 11721043967 11219091456  5.2T Linux filesystem
  3. Format the partitions after double-checking the device names

$ sudo mkfs.ext4 /dev/sdb1
mke2fs 1.44.4 (18-Aug-2018)
Creating filesystem with 243968 4k blocks and 61056 inodes
Filesystem UUID: 23e3fcf9-2e01-4869-8bc4-3069e122d5d3
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

$ sudo mkfs.ext4 /dev/sdb2
mke2fs 1.44.4 (18-Aug-2018)
Creating filesystem with 62499840 4k blocks and 15630336 inodes
Filesystem UUID: 7e5eefff-68d7-40e0-a4c6-9cbdd65a6263
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

$ sudo mkfs.ext4 /dev/sdb3
mke2fs 1.44.4 (18-Aug-2018)
Creating filesystem with 1402386432 4k blocks and 175300608 inodes
Filesystem UUID: 0971a9ee-aeab-4e76-b45a-fc63103ca489
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632,
        2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616,
        78675968, 102400000, 214990848, 512000000, 550731776, 644972544

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
  4. Make mount points

sudo mkdir /mnt/data2
sudo mkdir /mnt/hdclone
sudo mount /dev/sdb2 /mnt/hdclone
sudo mount /dev/sdb3 /mnt/data2
  5. Add the mounting information to /etc/fstab.

$ sudo blkid
...
/dev/sdb1: UUID="23e3fcf9-2e01-4869-8bc4-3069e122d5d3" TYPE="ext4" PARTLABEL="boot" PARTUUID=...
/dev/sdb2: UUID="7e5eefff-68d7-40e0-a4c6-9cbdd65a6263" TYPE="ext4" PARTLABEL="hdclone" PARTUUID=...
/dev/sdb3: UUID="0971a9ee-aeab-4e76-b45a-fc63103ca489" TYPE="ext4" PARTLABEL="data" PARTUUID=...

$ sudo vi /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
...
# New internal 6TB drive on /dev/sdb
#UUID="23e3fcf9-2e01-4869-8bc4-3069e122d5d3" /boot        ext4 defaults 0 2
UUID="7e5eefff-68d7-40e0-a4c6-9cbdd65a6263"  /mnt/hdclone ext4 defaults 0 2
UUID="0971a9ee-aeab-4e76-b45a-fc63103ca489"  /mnt/data2   ext4 defaults 0 2
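Before rebooting, it is worth checking that the new entries mount cleanly (a sketch; findmnt --verify requires util-linux 2.31 or later):

```shell
# Mount everything in /etc/fstab that is not already mounted.
sudo mount -a

# Check /etc/fstab for syntax errors and unresolvable UUIDs.
sudo findmnt --verify
```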


Currently I have the following partitions:

$ lsblk
NAME                   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                      8:0    0 238.5G  0 disk
|-sda1                   8:1    0   243M  0 part /boot
|-sda2                   8:2    0     1K  0 part
`-sda5                   8:5    0 238.2G  0 part
  |-lubuntu--vg-root   253:0    0 222.3G  0 lvm  /
  `-lubuntu--vg-swap_1 253:1    0    16G  0 lvm  [SWAP]
sdb                      8:16   0 931.5G  0 disk
`-sdb1                   8:17   0 931.5G  0 part /mnt/Khalids_usb_drive
sr0                     11:0    1  1024M  0 rom

The internal hard drive is sda, a ~240GB drive. There is a /boot partition with the kernel, and then a physical partition sda2 which is subdivided into several partitions. We also see an externally mounted USB drive sdb. More information can be obtained using fdisk:

$ sudo fdisk -l
Disk /dev/sda: 238.5 GiB, 256060514304 bytes, 500118192 sectors
...
Device     Boot  Start       End   Sectors   Size Id Type
/dev/sda1  *      2048    499711    497664   243M 83 Linux
/dev/sda2       501758 500117503 499615746 238.2G  5 Extended
/dev/sda5       501760 500117503 499615744 238.2G 8e Linux LVM
...
Disk /dev/sdb: 931.5 GiB, 1000204885504 bytes, 1953525167 sectors
...
Device     Boot Start        End    Sectors   Size Id Type
/dev/sdb1        2048 1953521663 1953519616 931.5G  7 HPFS/NTFS/exFAT
...
Disk /dev/mapper/lubuntu--vg-root: 222.3 GiB, 238681063424 bytes, 466173952 sectors
...
Disk /dev/mapper/lubuntu--vg-swap_1: 16 GiB, 17116954624 bytes, 33431552 sectors
...

Remote Drives (NFS etc.)

Backup

One should always create backups of one's computer. This includes backups of the data and bootable backups. Here are some options.

OneDrive Free Client (incomplete)

This client is like Dropbox but integrates with OneDrive. It is currently approved for use at WSU.

sudo aptitude install -y libcurl4-openssl-dev libsqlite3-dev
sudo snap install --classic dmd && sudo snap install --classic dub
su admin
. /etc/profile   # Path to /snap/bin set here: needed for dmd
cd ~/repositories
git clone https://github.com/skilion/onedrive.git
cd onedrive
make
sudo make install

This installs the following files:

install -D onedrive /usr/local/bin/onedrive
install -D -m 644 onedrive.service /usr/lib/systemd/user/onedrive.service

Users then configure it as follows:

mkdir -p ~/.config/onedrive
cp ~admin/repositories/onedrive/config ~/.config/onedrive/config
mkdir ~/data2/OneDriveSync
ln -s ~/data2/OneDriveSync/ OneDrive
cat > ~/.config/onedrive/sync_list <<EOF
OneDriveSync/
EOF

This links the shared folder into my home directory as ~/OneDrive (which is the default configuration in ~/.config/onedrive/config). At this point, one should apparently be able to sync with a command like onedrive --resync. The authorization step appears to work, but visiting the link just returns a blank webpage rather than an appropriate "response uri":

$ onedrive --resync
Authorize this app visiting:

https://login.microsoftonline.com/common/oauth2/v2.0/authorize?client_id=22c49a0d-d21c-4792-aed1-8f163c982546&scope=files.readwrite%20files.readwrite.all%20offline_access&response_type=code&redirect_uri=https://login.microsoftonline.com/common/oauth2/nativeclient

Enter the response uri:

Duply/Duplicity

Duply is a frontend to Duplicity, which is recommended here.
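A typical Duply workflow, sketched from its documented interface (the profile name home is an arbitrary choice; SOURCE, TARGET, and GPG settings go in the generated conf file):

```shell
# Create a profile, then edit ~/.duply/home/conf to set SOURCE and TARGET.
duply home create

# Run a backup, inspect the archive chain, and list what was backed up.
duply home backup
duply home status
duply home list
```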

TL;DR

  • Identify your partitions and drives.

    sudo fdisk -l
  • Copy:

    sudo rsync -vaxhWE --no-compress --progress --delete --ignore-errors / /mnt/data2/swan_backup/14Mar2020

To partition the hard drives, you can use fdisk.

Timeshift (incomplete)

Timeshift is a GUI application for backing up system files (only the OS). It does not seem to work headless.

sudo apt-add-repository -y ppa:teejee2008/ppa
sudo apt-get update
sudo apt-get install timeshift


Mounting External Drives

sudo mkdir -p /mnt/Khalids_usb_drive
sudo mount -t ntfs /dev/sdb1 /mnt/Khalids_usb_drive
sudo mkdir -p /data/external_data
sudo mount -t ext4 /dev/sdc5 /data/external_data/

Duplicity (incomplete)

Duplicity is a command-line backup tool that works with Google Drive and OneDrive (though probably not without enabling apps for the latter). If you want a GUI front-end, install Déjà-Dup.

  • Enable a Google Drive API app.

    • Log in to console.developers.google.com with the appropriate account.

    • Create a new project. (I called mine Duplicity Backup.)

    • Select and Enable the Google Drive API.

    • Create Credentials:

      • Select OAuth client ID from the Create credentials menu. (Do not create a Service account key.)

      • Give your project a name as needed. (I use Duplicity Backup Client.)

      • Copy the credentials.

  • Create a .duplicity configuration directory. I am doing this as the admin user:

    su admin
    cd ~
    mkdir .duplicity
    cd .duplicity
    touch ignore
    touch credentials
  • Create an appropriate credentials file:

    # Duplicity Credentials file; -*-Shell-script-*-
    # dest = ~/.duplicity/credentials
    # Keep this as the 2nd line for mmf_init_setup
    client_config_backend: settings
    client_config:
      client_id: xxx.apps.googleusercontent.com
      client_secret: yyyy
    save_credentials: True
    save_credentials_backend: file
    save_credentials_file: gdrive.cache
    get_refresh_token: True
  • Install duplicity. I do this with a conda environment:

    # Environment for Duplicity
    # dest = ~/.duplicity/environment.yml
    name: app_duplicity
    channels:
      - defaults
      - conda-forge
    dependencies:
      - python
      - setuptools_scm
      - pydrive
      - pip
      - pip:
        - duplicity
    sudo apt-get install librsync-dev
    unset CFLAGS
    conda env update -f environment.yml --prune
  • Create a backup script. This one backs up my home directory to my google drive.

    #!/bin/bash
    # dest = ~/usr/local/bin/duplicity_backup_home
    # Keep this as the 2nd line for mmf_init_setup
    export GOOGLE_DRIVE_SETTINGS=~/.duplicity/credentials
    duplicity --exclude-filelist ~/.duplicity/ignore ~/ gdocs://m.forbes@wsu.edu/backups/swan_mforbes_current
  • Run the backup. I do this in a screen session:

    conda run -n app_duplicity duplicity_backup_home

    This will prompt you for a passphrase which will be used to encrypt your data.

RClone

Unlike Duplicity, RClone does not encrypt your data. This has the advantage that you can browse it online, but the disadvantage of lacking privacy. Apparently, Rclone v1.46 supports symlinks by copying them to text files (since Google Drive does not support symlinks).

Cheatsheet

  • rclone config: Configure remotes.

  • rclone listremotes: Show which remotes you have configured.

  • rclone ls <remote>:: List files on remote.

  • rclone sync -Pl <src> <dest>: Make <dest> match <src> changing only <dest>. Preserve/restore symlinks (-l).
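Because sync deletes files on <dest>, previewing a sync first is prudent (--dry-run is a standard rclone flag; the remote and paths here follow the examples in this section):

```shell
# Show what would be copied or deleted without changing anything.
rclone sync --dry-run -Pl ~ gwsu:swan/mforbes_current
```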

Users

  • As a user, configure your backup. Here I will copy my home directory ~ to my Google Drive.

    rclone config
    • I used a simple name (since you need to type this): gwsu.

    • For optimal performance, use a Google Application Client Id (see below).

    • I use permission 1, Full access (so rclone ls works).

    • I created a folder and specified the root_folder_id from the last part of the folder URL.

  • Backup:

    screen -R RClone   # Optional, but prevents hangup when you logout
    rclone sync -Pl --exclude ~/_trash ~ gwsu:swan/mforbes_current

    The -P flag shows progress, and the -l flag copies symlinks as files with .rclonelink as an extension. These will be restored when you copy back.

  • Restore: (Here I am restoring to /tmp so I don't clobber my actual home directory by mistake!)

    screen -R RClone   # Optional, but prevents hangup when you logout
    rclone sync -Pl gwsu:swan/mforbes_current /tmp/restored_home_directory
  • Here is an example of a script I include with a project to sync the contents of a mercurial repo to a team drive:

    # sync_to_google_team_drive.bash
    hg clone . /tmp/paper_soc_soliton
    rclone sync -Pl --exclude ".hg/**" /tmp/paper_soc_soliton \
        gwsu_EngelsForbesCollaboration:paper_soc_soliton

    The following will pull the contents from the drive, allowing you to compare them with the version-controlled copy:

    # sync_from_google_team_drive.bash
    hg clone . /tmp/paper_soc_soliton
    rclone sync -Pl --exclude ".hg/**" \
        gwsu_EngelsForbesCollaboration:paper_soc_soliton \
        /tmp/paper_soc_soliton

Performance (Google Application Client Id)

Google limits the rate at which applications can query its systems. Since all RClone users share the same application by default, you are strongly encouraged to create your own authenticated client rather than using RClone's. Do this by following the instructions below:

Note: I could not do this with my WSU account since it has been disabled, so I had to do this with my personal Google account or my UW account. For Team Drives you do not need to specify the Root ID: it will default to the drive.

This improved my download performance from ~300KB/s to ~3MB/s.
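The custom client can also be supplied per-invocation rather than stored with rclone config (these are standard flags of rclone's drive backend; the ID and secret here are placeholders):

```shell
rclone sync -Pl \
    --drive-client-id xxx.apps.googleusercontent.com \
    --drive-client-secret yyyy \
    ~ gwsu:swan/mforbes_current
```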

Admin

  • Install for everyone:

    curl https://rclone.org/install.sh | sudo bash

Microsoft Teams

It is a little non-obvious how to connect to a Teams drive. I had to run the rclone authorize "onedrive" command on my Mac where I was authenticated to my teams folder.

$ rclone config
...
name> mswsu_Phys521
...
Storage> onedrive
...
# Blank client_id and client_secret.  No advanced config.
Remote config...
y/n> n
For this to work, you will need rclone available on a machine that has
a web browser available.
Execute the following on your machine:
    rclone authorize "onedrive"
Then paste the result below:
# Not sure when this will expire with MFA
result> {"access_token":"eyJ0eXAiO...","expiry":"2021-04-21T11:47:35.052577-07:00"}
Choose a number from below, or type in an existing value
 1 / OneDrive Personal or Business
   \ "onedrive"
 2 / Root Sharepoint site
   \ "sharepoint"
 3 / Type in driveID
   \ "driveid"
 4 / Type in SiteID
   \ "siteid"
 5 / Search a Sharepoint site
   \ "search"
Your choice> 5   # This was the key.  Then search for something in your Teams
What to search for> Physics.521
Found 2 sites, please select the one you want to use:
0: Physics.521 (https://emailwsu.sharepoint.com/teams/PHYSICS.521)...
...
Chose drive to use:> 0
Found 4 drives, please select the one you want to use:
0: Class Files (documentLibrary)...
1: Class Materials (documentLibrary)...
2: Documents (documentLibrary)...
...
Chose drive to use:> 2
Found drive 'root' of type 'documentLibrary', URL: https://emailwsu.sharepoint.com/teams/PHYSICS.521/Shared%20Documents
Is that okay?
y) Yes
n) No
y/n> y


Network

To test connectivity, making a little echo server can be useful. This can be done with the netcat utility:

(server) $ nc -l 12345
(client) $ nc -c swan.physics.wsu.edu 12345

Now you can type on one or the other and should see messages if they are connected.

An echo server can be made with nc -l -p 2000 -c 'xargs -n1 echo', but differences in netcat versions (traditional vs BSD) make such solutions fragile (they don't work with the default versions installed in Ubuntu, for example). Another approach uses socat:

socat TCP4-LISTEN:12345,fork EXEC:cat

Installing ncat, you can make a server that will accept multiple connections:

ncat -l 2000 -k -c 'xargs -n1 echo'
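Either server can then be exercised from a client (a sketch; it assumes the ncat server above is running on swan and a netcat client is installed locally):

```shell
# Send one line and wait up to 2 seconds for the echoed reply.
echo "hello" | nc -w 2 swan.physics.wsu.edu 2000
```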

Fail2Ban

The fail2ban package implements a rather draconian policy of banning IPs that repeatedly fail to authenticate. This greatly improves security by limiting the ability of attackers to brute-force their way in when users have not chosen secure passwords.

Unlocking

If a legitimate user accidentally triggers a ban, they can either wait, or an admin can unban them with the following commands:

fail2ban-client status                               # See if anyone is banned, and get the jailname
fail2ban-client set <jailname> unbanip <ipaddress>   # Unban them

You will need to know the IP address of the person attempting to log in. They can find this with ip a or ifconfig -a or similar. You can also look on the server with iptables -n -L.

Example

As an example, I trigger a ban here from our cluster called kamiak:

(kamiak)$ ssh attacker@swan.physics.wsu.edu
attacker@swan.physics.wsu.edu's password:
Permission denied, please try again.
[email protected]'s password:
Permission denied, please try again.
...   # Repeat until ban is triggered
(kamiak)$ ssh attacker@swan.physics.wsu.edu   # Now connections are refused
ssh: connect to host swan.physics.wsu.edu port 22: Connection refused

Check to see if a ban is in effect:

(swan)$ sudo iptables -n -L
...
Chain f2b-sshd (1 references)
target     prot opt source       destination
REJECT     all  --  198.17.13.7  0.0.0.0/0    reject-with icmp-port-unreachable
...

See if 198.17.13.7 is our server (kamiak).

(kamiak)$ ip a
...
    inet 198.17.13.7...

It is. Check which jail - should be the sshd jail:

(swan)$ sudo fail2ban-client status
Status
|- Number of jail:  1
`- Jail list:       sshd

Okay, now unban.

(swan)$ sudo fail2ban-client set sshd unbanip 198.17.13.7
1
(swan)$ sudo iptables -n -L
...
Chain f2b-sshd (1 references)
target     prot opt source     destination
RETURN     all  --  0.0.0.0/0  0.0.0.0/0

Misc. Software

VisIt

VisIt is a visualization toolkit using VTK for analyzing 3D data. We use it for analysis of superfluids, but we need a custom plugin which we must compile to use.

Resources

$ ssh swan
$ su admin
$ lsb_release -a
...
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
...
$ mkdir -p ~/zips/visit
$ cd ~/zips/visit
# Make sure you download an appropriate version here.  I did 2.13.3 and 3.1.2
$ wget https://github.com/visit-dav/visit/releases/download/v3.1.2/visit3_1_2.linux-x86_64-ubuntu20.tar.gz
$ wget https://github.com/visit-dav/visit/releases/download/v3.1.2/visit-install3_1_2
$ bash visit-install3_1_2 3.1.2 linux-x86_64-ubuntu20 /data/apps/visit
$ wget https://github.com/visit-dav/visit/releases/download/v2.13.3/visit2_13_3.linux-x86_64-ubuntu18.tar.gz
$ wget https://github.com/visit-dav/visit/releases/download/v2.13.3/visit-install2_13_3
$ bash visit-install2_13_3 2.13.3 linux-x86_64-ubuntu18 /data/apps/visit

I make this available through a module:

sudo tee /usr/share/modules/modulefiles/visit > /dev/null <<EOF
#%Module4.4.1

proc ModulesHelp { } {
    puts stderr "\tVisIt visualization tool."
}

module-whatis "VisIt visualization tool."

# Add to back too so that it is always available.  We use this to provide
# Mercurial for example which we install in the default environment
prepend-path PATH /data/apps/visit/bin
EOF

Plugins

To read custom data, one needs to write a plugin. We have one with the W-SLDA code which I install as follows:

# Missing dependencies
sudo aptitude install cmake libpcre16-3
module load visit

# Get W-SLDA repo
git clone ssh://git@git2.if.pw.edu.pl/gabrielw/cold-atoms.git ~/repositories/WSLDA_cold-atoms
cat ~/repositories/WSLDA_cold-atoms/lib-wdata/visit-plugin/README.txt
cd ~/repositories/WSLDA_cold-atoms/lib-wdata/

# Build in a clean directory
mkdir _build; cd _build
rm -rf *
g++ -O3 -c ../wdata.c -fPIC
ar crf libwdata.a wdata.o
xml2cmake -v 3.1 -public -clobber ../visit-plugin/wdata.xml
mv CMakeLists.txt ../visit-plugin/
cmake -DCMAKE_BUILD_TYPE:STRING=Debug ../visit-plugin/
make

Mumble/Murmur

Mumble is an open source, low latency, high quality voice chat application, but it needs a server running somewhere (called Murmur). Here we install the server:

ssh swandocker
sudo add-apt-repository ppa:mumble/release
sudo apt-get update
sudo aptitude install mumble-server
sudo dpkg-reconfigure mumble-server

As discussed above, I added the user mumble-server to the ssl-cert group so that it can use the Let's Encrypt certificates. In the end, I have the following active lines in /etc/mumble-server.ini:

; /etc/mumble-server.ini
...
database=/var/lib/mumble-server/mumble-server.sqlite
...
ice="tcp -h 127.0.0.1 -p 6502"
icesecretwrite=
logfile=/var/log/mumble-server/mumble-server.log
pidfile=/var/run/mumble-server/mumble-server.pid
welcometext="<br />Welcome to the Forbes Group <b>Murmur</b> server.<br />Enjoy your stay!<br />"
port=64738
serverpassword=<ask me>
bandwidth=72000
users=100
messageburst=5
messagelimit=1
allowping=true
sslCert=/etc/letsencrypt/live/swan.physics.wsu.edu/fullchain.pem
sslKey=/etc/letsencrypt/live/swan.physics.wsu.edu/privkey.pem
uname=mumble-server

[Ice]
Ice.Warn.UnknownProperties=1
Ice.MessageSizeMax=65536

Remote VNC

Running applications over X11 is painful if your network is slow. Remote desktop control with Tiger VNC provides a much better experience in most cases. This can be installed with:

sudo aptitude install tigervnc-standalone-server tigervnc-common

Then I created a password and set my configuration to listen locally:

vncpasswd

This will be the password you give to your VNC Client to connect. (Note: I think it should be possible to run without a password but my Mac OS X VNC client does not seem to work without one.)

Now create a ~/.vnc/config file:

# VNC Server config file; -*-Shell-script-*-
# dest = ~/.vnc/config
# Keep this as the 2nd line for mmf_init_setup
session=lxqt
geometry=1920x1080
localhost      # Only listen locally - use an ssh tunnel to connect.
alwaysshared

Finally, ssh into your computer, forwarding port 5901:

# ~/.ssh/config
...
Host swanvnc
    Compression yes
    LocalForward 5901 localhost:5901
...

and start the server. Here I am running firefox:

ssh swanvnc                          # Forwards port 5901
tigervncserver -xstartup firefox

On my Mac, I then connect with:

(Mac OS X)$ open vnc://localhost:5901

Eventually, I kill my server:

tigervncserver -kill :*


Current Configuration

from IPython.display import HTML

def show(res):
    return HTML(
        '<pre style="font-size: 6pt;line-height: normal;">{}</pre>'.format("\n".join(res))
    )

Network

Network config:

swan.physics.wsu.edu
IP:           134.121.40.108
Gateway:      134.121.47.254
Subnet Mask:  255.255.248.0
DNS Servers:  134.121.139.10 and 134.121.80.36
WINS Servers: 134.121.143.28 and 134.121.143.29

Disks

Here is my working /etc/fstab. Note:

  • Use UUIDs as listed by blkid since /dev/sd* values may change from run to run depending on how the devices are detected.

  • Make sure that the USB drives have nofail so they do not halt the boot process if they are missing.

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/lubuntu--vg-root              /      ext4 errors=remount-ro 0 1
UUID=10f14fbd-9d53-43f2-be32-c9ab8c6aab99 /boot  ext2 defaults          0 2
/dev/mapper/lubuntu--vg-swap_1            none   swap sw                0 0
# New internal 6TB drive
#UUID=23e3fcf9-2e01-4869-8bc4-3069e122d5d3 /boot        ext4 defaults 0 2
UUID=7e5eefff-68d7-40e0-a4c6-9cbdd65a6263  /mnt/hdclone ext4 defaults 0 2
UUID=0971a9ee-aeab-4e76-b45a-fc63103ca489  /mnt/data2   ext4 defaults 0 2
# Khalid's drive
UUID=A6905C57905C3053 /mnt/Khalids_usb_drive auto nosuid,nodev,nofail 0 2
# MMF Seagate
UUID=6163f638-a6ca-495b-abc7-232026b7d32e /mnt/MMF_External ext4 nosuid,nodev,nofail 0 2

Note: the actual device labels change from boot to boot

  • /dev/sda: (Khalid's Disk)

  • /dev/sdb: Original 250GB internal harddrive.

    • /dev/sdb1: Boot sector (243M)

    • /dev/sdb2: Extended (238.2G)

    • /dev/sdb5: Linux LVM (238.2G)

  • /dev/sdc: New 6TB internal harddrive.

    • /dev/sdc1: (953M) EFI System

    • /dev/sdc2: (238.4G) Linux filesystem

    • /dev/sdc3: (5.2T) Linux filesystem

  • /dev/sdd: (External Seagate)


$ sudo fdisk -l 
Disk /dev/sda: 931.5 GiB, 1000204885504 bytes, 1953525167 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
Disklabel type: dos
Disk identifier: 0x542acb8f

Device     Boot Start        End    Sectors   Size Id Type
/dev/sda1        2048 1953521663 1953519616 931.5G  7 HPFS/NTFS/exFAT


Disk /dev/sdb: 238.5 GiB, 256060514304 bytes, 500118192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8f7f2947

Device     Boot  Start       End   Sectors   Size Id Type
/dev/sdb1  *      2048    499711    497664   243M 83 Linux
/dev/sdb2       501758 500117503 499615746 238.2G  5 Extended
/dev/sdb5       501760 500117503 499615744 238.2G 8e Linux LVM


Disk /dev/sdc: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E22CB7F1-B1F8-403F-9C09-6CF02E1392E1

Device         Start         End     Sectors   Size Type
/dev/sdc1       2048     1953791     1951744   953M EFI System
/dev/sdc2    1953792   501952511   499998720 238.4G Linux filesystem
/dev/sdc3  501952512 11721043967 11219091456   5.2T Linux filesystem




Disk /dev/mapper/lubuntu--vg-root: 222.3 GiB, 238681063424 bytes, 466173952 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdd: 3.7 TiB, 4000752599040 bytes, 7813969920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 636CB648-76E8-4FFD-BA07-D5873D03596D

Device          Start        End    Sectors   Size Type
/dev/sdd1          40     409639     409600   200M EFI System
/dev/sdd2      409640 1133222135 1132812496 540.2G Apple HFS/HFS+
/dev/sdd3  1133484280 2266296775 1132812496 540.2G Apple HFS/HFS+
/dev/sdd4  2266558920 3399882359 1133323440 540.4G Apple HFS/HFS+
/dev/sdd5  3399882752 7813969886 4414087135   2.1T Linux filesystem


Disk /dev/mapper/lubuntu--vg-swap_1: 16 GiB, 17116954624 bytes, 33431552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

$ lsblk -f
NAME                   FSTYPE      LABEL                     UUID                                   MOUNTPOINT
sda                                                                                                 
`-sda1                 ntfs        Seagate Backup Plus Drive A6905C57905C3053                       
sdb                                                                                                 
|-sdb1                 ext2                                  10f14fbd-9d53-43f2-be32-c9ab8c6aab99   /boot
|-sdb2                                                                                              
`-sdb5                 LVM2_member                           HQtaOr-nUK1-cshj-lty2-ZDc8-MAum-MwMBBo 
  |-lubuntu--vg-root   ext4                                  afe81057-f92c-47a6-b724-26e0ad1b2880   /
  `-lubuntu--vg-swap_1 swap                                  c1da1cdc-718c-47e3-92ed-3f34e6fca27b   [SWAP]
sdc                                                                                                 
|-sdc1                 ext4                                  23e3fcf9-2e01-4869-8bc4-3069e122d5d3   
|-sdc2                 ext4                                  7e5eefff-68d7-40e0-a4c6-9cbdd65a6263   /mnt/hdclone
`-sdc3                 ext4                                  0971a9ee-aeab-4e76-b45a-fc63103ca489   /mnt/data2
sdd                                                                                                 
|-sdd1                 vfat        EFI                       67E3-17ED                              
|-sdd2                 hfsplus     KSB NFS Medtner           b64f66fc-0a37-3bd9-96df-a15b01b3860e   
|-sdd3                 hfsplus     MMF NFS Medtner           ea008a86-12be-3278-a913-5182ad395ea9   
|-sdd4                 hfsplus     Data NFS Medtner          f3013eb6-af22-37ef-bb78-92f8e7d4ff8e   
`-sdd5                 ext4                                  6163f638-a6ca-495b-abc7-232026b7d32e   
sr0                                                                                                 

$ blkid
/dev/sda1: LABEL="Seagate Backup Plus Drive" UUID="A6905C57905C3053" TYPE="ntfs" PTTYPE="atari" PARTUUID="542acb8f-01"
/dev/sdb1: UUID="10f14fbd-9d53-43f2-be32-c9ab8c6aab99" TYPE="ext2" PARTUUID="8f7f2947-01"
/dev/sdb5: UUID="HQtaOr-nUK1-cshj-lty2-ZDc8-MAum-MwMBBo" TYPE="LVM2_member" PARTUUID="8f7f2947-05"
/dev/sdc1: UUID="23e3fcf9-2e01-4869-8bc4-3069e122d5d3" TYPE="ext4" PARTLABEL="boot" PARTUUID="24d9965c-595d-413f-9d28-1e8238441970"
/dev/sdc2: UUID="7e5eefff-68d7-40e0-a4c6-9cbdd65a6263" TYPE="ext4" PARTLABEL="hdclone" PARTUUID="ecd06215-d0df-4407-9b59-144e81a9ee2c"
/dev/sdc3: UUID="0971a9ee-aeab-4e76-b45a-fc63103ca489" TYPE="ext4" PARTLABEL="data" PARTUUID="4545ada2-dc89-47d8-b1f8-6ac72b15f02f"
/dev/mapper/lubuntu--vg-root: UUID="afe81057-f92c-47a6-b724-26e0ad1b2880" TYPE="ext4"
/dev/sdd1: LABEL="EFI" UUID="67E3-17ED" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="4ca9a0a1-8b8f-4f13-819d-d28528a7e5e0"
/dev/sdd2: UUID="b64f66fc-0a37-3bd9-96df-a15b01b3860e" LABEL="KSB NFS Medtner" TYPE="hfsplus" PARTLABEL="Untitled" PARTUUID="2ad2eaa6-5064-4d18-8451-6797b426d2b7"
/dev/sdd3: UUID="ea008a86-12be-3278-a913-5182ad395ea9" LABEL="MMF NFS Medtner" TYPE="hfsplus" PARTLABEL="MMF NFS Medtner" PARTUUID="105629a2-a227-4aff-a726-85db2a167db1"
/dev/sdd4: UUID="f3013eb6-af22-37ef-bb78-92f8e7d4ff8e" LABEL="Data NFS Medtner" TYPE="hfsplus" PARTLABEL="Data NFS Medtner" PARTUUID="99311db6-7ca1-495b-88ff-5680325de291"
/dev/sdd5: UUID="6163f638-a6ca-495b-abc7-232026b7d32e" TYPE="ext4" PARTUUID="9a718cb6-d326-5a45-b4ca-e5748c45414d"
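Output like the above is awkward to scan by eye. A small helper (an illustrative sketch, not part of the original setup) can turn each blkid line into a device name plus a dictionary of its key/value fields:

```python
import re

def parse_blkid_line(line):
    """Parse one line of `blkid` output into (device, {key: value})."""
    device, _, rest = line.partition(":")
    # blkid prints fields as KEY="value" pairs after the device name.
    fields = dict(re.findall(r'(\w+)="([^"]*)"', rest))
    return device.strip(), fields

dev, fields = parse_blkid_line(
    '/dev/sdb1: UUID="10f14fbd-9d53-43f2-be32-c9ab8c6aab99" TYPE="ext2" PARTUUID="8f7f2947-01"'
)
# dev == "/dev/sdb1"; fields["TYPE"] == "ext2"
```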

Software

from IPython.display import HTML

def show(res):
    return HTML('<pre style="font-size: 6pt;line-height: normal;">{}</pre>'.format("\n".join(res)))

res = !ssh swan dpkg -l | grep ii
show(res)

Issues

22 Mar 2020: "failed to connect to lvmetad"

After upgrading the Ubuntu 18.10 kernel (Linux 4.18.0-25), the computer failed to boot, stopping with the error message "failed to connect to lvmetad". Some searching suggested that the NVIDIA drivers might be the cause, so I tried removing them with sudo apt-get purge nvidia-*. This did not help.

To debug this, I removed the splash screen and quiet option from grub by changing the following in /etc/default/grub:

# /etc/default/grub
...
#GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX_DEFAULT=""

then regenerated the boot configuration with sudo update-grub.

After inspecting the logs, I noticed that there were issues mounting the new disk: the PARTUUIDs I had listed in /etc/fstab were incorrect. These are now fixed.
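A mismatch like this can be caught mechanically before rebooting. The following sketch (a hypothetical helper, not part of the original notes) compares the UUID=/PARTUUID= device specs in an fstab against the identifiers that blkid actually reports, and flags any entry that refers to an ID no attached disk has:

```python
import re

def stale_fstab_specs(fstab_text, blkid_text):
    """Return fstab device specs (UUID=.../PARTUUID=...) absent from blkid output."""
    # Collect every UUID and PARTUUID value that blkid printed.
    known = set(re.findall(r'(?:PART)?UUID="([^"]*)"', blkid_text))
    stale = []
    for line in fstab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        spec = line.split()[0]  # first column is the device spec
        m = re.match(r"(?:PART)?UUID=(\S+)", spec)
        if m and m.group(1) not in known:
            stale.append(spec)
    return stale

blkid_out = '/dev/sdc3: UUID="0971a9ee-aeab-4e76-b45a-fc63103ca489" TYPE="ext4" PARTUUID="4545ada2-dc89-47d8-b1f8-6ac72b15f02f"\n'
fstab = (
    "# /etc/fstab\n"
    "PARTUUID=4545ada2-dc89-47d8-b1f8-6ac72b15f02f /mnt/data2 ext4 defaults 0 2\n"
    "PARTUUID=deadbeef-0000-0000-0000-000000000000 /mnt/old   ext4 defaults 0 2\n"
)
# stale_fstab_specs(fstab, blkid_out)
# -> ['PARTUUID=deadbeef-0000-0000-0000-000000000000']
```

On the machine itself one would feed it the real files, e.g. stale_fstab_specs(open("/etc/fstab").read(), subprocess-captured blkid output).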