Draft Forbes Group Website (built with Nikola). The official site is hosted at:
License: GPL3
ubuntu2004
Table of Contents
- 1 Swan Policies
- 2 Install
- 3 Hardware Details
- 4 OS
- 5 CUDA
- 6 Daemons
- 7 Config Files
- 8 Conda Environments
- 9 Web Server
- 10 Version Control Hosting
- 11 Docker
- 12 AWS Command Line
- 13 Disk Usage
- 14 Partition Scheme
- 15 Remote Drives (NFS etc.)
- 16 Backup
- 17 Network
- 18 Misc. Software
- 19 Current Configuration
- 20 Issues
Some notes about installing Linux on a Dell minitower with a GPU, along with user policies. Note: this file is evolving into the following collaborative set of notes on CoCalc:
Swan Policies
Disk
/home/${USER}
: Your home folder. Keep the size of your home directory minimal (<2GB). I would like to implement automatic backups of these (not yet implemented), so important information should be kept here, but no working or temporary data.

/data/users/${USER}
: Your personal user space on the main hard drive. Keep this as small as possible. I would also like to keep this backed up (not yet implemented).

/data2/users/${USER}
: Your personal user space on the large external hard drive. This will not be backed up.

In addition there is shared space which should be accessible to everyone in the students group.

/data2/shared/
: Shared space (not backed up).
To find out which directories are taking up space, the following command is useful:
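    # one option: summarize first-level directory sizes, sorted
    du -h --max-depth=1 /data2/users/${USER} | sort -h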
Please symlink your ~/.conda directory to /data2 so that when you create environments, you don't overuse space:
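    # a sketch, using the personal space described above: move any
    # existing ~/.conda first, then link it
    mkdir -p /data2/users/${USER}
    mv ~/.conda /data2/users/${USER}/.conda
    ln -s /data2/users/${USER}/.conda ~/.conda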
Install
To install, I used a USB:
Download the Lubuntu ISO
Check for corruption (18.04 Release notes):
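    # e.g., with the SHA256SUMS file from the release page:
    sha256sum -c SHA256SUMS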
Make a bootable USB drive. (From my Mac I used Disk Utility to erase the drive, then wrote the image with dd.)
Hardware Details
Dell Precision T1700 Minitower
Intel Xeon CPU E3-1241 v3 @ 3.50GHz quad-core processor with Hyper-Threading
OS
Decide which version of Linux you want.
Distribution: The first choice is which Linux distribution you want to use. I chose Ubuntu since it is the most popular and has lots of support. Another good option might be openSUSE.
Flavour: The next choice is flavour, which mostly affects the GUI. Since I run a (mostly-headless) server, I chose to start with Lubuntu, which I initially installed from a thumb-drive. This is fairly lightweight and minimal.
Release: The next choice is which version of Ubuntu to upgrade to. As discussed in "How do I decide what version of Ubuntu to install", you should basically choose either the highest revision or the highest LTS revision as listed on the Official Ubuntu Releases site. The LTS versions have Long Term Support and hence require fewer updates. Another consideration might be which versions are supported by CUDA, which would probably favour an LTS release.
After a few upgrades (18.04.2 LTS -> 18.10) I have the following:
There were a few things I had to do:
Manually configure the network interface (static IP in the department).
Install an ssh daemon:
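    # the standard package:
    sudo apt-get install openssh-server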
Configuration
I added some modification to the default bash initialization files. These will apply for all users and add the following features (see the Config Files section for the exact implementation).
- Alias source with source_if_exists, which does not fail if the file does not exist.
- Allow for tracing of init files by touch ~/.trace.
- Provide some useful bash completions (tab completion of various commands).
- Set up the modules system (see the Modules section).
- Add the system conda environments for the user and provide the alias j for starting Jupyter notebooks with appropriate port forwarding.
- Use etckeeper to keep track of configuration:
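    # a sketch: etckeeper commits /etc changes into version control
    sudo apt-get install etckeeper
    sudo etckeeper init
    sudo etckeeper commit "Initial commit"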
Software
For (un)installing software use apt, apt-get, or aptitude:

aptitude
: Needs to be installed, but provides more support and some more user-friendly features.

apt
: A nicer interface than apt-get that is apparently recommended over apt-get, but maybe can't do all the little things apt-get can.

apt-get
: The low-level tool. I heard that apt-get apparently has better dependency resolution than apt, but I cannot find this reference now.

These all use dpkg under the hood to do the actual installation. dpkg is also useful if you want to see what is installed (see below). The list of sources is in /etc/apt/sources.list, which may need to be updated.
I created a user admin for managing software (which is installed in /data). This directory is owned by admin.
Now I install conda as follows:
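    # a sketch; the install prefix /data/apps/conda is my assumption
    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
    bash Miniconda3-latest-Linux-x86_64.sh -b -p /data/apps/conda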
I install Mercurial in this root conda environment so I can also install a few other tools with pip:
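    conda install mercurial
    # the extra pip tools are not listed here; these are hypothetical examples
    pip install hg-evolve hg-git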
Now create the environments:
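    # e.g. (names and package specs are my assumption):
    conda create -n jupyter python=3 jupyter numpy scipy matplotlib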
Apt Repositories
Modules
I manage the software with modules. In particular, I provide the conda modulefile.

Note: Ubuntu also has update-alternatives. See here: https://askubuntu.com/a/26518
Users and Groups
To list groups:
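    groups           # groups for the current user
    getent group     # all groups on the system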
Details about who belongs to which group can be found in the file /etc/group.
To create new users:

- sudo useradd -m <name>: The -m option creates their home directory.
- Give the user a unique port in /etc/nbports for them to use when running jupyter.

To create a new group:

    sudo addgroup students

To add users to a group:

    sudo usermod -a -G students <name>

To make a shared directory for a group:
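    # a sketch using the permissions described in the notes below
    sudo mkdir -p /data2/shared
    sudo chgrp -R students /data2/shared
    sudo chmod -R g+rwsX /data2/shared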
Similarly, to make user folders which are not shared by default:
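    # a sketch; paths follow the policy section above
    sudo mkdir -p /data2/users/<name>
    sudo chown <name>:<name> /data2/users/<name>
    sudo chmod 700 /data2/users/<name>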
Notes: Users need to log out and log back in for these permissions to become effective. The +X here sets execute permission for directories but not files. The +s causes new files created by users to inherit the appropriate group.
Personal Configuration
Update Kernel
The recommended way to update your kernel is:
During the upgrade, I was asked to choose between LightDM and SDDM. I chose the former.
Difficulties
Apparently, upgrading your release is difficult. The recommended do-release-upgrade command only works in certain cases, e.g. while the target version is still supported.
If you run into difficulties, here is a more systematic way to proceed. I was having difficulty because my release was not supported. I am following the discussion here to upgrade from 18.10 to 20.04LTS:
Note: eoan is 19.10, which I don't want. See Ubuntu Releases for the name you want. I want 20.04 LTS, which is called focal (Focal Fossa).
(optional) Backup:
First find out which version you have installed:
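    lsb_release -a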
Update /etc/apt/sources.list to include the correct sources:
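    # the standard focal entries (a sketch; mirror choice is yours)
    deb http://archive.ubuntu.com/ubuntu focal main restricted universe multiverse
    deb http://archive.ubuntu.com/ubuntu focal-updates main restricted universe multiverse
    deb http://security.ubuntu.com/ubuntu focal-security main restricted universe multiverse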
(optional) Cleanup any unused or obsolete kernels on /boot: (I started with a small boot partition... there is not space for more than a couple of images).
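    # list installed kernels, then purge old ones (version is an example)
    dpkg -l 'linux-image*'
    sudo apt-get purge linux-image-4.18.0-24-generic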
Update your current os:
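    # presumably the usual sequence:
    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get dist-upgrade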
The last command gave some errors. I had to do the following:

- sudo vi /etc/mercurial/hgrc.d/mmfhg.rc: Remove any extensions that were causing problems.
- sudo rm /etc/apt/apt.conf.d/50unattended-upgrades.ucftmp: See here.
- Preparing to unpack .../at_3.1.23-1ubuntu1_amd64.deb ... Failed to reload daemon: Access denied: See here and here.
- WARNING: PV /dev/sdb5 in VG lubuntu-vg is using an old PV header, modify the VG to update.
- Error 24 : Write error : cannot write compressed block: See here. I again ran out of space on /boot so had to remove old images (that were somehow regenerated):
After this, I did a cleanup:
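    # presumably:
    sudo apt-get autoremove --purge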
This removed some important packages, so I reinstalled them after. Then rebooted.
After this was done, I was still getting an incorrect MOTD and had to remove the cached file:
Old
If you run into difficulties, here is a more systematic way to proceed. I was having difficulty because my release was not supported:
First find out which version you have installed:
Update /etc/apt/sources.list to include the correct sources:
Update your current os:
Old
I used the following notes to upgrade: "2 Ways to Upgrade From Ubuntu 18.04 To 18.10":

Note: My /boot partition is small, so I needed to remove old kernels first. Be sure to uninstall the packages rather than just deleting the files.
See also:
dd
CUDA
The simplest solution is to use Conda to install the appropriate toolkit:
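    # e.g. (channel and version pinning vary; this is the common route):
    conda install cudatoolkit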
This will bring in the best supported version of CUDA, needed if you want to use CuPy, which does not always support the latest toolkit.
If you need to install it for your operating system, then the next easiest might be
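    # presumably the stock Ubuntu package:
    sudo apt-get install nvidia-cuda-toolkit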
but this might install an older version of the toolkit.
You can add the NVIDIA repository as:
Another option is to try to get the driver directly from Nvidia, but I have found some conflicts with this. Here is my attempt:
I installed the CUDA toolkit as directed on the CUDA website. Note: you must choose a version compatible with your kernel, corresponding to the table listed there:
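    # check your kernel and distribution:
    uname -r
    lsb_release -a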
This means I have Ubuntu 18.10 with kernel 4.18.0. Follow the instructions on the CUDA Toolkit Download page to figure out which version you should get:
Note: please follow our restart policy:
Optional: remove old versions:

    sudo apt-get purge "^cuda" "^nvidia" "^libnvidia" "^libcuda"
    sudo apt-get autoremove

Optional: update packages:

    sudo apt-get update
    sudo apt-get upgrade

Install the repository and the toolkit:

    sudo dpkg -i cuda-repo-ubuntu1810_10.1.168-1_amd64.deb
    sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1810/x86_64/7fa2af80.pub
    sudo apt-get update
    sudo apt-get install cuda
Restart as suggested:

    sudo shutdown -r now
Even Older
The correct versions of GCC etc. were already installed.
Now install the various python tools:
Now you can check which version you have:
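    nvcc --version
    nvidia-smi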
Daemons
The service wrapper allows you to inspect and control services or daemons such as Docker (docker) or your HTTP server (apache2). This adds entries to /etc/init.d.
Most Linux services are managed with systemctl. To see which services are enabled:
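    systemctl list-unit-files --state=enabled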
Config Files
The user's default shell is specified in /etc/passwd and can be set with:
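    sudo chsh -s /bin/bash <name>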
The default is /bin/bash, but /bin/sh might be useful for users such as docker which are intended to run services. To provide a sane initial environment, I define the following initialization files, which go in
These then source files in /etc/profile.d/. One feature of these files is that they look for a .trace file in the user's home directory and show the startup sequence.
Upon login, I now see:
Bash
Modules
The module command is set up by some scripts deposited in /etc/profile.d/modules.sh, which does not get executed for non-login shells. This can be a problem, so we explicitly source this in /etc/bash.bashrc.
Mercurial
I provide some mercurial goodies with my mmfhg package which I install as follows:
Python, PIP etc.
Conda Environments
I install a base conda environment as the admin user for everyone to use. This is a Python 3 environment with the following packages installed:

- mercurial: Now supports Python 3 (as of version 5.2).
- git-annex: Support for archiving large files.
- nbstripout: Cleaning notebooks. Will be replaced with jupytext soon, which needs to be installed in the jupyter environment.
- argcomplete: Conda tab completion. Add the following to your .bashrc file (see the sketch after this list):
- conda-devenv: Allows including environment.yml files.
- conda-tree: Allows you to visualize dependencies.
- conda-verify, conda-build, anaconda-client: Building conda recipes and uploading them to anaconda cloud.
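For argcomplete, something like the following line in .bashrc should work (a sketch based on the argcomplete documentation):

    eval "$(register-python-argcomplete conda)"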
Here are some custom project-specific environments that required more recent versions of packages than the generic environments listed above.
Web Server
To host a website I followed the instructions on this page:
To start I just installed apache:
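    sudo apt-get install apache2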
As pointed out in the comments, if you need the full LAMP stack, the configuration process can be simplified:
This points the server to /var/www/html. I then created a personal space:
The server can be restarted using service:
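    sudo service apache2 restart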
Apache Configuration
The configuration files for Apache are in /etc/apache2/.
Let's Encrypt and Certbot
Let's Encrypt uses Certbot as a tool for enabling free site certification. This requires using a supported Ubuntu release.
The commented-out commands were needed earlier (pre Ubuntu 19.10) but should no longer be needed. Without removing this repo I ran into the following error during apt update:
Once the certificates are created, we might like to make them accessible to other applications like murmurd. To do this, we add the appropriate users (which run the services) to the ssl-cert group and change the permissions of the certificates. I did this following the suggestion here by modifying /etc/letsencrypt/cli.ini.
To add the user running the mumble server to this group:
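    # the service user is mumble-server (see the Mumble section below)
    sudo usermod -a -G ssl-cert mumble-server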
To find the user, look at the output of sudo cat /etc/passwd.
References
I would like to be able to host files that can be served by other sites like https://viewer.pyvista.org. This requires enabling cross-origin resource sharing (CORS) which can be done by:
1. Make sure that mod_headers is loaded.
2. Add Header set Access-Control-Allow-Origin "*" to the appropriate directories in /etc/apache2/mods-enabled/headers.conf or the corresponding /var/www/html/Public/.htaccess file (the former is recommended).
3. Check the configuration and restart the service (see the sketch after this list).
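A sketch of the corresponding commands:

    sudo a2enmod headers            # 1. load mod_headers
    sudo apachectl configtest       # 3. check the configuration
    sudo systemctl restart apache2  # 3. restart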
This allows, for example, the following to work:
Version Control Hosting
With BitBucket "sunsetting" their support for Mercurial, we needed to find a new option. Here we explore options for self-hosting.
Kallithea (incomplete)
Kallithea needs a database, so we make a directory for this in /data/kalithea and store it there.

This runs Kallithea on port 5000, which could be accessed by users with ssh tunnelling.
Another alternative is Heptapod. This is a mercurial interface to a friendly fork of GitLab CE intended to ultimately bring mercurial support to GitLab. This can be installed with Docker as discussed below.
As of 1 July 2020: This is my primary alternative to Bitbucket hosted as discussed below. We will probably ultimately host this on an AWS instance.
As of 14 March 2020:
Heptapod is implemented under the hood with git.
Heptapod 0.8 supports import of Bitbucket Pull Requests.
Docker
Several packages (such as Heptapod and CoCalc) require a rather complete system, so are easiest to install using Docker containers. Here we discuss how to set these up. We are using Rootless mode which seems to work well and prevents the need for providing docker with root access.
Note: Be sure to completely purge any previous root-enabled version of Docker before proceeding.
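A sketch of the rootless install (the script URL is from the Docker docs; "lingering" is what lets the services below persist):

    # as the docker user:
    curl -fsSL https://get.docker.com/rootless | sh
    # as an admin, allow the docker user's services to run without an active login:
    sudo loginctl enable-linger docker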
This allows the docker user to add processes to start services that will start at login.

Add the appropriate environmental variables to ~docker/.bashrc:
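    # a sketch following the rootless-docker docs; exact paths depend on the install
    export PATH=$HOME/bin:$PATH
    export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock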
Docker Cheatsheet
Here are some useful commands:
- docker pull: Pulls an image.
- docker create: Creates a container from an image.
- docker start: Starts running a container.
- docker stop: Stops a running container.
- docker attach: Attach to a running container.
- docker ps -a: List all containers (both running and stopped).
- docker images: List all images.
- docker rm: Remove a container.
- docker rmi: Remove an image.
- docker inspect: Lots of information about a container.
- docker exec -it <name> /bin/bash: Connect to the specified container and run bash (like ssh-ing into the VM).

These appear in documentation, but I do not use them:

- docker run: This is equivalent to docker create + docker start + docker attach. This can only be executed once: after the container is created, one cannot use subsequent calls to run to change, for example, port assignments. It is probably most useful for short foreground processes in conjunction with the --rm option.
Issues: I originally had a bunch of errors because of interference with the previously installed docker version (not rootless). These went away once I did sudo apt-get purge docker docker.io.

- Aborting because rootful Docker is running and accessible. Set FORCE_ROOTLESS_INSTALL=1 to ignore.
- Failed to start docker.service: Unit docker.socket failed to load: No such file or directory.
So I stopped the root docker service (from a previous install) and removed this file:
After resolving these issues, I was having the following issue when trying to run the server with systemctl:
Heptapod
Heptapod is a service providing Mercurial access to GitLab CE. When running the public server, we host it here:
Here we describe how to run Heptapod in a Docker container. This is a service based on GitLab CE that provides a backend with issue tracking etc. for Mercurial. As above, I have created a docker user account on swan. First I login to this, then make some directories for the server data in /data2/docker/heptapod. Then I pull the docker image.
Now we pull the heptapod image and start a couple of containers:

heptapod-local
: Only listens on local ports. To use this, users must login with ssh and forward ports appropriately so they can connect (see below).

heptapod-public
: Listens on public ports. This exposes Heptapod to the world, which may be a security risk. We do this to allow "weak" collaborators access, or to enable transferring repositories from Bitbucket.
Now we can run whichever one we want:
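    # a sketch, assuming the containers were created as above:
    docker start heptapod-local    # or: docker start heptapod-public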
Once started, I initialized a mercurial repository in the configuration directory so I can keep track of configuration changes:
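    # a sketch; /etc/gitlab in the container is mapped to ~docker/srv/config
    cd ~docker/srv/config
    hg init .
    hg add
    hg commit -m "Initial configuration"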
Debugging
Look at the current logs with the following:
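    docker logs -f heptapod-local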
Heptapod Backup (incomplete)
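Presumably the standard Omnibus GitLab backup command, run inside the container:

    docker exec -t heptapod-local gitlab-backup create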
This will put a file in the container, which appears on the host via the exported volume:

    /var/opt/gitlab/backups/1593678341_2020_07_02_12.10.11_gitlab_backup.tar
    ~/srv/data/backups/1593678341_2020_07_02_12.10.11_gitlab_backup.tar
Optionally, the backup program can upload this to a remote storage location.
Another option is to back up the repositories. I use rclone to copy these to my Google Drive, to a remote called gwsu_backups, from my docker account on swan, using the root_folder_id corresponding to the folder My Drive/backups/RClone/swan/repo_backup.
HTTP Redirect
Note: SSL does not yet work with non-standard ports... so I am using HTTP only. I have randomly chosen ports 11080, 11443 and 11022 for HTTP, HTTPS, and SSH access. These are not very memorable, so it would be nice to redirect https://swan.physics.wsu.edu/heptapod to https://swan.physics.wsu.edu:11443. To do this, we simply add a Redirect /heptapod https://swan.physics.wsu.edu:11443/ statement to one of the Apache config files:
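    # e.g. in the relevant VirtualHost (the file choice is mine):
    Redirect /heptapod https://swan.physics.wsu.edu:11443/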
Don't forget to restart the server:
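    sudo systemctl restart apache2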
Bitbucket Import
Enable OAuth2 integration on Bitbucket. I used my public settings http://swan.physics.wsu.edu/heptapod/users/sign_in.
Note: I ran into a redirect_uri issue because I was using my alias http://swan.physics.wsu.edu/heptapod/users/auth, but in my configuration I used the http://swan.physics.wsu.edu:9080/users/auth form. If you look at the URL sent, it includes the redirect_uri, which must match.
Edit the /etc/gitlab/gitlab.rb file on the server. Since we mapped /etc/gitlab to ~docker/srv/config, we can edit it there without connecting.
Start the public server, or reconfigure GitLab:
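    # a sketch:
    docker start heptapod-public
    # or, if it is already running:
    docker exec heptapod-public gitlab-ctl reconfigure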
Register for an account on our Heptapod instance.
Login.
Import new project from Bitbucket Cloud.
References
Omnibus GitLab Instructions: These are the instructions for running the GitLab Docker container. The Heptapod container is based on this, so should function similarly.
Heptapod as a Bitbucket replacement: Instructions on how to import projects from bitbucket.
Issues
Some imports are broken.

- Cloning links are incorrect: http://swan.physics.wsu.edu/mforbes/mmfutils mmfutils_heptapod. Probably need to update the hostname to include the port and/or the /heptapod alias.
- Cloning from http://swan.physics.wsu.edu:9080/mforbes/mmfutils mmfutils_heptapod does not work.
- Cloning from ssh://[email protected]:9022/mforbes/mmfutils works on swan but not from outside.
- Cloning from ssh://git@localhost:9022/mforbes/mmfutils works with SSH tunnel.
Discourse
Edit the generated containers/app.yml file. I am trying to use Gmail with a custom alias [email protected], which I registered under my Gmail account settings.
Mailjet: I managed to
Notes:

- I could not use [email protected] as the SMTP user here since I cannot login to gmail with this.
- Gmail did not work: probably have to use an App password since two-factor authentication is enabled.
- I had to use an absolute path for the host /home/docker: ~ did not work.
- I also tried using Mailjet with - exec: rails r "SiteSetting.notification_email='[email protected]'" but this did not seem to activate either (I was expecting Mailjet to send me an activation email to make sure...).
After editing these, I was able to continue after making these directories:
Not Working: Discourse is running, but not able to send emails.
HTTP Redirect
I have randomly chosen ports 10080 and 10443 for HTTP and HTTPS access. These are not very memorable, so it would be nice to redirect https://swan.physics.wsu.edu/discourse to https://swan.physics.wsu.edu:10443. To do this, we simply add a Redirect /discourse https://swan.physics.wsu.edu:10443/ statement to one of the Apache config files, as above:
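    Redirect /discourse https://swan.physics.wsu.edu:10443/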
Don't forget to restart the server:
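    sudo systemctl restart apache2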
References:
CoCalc
CoCalc can also be installed with docker. I created the images with the following file:
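Presumably something along these lines (a sketch based on the cocalc-docker README; the volume path is my choice):

    docker run -d --name=cocalc -v /data2/docker/cocalc:/projects -p 9443:443 sagemathinc/cocalc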
These listen on port 9443. Note: you must connect with https://localhost:9443, not with HTTP.
Issues
New project stuck on "Loading..."
Nextcloud (incomplete)
Open source replacement for Google Cloud etc. There is a docker image.
Incomplete because this needs MySQL etc. and I don't want to figure this out yet.
AWS Command Line
Related to docker: if you need to build images for deployment on AWS, you will need the aws-cli:
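    # e.g. with pip:
    pip install awscli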
Disk Usage
To see how much disk space we have, use df:
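    df -h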
To see where you are using disk space, use du:
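    # first-level summary, sorted by size
    du -h --max-depth=1 . | sort -h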
Partition Scheme
There are some important partitions and issues related to choice of partitions.
/boot
: This is where the kernel lives. I originally made it 256MB, but then ran into issues when upgrading the kernel because I did not have enough space to download the new kernel while keeping the old kernel. I recommend using 512MB or 1GB if you have space so you can keep a few backup kernels. See "What is the recommended size for a linux boot partition?" for a discussion.

/
: This is the root partition for the OS. It is where all of the operating system files get installed.

swap
: Ubuntu recommends that you include a swap partition that matches your RAM, but it seems that this recommendation is for systems that need to hibernate. For a desktop, swap files might be better since they can grow.
Installing a New Drive
I installed a new internal hard drive and decided on the following partition scheme:

/boot
: 1GB. I intend to use this to try out OS upgrades, and so that this drive can be used as a bootable backup.

/mnt/hdclone
: 256GB. Intended to be a backup clone of the internal hard drive with the OS and home directories.

/mnt/data2
: Remaining data partition.
To do this I first ran parted to create the partitions, then ran mkfs.ext4 to format them (a sketch of all the steps follows this list):

1. Identify the appropriate disk. Here we see the two internal drives: the original 256G drive /dev/sda and the new 6TB drive /dev/sdb. (Other drives are also listed but have been omitted.)
2. Create the partitions.
3. Format the partitions after double-checking the device names.
4. Make mount points.
5. Add the mounting information to /etc/fstab.
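A sketch of the corresponding commands (partition boundaries and names are my reconstruction from the blkid output below):

    sudo parted /dev/sdb mklabel gpt
    sudo parted /dev/sdb mkpart boot ext4 1MiB 1GiB
    sudo parted /dev/sdb mkpart hdclone ext4 1GiB 257GiB
    sudo parted /dev/sdb mkpart data ext4 257GiB 100%
    sudo mkfs.ext4 /dev/sdb1
    sudo mkfs.ext4 /dev/sdb2
    sudo mkfs.ext4 /dev/sdb3
    sudo mkdir -p /mnt/hdclone /mnt/data2
    # /etc/fstab entries use UUIDs from blkid, e.g.:
    # UUID=0971a9ee-aeab-4e76-b45a-fc63103ca489 /mnt/data2 ext4 defaults 0 2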
References
Currently I have the following partitions:

The internal hard drive is sda, which is a ~240GB drive. There is a /boot partition with the kernel and then a physical partition sda2 which is subdivided into several partitions. We also see an externally mounted USB drive sdb. More information can be obtained using fdisk:
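    sudo fdisk -l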
Remote Drives (NFS etc.)
Backup
One should always create backups of one's computer. This includes backups of the data and bootable backups. Here are some options.
OneDrive Free Client (incomplete)
This client is like Dropbox but integrates with OneDrive. It is currently approved for use at WSU.
This installs the following files:
Users then configure it as follows:
This makes a shared folder in my home directory at ~/OneDrive (which is the default configuration in ~/.config/onedrive/config). At this point, one should apparently be able to sync with a command like onedrive --resync. This appears to work, but ends up just returning a blank webpage rather than an appropriate "response uri":
TL;DR
Identify your partitions and drives.
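    lsblk -f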
Copy:
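    # a sketch only; <src> and <dest> are placeholders -- double-check
    # the device names with lsblk before running anything like this!
    sudo dd if=/dev/<src> of=/dev/<dest> bs=4M status=progress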
To partition the hard drives, you can use fdisk.
Timeshift (incomplete)
Timeshift is a GUI application for backing up system files (only the OS). It does not seem to work headless.
References
Mounting External Drives
Duplicity (incomplete)
Duplicity is a command-line backup tool that works with Google Drive and OneDrive (though probably not without enabling apps for the latter). If you want a GUI front-end, install Déjà Dup.
Enable a Google Drive API app:

1. Log in to console.developers.google.com with the appropriate account.
2. Create a new project. (I called mine Duplicity Backup.)
3. Select and Enable the Google Drive API.
4. Create Credentials:
   - Select OAuth client ID from the Create credentials menu. (Do not create a Service account key.)
   - Give your project a name as needed. (I use Duplicity Backup Client.)
   - Copy the credentials.

Create a .duplicity configuration directory. I am doing this as the admin user:

Create an appropriate credentials file:

Install duplicity. I do this with a conda environment:
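    # a sketch; duplicity installs with pip inside a dedicated environment
    conda create -y -n duplicity python=3
    conda activate duplicity
    pip install duplicity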
Create a backup script. This one backs up my home directory to my Google Drive.

Run the backup. I do this in a screen session:
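    # a sketch; <target-url> stands in for the configured Google Drive target
    screen -S backup
    duplicity ~/ <target-url>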
This will prompt you for a passphrase which will be used to encrypt your data.
RClone
Unlike Duplicity, RClone does not encrypt your data. This has the advantage that you can browse it online, but the disadvantage of lacking privacy. Apparently, RClone v1.46 supports symlinks by copying them to text files (since Google Drive does not support symlinks).
Cheatsheet
- rclone config: Configure remotes.
- rclone listremotes: Show which remotes you have configured.
- rclone ls <remote>:: List files on the remote.
- rclone sync -Pl <src> <dest>: Make <dest> match <src>, changing only <dest>. Preserve/restore symlinks (-l).
Users
As a user, configure your backup. Here we will copy my home directory ~ to my Google Drive.

- I used a simple name (since you need to type this): gwsu.
- For optimal performance, use a Google Application Client Id (see below).
- I use permission 1, Full access (so rclone ls works).
- I created a folder and specified the root_folder_id from the last part of the folder URL.
Backup: The -P flag shows progress, and the -l flag copies symlinks as files with .rclonelink as an extension. These will be restored when you copy back.

Restore: Here I am restoring to /tmp so I don't clobber my actual home directory by mistake!

Here is an example of a script I include with a project to sync the contents of a mercurial repo to a team drive:
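A sketch of the corresponding commands (the remote name gwsu is from above; the folder names are my assumptions):

    # backup: sync my home directory to the remote
    rclone sync -Pl ~ gwsu:swan_home
    # restore (to /tmp to avoid clobbering):
    rclone sync -Pl gwsu:swan_home /tmp/restored_home
    # project script: pull the team-drive copy for comparison with the repo
    rclone sync -Pl gwsu_team:project ./_from_drive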
This will pull the contents from the drive, allowing you to compare it with the version control.
Performance (Google Application Client Id)
Google limits the rate at which certain applications can query their systems. Since all RClone users share the same application by default, you are strongly encouraged to create your own authenticated client rather than using RClone's. Do this by following the instructions below:
Note: I could not do this with my WSU account since it has been disabled, so I had to do this with my personal Google account or my UW account. For Team Drives you do not need to specify the Root ID: it will default to the drive.
This improved my download performance from ~300KB/s to ~3MB/s.
Admin
Install for everyone:
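    # e.g., rclone's standard install script:
    curl https://rclone.org/install.sh | sudo bash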
Microsoft Teams
It is a little non-obvious how to connect to a Teams drive. I had to run the rclone authorize "onedrive" command on my Mac, where I was authenticated to my Teams folder.
References
Network
To test connectivity, making a little echo server can be useful. This can be done with the netcat utility:
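    # a sketch (BSD netcat syntax; traditional netcat wants -p):
    nc -l 2000                     # on the server
    nc swan.physics.wsu.edu 2000   # on a client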
Now you can type on one or the other and you should see messages if they are connected.

One answer suggests nc -l -p 2000 -c 'xargs -n1 echo', but differences in versions (traditional vs BSD) make these solutions fragile (they don't work with the default versions installed in Ubuntu, for example). Another answer demonstrates how to do this with socat:
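    # a classic socat echo server (each connection gets its own cat):
    socat TCP-LISTEN:2000,fork EXEC:cat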
Installing ncat, you can make a server that will accept multiple connections:
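    # -l listen, -k keep listening after disconnects, -c per-connection command
    ncat -lk 2000 -c cat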
Fail2Ban
The fail2ban package implements a rather draconian policy of banning IPs that fail to authenticate properly. This greatly improves security by limiting the ability of hackers to brute-force their way in if users have not chosen secure passwords.
Unlocking
If a legitimate user accidentally triggers a ban, they can either wait, or an admin can unban them with the following commands:
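    # a sketch (sshd is the usual jail; see the example below):
    sudo fail2ban-client status sshd
    sudo fail2ban-client set sshd unbanip <ip>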
You will need to know the IP address of the person attempting to login. They can find this with ip a or ifconfig -a or similar. You can also look on the server with iptables -n -L.
Example
As an example, I trigger a ban here from our cluster called kamiak:
Check to see if a ban is in effect:
See if 198.17.13.7 is our server (kamiak).
It is. Check which jail - should be the sshd jail:
Okay, now unban.
References
How to test the network speed/throughput between two Linux servers: Walks through using iperf3 to test network speed.
Misc. Software
VisIt
VisIt is a visualization toolkit using VTK for analyzing 3D data. We use it for analysis of superfluids, but we need a custom plugin which we must compile to use.
Resources
User Wiki: No idea how to create an account - the email [email protected] bounces.
User Forums: Need to create an account to see useful stuff. Note: search defaults to within the last week, so you need to do advanced search for anything useful.
I make this available through a module
Plugins
To read custom data, one needs to write a plugin. We have one with the W-SLDA code which I install as follows:
Mumble/Murmur

Mumble is an open source, low latency, high quality voice chat application, but it needs a server running somewhere (called murmur). Here we install mumble:
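    sudo apt-get install mumble-server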
As discussed above, I added the user mumble-server to the ssl-cert group so that it can use the Let's Encrypt certificates. In the end, I have the following active lines in /etc/mumble-server.ini:
Remote VNC
Running applications over X11 is painful if your network is slow. Remote desktop control with Tiger VNC provides a much better experience in most cases. This can be installed with:
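    # the Ubuntu package (my assumption for the exact name):
    sudo apt-get install tigervnc-standalone-server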
Then I created a password and set my configuration to listen locally:
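    vncpasswd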
This will be the password you give to your VNC Client to connect. (Note: I think it should be possible to run without a password but my Mac OS X VNC client does not seem to work without one.)
Now create a ~/.vnc/config file:
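    # a sketch: listen only on localhost with a chosen geometry
    localhost
    geometry=1920x1080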
Finally, ssh into your computer, forwarding port 5901:
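    ssh -L 5901:localhost:5901 swan.physics.wsu.edu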
and start the server. Here I am running firefox:
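    # start display :1 (TCP port 5901), then run firefox on it
    vncserver :1
    DISPLAY=:1 firefox &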
On my Mac, I then connect with:
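    # macOS's built-in Screen Sharing understands vnc:// URLs
    open vnc://localhost:5901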
Eventually, I kill my server:
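    vncserver -kill :1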
References
Current Configuration
Network
Network config:
Disks
Here is my working /etc/fstab. Note:

- Use UUIDs as listed by blkid, since /dev/sd* values may change from run to run depending on how the devices are detected.
- Make sure that the USB drives have nofail so they do not halt the boot process if they are missing.
Note: the actual device labels change from boot to boot.

- /dev/sda: (Khalid's Disk)
- /dev/sdb: Original 250GB internal hard drive.
  - /dev/sdb1: Boot sector (243M)
  - /dev/sdb2: Extended (238.2G)
  - /dev/sdb5: Linux LVM (238.2G)
- /dev/sdc: New 6TB internal hard drive.
  - /dev/sdc1: (953M) EFI System
  - /dev/sdc2: (238.4G) Linux filesystem
  - /dev/sdc3: (5.2T) Linux filesystem
- /dev/sdd: (External Seagate)
    $ sudo fdisk -l
    Disk /dev/sda: 931.5 GiB, 1000204885504 bytes, 1953525167 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 33553920 bytes
    Disklabel type: dos
    Disk identifier: 0x542acb8f

    Device     Boot Start        End    Sectors   Size Id Type
    /dev/sda1        2048 1953521663 1953519616 931.5G  7 HPFS/NTFS/exFAT

    Disk /dev/sdb: 238.5 GiB, 256060514304 bytes, 500118192 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x8f7f2947

    Device     Boot  Start       End   Sectors   Size Id Type
    /dev/sdb1  *      2048    499711    497664   243M 83 Linux
    /dev/sdb2       501758 500117503 499615746 238.2G  5 Extended
    /dev/sdb5       501760 500117503 499615744 238.2G 8e Linux LVM

    Disk /dev/sdc: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: E22CB7F1-B1F8-403F-9C09-6CF02E1392E1

    Device          Start         End     Sectors   Size Type
    /dev/sdc1        2048     1953791     1951744   953M EFI System
    /dev/sdc2     1953792   501952511   499998720 238.4G Linux filesystem
    /dev/sdc3   501952512 11721043967 11219091456   5.2T Linux filesystem

    Disk /dev/mapper/lubuntu--vg-root: 222.3 GiB, 238681063424 bytes, 466173952 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk /dev/sdd: 3.7 TiB, 4000752599040 bytes, 7813969920 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disklabel type: gpt
    Disk identifier: 636CB648-76E8-4FFD-BA07-D5873D03596D

    Device          Start        End    Sectors   Size Type
    /dev/sdd1          40     409639     409600   200M EFI System
    /dev/sdd2      409640 1133222135 1132812496 540.2G Apple HFS/HFS+
    /dev/sdd3  1133484280 2266296775 1132812496 540.2G Apple HFS/HFS+
    /dev/sdd4  2266558920 3399882359 1133323440 540.4G Apple HFS/HFS+
    /dev/sdd5  3399882752 7813969886 4414087135   2.1T Linux filesystem

    Disk /dev/mapper/lubuntu--vg-swap_1: 16 GiB, 17116954624 bytes, 33431552 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes

    $ lsblk -f
    NAME                   FSTYPE      LABEL                     UUID                                   MOUNTPOINT
    sda
    `-sda1                 ntfs        Seagate Backup Plus Drive A6905C57905C3053
    sdb
    |-sdb1                 ext2                                  10f14fbd-9d53-43f2-be32-c9ab8c6aab99   /boot
    |-sdb2
    `-sdb5                 LVM2_member                           HQtaOr-nUK1-cshj-lty2-ZDc8-MAum-MwMBBo
      |-lubuntu--vg-root   ext4                                  afe81057-f92c-47a6-b724-26e0ad1b2880   /
      `-lubuntu--vg-swap_1 swap                                  c1da1cdc-718c-47e3-92ed-3f34e6fca27b   [SWAP]
    sdc
    |-sdc1                 ext4                                  23e3fcf9-2e01-4869-8bc4-3069e122d5d3
    |-sdc2                 ext4                                  7e5eefff-68d7-40e0-a4c6-9cbdd65a6263   /mnt/hdclone
    `-sdc3                 ext4                                  0971a9ee-aeab-4e76-b45a-fc63103ca489   /mnt/data2
    sdd
    |-sdd1                 vfat        EFI                       67E3-17ED
    |-sdd2                 hfsplus     KSB NFS Medtner           b64f66fc-0a37-3bd9-96df-a15b01b3860e
    |-sdd3                 hfsplus     MMF NFS Medtner           ea008a86-12be-3278-a913-5182ad395ea9
    |-sdd4                 hfsplus     Data NFS Medtner          f3013eb6-af22-37ef-bb78-92f8e7d4ff8e
    `-sdd5                 ext4                                  6163f638-a6ca-495b-abc7-232026b7d32e
    sr0

    $ blkid
    /dev/sda1: LABEL="Seagate Backup Plus Drive" UUID="A6905C57905C3053" TYPE="ntfs" PTTYPE="atari" PARTUUID="542acb8f-01"
    /dev/sdb1: UUID="10f14fbd-9d53-43f2-be32-c9ab8c6aab99" TYPE="ext2" PARTUUID="8f7f2947-01"
    /dev/sdb5: UUID="HQtaOr-nUK1-cshj-lty2-ZDc8-MAum-MwMBBo" TYPE="LVM2_member" PARTUUID="8f7f2947-05"
    /dev/sdc1: UUID="23e3fcf9-2e01-4869-8bc4-3069e122d5d3" TYPE="ext4" PARTLABEL="boot" PARTUUID="24d9965c-595d-413f-9d28-1e8238441970"
    /dev/sdc2: UUID="7e5eefff-68d7-40e0-a4c6-9cbdd65a6263" TYPE="ext4" PARTLABEL="hdclone" PARTUUID="ecd06215-d0df-4407-9b59-144e81a9ee2c"
    /dev/sdc3: UUID="0971a9ee-aeab-4e76-b45a-fc63103ca489" TYPE="ext4" PARTLABEL="data" PARTUUID="4545ada2-dc89-47d8-b1f8-6ac72b15f02f"
    /dev/mapper/lubuntu--vg-root: UUID="afe81057-f92c-47a6-b724-26e0ad1b2880" TYPE="ext4"
    /dev/sdd1: LABEL="EFI" UUID="67E3-17ED" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="4ca9a0a1-8b8f-4f13-819d-d28528a7e5e0"
    /dev/sdd2: UUID="b64f66fc-0a37-3bd9-96df-a15b01b3860e" LABEL="KSB NFS Medtner" TYPE="hfsplus" PARTLABEL="Untitled" PARTUUID="2ad2eaa6-5064-4d18-8451-6797b426d2b7"
    /dev/sdd3: UUID="ea008a86-12be-3278-a913-5182ad395ea9" LABEL="MMF NFS Medtner" TYPE="hfsplus" PARTLABEL="MMF NFS Medtner" PARTUUID="105629a2-a227-4aff-a726-85db2a167db1"
    /dev/sdd4: UUID="f3013eb6-af22-37ef-bb78-92f8e7d4ff8e" LABEL="Data NFS Medtner" TYPE="hfsplus" PARTLABEL="Data NFS Medtner" PARTUUID="99311db6-7ca1-495b-88ff-5680325de291"
    /dev/sdd5: UUID="6163f638-a6ca-495b-abc7-232026b7d32e" TYPE="ext4" PARTUUID="9a718cb6-d326-5a45-b4ca-e5748c45414d"
Software
Issues
22 Mar 2020: "failed to connect to lvmetad"
After upgrading the 18.10 kernel (Linux 4.18.0-25), the computer failed to reboot with the error message "failed to connect to lvmetad". Some searching suggested that the NVIDIA drivers might be an issue, so I tried removing them:

    sudo apt-get purge nvidia-*

This did not help.
To debug this, I removed the splash screen and quiet option from grub by changing the following in /etc/default/grub:
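    # a sketch: drop "quiet splash" from the default command line
    GRUB_CMDLINE_LINUX_DEFAULT=""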
then:

    sudo update-grub
After inspecting the logs, I noticed that there were issues with trying to mount the new disk, and it turns out that the PARTUUIDs I had in /etc/fstab were incorrect. These are now fixed.