I recently moved to Linux and have all my hard drives LUKS encrypted, including the primary. I also decided to convert my ext4 partitions to Btrfs, which I’m totally loving. Then I grabbed another NVMe drive to use as a RAID1 mirror of my primary drive, using Btrfs’ RAID mechanics. Below are the instructions to accomplish this.
Do note that this is for a situation where you already have a Btrfs volume and want to add a device as RAID1. It assumes your system already boots from the LUKS-encrypted drive with a Btrfs root; many modern Linux installers can set this up for you automatically. Parts of these instructions can still be used in other situations.
Hopefully you also have a swap partition under the same LVM as your LUKS root (the Linux Mint installer does this by default), as we’ll be using it. If not, you’ll need to modify the instructions. The script resizes the swap partition and adds an “extra” partition to hold your drive key. This is required because the drive key cannot be loaded off your Btrfs volume, since both drives need to be unlocked before the volume can be mounted.
This should be run from another operating system. I recommend using Universal USB Installer, which lets you put multiple OS live CDs on a single USB key, with optional persistence.
Run the following script as root (you can use sudo). Make sure to fill in the variables section first. Better yet, run the script one line at a time to make sure there are no problems.
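Before running it, a quick pre-flight sanity check can catch a mistyped device name. This is a hypothetical helper, not part of the script below; the checked path here is just a stand-in that exists on any system:

```shell
#!/bin/sh
#Hypothetical pre-flight helper: verify a device or file exists before the
#script touches it. Replace /dev/null with e.g. /dev/nvme0n1p3 and /dev/nvme1n1.
require_dev() {
  [ -e "$1" ] || { echo "Missing device: $1" >&2; return 1; }
  echo "Found: $1"
}
require_dev /dev/null
```

Running a check like this for each entry in the variables section takes seconds and avoids the script failing halfway through on a typo.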
#!/bin/bash
#-----------------------------------Variables----------------------------------
#Current root drive
CurPart="nvme0n1p3" #The current drive partition in /dev. This example uses nvme disk #0 partition #3
CurCryptVol="vgmint" #What you named your LVM under LUKS
CurCryptRoot="root" #What you named your root partition under the LVM
CurCryptRootSubVol="/" #The path of the subvolume used as the root partition. For example, I use "@"
CurCryptSwap="swap_1" #What you named your swap partition under the LVM
CurCryptExtra="extra" #What you WANT to name your extra partition under the LVM
CurCryptExtraSize="100M" #How big you want your extra partition that will hold your key file
CurKeyPath="" #The path to a key file that will unlock both drives. If left blank then one will be created
#New drive
NewDrive="nvme1n1" #The new drive in /dev. This example uses nvme disk #1
NewPart="nvme1n1p3" #The new partition in /dev. You will be creating this with the cfdisk. This example uses nvme disk#1 partition#3
NewCryptName="raid1_crypt" #What we’ll name the root LUKS partition (no LVM)
#Other variables you do not need to set
CurMount="/mnt/primary"
ExtraMountPath="$CurMount/mnt/extra"
BtrfsReleasePath="kdave/btrfs-progs"
BtrfsReleaseFile="btrfs.box.static"
DriveKeyName="drivekey"
echo "---------------------------------Update BTRFS---------------------------------"
echo "Make sure you are using the latest btrfs-progs"
cd "$(dirname "$(which btrfs)")"
LATEST_RELEASE=$(curl -s "https://api.github.com/repos/$BtrfsReleasePath/releases/latest" | grep tag_name | cut -d \" -f4)
wget "https://github.com/$BtrfsReleasePath/releases/download/$LATEST_RELEASE/$BtrfsReleaseFile"
chmod +x "$BtrfsReleaseFile"
echo "Link all btrfs programs to btrfs.box.static. Rename old files as .old.FILENAME"
if ! [ -L ./btrfs ]; then
for v in $(\ls btrfs*); do
if [ "$v" != "$BtrfsReleaseFile" ]; then
mv "$v" ".old.$v"
ln -s "$BtrfsReleaseFile" "$v"
fi
done
fi
echo "--------------------------Current drive and key setup-------------------------"
echo "Mount the current root partition"
cryptsetup luksOpen "/dev/$CurPart" "$CurCryptVol"
vgchange -ay "$CurCryptVol"
mkdir -p "$CurMount"
mount -o "subvol=$CurCryptRootSubVol" "/dev/$CurCryptVol/$CurCryptRoot" "$CurMount"
echo "If the extra volume has not been created, then resize the swap and create it"
if ! [ -e "/dev/$CurCryptVol/$CurCryptExtra" ]; then
lvremove -y "/dev/$CurCryptVol/$CurCryptSwap"
lvcreate -n "$CurCryptExtra" -L "$CurCryptExtraSize" "$CurCryptVol"
mkfs.ext4 "/dev/$CurCryptVol/$CurCryptExtra"
lvcreate -n "$CurCryptSwap" -l 100%FREE "$CurCryptVol"
mkswap "/dev/$CurCryptVol/$CurCryptSwap"
fi
echo "Make sure the key file exists, if it does not, either copy it (if given in $CurKeyPath) or create it"
mkdir -p "$ExtraMountPath"
mount "/dev/$CurCryptVol/$CurCryptExtra" "$ExtraMountPath"
if ! [ -e "$ExtraMountPath/$DriveKeyName" ]; then
if [ "$CurKeyPath" != "" ]; then
if ! [ -e "$CurKeyPath" ]; then
echo "Not found: $CurKeyPath"
exit 1
fi
cp "$CurKeyPath" "$ExtraMountPath/$DriveKeyName"
else
openssl rand -out "$ExtraMountPath/$DriveKeyName" 512
fi
chmod 400 "$ExtraMountPath/$DriveKeyName"
chown root:root "$ExtraMountPath/$DriveKeyName"
fi
echo "Make sure the key file works on the current drive"
if cryptsetup --test-passphrase luksOpen --key-file "$ExtraMountPath/$DriveKeyName" "/dev/$CurPart" test; then
echo "Keyfile successfully opened the LUKS partition."
#cryptsetup luksClose test #This doesn’t seem to be needed
else
echo "Adding keyfile to the LUKS partition"
cryptsetup luksAddKey "/dev/$CurPart" "$ExtraMountPath/$DriveKeyName"
fi
echo "--------------------------------New drive setup-------------------------------"
echo "Use cfdisk to set the new disk as GPT and add partitions."
echo "Make sure to mark the partition you want to use for the RAID disk as type \"Linux Filesystem\"."
echo "Also make it the same size as /dev/$CurPart to avoid errors"
cfdisk "/dev/$NewDrive"
echo "Encrypt the new partition"
cryptsetup luksFormat "/dev/$NewPart"
echo "Open the encrypted partition"
cryptsetup luksOpen "/dev/$NewPart" "$NewCryptName"
echo "Add the key to the partition"
cryptsetup luksAddKey "/dev/$NewPart" "$ExtraMountPath/$DriveKeyName"
echo "Add the new partition to the root btrfs file system"
btrfs device add "/dev/mapper/$NewCryptName" "$CurMount"
echo "Convert to RAID1"
btrfs balance start -dconvert=raid1 -mconvert=raid1 "$CurMount"
echo "Confirm both disks are in use"
btrfs filesystem usage "$CurMount"
echo "--------------------Booting script to load encrypted drives-------------------"
echo "Get the UUID of the second btrfs volume"
Drive2_UUID=$(lsblk -o UUID -d "/dev/$NewPart" | tail -n1)
echo "Create a script to open your second luks volumes before mounting the partition"
echo "Note: In some scenarios this may need to go into \"scripts/local-premount\" instead of \"scripts/local-bottom\""
cat <<EOF > "$CurMount/etc/initramfs-tools/scripts/local-bottom/unlock_drive2"
#!/bin/sh
PREREQ=""
prereqs()
{
echo "\$PREREQ"
}
case "\$1" in
prereqs)
prereqs
exit 0
;;
esac
. /scripts/functions
cryptroot-unlock
vgchange -ay "$CurCryptVol"
mkdir -p /mnt/keyfile
mount "/dev/$CurCryptVol/$CurCryptExtra" /mnt/keyfile
cryptsetup luksOpen /dev/disk/by-uuid/$Drive2_UUID "$NewCryptName" "--key-file=/mnt/keyfile/$DriveKeyName"
umount /mnt/keyfile
rmdir /mnt/keyfile
mount -t btrfs -o "subvol=$CurCryptRootSubVol" "/dev/$CurCryptVol/$CurCryptRoot" /root
#If you are weird like me and /usr is stored elsewhere, here is where you would need to mount it.
#It cannot be done through your fstab in this setup.
#mount --bind /root/sub/sys/usr /root/usr
mount --bind /dev /root/dev
mount --bind /proc /root/proc
mount --bind /sys /root/sys
EOF
chmod 755 "$CurMount/etc/initramfs-tools/scripts/local-bottom/unlock_drive2"
echo "--------------------Setup booting from the root file system-------------------"
echo "Prepare a chroot environment"
for i in dev dev/pts proc sys run tmp; do
mount -o bind /$i "$CurMount/$i"
done
echo "Run commands in the chroot environment to update initramfs and grub"
chroot "$CurMount" <<EOF
echo "Mount the other partitions (specifically for \"boot\" and \"boot/efi\")"
mount -a
echo "Update initramfs and grub"
update-initramfs -u -k all
update-grub
EOF
echo "-----------------------------------Finish up----------------------------------"
echo "Reboot and pray"
reboot
Yesterday I moved my Plex installation from a Windows machine to a Linux machine. The primary data folders that needed to be copied over were Media, Metadata, Plug-ins, and Plug-in Support. It doesn't hurt to copy over some of the folders in Cache too. It's possible more data needs to be moved over, but I haven't documented what.
After moving all the data, I updated the paths in the database for the new machine. Doing this allowed me to keep everything as it was and no new refreshes/scans needed to be done.
The Plex-modified SQLite binary was located for me at /usr/lib/plexmediaserver/Plex SQLite. The following are the bash commands to stop the Plex server and open the SQLite editor on Linux Mint 21.3.
service plexmediaserver stop
/usr/lib/plexmediaserver/Plex\ SQLite '/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db'
And the following is the SQL I used to replace a D: drive with a path of /home/plex/drives/d/. You can replace these strings in the code below with your own drives and paths.
-- Replace backslashes with forward slashes in paths
UPDATE media_parts SET file=REPLACE(file, '\', '/');
UPDATE section_locations SET root_path=REPLACE(root_path, '\', '/');
UPDATE media_streams SET url=REPLACE(url, '\', '/') WHERE url LIKE 'file://%';
-- Replace root paths
UPDATE media_parts SET file=REPLACE(file, 'D:/', '/home/plex/drives/d/');
UPDATE section_locations SET root_path=REPLACE(root_path, 'D:/', '/home/plex/drives/d/');
UPDATE media_streams SET url=REPLACE(url, 'file://D:/', 'file:///home/plex/drives/d/') WHERE url LIKE 'file://%';
UPDATE media_streams SET url=REPLACE(url, 'file:///D:/', 'file:///home/plex/drives/d/') WHERE url LIKE 'file://%';
UPDATE metadata_items SET guid=REPLACE(guid, 'file:///D:/', 'file:///home/plex/drives/d/');
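As a sanity check before touching the database, the two-step rewrite the SQL performs (backslashes to forward slashes, then the drive root) can be tried on a single made-up sample path in plain shell:

```shell
#Made-up sample path; the sed expression mirrors the two UPDATE steps above
p='D:\Media\Shows\ep1.mkv'
p=$(printf '%s' "$p" | sed 's|\\|/|g; s|^D:/|/home/plex/drives/d/|')
echo "$p" #/home/plex/drives/d/Media/Shows/ep1.mkv
```

If the result looks right for a few of your real paths, the REPLACE calls in the SQL above should produce the same transformation across the whole table.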
The two primary problems were:
1) The browsers’ sandboxes did not have the QT libs.
2) The interprocess communication pipe socket had been renamed from kpxc_server to org.keepassxc.KeePassXC.BrowserServer.
The following are the instructions to get KeePassXC running in Flatpak versions of both Chrome and Firefox. This was tested on Linux Mint 21.3 with both Ungoogled Chromium and Firefox. You will need to change the KP_FLATPAK_PACKAGE if you use other versions of Chrome.
Set the relevant environment variables below in your command shell before running the commands in the following steps:
Install and enable the browser extension: KeePassXC > Tools > Settings > Browser Integration:
Check “Enable Browser Integration”
Check “Chromium” and/or “Firefox”
Download the plugin listed on this screen in your browser
Click "OK"
Note: This creates $KP_JSON_START/$KP_JSON_NAME
Set up the needed files in the sandbox:
#Put KeePass proxy and needed library files in user directory
mkdir -p $KP_CUSTOM/lib
mkdir -p $KP_JSON_END #Needed for firefox
cp /usr/bin/keepassxc-proxy $KP_CUSTOM/
rsync -a /usr/lib/x86_64-linux-gnu/libicudata* /usr/lib/x86_64-linux-gnu/libicuuc* /usr/lib/x86_64-linux-gnu/libicui* /usr/lib/x86_64-linux-gnu/libdouble* /usr/lib/x86_64-linux-gnu/libsodium* /usr/lib/x86_64-linux-gnu/libQt5* $KP_CUSTOM/lib
#Copy the JSON file to the Flatpak app directory and change the executable path in the file
cp $KP_JSON_START/$KP_JSON_NAME $KP_JSON_END/
sed -i "s/\/usr\/bin\//"$(echo $KP_CUSTOM | sed 's_/_\\/_g')"\//" $KP_JSON_END/$KP_JSON_NAME
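To preview what that sed call does, here is a dry run against a made-up manifest file (the file name and contents are hypothetical stand-ins for the real KeePassXC manifest):

```shell
#Hypothetical manifest file standing in for the real KeePassXC JSON
KP_CUSTOM="$HOME/kp-custom"
cat > sample-manifest.json <<'EOF'
{ "path": "/usr/bin/keepassxc-proxy" }
EOF
#Same substitution idea as above, using | as the sed delimiter to avoid escaping slashes
sed -i "s|/usr/bin/|$KP_CUSTOM/|" sample-manifest.json
grep '"path"' sample-manifest.json
```

The printed line should show the path pointing into your $KP_CUSTOM directory instead of /usr/bin/.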
Add permissions to the Flatpak:
flatpak override --user --filesystem=$KP_CUSTOM:ro $KP_FLATPAK_PACKAGE #Only required if home directory is not shared to the Flatpak
flatpak override --user --filesystem=xdg-run/org.keepassxc.KeePassXC.BrowserServer:ro $KP_FLATPAK_PACKAGE
flatpak override --user --env=LD_LIBRARY_PATH=$(flatpak info --show-permissions $KP_FLATPAK_PACKAGE | grep -oP '(?<=LD_LIBRARY_PATH=).*')";$KP_CUSTOM/lib" $KP_FLATPAK_PACKAGE
I’ve been moving from Windows to Linux recently and my latest software move attempt is Roboform, a password manager that I use in offline mode. Just running it under Wine works fine for the primary software interaction, but I was unable to get it fully integrated with its chrome extension. Below is the information for my attempt to get it working, in case someone could use the information or wanted to try continuing my attempts.
To move your RoboForm profile to Linux, copy the data from C:\Users\USERNAME\AppData\Local\RoboForm\Profiles\Default Profile to ~/.wine/drive_c/users/USERNAME/Local Settings/Application Data/RoboForm/Profiles/Default Profile.
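A sketch of that copy as shell commands; USERNAME and the Windows mount point are placeholders, and the actual cp is left commented out until both paths exist on your machine:

```shell
#USERNAME and the Windows mount point are placeholders
SRC="/mnt/windows/Users/USERNAME/AppData/Local/RoboForm/Profiles/Default Profile"
DST="$HOME/.wine/drive_c/users/USERNAME/Local Settings/Application Data/RoboForm/Profiles/Default Profile"
mkdir -p "$DST"
#cp -a "$SRC/." "$DST/" #Uncomment once the Windows drive is mounted at $SRC
echo "Would copy: $SRC -> $DST"
```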
Part 1: Redirect the extension to the executable
The chrome extension talks to its parent, rf-chrome-nm-host.exe, through the native messaging API. To direct the extension to talk to the windows executable you have to edit ~/.config/chromium/NativeMessagingHosts/com.siber.roboform.json and change the path inside it to /home/USERNAME/chrome-robo.sh, a script file that you will create. You can’t link directly to the rf-chrome-nm-host.exe because it has to be run through Wine, and the path cannot contain arguments.
Create a file with executable permissions at ~/chrome-robo.sh and set its contents to:
#!/bin/sh
cd "/home/USERNAME/.wine/drive_c/Program Files (x86)/Siber Systems/AI RoboForm/9.6.1.1/"
/usr/bin/wine ./rf-chrome-nm-host.exe chrome-extension://pnlccmojcmeohlpggmfnbbiapkmbliob/ --parent-window=0
Part 2: Debugging why it isn’t working
It should have worked at this point, but it still wasn’t, so I had to go further into debug mode. The full copy of the C source code can be found at the bottom of this post. Make sure to replace USERNAME in the source (and instructions in this post) with your username, as using “~” to specify your home directory often doesn’t work in this setup. You may also need to replace version numbers (9.6.1.1 for this post).
First, I created a simple C program to sit in between the chrome extension and the rf-chrome-nm-host.exe. All it did was forward the stdin/stdout between both programs (left=chromium, right=rf-chrome-nm-host.exe) and output their crosstalk to a log file (./log) that I monitored with tail -f. I pointed to the generated executable in the ~/chrome-robo.sh file.
All that was generated on the Linux config was: left=ping, right=confirm ping, left=get product info, END.
I then modified the program to specifically handle chrome native messaging packets, which are always a 4 byte number specifying the packet length, followed by the packet data (IsChrome=1). If this variable is turned off there is a place in the code where you can set a different executable with parameters to run.
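For reference, a frame in that format can be built in plain shell: a 4-byte little-endian length header followed by the JSON payload (the payload here is a made-up example):

```shell
#Made-up payload; real packets carry the extension's JSON messages
msg='{"action":"get-product-info"}'
len=${#msg}
#Emit the length as 4 little-endian bytes via octal escapes, then append the payload
printf "$(printf '\\%03o\\%03o\\%03o\\%03o' \
  $((len & 255)) $((len >> 8 & 255)) $((len >> 16 & 255)) $((len >> 24 & 255)))" > packet
printf '%s' "$msg" >> packet
wc -c < packet #33 = 4-byte header + 29-byte payload
```

Dumping a captured packet with od -c makes it easy to see where the header ends and the JSON begins.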
Next, I ran the program in Windows with a working left+right config so I could see what packets were expected.
I then added a hack to the program (AddRoboformAnswerHacks=1) to respond to the 2nd-4th packets sent from the left (get-product-info, get-product-info, Initialize2) with what was expected from the right. rf-chrome-nm-host.exe crashed on the 5th packet, and debugging further from there would have taken more time than I was willing to put in, so at that point I gave up.
Part 3: Trying with Windows
I next decided to see if I could get the chromium extension to play nicely by talking directly to the rf-chrome-nm-host.exe on my Windows machine via an SSH tunnel. To do this, I changed the char *cmd[]= line to:
While the first 4 packets succeeded in this setup, the following packets (rf-api-request) were all met with: pipe open error: The system cannot find the file specified. (error 2).
This was another stopping point because debugging this would also have taken too long. Though I did do some initial testing using Process Monitor and handle lookups in Process Explorer.
Part 4: The C source code
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>
#include <string.h>
#include <sys/select.h> //Needed for select(), fd_set, and the FD_* macros
FILE *SideLog; //This logs all events to the file "log"
FILE *RecordLeft; //This logs the exact packets that the left side sends to the file "left"
int IsChrome=1; //True for roboform chrome compatibility, false to just pass through data as is
int DisplayStatus=1; //Whether to show the "Status: x" updates
int AddRoboformAnswerHacks=0; //Whether to force send fake responses that the rf-chrome server is not responding to
const char* HomePath="/home/USERNAME/.wine/drive_c/Program Files (x86)/Siber Systems/AI RoboForm/9.6.1.1/"; //This must be set.
const char* ExePath="/home/USERNAME/.wine/drive_c/Program Files (x86)/Siber Systems/AI RoboForm/9.6.1.1/rf-chrome-nm-host.exe"; //This must be set. You can make sure rf-chrome-nm-host.exe is running with a 'ps'
//Send back a custom packet to the left. This will show in the log file BEFORE the packet it is responding to.
void SendLeft(char *str)
{
char Buffer[255];
int StrSize=strlen(str);
*(int*)Buffer=StrSize;
strcpy(Buffer+4, str);
write(STDOUT_FILENO, Buffer, StrSize+4);
fprintf(SideLog, "OVERRIDE REQUEST: %s\n\n", Buffer+4);
fflush(SideLog);
}
//Forward data from left to right or vice versa. Also saves to logs and "left" file
void ForwardSide(int InFileHandle, int OutFileHandle, fd_set* ReadFDS, int IsLeft)
{
//Exit here if no data
if(!FD_ISSET(InFileHandle, ReadFDS))
return;
//Create a static 1MB+1K buffer - Max packet with chrome is 1MB
const int BufferLen=1024*(1024+1);
static char* Buffer=0;
if(!Buffer)
Buffer=malloc(BufferLen);
//If not chrome, just pass the data as is
char* Side=(IsLeft ? "left" : "right");
if(!IsChrome) {
int ReadSize=read(InFileHandle, Buffer, BufferLen);
write(OutFileHandle, Buffer, ReadSize);
if(IsLeft)
fwrite(Buffer, ReadSize, 1, RecordLeft);
Buffer[ReadSize]=0;
fprintf(SideLog, "%s (%d): %s\n\n", Side, ReadSize, Buffer);
fflush(SideLog);
return;
}
//Read the 4 byte packet size and store it at the beginning of the buffer
unsigned int PacketSize;
read(InFileHandle, &PacketSize, 4);
*(unsigned int*)Buffer=PacketSize;
//Read in the packet and zero it out at the end for the string functions
read(InFileHandle, Buffer+4, PacketSize);
Buffer[PacketSize+4]=0;
//Send fake product-info packet since rf-chrome-nm-host.exe was not responding to it
if(AddRoboformAnswerHacks && IsLeft && strstr(Buffer+4, "\"name\":\"getProp\"") && strstr(Buffer+4, "\"args\":\"product-info\"")) {
//The return packet with header at the front
char VersionInfo[]="{\"callbackId\":\"2\",\"result\":\"{\\\"version\\\":\\\"9-6-1-1\\\",\\\"haveReportAnIssue\\\":true,\\\"haveBreachMon\\\":true,\\\"haveLoginIntoAccount\\\":true}\"}";
//Simplistic version counter hack since chrome always sends 2 version info requests at the beginning
static int VersionCount=2;
VersionInfo[15]='0'+VersionCount;
VersionCount++;
SendLeft(VersionInfo);
//Send fake initialization info packet since rf-chrome-nm-host.exe was not responding to it
} else if(AddRoboformAnswerHacks && IsLeft && strstr(Buffer+4, "\"name\":\"Initialize2\"")) {
SendLeft("{\"callbackId\":\"4\",\"result\":\"rf-api\"}");
//Forward the packet to the other side and store in the "left" file if left side
} else {
write(OutFileHandle, Buffer, PacketSize+4);
if(IsLeft)
fwrite(Buffer, PacketSize+4, 1, RecordLeft);
}
//Output the packet to the log
fprintf(SideLog, "%s (%d): %s\n\n", Side, PacketSize, Buffer+4);
fflush(SideLog);
}
int main(void) {
//Create pipes
int pipe1[2]; //Parent writes to child
int pipe2[2]; //Child writes to parent
if(pipe(pipe1)==-1 || pipe(pipe2)==-1) {
perror("pipe");
exit(EXIT_FAILURE);
}
//Fork the current process
pid_t pid = fork();
if(pid==-1) {
perror("fork");
exit(EXIT_FAILURE);
}
//New (child) process
if(pid == 0) {
//Close unused ends of the pipes
close(pipe1[1]); // Close write end of pipe1
close(pipe2[0]); // Close read end of pipe2
//Redirect stdin to the read end of pipe1
dup2(pipe1[0], STDIN_FILENO);
close(pipe1[0]);
//Redirect stdout to the write end of pipe2
dup2(pipe2[1], STDOUT_FILENO);
close(pipe2[1]);
//Move to the roboform home directory
if(IsChrome) {
if(chdir(HomePath) == -1) {
perror("chdir");
exit(EXIT_FAILURE);
}
}
//Execute a command that reads from stdin and writes to stdout. The default is the chrome command. If not in chrome, you can fill in the exe and parameter you wish to use
char *cmd[] = {"/usr/bin/wine", (char*)ExePath, "chrome-extension://pnlccmojcmeohlpggmfnbbiapkmbliob/", "--parent-window=0", NULL};
if(!IsChrome) {
cmd[0]="/usr/bin/php";
cmd[1]="echo.php";
}
execvp(cmd[0], cmd);
perror("execvp");
exit(EXIT_FAILURE);
}
//---Parent process - forwards both sides---
//Close unused ends of the pipes
close(pipe1[0]); // Close read end of pipe1
close(pipe2[1]); // Close write end of pipe2
//Open the log files
SideLog = fopen("./log", "w+");
RecordLeft = fopen("./left", "w+");
//Run the main loop
int max_fd=pipe2[0]+1; //Other pipe is STDIN which is 0
while(1) {
//Create the structures needed for select
fd_set read_fds;
FD_ZERO(&read_fds);
FD_SET(STDIN_FILENO, &read_fds);
FD_SET(pipe2[0], &read_fds);
struct timeval timeout;
timeout.tv_sec = 10;
timeout.tv_usec = 0;
//Listen for an update
int status = select(max_fd, &read_fds, NULL, NULL, &timeout);
//Display "Status: x" if its setting is true
if(DisplayStatus) {
fprintf(SideLog, "Status: %d\n", status);
if(status==0)
fflush(SideLog);
}
//Exit on bad status
if (status==-1) {
perror("select");
break;
}
//Check both sides to see if they need to forward a packet
ForwardSide(STDIN_FILENO, pipe1[1], &read_fds, 1);
ForwardSide(pipe2[0], STDOUT_FILENO, &read_fds, 0);
}
//Close pipes
close(pipe1[1]);
close(pipe2[0]);
//Wait for the child process to finish
wait(NULL);
return EXIT_SUCCESS;
}
After years of saying I’d do it, I'm finally moving my ecosystem from Windows to Linux. After some experimenting and soul searching, I've decided to go with Linux Mint, as it isn't exactly Ubuntu with its atrociously horrible decisions, but it still provides stability, ease of setup, and an interface similar enough to Windows so as to not lower my productivity.
Getting all of my legacy hardware working, including my 6-monitor setup, was mostly painless, but my beloved Babyface Pro (external professional audio mixer hardware) has been absolute hell to get working. It only natively supports Windows, macOS, and iOS for the “PC” (USB audio passthrough) mode, and the CC (class compliant) mode does not offer my custom waveform transforms. So my only real option was to use the other input interfaces on the Babyface (XLR, S/PDIF, or quarter-inch audio).
The first hurdle was the power. Normally the device is powered through USB, however if I was going to be using the other audio inputs, I didn't want to leave the USB plugged in all the time, and the Babyface doesn't come with a power adapter. Fortunately, I had a 12V 1A+ power adapter in my big box of random power adapters. The second hurdle was when I discovered that the Babyface does not store the mixer settings when it's powered off. So, every time it gets powered on, it needs to be hooked to another (Windows) machine that can push the mixer settings to it. This also isn't too big a deal as I keep it on a UPS so it generally won't lose power, and if it does, I can use a VM to push the settings.
The next problem was deciding which interface to go through. I was really hoping to use S/PDIF (optical), since it is a digital signal that does not suffer from interference and degradation, but every S/PDIF interface I tried (4 in total) sounded like garbage. I guess the S/PDIF interface on the Babyface is a piece of junk, which was very disheartening.
My only remaining option was using the analog inputs. I decided to use a mini (3.5mm; 1/8") stereo to quarter-inch (6.35mm) mono splitter cord run into “IN 3/4”, and this worked perfectly. However, if the USB interface is plugged in at the same time, this actually creates very audible line noise on the analog inputs within the Babyface itself! This is a horrible design flaw that I was shocked to run into. Fortunately, as mentioned above, I already planned on having the USB cord unplugged, so it's not a deal breaker.
I first tried the headphone and line-out jacks on my motherboard, but the audio quality was only at about 90%. I next tried the line out on my Creative Sound Blaster Audigy from 2014, and the audio was at about 95% quality. It also felt like a cardinal sin to plug a PCIe 1.0 x1 device (0.250 GB/s) into a PCIe 5.0 x16 slot (63 GB/s) - lol. So I bought a Sound Blaster Play! 3 USB-to-mini-audio adapter, and the audio was perfect! I finally had my setup figured out.
As a fun note, I went to an audiologist a few days ago to have my hearing tested, and the waveform I had devised (through brute force testing) that I had been using through the Babyface for the last 7 years was the exact inverse of the results on the hearing loss frequency chart.
The following is a tutorial on mounting a dd image of a TrueCrypt system-level-encrypted volume. This tutorial was tested and written against Ubuntu 16.04.2 LTS.
Trying to mount your loopback device with losetup or mount doesn’t quite work. If you tried, you’d get an error like the following:
No such file or directory:
/sys/block/loop2/loop2p2/start
VeraCrypt::File::Open:276
Instead, use sudo kpartx -va IMAGE_FILENAME.
This will give you something like the following:
add map loop2p1 (253:0): 0 204800 linear 7:2 2048
add map loop2p2 (253:1): 0 976564224 linear 7:2 206848
This shows you the partitions in the image and which loopback devices they are mounted to. In my case, loop2 and loop2p2, which I will continue using for the rest of this tutorial.
So this mounts the following:
/dev/loop2: The primary loopback device
/dev/mapper/loop2p*: The partition loopback devices (in my case, loop2p1 and loop2p2)
If you attempt to mount loop2p2 with TrueCrypt or VeraCrypt as a system partition, no matter the password, you will get the error “Partition device required”.
To fix this, we need to get loop2p2 to show up in /dev and make an edit to the VeraCrypt source code.
You can run the following command to see the loopback partition devices and their sizes. This is where I am pulling loop2p2 from.
lsblk /dev/loop2
This will give the following:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop2 7:2 0 465.8G 1 loop
├─loop2p2 253:1 0 465.7G 1 part
└─loop2p1 253:0 0 100M 1 part
Run the following command to create /dev/loop2p* block devices:
sudo partx -a /dev/loop2
Run the following commands to download and compile VeraCrypt:
sudo apt-get install git yasm libfuse-dev libwxgtk3.0-dev #yasm requires universe repository
git clone https://github.com/veracrypt/VeraCrypt
cd VeraCrypt/src
nano Platform/Unix/FilesystemPath.cpp #You can use the editor of your choice for this
In Platform/Unix/FilesystemPath.cpp make the following change:
After the following 2 lines of code in the FilesystemPath::ToHostDriveOfPartition() function, currently Line 78:
Then continue to run the following commands to finish up:
make
Main/veracrypt -m system,ro,nokernelcrypto -tc /dev/loop2p2 YOUR_MOUNT_LOCATION
VeraCrypt parameter information:
If you don’t include the “nokernelcrypto” option, you will get the following error:
device-mapper: reload ioctl on veracrypt1 failed: Device or resource busy
Command failed
the “ro” option mounts the volume read-only
“-tc” means the volume was created with TrueCrypt (not VeraCrypt)
Doing this in Windows is a lot easier. You just need to use a program called Arsenal Image Mounter to mount the drive, and then mount the partition in TrueCrypt (or VeraCrypt).
I recently tried to install Slackware 14.2 64-bit (Linux) onto a new mini PC I just bought. The new PC only supports UEFI, so I had major issues getting the darn setup on the install CD to actually run. I never DID get the install CD to boot properly on the system, so I used an alternative: while the Slackware install USB key was in, I also added and booted an Ubuntu live USB key. The following is what I used to run the Slackware setup from Ubuntu.
#Login as root
#sudo su
#Settings
InstallDVDName=SlackDVD #This is whatever you named your slackware usb key
#/mnt will contain the new file system for running the setup
cd /mnt
#Extract the initrd.img from the slackware dvd into /mnt
cat /media/ubuntu/$InstallDVDName/isolinux/initrd.img | gzip -d | cpio -i
#Bind special linux directories into the /mnt folder
for i in proc sys dev tmp; do mount -o bind /$i ./$i; done
#Mount the cdrom folder into /mnt/cdrom
rm cdrom
mount -o bind /media/ubuntu/$InstallDVDName/ ./cdrom
#Set /mnt as our actual (ch)root
chroot .
#Run the slackware setup
usr/lib/setup/setup
#NOTE: When installing, your package source directory is at /cdrom/slackware64
Amazon EC2 is a great resource for cheap virtual servers to do simple things, like DNS or (low-bandwidth) VPNs. I had the need this morning to set up a DNS server for a company which needed to blacklist a list of domains. The simplest way to do this is by editing all the computers’ hosts files, but that method leaves a lot to be desired, namely blocking entire domains (as opposed to single subdomains) and deploying changes. Centralizing in a single place makes changes immediate and, in the end, faster.
The following are the steps I used to set this up on an EC2 server. All command line instructions are followed by a single command you can run to execute the step. There is a full script below, at the end of the post, containing all steps from when you first login to SSH ("Login to root") to the end.
I am not going to go into the details of setting up an EC2 instance, as that information can be found elsewhere. I will also be skipping over some of the more obvious steps. Just create a default EC2 instance with the “Amazon Linux AMI”, and I will list all the changes that need to be made beyond that.
Creating the instance
For the first year, you might as well use a t2.micro for the instance type, as it is free. After that, a t2.nano (a newer, lower tier), currently $56.94/year ($0.0065/hour), should be fine.
After you select your instance type, click “Review and Launch” to launch the instance with all of the defaults.
After the confirmation screen, it will ask you to create a key pair. You can see other tutorials about this and how it enables you to log into your instance.
Edit the security group
Next, you need to edit the security group for your instance to allow incoming connections.
Go to “Instances” under the “Instances” group on the left menu, and click your instance.
In the bottom of the window, in the “Descriptions” tab, click the link next to “Security Groups”, which will bring you to the proper group in the security groups tab.
Right click it and “Edit inbound Rules”.
Make sure it has the following rules with Source=Anywhere: ALL ICMP [For pinging], SSH, HTTP, DNS (UDP), DNS (TCP)
Assign a permanent IP to your instance
To do this, click the “Elastic IPs” under “Network & Security” in the left menu.
Click “Allocate New Address”.
After creating it, right click the new address, then “Associate Address”, and assign it to your new instance.
You should probably set this IP up as an A record somewhere. I will refer to this IP as dns.yourdomain.com from now on.
Login to root
SSH into your instance as the ec2-user via “ssh ec2-user@dns.yourdomain.com”. If on Windows, you can use PuTTY.
Sudo into root via “sudo su”.
Allow root login
At this point, I recommend setting it up so you can directly root into the server. Warning: some people consider this a security risk.
Copy your key pair(s) to the root user via “cat /home/ec2-user/.ssh/authorized_keys > /root/.ssh/authorized_keys”
Set SSHD to permit root logins by changing the PermitRootLogin variable to “yes” in /etc/ssh/sshd_config. A quick command to do this is “perl -pi -e 's/^\s*#?\s*PermitRootLogin.*$/PermitRootLogin yes/igm' /etc/ssh/sshd_config”, and then reload the SSHD config with “service sshd reload”. Make sure to attempt to directly log into SSH as root before exiting your current session to make sure you haven’t locked yourself out.
Install apache (the web server), bind/named (the DNS server), and PHP (a scripting language)
yum -y install bind httpd php
Start and set services to run at boot
service httpd start; service named start; chkconfig httpd on; chkconfig named on;
Set the DNS server to be usable by other computers
Edit /etc/named.conf and change the 2 following lines to have the value “any”: “listen-on port 53” and “allow-query”
perl -pi -e 's/^(\s*(?:listen-on port 53|allow-query)\s*{).*$/$1 any; };/igm' /etc/named.conf; service named reload;
Point the DNS server to the blacklist files
This is done by adding “include "/var/named/blacklisted.conf";” to /etc/named.conf
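As commands (NAMED_CONF points at a local scratch copy here for safety; on the actual server it is /etc/named.conf):

```shell
NAMED_CONF=./named.conf      # illustrative path; use /etc/named.conf on the server
touch "$NAMED_CONF"
# Pull the blacklist zone declarations into named's main config
printf '\ninclude "/var/named/blacklisted.conf";' >> "$NAMED_CONF"
touch /var/named/blacklisted.conf 2>/dev/null || true  # the included file must exist
```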
Put the following into /var/named/blacklisted.db . Make sure to change dns.yourdomain.com to your domain (or otherwise, “localhost”), and 1.1.1.1 to dns.yourdomain.com’s (your server’s) IP address. Make sure to keep all periods intact.
$TTL 14400
@ IN SOA dns.yourdomain.com. dns.yourdomain.com ( 2003052800 86400 300 604800 3600 )
@ IN NS dns.yourdomain.com.
@ IN A 1.1.1.1
* IN A 1.1.1.1
The first 2 lines tell the server the domains belong to it. The 3rd line sets the base blacklisted domain to your server’s IP. The 4th line sets all subdomains of the blacklisted domain to your server’s IP.
This can be done via the following (update the first line with your values):
YOURDOMAIN="dns.yourdomain.com"; YOURIP="1.1.1.1";
echo -ne "\$TTL 14400\n@ IN SOA $YOURDOMAIN. $YOURDOMAIN ( 2003052800 86400 300 604800 3600 )\n@ IN NS $YOURDOMAIN.\n@ IN A $YOURIP\n* IN A $YOURIP" > /var/named/blacklisted.db;
Fix the permissions on the blacklist files
chgrp named /var/named/blacklisted.*; chmod 660 /var/named/blacklisted.*;
Set the server’s domain resolution name servers
The server always needs to look at itself before other DNS servers. To do this, comment out everything in /etc/resolv.conf and add “nameserver 127.0.0.1” to it (resolv.conf expects an IP address, not a hostname like “localhost”). This is not the best solution; I’ll find something better later.
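A sketch of that edit, performed on a local copy (point RESOLV at /etc/resolv.conf on the real server; the 172.16.0.23 entry is a made-up stand-in for whatever is already there):

```shell
RESOLV=./resolv.conf                            # illustrative; the real file is /etc/resolv.conf
printf 'nameserver 172.16.0.23\n' > "$RESOLV"   # stand-in for the existing entries
perl -pi -e 's/^(?!;)/;/gm' "$RESOLV"           # comment out every existing line
printf 'nameserver 127.0.0.1\n' >> "$RESOLV"    # the server now asks itself first
```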
At this point, it’s a good idea to make sure the DNS server is working as intended. So first, we’ll add an example domain to the DNS server.
Add the following to /var/named/blacklisted.conf and restart named to get the server going with example.com: “zone "example.com" { type master; file "blacklisted.db"; };”
echo 'zone "example.com" { type master; file "blacklisted.db"; };' >> /var/named/blacklisted.conf; service named reload;
Ping “test.example.com” and make sure its IP is your server’s IP
Set your computer’s DNS to your server’s IP in your computer’s network settings, ping “test.example.com” from your computer, and make sure the returned IP is your server’s IP. If it works, you can restore your computer’s DNS settings.
Have the server return a message when a blacklisted domain is accessed
Add your message to /var/www/html
echo 'Domain is blocked' > /var/www/html/index.html
Set all URL paths to show the message by adding the following to the /var/www/html/.htaccess file
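The rules themselves (everything except index.html and the AddRules/ directory gets rewritten to the block message):

```
RewriteEngine on
RewriteCond %{REQUEST_URI} !index.html
RewriteCond %{REQUEST_URI} !AddRules/
RewriteRule ^(.*)$ /index.html [L]
```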
Turn on AllowOverride in the /etc/httpd/conf/httpd.conf for the document directory (/var/www/html/) via “ perl -0777 -pi -e 's~(<Directory "/var/www/html">.*?\n\s*AllowOverride).*?\n~$1 All~s' /etc/httpd/conf/httpd.conf”
Start the server via “service httpd graceful”
Create a script that allows apache to refresh the name server’s settings
Create a script at /var/www/html/AddRules/restart_named with “/sbin/service named reload” and set it to executable
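Sketched as commands (WEBROOT is a local stand-in so this can be tried anywhere; on the server it is /var/www/html):

```shell
WEBROOT=./html               # illustrative; use /var/www/html on the server
mkdir -p "$WEBROOT/AddRules"
# The script only needs to ask named to reload its configuration
echo '/sbin/service named reload' > "$WEBROOT/AddRules/restart_named"
chmod 755 "$WEBROOT/AddRules/restart_named"
```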
Allow the user to run the script as root by adding to /etc/sudoers “apache ALL=(root) NOPASSWD: /var/www/html/AddRules/restart_named” and “Defaults!/var/www/html/AddRules/restart_named !requiretty”
Create a script that allows the user to add, remove, and list the blacklisted domains
Add the following to /var/www/html/AddRules/index.php (one line command not given. You can use “nano” to create it)
<?php
//Get old domains
$BlockedFile='/var/named/blacklisted.conf';
$CurrentZones=Array();
foreach(explode("\n", file_get_contents($BlockedFile)) as $Line)
	if(preg_match('/^zone "([\w\._-]+)"/', $Line, $Results))
		$CurrentZones[]=$Results[1];

//List domains
if(isset($_REQUEST['List']))
	return print implode('<br>', $CurrentZones);

//Get new domains
if(!isset($_REQUEST['Domains']))
	return print 'Missing Domains';
$Domains=$_REQUEST['Domains'];
if(!preg_match('/^[\w\._-]+(,[\w\._-]+)*$/uD', $Domains))
	return print 'Invalid domains string';
$Domains=explode(',', $Domains);

//Remove domains
if(isset($_REQUEST['Remove']))
{
	$CurrentZones=array_flip($CurrentZones);
	foreach($Domains as $Domain)
		unset($CurrentZones[$Domain]);
	$FinalDomainList=array_keys($CurrentZones);
}
else //Combine domains
	$FinalDomainList=array_unique(array_merge($Domains, $CurrentZones));

//Output to the file
$FinalDomainData=Array();
foreach($FinalDomainList as $Domain)
	$FinalDomainData[]="zone \"$Domain\" { type master; file \"blacklisted.db\"; };";
file_put_contents($BlockedFile, implode("\n", $FinalDomainData));

//Reload named
print `sudo /var/www/html/AddRules/restart_named`;
?>
Add the “apache” user to the “named” group so the script can update the list of domains in /var/named/blacklisted.conf via “usermod -a -G named apache; service httpd graceful;”
Run the domain update script
To add a domain (separate by commas): http://dns.yourdomain.com/AddRules/?Domains=domain1.com,domain2.com
To remove a domain (add “Remove&” after the “?”): http://dns.yourdomain.com/AddRules/?Remove&Domains=domain1.com,domain2.com
To list the domains: http://dns.yourdomain.com/AddRules/?List
Warning: Putting the password file in an http accessible directory is a security risk. I just did this for sake of organization.
Create the user+password via “htpasswd -c /var/www/html/AddRules/.htpasswd USERNAME” and then entering the password at the prompt (the -b flag lets you pass the password on the command line instead)
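The AddRules directory also needs an .htaccess that enforces the password:

```
AuthType Basic
AuthName "Admins Only"
AuthUserFile "/var/www/html/AddRules/.htpasswd"
require valid-user
```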
[Edit on 2016-01-30 @ noon]
To permanently set “localhost” as the resolver DNS, add “DNS1=localhost” to “/etc/sysconfig/network-scripts/ifcfg-eth0”. I have not yet confirmed this edit.
Security Issue
Soon after I set up this DNS server, it started getting hit by a DNS amplification attack. As the server is being used as a client’s DNS server, turning off recursion is not an option. The best solution is to limit who can query the name server via an access list (usually a specific subnet), but that would very often not be an option either. The solution I currently have in place, which I have not actually verified works, is to add a forced-forward rule which only makes external requests to the name server provided by Amazon. To do this, get the name server’s IP from /etc/resolv.conf (it should be commented out from an earlier step). Then add the following to your named.conf in the “options” section.
forwarders {
DNS_SERVER_IP;
};
forward only;
After I added this rule, external DNS requests stopped going through completely. To fix this, I turned “dnssec-validation” to “no” in the named.conf. Don’t forget to restart the service once you have made your changes.
Make sure to run this as root (login as root or sudo it)
Download the script here. Make sure to chmod and sudo it when running. “chmod +x dnsblacklist_install.sh; sudo ./dnsblacklist_install.sh;”
#!/bin/bash
#User defined variables
VARIABLES_SET=0; #Set this to 1 to allow the script to run
YOUR_DOMAIN="localhost";
YOUR_IP="1.1.1.1";
BLOCKED_ERROR_MESSAGE="Domain is blocked";
ADDRULES_USERNAME="YourUserName";
ADDRULES_PASSWORD="YourPassword";

#Confirm script is ready to run
if [ $VARIABLES_SET != 1 ]; then
	echo 'Variables need to be set in the script';
	exit 1;
fi
if [ `whoami` != 'root' ]; then
	echo 'Must be root to run script. When running the script, add "sudo" before it to' \
		'run as root';
	exit 1;
fi

#Allow root login
cat /home/ec2-user/.ssh/authorized_keys > /root/.ssh/authorized_keys;
perl -pi -e 's/^\s*#?\s*PermitRootLogin.*$/PermitRootLogin yes/igm' /etc/ssh/sshd_config;
service sshd reload;

#Install services
yum -y install bind httpd php;
chkconfig httpd on;
chkconfig named on;
service httpd start;
service named start;

#Set the DNS server to be usable by other computers
perl -pi -e 's/^(\s*(?:listen-on port 53|allow-query)\s*{).*$/$1 any; };/igm' \
	/etc/named.conf;
service named reload;

#Create/link the blacklist files
echo -ne '\ninclude "/var/named/blacklisted.conf";' >> /etc/named.conf;
touch /var/named/blacklisted.conf;

#Create the blacklist zone file
echo -ne "\$TTL 14400
@ IN SOA $YOUR_DOMAIN. $YOUR_DOMAIN ( 2003052800 86400 300 604800 3600 )
@ IN NS $YOUR_DOMAIN.
@ IN A $YOUR_IP
* IN A $YOUR_IP" > /var/named/blacklisted.db;

#Fix the permissions on the blacklist files
chgrp named /var/named/blacklisted.*;
chmod 660 /var/named/blacklisted.*;

#Set the server’s domain resolution name servers
perl -pi -e 's/^(?!;)/;/gm' /etc/resolv.conf;
echo -ne '\nnameserver 127.0.0.1' >> /etc/resolv.conf; #resolv.conf wants an IP, not a hostname

#Run a test
echo 'zone "example.com" { type master; file "blacklisted.db"; };' >> \
	/var/named/blacklisted.conf;
service named reload;
FOUND_IP=`dig -t A example.com | grep -ioP '^example\.com\..*?in\s+a\s+[\d\.:]+' | \
	grep -oP '[\d\.:]+$'`;
if [ "$YOUR_IP" == "$FOUND_IP" ]
then
	echo 'Success: Example domain matches your given IP' > /dev/stderr;
else
	echo 'Warning: Example domain does not match your given IP' > /dev/stderr;
fi

#Have the server return a message when a blacklisted domain is accessed
echo "$BLOCKED_ERROR_MESSAGE" > /var/www/html/index.html;
perl -0777 -pi -e 's~(<Directory "/var/www/html">.*?\n\s*AllowOverride).*?\n~$1 All~s' \
	/etc/httpd/conf/httpd.conf;
echo -n 'RewriteEngine on
RewriteCond %{REQUEST_URI} !index.html
RewriteCond %{REQUEST_URI} !AddRules/
RewriteRule ^(.*)$ /index.html [L]' > /var/www/html/.htaccess;
service httpd graceful;

#Create a script that allows apache to refresh the name server’s settings
mkdir /var/www/html/AddRules;
echo '/sbin/service named reload' > /var/www/html/AddRules/restart_named;
chmod 755 /var/www/html/AddRules/restart_named;
echo 'apache ALL=(root) NOPASSWD: /var/www/html/AddRules/restart_named
Defaults!/var/www/html/AddRules/restart_named !requiretty' >> /etc/sudoers;

#Create a script that allows the user to add, remove, and list the blacklisted domains
echo -n $'<?php
//Get old domains
$BlockedFile=\'/var/named/blacklisted.conf\';
$CurrentZones=Array();
foreach(explode("\\n", file_get_contents($BlockedFile)) as $Line)
	if(preg_match(\'/^zone "([\\w\\._-]+)"/\', $Line, $Results))
		$CurrentZones[]=$Results[1];

//List domains
if(isset($_REQUEST[\'List\']))
	return print implode(\'<br>\', $CurrentZones);

//Get new domains
if(!isset($_REQUEST[\'Domains\']))
	return print \'Missing Domains\';
$Domains=$_REQUEST[\'Domains\'];
if(!preg_match(\'/^[\\w\\._-]+(,[\\w\\._-]+)*$/uD\', $Domains))
	return print \'Invalid domains string\';
$Domains=explode(\',\', $Domains);

//Remove domains
if(isset($_REQUEST[\'Remove\']))
{
	$CurrentZones=array_flip($CurrentZones);
	foreach($Domains as $Domain)
		unset($CurrentZones[$Domain]);
	$FinalDomainList=array_keys($CurrentZones);
}
else //Combine domains
	$FinalDomainList=array_unique(array_merge($Domains, $CurrentZones));

//Output to the file
$FinalDomainData=Array();
foreach($FinalDomainList as $Domain)
	$FinalDomainData[]="zone \\"$Domain\\" { type master; file \\"blacklisted.db\\"; };";
file_put_contents($BlockedFile, implode("\\n", $FinalDomainData));

//Reload named
print `sudo /var/www/html/AddRules/restart_named`;
?>' > /var/www/html/AddRules/index.php;
usermod -a -G named apache;
service httpd graceful;

#Password protect the domain update script
echo -n 'AuthType Basic
AuthName "Admins Only"
AuthUserFile "/var/www/html/AddRules/.htpasswd"
require valid-user' > /var/www/html/AddRules/.htaccess;
htpasswd -bc /var/www/html/AddRules/.htpasswd "$ADDRULES_USERNAME" "$ADDRULES_PASSWORD";

echo 'Script complete';
When a good idea is still considered too much by some
While UTF-8 has almost universally been accepted as the de facto standard for Unicode character encoding in most non-Windows systems (mmmmmm Plan 9 ^_^), the BOM (Byte Order Marker) still has large adoption problems. While I have been allowing my text editors to add the UTF8 BOM to the beginning of all my text files for years, I have finally decided to rescind this practice for compatibility reasons.
While the UTF8 BOM is useful so that editors know for sure what the character encoding of a file is and don’t have to guess, it is not really supported, for various reasons, in Unixland. Having to code around this was becoming cumbersome. Programs like vi and pico/nano seem to ignore a file’s character encoding anyway and adopt the character encoding of the current terminal session.
The main culprit I was running into this problem with is PHP. The funny thing is that I had a solution for it working properly in Linux, but not Windows :-).
Web browsers do not expect to receive the BOM marker at the beginning of files, and if they encounter it, may have serious problems. For example, in a certain browser (*cough*IE*cough*) having a BOM on a file will cause the browser to not properly read the DOCTYPE, which can cause all sorts of nasty compatibility issues.
Something in my LAMP setup on my cPanel systems was removing the initial BOM at the beginning of outputted PHP contents, but through some preliminary research I could not find out why this was not occurring in Windows. However, both systems were receiving multiple BOMs at the beginning of the output due to PHP’s include/require functions not stripping the BOM from those included files. My solution to this was a simple overload of these include functions as follows (only required when called from any directly opened [non-included] PHP file):
<?php
/*Safe include/require functions that make sure UTF8 BOM is not output
Use like: eval(safe_INCLUDETYPE($INCLUDE_FILE_NAME));
where INCLUDETYPE is one of the following: include, require, include_once, require_once
An eval statement is used to maintain current scope
*/
//The different include type functions
function safe_include($FileName) { return real_safe_include($FileName, 'include'); }
function safe_require($FileName) { return real_safe_include($FileName, 'require'); }
function safe_include_once($FileName) { return real_safe_include($FileName, 'include_once'); }
function safe_require_once($FileName) { return real_safe_include($FileName, 'require_once'); }
//Start the processing and return the eval statement
function real_safe_include($FileName, $IncludeType)
{
ob_start();
return "$IncludeType('".strtr($FileName, Array("\\"=>"\\\\", "'", "\\'"))."'); safe_output_handler();";
}
//Do the actual processing and return the include data
function safe_output_handler()
{
$Output=ob_get_clean();
while(substr($Output, 0, 3)=="\xEF\xBB\xBF") //Remove all instances of the UTF8 BOM (bytes EF BB BF) at the beginning of the output
$Output=substr($Output, 3);
print $Output;
}
?>
I would have liked to use PHP’s output_handler ini setting to catch even the root file’s BOM and not require include function overloads, but, as php.net puts it, “Only built-in functions can be used with this directive. For user defined functions, use ob_start().”.
As a bonus, the following bash command can be used to find all PHP files in the current directory tree with a UTF8 BOM:
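A sketch of such a command, using GNU grep’s Perl-mode (-P) byte escapes and shown against two throwaway files (the file names are illustrative):

```shell
cd "$(mktemp -d)"                                   # scratch dir for the demo
printf '\357\273\277<?php echo 1;\n' > bom.php      # starts with the UTF8 BOM (EF BB BF)
printf '<?php echo 2;\n' > plain.php                # no BOM
# List all PHP files under the current directory tree that begin with the BOM
grep -rlP --include='*.php' '^\xEF\xBB\xBF' .
```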
I think it was actually much higher than this, but it wouldn’t let me log in to find out! >:-( . Wish I could easily make SSH and everything I do in it have priority over other processes... but then again, I probably wouldn’t be able to do anything to fix the load when this sometimes happens anyways. *sighs*
I’ll explain more about “load” in an upcoming post.
I am still, very unfortunately, looking into the problem I talked about way back here :-( [not a lot, but it still persists]. This time I decided to try booting the OS into a “Safe Mode” with nothing running that could hinder performance tests (like hundreds of HTTP and MySQL sessions). Fortunately, my friend, who is a Linux server admin for a tech firm, was able to point me in the right direction after my own research on the topic proved frustratingly fruitless.
Linux has “runlevels” it can run at, which are listed in “/etc/inittab” as follows:
# Default runlevel. The runlevels used by RHS are:
# 0 - halt (Do NOT set initdefault to this)
# 1 - Single user mode
# 2 - Multiuser, without NFS (The same as 3, if you do not have networking)
# 3 - Full multiuser mode
# 4 - unused
# 5 - X11
# 6 - reboot (Do NOT set initdefault to this)
So I needed to get into “Single user mode” to run the tests, which could be done two ways. Before I tell you how though, it is important to note that if you are trying to do something like this remotely, normal SSH/Telnet will not be accessible, so you will need either physical access to the computer, or something like a serial console connection, which can be routed through networks.
So the two ways are:
Through the “init” command. Running “init #” at the console, where # is the runlevel number, will bring you into that runlevel. This might not kill all unneeded running processes when going to a lower level, but it should get the majority of them, I believe.
Append “s” (for single user mode) to the grub configuration file (/boot/grub/grub.conf on my system) at the end of the line starting with “kernel”, then reboot. I am told appending a runlevel number may also work.
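For illustration, a hypothetical grub.conf entry with the flag appended (kernel version, paths, and volume names will differ on your system):

```
title CentOS (2.6.18-8.el5)
	root (hd0,0)
	kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol00 s
	initrd /initrd-2.6.18-8.el5.img
```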
If you ever find a file named “core.#” when running Linux, where # is replaced by a number, it means something crashed at some point. Most of the time you will probably just want to delete the file, but sometimes you may wonder what crashed. To find out, you can use gdb (the GNU debugger), a very powerful tool, to analyze the core dump file.
gdb --core=COREFILENAME
Near the very bottom of the blob of outputted text after running this command, you should see a line that says “Core was generated by `...'.”. This tells you the command line of what crashed. To exit gdb, enter “quit”. You can also use gdb to find out what actually happened and troubleshoot/debug the problem, but that’s a very long and complex topic.
Recently, I started seeing hundreds of core dump files taking up gigabytes of space in “/usr/local/cpanel/whostmgr/docroot/” on several of our web servers. According to several online sources, cPanel (web hosting made easy!) likes to dump many, if not all, of its programs' core files into this directory. In our case, it has been “dnsadmin” doing the crashing. We’ve been having some pretty major DNS problems lately, this time at the name server level, so I may have to rebuild our DNS cluster in the next few days. Joy.
First, to find out more about any bash command, use
man COMMAND
Now, a primer on the three (IMO) most useful bash commands:
find:
find searches through a directory and its subdirectories for objects (files, directories, links, etc) satisfying its parameters.
Parameters are written like a math query, with parentheses for order of operations (make sure to escape them with a “\”!), -a for boolean “and”, -o for boolean “or”, and ! for “not”. If neither -a nor -o is specified, -a is assumed.
For example, to find all files that contain “conf” but do not contain “.bak” as the extension, OR are greater than 5MB:
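A sketch of such a query, run against hypothetical scratch files (+5120k means more than 5120 1K-blocks, i.e. over 5MB):

```shell
cd "$(mktemp -d)"          # scratch directory for the demo
printf x > httpd.conf      # contains "conf"
printf x > httpd.conf.bak  # contains "conf" but has the .bak extension
truncate -s 6M big.dat     # over 5MB
# Matches: files named *conf* without a .bak extension, OR files over 5MB
find . -type f \( \( -name '*conf*' -a ! -name '*.bak' \) -o -size +5120k \)
```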
Some useful parameters include:
-maxdepth & -mindepth: only look through certain levels of subdirectories
-name: name of the object (-iname for case insensitive)
-regex: name of object matches regular expression
-size: size of object
-type: type of object (block special, character special, directory, named pipe, regular file, symbolic link, socket, etc)
-user & -group: object is owned by user/group
-exec: exec a command on found objects
-print0: output each object separated by a null terminator (great so other programs don’t get confused from white space characters)
-printf: output specified information on each found object (see man file)
For any number operations, use:
+n
for greater than n
-n
for less than n
n
for exactly n
For a complete reference, see your find’s man page.
xargs:
xargs passes piped arguments to another command as trailing arguments.
For example, to list information on all files in a directory greater than 1MB: (Note this will not work with paths with spaces in them, use “find -print0” and “xargs -0” to fix this)
find -size +1024k | xargs ls -l
Some useful parameters include:
-0: piped arguments are separated by null terminators
-n: max arguments passed to each command
-i: replaces “{}” in the command with the piped argument(s) (deprecated in GNU xargs; -I{} is the modern spelling)
So, for example, if you had 2 mirrored directories, and wanted to sync their modification timestamps:
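A sketch of that sync, assuming dir1 holds the authoritative timestamps and dir2 is the mirror (directory and file names here are illustrative):

```shell
cd "$(mktemp -d)"                    # scratch demo setup
mkdir dir1 dir2
echo data > dir1/f
echo data > dir2/f
touch -d '2001-01-01' dir1/f         # the mirror's mtime no longer matches
# Stamp each file in dir2 with the mtime of its counterpart in dir1
(cd dir1 && find . -type f -print0 | xargs -0 -I{} touch -r {} ../dir2/{})
```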
GREP is used to search through data for plain text, regular expression, or other pattern matches. You can use it to search through both pipes and files.
For example, to get your number of CPUs and their speeds:
cat /proc/cpuinfo | grep MHz
Some useful parameters include:
-E: use extended regular expressions
-P: use perl regular expression
-l: output files with at least one match (-L for no matches)
-o: show only the matching part of the line
-r: recursively search through directories
-v: invert to only output non-matching lines
-Z: separates matches with null terminator
So, for example, to list all files under your current directory that contain “foo1”, “foo2”, or “bar”, you would use:
grep -rlE "foo(1|2)|bar"
For a complete reference, see your grep’s man page.
And now some useful commands and scripts:
List size of subdirectories:
du --max-depth=1
The --max-depth parameter specifies how many sub levels to list.
-h can be added for more human readable sizes.
List number of files in each subdirectory*:
#!/bin/bash
export IFS=$'\n' #Forces only newlines to be considered argument separators
for dir in `find -maxdepth 1 -type d`
do
a=`find "$dir" -type f | wc -l`;
if [ $a != "0" ]
then
echo $dir $a
fi
done
and to sort those results
SCRIPTNAME | sort -n -k2
List number of different file extensions in current directory and subdirectories:
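One way to sketch this, tallying files by their final extension (scratch file names are illustrative):

```shell
cd "$(mktemp -d)"                  # scratch files for the demo
touch a.txt b.txt c.php
# Pull each file's extension, then count occurrences of each, most common first
find . -type f | grep -oE '\.[^./]+$' | sort | uniq -c | sort -rn
```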
If you want in-place (“-i”) edits to make pre-edit backups, include an extension after “-i”, like “-i.orig”
Perform operations in directories with too many files to pass as arguments: (in this example, remove all files from a directory 100 at a time instead of using “rm -f *”)
find -type f | xargs -n100 rm -f
Force kill all processes with a given name:
killall -9 PROCESSNAME
Transfer MySQL databases between servers: (Works in Windows too)
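A typical one-liner for this (hypothetical server, user, and database names; not the post’s original command, and the mysql client must be able to reach the destination server):

```
mysqldump -uUSER -pPASSWORD DBNAME | mysql -hNEWSERVER -uUSER -pPASSWORD DBNAME
```

Since both mysqldump and mysql ship with the Windows MySQL install, the same pipe works there too.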
Some lesser known commands that are useful:
screen: This opens up a virtual console session that can be disconnected and reconnected from without stopping the session. This is great when connecting to console through SSH so you don’t lose your progress if disconnected.
htop: An updated version of top, which is a process information viewer.
iotop: A process I/O (input/output - hard drive access) information viewer. Requires Python ≥ 2.5 and I/O accounting support compiled into the Linux kernel.
dig: Domain information retrieval. See the “Diagnosing DNS Problems” post for more information.
More to come later...
*Anything starting with “#!/bin/bash” is intended to be put into a script.
So I have been having major speed issues with one of our servers. After countless hours of diagnosis, I determined the bottleneck was always I/O (input/output, accessing the hard drive). For example, when running an MD5 hash on a 600MB file, load would jump up to 31 with 4 logical CPUs and it would take 5-10 minutes to complete. When performing the same test on the same machine on a second drive, it finished within seconds.
Replacing the hard drive itself is a last resort for a live production server, and a friend suggested the drive controller could be the problem, so I confirmed that the drive controller for our server was not on-board (it was on its own card), and I attempted to convince the company hosting our server of the problem so they would replace the drive controller. I ran my own tests first with an iostat check while doing a read of the main hard drive (cat /dev/sda > /dev/null). This produced steadily worsening results the longer the test went on, and always much worse than our secondary drive. I passed these results on to the hosting company, and they replied that a “badblocks -vv” produced results that showed things looked fine.
So I was about to go run their test to confirm the findings, but decided to check parameters first, as I always like to do before running new Linux commands. Thank Thor I did. The admin had meant to write “badblocks -v” (verbose) and typoed with a double key stroke. The two v’s looked like a w due to the font, and had I run a “badblocks -w” (write-mode test), I would have wiped out the entire hard drive.
Anyways, the test outputted the same basic results as my iostat test with throughput results very quickly decreasing from a remotely acceptable level to almost nil. Of course, the admin only took the best results of the test, ignoring the rest.
I had them swap out the drive controller anyways, and it hasn’t fixed things, so a hard drive replacement will probably be needed soon. This kind of problem would be trivial if I had access to the server and could just test the hardware myself, but that is the price to pay for proper security at a server farm.