SysRq on Arch Linux Mac Mini

This post documents my adventures in getting the SysRq key working on my Mac Mini and MacBook (both running Arch Linux). The loadkeys and keyfuzz suggestions that come up first in a search didn’t work for me, so some more sophisticated black magic was necessary.

Remapping the Fn keys

This step is technically optional, but I did it because the function keys are a pain anyways. Normally on Apple keyboards one needs to hold the Fn key to get the function keys to produce an actual F<n> keystroke. I prefer to reverse this behavior, so that the SysRq combination is, say, Alt+F13+F rather than Fn+Alt+F13+F.

For this, the advice on the Arch Wiki worked, although it glosses over a few points that I think are worth spelling out. On newer kernels, one does this by creating the file /etc/modprobe.d/hid_apple.conf and writing

options hid_apple fnmode=2

Then I edited the file /etc/mkinitcpio.conf to include the new file:

...
BINARIES=""

# FILES
# This setting is similar to BINARIES above, however, files are added
# as-is and are not parsed in any way.  This is useful for config files.
FILES="/etc/modprobe.d/hid_apple.conf"

# HOOKS
...

Finally, regenerate the initramfs for this change to take effect. On Arch Linux one can do this by simply reinstalling the kernel package:

$ sudo pacman -S linux

which triggers mkinitcpio to rebuild the initramfs images.
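
If you would rather not reinstall the kernel package, regenerating the initramfs directly should do the same thing (the reinstall above just triggers this anyway):

$ sudo mkinitcpio -P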

Obtaining the scancode

Next, I needed to get the scancode of the key I wanted to turn into the SysRq key. For me, showkey -s did not work, so I instead had to use evtest, as described on the Arch Wiki.

$ sudo pacman -S evtest
$ sudo evtest
No device specified, trying to scan all of /dev/input/event*
Available devices:
/dev/input/event0:  Logitech USB Receiver
/dev/input/event1:  Logitech USB Receiver
/dev/input/event2:  Apple, Inc Apple Keyboard
/dev/input/event3:  Apple, Inc Apple Keyboard
/dev/input/event4:  Apple Computer, Inc. IR Receiver
/dev/input/event5:  HDA NVidia Headphone
/dev/input/event6:  HDA NVidia HDMI/DP,pcm=3
/dev/input/event7:  Power Button
/dev/input/event8:  Sleep Button
/dev/input/event9:  Power Button
/dev/input/event10: Video Bus
/dev/input/event11: PC Speaker
/dev/input/event12: HDA NVidia HDMI/DP,pcm=7
/dev/input/event13: HDA NVidia HDMI/DP,pcm=8
Select the device event number [0-13]: 2
Input driver version is 1.0.1
Input device ID: bus 0x3 vendor 0x5ac product 0x220 version 0x111
Input device name: "Apple, Inc Apple Keyboard"

This is on my Mac Mini; the list of devices looks different on my laptop. After this, pressing the desired key yielded something like

Event: time 1456870457.844237, -------------- SYN_REPORT ------------
Event: time 1456870457.924097, type 4 (EV_MSC), code 4 (MSC_SCAN), value 70068
Event: time 1456870457.924097, type 1 (EV_KEY), code 183 (KEY_F13), value 1

This is the F13 key, which I want to map to SysRq. The value 70068 above is the scancode (in hex), and that is the number I wanted.

Using udev

Now that I had the scancode, I cd’ed to /etc/udev/hwdb.d and added a file
90-keyboard-sysrq.hwdb with the content

evdev:input:b0003*
  KEYBOARD_KEY_70068=sysrq
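
If you want the remap to apply only to this particular keyboard rather than to every USB keyboard, the match line can also include the vendor and product IDs that evtest reported above (0x5ac and 0x220 in my case). A sketch, which I have not tested on other machines:

evdev:input:b0003v05ACp0220*
  KEYBOARD_KEY_70068=sysrq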

One then updates hwdb.bin by running the commands

$ sudo udevadm hwdb --update
$ sudo udevadm trigger

The latter command makes the changes take effect immediately. You should be able to test this by running sudo evtest again; evtest should now report the new keycode (but the same scancode).

One can test the SysRq key by pressing Alt+SysRq+H, and then checking the dmesg output to see if anything happened:

$ dmesg | tail -n 1
[  283.001240] sysrq: SysRq : HELP : loglevel(0-9) reboot(b) crash(c) ...

Enabling SysRq

It remains to actually enable SysRq, according to the bitmask described here. My system default was apparently 16:

$ sysctl kernel.sysrq
kernel.sysrq = 16

For my purposes, I then edited /etc/sysctl.d/99-sysctl.conf and added the line

kernel.sysrq=254

This gave me everything except the nicing of real-time tasks. Of course the choice of value here is just personal preference.
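
The new value only takes effect once the sysctl configuration is reloaded (or after a reboot). To apply and verify it immediately, something like:

$ sudo sysctl -w kernel.sysrq=254
$ sysctl kernel.sysrq
kernel.sysrq = 254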

Personally, my main use for this is killing Chromium, which has a bad habit of freezing up my computer (especially if Firefox is open too). I remedy the situation by repeatedly pressing Alt+SysRq+F to kill off the memory hogs. If this doesn’t work, Alt+SysRq+K kills off all the processes in the current TTY.


DNSCrypt Setup with PDNSD

Here are notes for setting up DNSCrypt on Arch Linux, using pdnsd as a DNS cache, assuming the use of NetworkManager. I needed it one day since the network I was using blocked traffic to external DNS servers (parental controls), and the DNS server provided had an outdated entry for hmmt.co. (My dad then pointed out to me I could have just hard-coded the necessary IP address in /etc/hosts, oops.)

For the whole process, useful commands to test with are:

  • nslookup hmmt.co will tell you the IP used and the server from which it came.
  • dig www.hmmt.co gives much more detailed information to this effect. (From bind-tools.)
  • dig @127.0.0.1 www.hmmt.co lets you query a specific DNS server (in this case 127.0.0.1).
  • drill @127.0.0.1 www.hmmt.co behaves similarly.

First, pacman -S pdnsd dnscrypt-proxy (with sudo, of course, but I’ll leave that out here and henceforth).

Run systemctl edit dnscrypt-proxy.socket and fill in override.conf with

[Socket]
ListenStream=
ListenDatagram=
ListenStream=127.0.0.1:40
ListenDatagram=127.0.0.1:40

Optionally, one can also specify which DNSCrypt resolver to use with systemctl edit dnscrypt-proxy.service. For example, for cs-uswest I write

[Service]
ExecStart=
ExecStart=/usr/bin/dnscrypt-proxy \
      -R cs-uswest

The empty ExecStart= is necessary, since otherwise systemctl will complain about multiple ExecStart commands.

This configures dnscrypt-proxy to listen on 127.0.0.1, port 40.
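
Once the proxy is running (the systemctl commands are at the end of this post), one can check that queries against port 40 actually resolve; both dig and drill accept a -p flag for the port:

$ dig @127.0.0.1 -p 40 hmmt.co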

Now we configure pdnsd to listen on port 53 (default) for cache, and relay cache misses to dnscrypt-proxy. This is accomplished by using the following for /etc/pdnsd.conf:

global {
    perm_cache = 1024;
    cache_dir = "/var/cache/pdnsd";
    run_as = "pdnsd";
    server_ip = 127.0.0.1;
    status_ctl = on;
    query_method = udp_tcp;
    min_ttl = 15m;       # Retain cached entries at least 15 minutes.
    max_ttl = 1w;        # One week.
    timeout = 10;        # Global timeout option (10 seconds).
    neg_domain_pol = on;
    udpbufsize = 1024;   # Upper limit on the size of UDP messages.
}

server {
    label = "dnscrypt-proxy";
    ip = 127.0.0.1;
    port = 40;
    timeout = 4;
    proxy_only = on;
}

source {
    owner = localhost;
    file = "/etc/hosts";
}
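
Once pdnsd is running, a quick sanity check for the cache is to query the same name twice through it; the second query should come back almost instantly from the cache. Since status_ctl = on above, pdnsd-ctl can also report what the daemon is doing:

$ dig @127.0.0.1 hmmt.co    # first query is relayed to dnscrypt-proxy
$ dig @127.0.0.1 hmmt.co    # second query should be answered from pdnsd's cache
$ sudo pdnsd-ctl status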

Now it remains to change the DNS server from whatever default is used into 127.0.0.1. For NetworkManager users, it is necessary to edit /etc/NetworkManager/NetworkManager.conf to prevent it from overriding this file:

[main]
...
dns=none

With dns=none, NetworkManager stops overwriting resolv.conf; in my case the file ended up empty, and with no nameserver entries the resolver falls back to 127.0.0.1, which is what we want.
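
If you prefer being explicit to relying on the fallback, you can also write the nameserver into /etc/resolv.conf yourself once NetworkManager stops managing it:

nameserver 127.0.0.1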

Needless to say, one finishes with

systemctl enable dnscrypt-proxy
systemctl start dnscrypt-proxy
systemctl enable pdnsd
systemctl start pdnsd
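
As a final check, an ordinary lookup should now be answered by pdnsd; the SERVER line in the dig output should point at 127.0.0.1#53:

$ dig hmmt.co | grep SERVER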

Shifting PDFs using gs

Some time ago I was reading the 18.785 analytic NT notes
to try and figure out some sections of Davenport that I couldn’t understand.
These notes looked nice enough that I decided I should probably print them out,
but much to my annoyance I found that almost all the top margins were too small, and the bottom margins too big.
(I say “almost all” since lectures 19 and 24 (the Bombieri proof and elliptic curves) were totally fine, for inexplicable reasons.)

Thus, instead of reading Davenport like I told myself to, I ended up learning enough GhostScript flags to write the following short script,
which I’m going to share today so that other people can find better things to do with their time.

    #!/bin/bash
    # Shift the page content of each input PDF down by half an inch.
    for file in "$@"
    do
        echo "Shifting $file ..."
        gs \
            -sDEVICE=pdfwrite \
            -o "shifted-$file" \
            -dPDFSETTINGS=/prepress \
            -c "<</PageOffset [0 -36]>> setpagedevice" \
            -f "$file"
    done
    

The arguments 0 and -36 tell gs not to shift the content horizontally, but to move it vertically downwards by 36 pt (half an inch).
Of course, they can and should be adjusted depending on the specific task.
Invocation is the standard ./script-name.sh *.pdf (or whatever).

(Aside: ironically, this decreased the file sizes of the affected PDFs.)

Git Aliases

For Git users:

I’ve recently discovered the joy that is git aliases, courtesy of this blog post. To return the favor, I thought I’d share the ones that I came up with.

For those of you that don’t already know, Git allows you to make aliases — shortcuts for commands. Specifically, if you add the following lines to your .gitconfig:

[alias]
    cm = commit
    co = checkout
    br = branch

Then running git cm will expand to git commit, and git co master is git checkout master, and so on. You can see how this might make you happy because it saves a few keystrokes. But I think it’s more useful than that, so let me share what I did.
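
If you would rather not edit .gitconfig by hand, the same aliases can also be added from the shell, which writes them into the [alias] section for you:

git config --global alias.cm commit
git config --global alias.co checkout
git config --global alias.br branch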

The first thing I did was add

pu = pull origin
psh = push origin

and permanently save myself the frustration of forgetting to type origin. Not bad. Even more helpful was the command

undo = reset --soft HEAD~1

Thus if I make a commit and then decide I want to undo it, rather than having to remember (or Google) what the correct incantations were, I just have to type git undo. It’s really an undo button!

Now for the fun part — some of Git’s useful commands are pretty verbose and take up lots of space. For example, here’s what git status looks like:
[screenshot: git status output]

Kind of verbose if you ask me, and by now I know what “git pull” does. Fortunately, it turns out that there are some options you can run to make this look nicer. All you have to do is say git status -s -b, or in the context of this post, set the alias

ss = status -s -b

Then you get
[screenshot: git status -s -b output]

which is much cooler.

Similarly, git log takes up a lot of space. I have the following format, which I’ve edited from the above blog post to suit my own tastes.

ls = log -n 16 --pretty=format:"%C(yellow)%h\\ %C(cyan)[%cn]\\ %C(reset)%s\\ %C(red)%d" --decorate 
ll = log -n 6 --pretty=format:"%C(yellow)%h\\ %C(cyan)[%cn]\\ %C(reset)%s\\ %C(red)%ad" --decorate --date=short --stat

These give in my opinion the much more readable format
[screenshot: git ls / git ll output]

If you’re on a branch that does merges, you might also have fun with

tree = log -n 16 --pretty=format:"%C(yellow)%h\\ %C(cyan)[%cn]\\ %C(reset)%s\\ %C(red)%d" --decorate --graph

which will put these into a graphical tree for your viewing pleasure.

And finally a few more that I find nice, some again taken directly from the link above:

fail = commit --amend # to avoid stupid "oops typo" commits
rb = rebase
rbc = rebase --continue
bis = bisect
dc = checkout --
assume = update-index --assume-unchanged
unassume = update-index --no-assume-unchanged
assumed = "!git ls-files -v | grep ^h | cut -c 3-"

(Here “dc” is short for “discard”, since git dc file discards the changes to that file.) And that’s just the beginning of what you can do!

Pre-emptive answer: I’m also using git-completion (for tab-completing in git) and git-prompt with the line

export PS1='\[\033[0;32m\]${debian_chroot:+($debian_chroot)}\u@\h \[\033[0;33m\]\w$(__git_ps1 " \[\033[1;31m\]#%s")\n\[\033[0m\]\$ '

in my bashrc. That’s where the branch indicators are coming from. The terminal is xfce4-terminal.

Arch Linux on a Mac Mini

This post briefly outlines the process of setting up a dual boot OSX and Arch Linux on a Mac Mini. This is mostly for my reference in the likely event that I will be doing anything similar in some years, so it assumes some competence; fortunately, the Arch Wiki’s Beginner’s Guide probably fills in any gaps I left out. Obligatory Disclaimer: Use at your own risk or not at all.

This is almost the same as any other installation of Arch Linux, with a few changes that took some hours of frustration to figure out because of EFI booting. My method is to create the partitions in Disk Utility, install rEFInd, and then install the GRUB bootloader onto the EFI partition (/dev/sda1).

Setup done in OSX

  1. First, install rEFInd. This worked out of the box for me, and makes it possible to boot via USB.
  2. Create an Arch Linux installer USB by dd-ing (or anything else) the latest installation medium onto a USB drive.
  3. Set up the partitions; I find it easier (and less dangerous) to just use the OSX Disk Utility to do this. See, for example, the procedure here. My OSX installations appear to come with three partitions, a small one called “EFI”, a main “OS X HD” partition, and then a small “Recovery HD”, like so:
    NAME   LABEL       TYPE   SIZE
    sda                disk 931.5G 
    |-sda1 EFI         part   200M
    |-sda2 OS X HD     part 927.9G
    `-sda3 Recovery HD part 619.9M

    (This output is from lsblk, and is not what Disk Utility looks like).
    I like to create a fourth partition for my Arch Linux system (which I name “Arch”) and a fifth partition just for the /home directory (which I name “Home”). This leaves me with something like

    NAME   LABEL       TYPE   SIZE
    sda                disk 931.5G
    |-sda1 EFI         part   200M
    |-sda2 OS X HD     part 476.9G
    |-sda3 Recovery HD part 619.9M
    |-sda4 Arch        part   179G
    `-sda5 Home        part   272G

Booting into the USB and finishing up the partitions

Now that the partitions and rEFInd are set up, and the USB is written, we can proceed with the actual installation.
At this point, one can basically follow the standard procedure with a few changes.

  1. Reboot the device into the USB. Since rEFInd is installed, it should give you the option of booting into the USB.
  2. Establish an Internet connection as required.
  3. We’ve already created the partitions in Disk Utility above, so there is no need to change the partitions themselves now. However, it is necessary to format the newly created partitions. In my case, the relevant commands are
    # mkfs.ext4 /dev/sda4
    # mkfs.ext4 /dev/sda5
    

    Warning: Please, please make sure that you are formatting the right partitions. The command lsblk -f will print out the partitions and their labels.

  4. Now we need to mount the filesystems. There are three partitions we need to mount: the main filesystem and the home partition, as well as the EFI boot partition. The part that was non-obvious to me is that the boot partition we want is actually the “EFI” partition (likely /dev/sda1) that OSX already provides. The relevant commands in my case were
    # mount /dev/sda4 /mnt
    # mkdir /mnt/home
    # mount /dev/sda5 /mnt/home
    # mkdir /mnt/boot
    # mount /dev/sda1 /mnt/boot
    
  5. Now you can happily install the base system (a pacstrap sketch is included below) and generate an fstab file:
    # genfstab -U -p /mnt >> /mnt/etc/fstab
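
    At the time, installing the base system was something along the lines of the following (the exact package set changes over the years, so treat this as a sketch rather than a recipe):

    # pacstrap /mnt base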
    

Configuring the base system and installing the bootloader

  1. Now we can chroot into the system and follow all the directions, up to (but not including) installing the bootloader.
  2. I could not get gummiboot to work, but maybe you will have better luck. Fortunately, with /dev/sda1 mounted as /boot, I got GRUB to work nicely.
    # pacman -S grub
    # grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=arch_grub --recheck
    # grub-mkconfig -o /boot/grub/grub.cfg
    
  3. Now we can exit the chroot environment and power down the system.

If all goes well, upon rebooting, rEFInd will now boot into the complete Arch Linux system.

PDF Compression

I always scan copies of letters into my computer before I send them out. So I had a bunch of large PDFs sitting around hogging my Dropbox space.

One day I found this blog post which claimed that simply running (in Bash) the commands
$ pdf2ps original.pdf temp.ps
$ ps2pdf temp.ps new.pdf

would decrease the file size. (The two commands are part of GhostScript, which I had installed on my Linux boxes anyways.) I couldn’t resist trying it — and miraculously, it worked. It generally decreases my scans by a factor of 10 (from 20MB to 2MB or so).

I have no clue why this works, although it probably has something to do with the fact that the PDFs are scanned pages. Anyone care to enlighten me?
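
For what it’s worth, Ghostscript can do something similar in a single step through its pdfwrite device, skipping the intermediate PostScript file; a sketch (the /ebook preset trades some image quality for size):

$ gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -o new.pdf original.pdf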

Email, JetPack, and Wintermelon

So I guess I can resume blogging now, seeing that I’m done with college applications (at last!). I’m not sure what I plan to blog about in general, but I figured I might as well put this domain name to good use :) I also realized that writing things out helped me clarify my thinking a lot (actually Qiaochu Yuan recommended this for math in particular), so I’ll be trying to do that more often this 2014 = 2 * 19 * 53 and onwards.

Onto the actual content, anyways. In this post I’ll talk about the inspiration and development for one of my afternoon projects, which I’ve named wintermelon for no good reason.

A while back Jacob Steinhardt recommended to the SPARC alumni list that we check our email at most twice a day. I was able to follow this suggestion for a day, and was really impressed by how it felt: I realized that I had started to use email as a distraction, something to keep my brain from noticing it wasn’t doing anything. The same went for the Art of Problem Solving forums (which I frequently visit) as well as Facebook, so I also tried limiting the number of times I checked each of those per day. Unfortunately, old habits do not die easily, and I found myself automatically visiting those sites whenever I wasn’t doing anything.

A couple of days ago, while I was reviewing my goals and realizing that I wasn’t following this one, I remembered the title text of XKCD 862:
After years of trying various methods, I broke this habit by pitting my impatience against my laziness. I decoupled the action and the neurological reward by setting up a simple 30-second delay I had to wait through, in which I couldn't do anything else, before any new page or chat client would load (and only allowed one to run at once). The urge to check all those sites magically vanished--and my 'productive' computer use was unaffected.

Sounded like fun! The XKCD version seemed a little extreme, but I could definitely do with a script that would make me wait 50 seconds before reading Facebook. I estimated it would take me about two hours to read/learn the API and write the code to put this together; it turns out my estimate was roughly correct.

I’m a Firefox user, so it made sense for me to try to put this together as a Firefox extension. A quick Google search led me to Jetpack (the Mozilla Add-on SDK), which lets you build a Firefox extension using just JavaScript. They had very nice tutorials, too.

Drilling down, the things I needed to make this thing fly were:

  1. Something to trigger every time a webpage was launched. This was conveniently covered under “Listen for page load”.
  2. Something to actually lock the webpage. This was easy: I just set body.style.visibility = "hidden"; in JS.
  3. Timers for a delay. This was handled by the JS window.setTimeout().
  4. Something to store the websites and their associated delays. I used regular expressions to specify the domain. This I did kind of painlessly through the Jetpack simple-prefs, but it was kind of an ugly hack in that I manually defined six settings for up to six websites. Maybe sometime when I’m bored I will take the time to make this work for arbitrarily many websites.
  5. A way for the individual lockdown scripts to communicate with the main script and vice versa. This took me a while to figure out, but it is essentially a bunch of emit/on hooks provided by Jetpack. I would inject a script lockout.js into the page and then send it a signal with the amount of time to lock the page.

It was actually very straightforward in retrospect, and took only a couple files of actual code. The project (which is very small) is posted on my GitHub. My estimate was about right; it took me approximately 2.5 hours from start to finish, although I admit that I was also chatting on Google Talk in the meantime. Actually I’m embarrassed it took as long as that.

The core of the program really is just two files. Here is lib/main.js, which is run from the start.

var widgets = require("sdk/widget");
var tabs = require("sdk/tabs");
var self = require("sdk/self");
var prefs = require("sdk/simple-prefs").prefs;

// TODO make these not suck
var regex_strings = new Array();
regex_strings[0] = prefs.regex1;
regex_strings[1] = prefs.regex2;
regex_strings[2] = prefs.regex3;
regex_strings[3] = prefs.regex4;
regex_strings[4] = prefs.regex5;
regex_strings[5] = prefs.regex6;

var lock_times = new Array();
lock_times[0] = prefs.time1;
lock_times[1] = prefs.time2;
lock_times[2] = prefs.time3;
lock_times[3] = prefs.time4;
lock_times[4] = prefs.time5;
lock_times[5] = prefs.time6;

// Create regular expressions
var N = regex_strings.length;
var regexes = new Array();
for (var i=0; i<regex_strings.length; i++) {
  regexes[i] = new RegExp(regex_strings[i]);
}
  
var prev_hit = -1; 
var lockdown = false; // Are we currently in a lockdown?

function lock(time) {
  var worker = tabs.activeTab.attach({
    contentScriptFile: self.data.url("lockout.js")
  });
  worker.port.emit("lock", time); // tell the worker to lock
  worker.port.on("unlock", unlock);
  lockdown = true; // prevent side loading
}

function gateway(tab) {
  var url = tab.url;
  if (lockdown) {
    // Currently under a lockdown
    // Do not allow any other tabs to load
    lock(lock_times[prev_hit]);
    return;
  }
  for (var i=0; i<N; i++) {
    var regex = regexes[i];
    if (regex.test(url) && regex_strings[i] != "") {
      if (prev_hit != i) {
        // Test positive, we are going to block
        lock(lock_times[i]);
      }
      prev_hit = i; // Remember prev hit
      return;
    }
  }
  prev_hit = -1; // Release
}

function unlock() {
  lockdown = false;
}

tabs.on("ready", gateway);

and here is data/lockout.js, which is attached to the page by the lock function:

function lock(time) {
	document.getElementsByTagName("body")[0].style.visibility = "hidden";
	if (time >= 0) {
		window.alert("Locking for " + time + " seconds.");
		window.setTimeout(unlock, time * 1000);
	}
	else {
		window.alert("Locking indefinitely.");
	}
}

function unlock() {
	window.alert("Done");
	document.getElementsByTagName("body")[0].style.visibility = "visible";
	self.port.emit("unlock");
}

self.port.on("lock", lock);

More pragmatically, I’ve been using it for only a couple days, but it seems to be working! Blank pages are not very good distractions. We’ll see if this holds up.