rutorrent + Pine64
Mon, 02/19/2018 - 20:19

Problem

I've got a Pine64 single-board computer (the first generation), and it has not-so-low memory compared to a Raspberry Pi; mine has 2GB. But once it runs Prometheus.io, Nextcloud and rTorrent (with ruTorrent as web interface), plus a Samba share, those 2GB of memory suddenly aren't that much.

The problem I had: when I started downloading a larger torrent, or multiple at once, the Pine simply froze and I had to power-cycle it (unplug, plug). That is anything but pleasant.

Clueless investigation

It took me a while to realize what the real issue was. First I thought that writing to the external drive via USB was the bottleneck and that this somehow caused the freeze. I was using Deluge before and suspected the client, so I switched to ruTorrent, but that didn't help. Then, with htop open while rTorrent downloaded at full speed, I finally saw what was going on: memory usage kept rising, most of it kernel memory, and then the board froze. IO wasn't the issue; memory was.
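The kernel-side memory growth can also be watched directly while a download runs. A minimal sketch reading /proc/meminfo (the field names are standard on Linux):

```shell
# Snapshot the interesting /proc/meminfo counters; re-run (or wrap in
# `watch -n1 ...`) while a torrent downloads to see where memory goes.
grep -E 'MemFree|^Buffers|^Cached|Slab' /proc/meminfo
```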

Quite simple solution

Then I started digging into rTorrent configuration options and performance tuning. The goal was not to increase performance but to decrease it, to use as few hardware resources as possible. I realized that keeping hundreds of file chunks open and serving them was kind of memory heavy. So I tried to limit whatever I could, without limiting the download speed too much.

And here is the configuration (in ~/.rtorrent.rc) I ended up with. Performance looks good, downloading multiple torrents at once works, and I'm happy that I don't need to hard-reboot my Pine64 anymore.

# Peers per torrent (while downloading / while seeding)
min_peers = 99
max_peers = 32
min_peers_seed = 99
max_peers_seed = 16

# Upload/download slots
max_uploads = 2
max_downloads = 2
throttle.max_downloads.set = 16
throttle.max_uploads.set = 8

# Global rate limits in KiB/s (the old-style download_rate/upload_rate
# options are deprecated aliases of these)
throttle.global_down.max_rate.set_kb = 16364
throttle.global_up.max_rate.set_kb   = 1024

# Preallocate files on disk
system.file.allocate.set = 1

# Memory-related limits
max_memory_usage = 256M
network.max_open_files = 8
network.receive_buffer.size.set = 1M
network.send_buffer.size.set = 2M
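With the 256M cap in place, it's worth confirming after a restart that rTorrent actually stays within bounds. A small check, assuming the process is simply named rtorrent (it prints a notice if it isn't running):

```shell
# Resident memory (RSS, in kB) of any running rtorrent processes.
ps -o rss=,comm= -C rtorrent || echo "rtorrent not running"
```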

Update [2018-03-02]

What I did before didn't help enough. There is another possible workaround, of course with a performance drawback: run a cron job that checks free memory and, if it is below a threshold, just drops all caches. Hopefully this helps.
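The effect can be tried by hand before wiring it into cron. Writing to /proc/sys/vm/drop_caches needs root; the value 1 drops the page cache, 2 drops dentries and inodes, 3 drops both. Running sync first writes out dirty pages so more of the cache is actually droppable:

```shell
# One-off cache drop (prints a notice instead when not run as root).
sync
if [ "$(id -u)" -eq 0 ]; then
  echo 3 > /proc/sys/vm/drop_caches
else
  echo "run as root to drop caches"
fi
```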

Create a bash file with this script:

#!/bin/bash
# Drop kernel caches when free memory falls below a threshold.
# Must run as root (writes to /proc/sys/vm/drop_caches).

# MemFree in kB
FREE=$(awk '/MemFree/ {print $2}' /proc/meminfo)

# Threshold in kB (~50MB)
FREELIMIT=50000

if [ "$FREE" -lt "$FREELIMIT" ]; then
  echo "Low memory"
  sync                                # flush dirty pages first
  echo 3 > /proc/sys/vm/drop_caches   # 3 = page cache + dentries/inodes
else
  echo "Memory is ok"
fi
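The threshold logic can be sanity-checked without touching any caches by substituting an artificial MemFree reading (the 40000 here is just a made-up value below the limit):

```shell
# Simulated check: 40000 kB "free" against the 50000 kB limit.
FREE=40000
FREELIMIT=50000
if [ "$FREE" -lt "$FREELIMIT" ]; then
  echo "Low memory"   # this branch triggers the cache drop in the real script
else
  echo "Memory is ok"
fi
```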

and run it from a cron job (note that files in /etc/cron.d include a user field, root here):

echo "* * * * * root    bash /home/pi/Devel/clearcache > /dev/null 2>&1" > /etc/cron.d/clearcache

Conclusion

If there is a problem, really try to find out the root cause. Don't stick to your first assumptions; look around.