UPDATE: My autocomplete script now ships with Rclone as of version 1.46. So you’re probably better off using that modified, maintained version than this old one here. This post will remain up as a reference. 🙂
—
I started playing with rclone in the interest of easily editing files from Google Drive with Vim. As it turns out, you can’t edit native Google Docs files in this manner, so that was a wash!
But a side benefit of this misadventure is this bash autocomplete script, which I wrote to auto-complete remote paths for rclone with a Google Drive target. It probably works for other remote types too, but I haven't tested them. 🙂
This implementation is pure bash, except for the call to rclone itself to check against known remote targets. I appreciated "An introduction to bash completion" for getting me started on the concepts; the rest of what anyone needs is in the bash manpage under complete.
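The gist of programmable completion, as a minimal sketch: here `mytool` and its hard-coded remote list are stand-ins (the real script asks `rclone listremotes` for the live list).

```shell
# Minimal programmable-completion sketch (not the full rclone script).
# `mytool` and its remote list are hypothetical; the real script gets
# the list from `rclone listremotes` instead.
_mytool() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    local remotes="gdrive: dropbox: s3:"
    # compgen filters the word list down to matches for the current word
    COMPREPLY=( $(compgen -W "$remotes" -- "$cur") )
}
complete -F _mytool mytool
```

Bash calls the registered function on Tab, handing it the command line via `COMP_WORDS`/`COMP_CWORD`; whatever lands in `COMPREPLY` becomes the completion candidates.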
As part of our podcast website migration from WordPress to Jekyll, I’m coding in a bunch of functionality that I’ve wanted for a while. The first one: a cli utility to fetch movie metadata into (semi-)structured data.
"overview":"In the post-apocalyptic future, reigning tyrannical supercomputers teleport a cyborg assassin known as the \"Terminator\" back to 1984 to kill Sarah Connor, whose unborn son is destined to lead insurgents against 21st century mechanical hegemony. Meanwhile, the human-resistance movement dispatches a lone warrior to safeguard Sarah. Can he stop the virtually indestructible killing machine?",
"release_date":"1984-10-26",
"release_year":"1984"
}
}
Check the help output for full and up-to-date functionality details, including a flag for interactive mode for when you’re less than confident that you’ll get the right answer. 🤓
You may be a Linux podcasting person looking for some ideas after a couple of years of Linux podcast production. You may be a fan of Decipher SciFi who wants to see what it looks like on the back end. In either case, welcome!
This is not meant to be the guide to doing it right, but just a record of the hacky way that we do it, for reference. It's like looking at someone's Vim config: you don't just copy the thing; you take what is useful.
Our usual recording setup is the two of us in the same room with sporadic remote guest appearances via Skype. This is the stuff we use and how (links to products on Amazon may include an affiliate code 🙂 ).
My partner and I both use ATR-2100 microphones. We settled on these for a few reasons:
Each has both XLR and USB interfaces, so we were able to continue to use them when graduating from computer to dedicated recorder.
They’re not overly sensitive so they work well when recording two people in the same room.
They’re affordable!
But, there is another. Since our mics and the system output from the Skype machine both go directly into the recorder, we need to have another microphone for our guests to hear us. We keep a Blue Yeti (which we do not recommend for the actual podcast recording) sitting in the middle of the table for this purpose, but whatever you have will probably do.
We started out recording directly into the computer with Audacity, which ultimately proved less than reliable. Maybe this could be a fine enough affordable option for one person, I suppose, but if you're recording two people in the same room you have a lot of pain-in-the-ass fiddling with audio configurations (to aggregate separate USB audio sources) to look forward to, iirc. After a while, and a few disastrous software failures in this mode, we finally got a dedicated recorder. In fact, I would recommend a dedicated recorder no matter what platform you're on.
Zoom H5 recorder
So, this thing. This thing is pretty great! It can record four channels – two single-channel XLR inputs on the bottom, and a combined stereo input from whatever “capsule” is plugged into the top. With the X/Y capsule (it ships with this one) on top we are able to plug in remote guests from a Skype session on the nearest computer and record all three tracks separately.
Our recording setup
Editing
Okay, finally on to some software stuff.
Convert
The H5 records in WAV, and can basically only record input from the capsule (Skype) in stereo. So I want to get these tracks into mono FLAC files, because FLAC is lossless without being huge.
Here is how I do this conversion (the current “production” version of this script can be found in my dotfiles).
```shell
#!/usr/bin/env bash
# Requires:
#   ffmpeg

# Usage
if [ $# -eq 0 ]; then
	cat <<-EOF
	Usage: $0 episode_dir file1 file2 file3...

	Convert a bunch of WAVs to FLAC with names to match the target dir.
	(For converting WAVs from the Zoom H5 into podcast tracks for editing by my
	naming scheme)

	episode_dir | A directory named as the title of a podcast episode.
	file1...n   | WAV files named with either a number or 'LR' as produced by
```
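Stripped of the naming logic, the core conversion is just ffmpeg. Something like this (filenames hypothetical; `-ac 1` downmixes or keeps the audio as a single mono channel):

```shell
# Mono XLR track straight across to FLAC (filenames hypothetical)
ffmpeg -i TRACK01.WAV -ac 1 01.flac
# The capsule/Skype track is stereo; -ac 1 downmixes it to mono
ffmpeg -i TRACKLR.WAV -ac 1 skype.flac
```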
With the edit done, I can then run it through a final set of filters and cut in some intro and outro music (generously provided by Caelum Rale). Final export is to 112kbps mono CBR LAME MP3 (plenty good enough for voice) with our id3 tags; then we add the thumbnail (Audacity doesn't support this), and we're done.
Final filters
We use Levelator because it works, still, and is at least free as in beer. It’s super old and out of support but it works just fine for us, so here it stays. You can get Levelator here, but the link to the Linux version is broken. The Windows version works superbly in Wine though!
Levelator is pretty easy to use
The alternative these days seems to be Auphonic, a web-based service that gives you a few free sessions per month or somesuch and is reasonably priced for more. It seems popular and I hear good things.
Thumbnail/cover art
Audacity doesn’t know how to write the cover art id3 tags, but it’s easily done on the cli anyway. eyeD3 is a good tool for this.
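For example, something like this (filenames hypothetical; eyeD3's `--add-image` takes a `path:type` pair):

```shell
# Embed front-cover art into the finished episode file
eyeD3 --add-image cover.jpg:FRONT_COVER episode.mp3
```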
Our finalized MP3s get uploaded to Libsyn. Another good option I am happy to recommend is Blubrry. Either way, as an enterprising Linux nerd, you’d only be using it for file hosting because you’ll control the RSS feed itself from your own site? Right?
All kidding aside, people in the space continue to fight over whether you should control the feed from your own domain or relinquish control to the hosting service itself. You do you, but I prefer the former. And there’s a great WordPress plugin for that too mentioned below!
Website hosting
Our post for the episode then goes up on Decipher Scifi. The one good option here is WordPress(.org), really. How and where you do this is up to you but we run a multisite WordPress install on Linux on AWS and manage it ourselves. I do have a couple of tips re link shortening and Let’s Encrypt though if you’re into that sort of thing.
And then within WordPress, we use the only (and really good!) plugin for serving up podcast RSS feeds, Blubrry PowerPress. It’s never done us wrong.
Backups
Yeah don’t forget these! Both the audio and the website.
Audio backups
When the episode is up and out, we back up the raw tracks and the separate final edits of the tracks in FLAC, and the final cut in MP3. They go to Dropbox, Google Drive, S3, whatever, and a portable hard drive or two. The idea is to have both a local and remote copy if you value your backups.
The final edit files for one of our episodes
Website backups
I wrote a small, hacky Bash script that does my WordPress backups. It tars and gzips both the entire WordPress directory and the database dump, and then moves the result around with rsync. One day I'll do the work of finding or creating a better, more robust solution. Let me know if you can recommend one?
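In outline it looks something like this. Every path, database name, and destination here is a hypothetical stand-in, not my actual config:

```shell
#!/usr/bin/env bash
# Hedged sketch of the backup described above; all paths and names
# are hypothetical placeholders -- adjust for your own install.

WP_DIR="/var/www/wordpress"
DB_NAME="wordpress"
DEST="user@backup-host:/srv/backups/"

# Date-stamped archive name, e.g. wordpress-2024-01-01.tar.gz
backup_name() {
    printf 'wordpress-%s.tar.gz' "${1:-$(date +%F)}"
}

wp_backup() {
    local archive="/tmp/$(backup_name)"
    # Dump the DB into the WordPress dir so one tarball holds everything
    mysqldump "$DB_NAME" > "$WP_DIR/db.sql"
    tar -czf "$archive" -C "$(dirname "$WP_DIR")" "$(basename "$WP_DIR")"
    # Ship it off-site
    rsync -av "$archive" "$DEST"
}

# wp_backup   # uncomment (or call from cron) to actually run it
```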
Simple text posts are one thing, but I like to use images or video when possible. I use the following tools to help me with this.
GIMP. Here is my GIMP config, if you're interested. As of writing, it is set to behave mostly, sorta, roughly like Photoshop, but I still haven't tuned this up as much as I'd like. Caveat emptor.
Maybe I should just skip straight from GIF to MP4 for all the platforms? I’ll need to test to see if the viewing experience is diminished by this switch on Facebook and Twitter. Will report back.
The free tier of this wonderful service allows for scheduling up to 30 posts, which is more than I ever do at one time. And now it can even take advantage of Instagram’s direct-posting feature for business accounts, if you have one.
Neewer NW-35 table-mounting mic arm. Super cheap and has been serving us with various heavy microphones for a good long while. It even holds up the super-heavy Blue Yeti which we used for a minute.
Doing a conversion of GIF to MP4 with ffmpeg seems like it should be simple enough:
```shell
ffmpeg -i something.gif out.mp4
```
But it isn't! This can be insufficient in a couple of ways.
Problem 1: Video too short
Solution: Use a filter to loop the input enough times to meet the 3s minimum time requirement
```shell
-filter_complex "loop=<NUMBER_OF_LOOPS>"
```
Problem 2: Wrong color encoding
Given a color encoding that it doesn’t understand, Instagram just kinda poops out
Solution: By default, my ffmpeg used yuv444p, which Instagram wasn't happy with. I haven't done an exhaustive survey of the color encodings that Instagram will accept, but here is one that works: yuv420p.
```shell
-pix_fmt yuv420p
```
In addition, the conversion requires the file’s height to be divisible by 2, so we need yet another filter:
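A common way to force even dimensions is a scale filter that truncates both width and height down to the nearest even number. This is one illustrative option, not necessarily the exact filter in my script:

```shell
-vf "scale=trunc(iw/2)*2:trunc(ih/2)*2"
```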
Now since so many GIFs that I wish to post to Instagram are actually shorter than 3s, I automated everything above and here is the script. To see if I made any changes since posting this, check the version I’m currently using in my dotfiles.
```shell
#!/usr/bin/env bash
# Required:
#   bc
#   ffmpeg
#   wget

set -e

# Usage
if [ $# -eq 0 ]; then
	cat <<-EOF
	Usage: $0 infile outfile

	Convert a gif to mp4 with ffmpeg, looping it enough times to ensure it meets
	Instagram's minimum video length limit.

	infile  | A valid gif file to convert. If given a URI, this script will
```
At work we noticed that the LastPass plugin on the new Firefox Quantum no longer has a "Copy Password" button (due to limitations of the new plugin architecture?). LastPass support suggested that this might not be changing anytime soon. We rely on this a lot for different terminal-based work, so it was a sad revelation.
This bothered me enough that I made my own LastPass popup thing for my Linux desktop and as a bonus the workflow is also much faster than it was in the browser.
```shell
# Login to lastpass-cli one time and it will remember your email for the
# future
lpass login myuser@domain.tld
```
And then run it there (no) or bind it to a hotkey (yes) in your window manager/whatever (I think this is under Keyboard settings in e.g. Gnome?).
When the menu pops up, just start typing the entry you want. It searches through both entry Name and Username. Whichever entry you select in the dmenu popup, the corresponding password will drop into your clipboard. Easy peasy.
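The shape of the thing, as a hedged sketch: the `--format` string, the `extract_id` helper, and the dmenu flags here are illustrative assumptions about one way to wire it up, not the script's exact contents.

```shell
#!/usr/bin/env bash
# Hedged sketch of a dmenu + lastpass-cli picker. The format string,
# helper, and flags are illustrative; see the linked repo for the real one.

# Pull the numeric id back out of a selected "<name> <user> [id: NNN]" line
extract_id() {
    local line="$1"
    line="${line##*\[id: }"    # drop everything through "[id: "
    printf '%s' "${line%]}"    # drop the trailing "]"
}

pick_password() {
    local choice
    # One menu line per entry: "Name Username [id: 1234]"
    choice="$(lpass ls --format '%an %au [id: %ai]' | dmenu -i -l 10)" || return
    # --clip sends the password to the clipboard instead of stdout
    lpass show --clip --password "$(extract_id "$choice")"
}
```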
Security
By default, as of this writing, lpass seems to just leave your password in the X primary clipboard forever (or until overwritten). It does recognize an environment variable, however, LPASS_CLIPBOARD_COMMAND, where you can specify your own clipboard command and arguments. This allows for a setting like the following.
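For example, with xclip (one possibility; `-l 1` tells xclip to serve a single selection request and then exit):

```shell
export LPASS_CLIPBOARD_COMMAND="xclip -selection clipboard -in -l 1"
```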
This will allow one X selection request (i.e. a paste action) before the value is cleared from the clipboard. Hopefully the default will change in the future to be more secure? But there you go in the meantime.
<edit> After further consideration it seems the environment variable trickery above will remain the only solution, so get used to it 😉
<edit2> I’ve fleshed the script out a little bit more to also support secure notes and then I put it in its own repo.