Practical cheatsheet for day-to-day Linux command line — shell idioms, text processing, filesystem work, process management, networking, and common admin tasks. Favors modern commands (ss over netstat, ip over ifconfig, systemctl over service) and the invocations you actually reach for daily. Scan the quick reference; jump to a section for details.
| Task | Command |
|---|---|
| List files (all, detailed) | ls -al |
| List sorted by mtime (newest first) | ls -lt |
| Current directory / go back | pwd / cd - |
| Disk free / memory free | df -h / free -h |
| Directory sizes (1 level) | du -h --max-depth=1 |
| Find files by name | find . -iname '*.txt' |
| Recursive text search | grep -rni 'pattern' dir/ |
| What's listening on ports | ss -tlnp |
| Show IP addresses | ip a |
| Processes (interactive) | htop (or top) |
| Kill process | kill PID / kill -9 PID |
| Last N log lines (follow) | journalctl -u SERVICE -fn 100 |
| Service status / restart | systemctl status SERVICE / systemctl restart SERVICE |
| SSH with config alias | ssh myserver (see ~/.ssh/config) |
| Mirror directory (delete extras) | rsync -avz --delete src/ dst/ |
| Compress / extract tar.gz | tar -czvf a.tgz dir/ / tar -xzvf a.tgz |
| Run script with strict mode | set -euo pipefail |
| Reverse-search history | Ctrl-R then type |
ls # basic listing
ls -al # all (incl. hidden) + detailed
ls -lt # sort by mtime, newest first
ls -lS # sort by size, largest first
pwd # print working directory
cd dir # change directory
cd .. # up one level
cd ~ # home directory
cd - # previous directory (also prints its path)
history # numbered list of recent commands
!42 # re-run history entry 42
!! # re-run the last command
!$ # the last arg of the previous command
sudo !! # repeat last command with sudo
In the prompt itself:
| Shortcut | Action |
|---|---|
| Ctrl-R | reverse incremental search of history (press again to cycle) |
| Ctrl-A / Ctrl-E | jump to start / end of line |
| Ctrl-W | delete previous word |
| Ctrl-U / Ctrl-K | delete to start / end of line |
| Ctrl-L | clear screen (same as clear) |
| Alt-. | insert last arg of previous command |
| Ctrl-C / Ctrl-D | interrupt / EOF (exit shell if line empty) |
| Ctrl-Z | suspend foreground job (resume with fg) |
Ctrl-Alt-F3..F6 # switch to TTY (varies by distro; on modern Ubuntu/Zorin
# the GUI is typically on F1 or F2, TTYs on F3-F6)
type command # show what kind: file, builtin, alias, function, keyword
type -a command # show ALL matches in PATH order (useful for shadowing)
which command # path of the executable (external commands only)
command -v command # POSIX-portable "is it available?" — works in scripts
ldd /path/to/binary # shared library dependencies of a dynamic executable
Four kinds of command: an external file found via $PATH, a builtin (cd, echo, [, …), an alias, or a function. Aliases and functions are typically defined in ~/.bashrc.
# aliases — simple text substitution, no arguments
alias ll='ls -la'
alias # list all
unalias ll
# functions — full power, take arguments, have local vars
greet() {
local name=$1
echo "Hello, $name"
}
unset -f greet # remove function
# persist across sessions by putting them in ~/.bashrc
source ~/.bashrc # reload without restarting the shell (`. ~/.bashrc` also works)
Prefer functions over aliases when you need arguments or multiple commands — aliases can't take args.
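A minimal illustration of the difference (the names `lsl` and `backup` are just examples):

```shell
shopt -s expand_aliases       # aliases are OFF in non-interactive shells by default
alias lsl='ls -l'
lsl /tmp > /dev/null          # expands to: ls -l /tmp — the arg can only land at the END

# a function takes positional args and can use them anywhere, any number of times
backup() { cp -a "$1" "$1.bak"; }   # uses the argument twice — impossible with an alias
```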
uname -a # kernel + arch + hostname
lsb_release -a # distro + version (where available)
uptime # load averages + how long since boot
cal # calendar
df -h # disk usage, human-readable
free -h # memory usage
du -h --max-depth=1 # size of each subdir at depth 1
cp file1 file2 # copy
cp -r dir1 dir2 # copy directory recursively
cp -u src dst # copy only when source is newer
cp -a src dst # archive mode (preserve perms/times/links) — most "faithful" copy
cp -i src dst # prompt before overwriting
mv file1 file2 # rename / move
mkdir -p path/to/dir # create dirs (and any missing parents)
rm file # remove file
rm -r dir # remove directory recursively
rm -rf dir # force, no prompts — dangerous, read twice before running
diff file1 file2 # line-by-line differences
diff -u file1 file2 # unified (patch-style) — far easier to read
diff -rq dir1 dir2 # recursive, brief: lists files that differ; no output = identical
diff -ru dir1 dir2 # recursive, full unified diff
# bigger diffs with color + pagination
git diff --no-index dir1 dir2 # works even outside a repo
Nautilus has no built-in "Copy Path" right-click option. Add one via a user script:
sudo apt install xclip
mkdir -p ~/.local/share/nautilus/scripts
cat > ~/.local/share/nautilus/scripts/"Copy Path" <<'EOF'
#!/bin/bash
echo -n "$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS" | head -1 | tr -d '\n' | xclip -selection clipboard
EOF
chmod +x ~/.local/share/nautilus/scripts/"Copy Path"
Then right-click any file → Scripts → Copy Path.
ln file link # hard link (same filesystem only, not for directories)
ln -s target link # symbolic link (cross-filesystem OK, can link directories)
ls -li # show inodes (hard links share the same inode)
readlink -f path # resolve a symlink chain to its canonical target
echo 'text' > file # overwrite (stdout to file)
echo 'text' >> file # append
: > file # truncate to zero bytes (`> file` also works)
command 2> errors.log # stderr to file
command &> all.log # both stdout and stderr (bash shortcut)
command > /dev/null 2>&1 # discard everything
cat a b > combined # concatenate into one file
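Redirection order trips people up — a sketch of why `> file 2>&1` and `2>&1 > file` behave differently (`msg` is a hypothetical helper writing to both streams):

```shell
msg() { echo out; echo err >&2; }   # hypothetical: one line to stdout, one to stderr

msg > both.log 2>&1   # both lines land in both.log
msg 2>&1 > only.log   # "err" goes to the TERMINAL: redirections apply left to right,
                      # so 2>&1 duplicated stderr to where stdout pointed BEFORE
                      # stdout was pointed at the file
```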
cmd1 | cmd2 # stdout of cmd1 becomes stdin of cmd2
ls -l | sort | less # chain freely
sort -u file # sort + dedupe (prefer this over `sort | uniq`)
cmd | tee file | next # save the stream to `file` AND pass it down the pipe
cmd | tee -a file # append (rather than overwrite)
-rwxrwxrwx
│└┬┘└┬┘└┬┘
│ │ │ └── other
│ │ └───── group
│ └──────── owner
└────────── type: - file, d dir, l symlink
id # your user/group IDs and memberships
ls -l # mode bits
stat file # full metadata (owner, perms, times, inode)
getent passwd username # user's passwd entry (works with LDAP/NIS too)
getent group groupname # group membership
# octal — compact and scriptable
chmod 755 file # rwxr-xr-x
chmod 644 file # rw-r--r--
chmod 600 file # rw------- (private files, SSH keys)
# symbolic — incremental
chmod u+x file # add execute for owner
chmod g-w file # remove write for group
chmod o=r file # set other to read-only
chmod a+r file # add read for everyone
chmod u=rwx,go=rx file # multiple scopes at once
chmod -R 755 dir # recursive
# tip: to give dirs +x but not files, use `find ... -type d -exec chmod ...`
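Spelled out, on a throwaway tree (paths here are just examples):

```shell
mkdir -p demo/sub && touch demo/sub/file.txt   # demo tree

# directories get rwxr-xr-x (need x to enter), files get rw-r--r--
find demo -type d -exec chmod 755 {} +
find demo -type f -exec chmod 644 {} +

# same effect in one pass: capital X applies execute only to directories
# (and to files that already had some execute bit)
chmod -R u=rwX,go=rX demo
```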
chown user file # change owner
chown :group file # change group only
chown user:group file # change both
chown -R user:group dir # recursive
chmod 4755 file # setuid: run as file owner (used by `sudo`, `passwd`)
chmod 2775 dir # setgid on dir: new files inherit the dir's group
chmod 1777 dir # sticky bit: only the owner can delete their own files
# (this is how /tmp is protected)
ps # processes in current terminal
ps aux # everybody's processes, all details
ps -ef # same info, different format
pgrep -fl pattern # PIDs of processes matching pattern (with command line)
pidof program # PIDs of a named program
top # interactive viewer
htop # nicer interactive viewer (install separately)
kill PID # TERM (graceful)
kill -HUP PID # reload config (many daemons reload on SIGHUP)
kill -9 PID # KILL — last resort, no cleanup
kill -STOP PID # pause
kill -CONT PID # resume
kill -l # list all signal names/numbers
pkill -f 'pattern' # kill by name/command-line pattern
killall program # kill every instance of `program` by exact name
Try kill PID (TERM) before kill -9. TERM lets the process flush buffers, release locks, and clean up temp files; -9 skips that and can leave corruption behind.
command & # run in background
jobs # list this shell's jobs
fg %1 # bring job 1 to foreground
bg %1 # resume job 1 in background
disown %1 # detach from terminal (survives logout)
Ctrl-Z # suspend foreground process
Ctrl-C # interrupt foreground process
# run a command that survives logout + captures output
nohup long-running &> out.log &
some_command
echo $? # 0 = success, non-zero = failure
cmd1 && cmd2 # run cmd2 only if cmd1 succeeded
cmd1 || cmd2 # run cmd2 only if cmd1 failed
cmd1; cmd2 # run cmd2 regardless
cleanup() { rm -f "$TMPFILE"; }
trap cleanup EXIT # run cleanup when the script exits for any reason
trap 'echo interrupted; exit 130' INT TERM
#!/usr/bin/env bash
# Short description of what this script does.
set -euo pipefail # fail fast: error, undefined var, or any pipe stage failing
# set -x # uncomment to trace every command (great for debugging)
main() {
echo "Hello, ${1:-world}"
}
main "$@"
set -euo pipefail is the modern safer default — -e exits on error, -u errors on unset vars, and pipefail makes a pipeline fail if any stage fails (not just the last one).
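Quick demo of what pipefail changes, using a pipeline whose first stage fails:

```shell
set +o pipefail
false | true
echo "without pipefail: $?"                 # 0 — only the LAST stage counts

set -o pipefail
false | true || echo "with pipefail: $?"    # 1 — any failing stage fails the pipe
set +o pipefail
```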
NAME="value" # assign — NO spaces around =
echo "$NAME" # use
echo "${NAME}" # explicit boundary (needed when followed by letters/digits)
readonly CONST="value" # cannot be reassigned
local var="value" # function-scoped (only valid inside a function)
unset NAME # remove
export NAME # export to child processes (environment variable)
# single-bracket test — POSIX, portable
if [ "$a" -eq "$b" ]; then
echo "equal"
elif [ "$a" -gt "$b" ]; then
echo "greater"
else
echo "less"
fi
# double-bracket — bash-only, but supports regex and && || without quoting
if [[ "$str" =~ ^[0-9]+$ ]]; then
echo "numeric"
fi
# arithmetic context
if (( num > 5 && num < 10 )); then
echo "between 5 and 10"
fi
Numeric: -eq, -ne, -lt, -le, -gt, -ge
String: = / ==, !=, -z (empty), -n (not empty)
File: -e (exists), -f (regular file), -d (directory), -L (symlink), -r / -w / -x (readable/writable/executable), -s (non-empty)
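The file tests combine naturally into a dispatcher — a sketch (the `inspect` name is made up):

```shell
inspect() {
  local p=$1
  if   [ -L "$p" ]; then echo "symlink"      # test -L before -f/-d:
  elif [ -d "$p" ]; then echo "directory"    # -f and -d follow symlinks
  elif [ -f "$p" ] && [ -s "$p" ]; then echo "non-empty file"
  elif [ -f "$p" ]; then echo "empty file"
  else echo "missing"
  fi
}
```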
# for
for item in a b c; do
echo "$item"
done
# C-style for
for (( i=0; i<5; i++ )); do
echo "$i"
done
# while
count=0
while [ "$count" -lt 10 ]; do
echo "$count"
count=$((count + 1))
done
# read a file line by line — `read -r` prevents backslash mangling; `IFS=` preserves whitespace
while IFS= read -r line; do
echo "$line"
done < file.txt
case "$input" in
start) echo "Starting" ;;
stop|quit) echo "Stopping" ;;
*.txt) echo "text file" ;;
*) echo "Unknown" ;;
esac
greet() {
local name=$1
echo "Hello, $name"
return 0
}
greet "World"
The POSIX form greet() { ... } works in any shell; bash also accepts function greet { ... }. Stick with the POSIX form for portability.
$0 # script name
$1..$9 # args 1-9
${10} # arg 10+ (braces required past 9)
$# # number of args
"$@" # all args, each preserved as a separate word (use this)
"$*" # all args joined into a single word (rarely what you want)
Quote "$@" almost always — unquoted, arg splitting and globbing break any args containing spaces or wildcards.
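Seeing it side by side (helper names here are made up):

```shell
show() { printf '<%s>\n' "$@"; }    # prints each argument it receives on its own line

args() {
  echo 'with "$@":'; show "$@"      # each arg stays a separate word
  echo 'with "$*":'; show "$*"      # all args collapse into ONE word
}
args "a b" c
# with "$@":
# <a b>
# <c>
# with "$*":
# <a b c>
```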
# arithmetic
echo $((2 + 2)) # 4
# brace expansion (done by the shell, before the command runs)
echo {a,b,c} # a b c
echo {1..5} # 1 2 3 4 5
mv file.{txt,txt.bak} # rename in one line
# command substitution
files=$(ls) # modern form — nestable
files=`ls` # legacy — avoid
# parameter expansion
echo "${var:-default}" # use default if var is unset/empty
echo "${var:=default}" # ALSO assigns if unset/empty
echo "${var:?message}" # error and exit if unset/empty
echo "${#var}" # length
echo "${var%.ext}" # remove shortest matching suffix
echo "${var%%.*}" # remove longest matching suffix
echo "${var#prefix}" # remove shortest matching prefix
echo "${var//old/new}" # replace all occurrences
* # any string (but not starting with . by default)
? # any single character
[abc] # any one of a, b, or c
[!abc] # any char except a, b, c
{jpg,png} # brace expansion — any of the listed
shopt -s globstar # enable ** (recursive match)
ls **/*.py # all .py files at any depth
shopt -s nullglob # expand to empty when nothing matches (safer in scripts)
shopt -s dotglob # * matches dotfiles too
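Why nullglob matters in scripts — without it, an unmatched glob is passed through as a literal string (the `.doesnotexist` pattern is chosen so nothing matches):

```shell
shopt -s nullglob
for f in *.doesnotexist; do
  echo "found: $f"                 # never runs: the glob expands to nothing
done

shopt -u nullglob
for f in *.doesnotexist; do
  echo "found: $f"                 # runs ONCE with the literal pattern:
done                               # found: *.doesnotexist
```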
find . -name '*.txt' # by name (case sensitive)
find . -iname '*.txt' # case insensitive
find . -type f # files only
find . -type d # directories only
find . -mtime -7 # modified in last 7 days
find . -size +100M # larger than 100MB
# run a command on each result
find . -name '*.log' -delete
find . -name '*.sh' -exec chmod +x {} \;
# with xargs — more efficient, handles huge result sets
find . -name '*.txt' -print0 | xargs -0 grep 'pattern'
# recursive newest-first with readable timestamps
find . -iname '*.png' -printf '%TY-%Tm-%Td %TH:%TM %p\n' | sort -r
# add %s for size: -printf '%TY-%Tm-%Td %TH:%TM %10s %p\n'
# raw epoch for strict numeric sort: -printf '%T@ %p\n' | sort -rn | cut -d' ' -f2-
# handy function — drop-in find that always prints newest first with timestamps
findrecent() { find "$@" -printf '%TY-%Tm-%Td %TH:%TM %p\n' | sort -r; }
# usage: findrecent . -iname '*.log'
# counts
find . -type f | wc -l # count files
find . -type d | wc -l # count dirs
# all files, sorted by size descending, human-readable
find . -type f -exec ls -lh {} + | sort -k5 -rh
# -exec ... + batches paths into a single ls invocation (much faster than \;)
# sort -k5 -rh — key column 5 (size), reverse, human-numeric (handles K/M/G)
# every unique file extension in a tree
find . -type f -name '*.*' | sed 's/.*\.//' | sort -u
# filter by extension, excluding noisy paths
find . -type f -name '*.ipynb' \
-not -path '*/.ipynb_checkpoints/*' \
-not -path '*/.git/*' \
-exec ls -lht {} +
grep 'pattern' file # basic
grep -i 'pattern' file # case insensitive
grep -r 'pattern' dir/ # recursive
grep -rni 'pattern' dir/ # recursive + case-insens + line numbers (common combo)
grep -v 'pattern' file # invert (non-matching lines)
grep -l 'pattern' *.txt # list matching filenames only
grep -L 'pattern' *.txt # list non-matching filenames
grep -c 'pattern' file # count matches
grep -E 'ab|cd' file # extended regex (|, +, ?)
grep -P '\d+' file # Perl-compatible regex (\d, \w, lookbehind, …)
grep -o 'pattern' file # print only the matched portion
grep -A 3 -B 1 'pattern' file # 3 lines after, 1 before
# restrict to certain file types
grep -r --include='*.js' 'pattern' ./
grep -r --exclude-dir='node_modules' 'pattern' ./
sed 's/old/new/' file # replace first occurrence per line (to stdout)
sed 's/old/new/g' file # replace all occurrences
sed -i 's/old/new/g' file # in-place edit (no backup) — GNU sed
sed -i.bak 's/old/new/g' file # in-place with .bak backup
sed -n '10,20p' file # print only lines 10-20
sed '/^#/d' file # delete lines starting with #
sed '/^$/d' file # delete empty lines
awk '{print $1}' file # first whitespace-delimited field
awk -F: '{print $1, $7}' /etc/passwd # custom field separator (colon)
awk 'NR==1 {next} {sum += $3} END {print sum}' file # skip header, sum column 3
awk '$2 > 100' file # lines where column 2 > 100
df -h | awk 'NR>1 {print $6": "$5}' # mountpoint: usage%
cut -d, -f1,3 file # fields 1 and 3, comma-delimited
cut -c1-10 file # first 10 characters of each line
sort file # alphabetic
sort -n file # numeric
sort -k2 -r file # by 2nd column, reverse
sort -u file # sorted + deduped (replaces `sort | uniq`)
uniq -c sorted_file # count duplicates (input must already be sorted)
sort file | uniq -c | sort -rn # classic "most frequent first" idiom
wc -l file # line count
wc -w file # word count
head -20 file # first 20 lines
tail -20 file # last 20 lines
tail -f file # follow: stream new lines as they're written
tail -F file # same, but reopens if the file is rotated
echo 'a b c' | xargs -n 1 echo # one arg per invocation
find . -name '*.txt' | xargs rm # bulk delete
find . -print0 | xargs -0 command # -0 handles spaces/newlines in names
xargs -P 4 -I{} curl -sO {} # 4-way parallel, placeholder {}
ping host # test connectivity
traceroute host # route to host
ip a # interfaces and addresses
ip route # routing table
ss -tlnp # listening TCP ports with owning process
ss -tulpn # TCP + UDP, listening, numeric, with program
netstat -tlnp # legacy — prefer ss
wget URL # download a file
curl URL # fetch a URL (to stdout by default)
curl -O URL # save with remote filename
curl -L URL # follow redirects
Find your network range from ip a — look for inet x.x.x.x/24 on your active interface. /24 means the first three octets identify the network.
# if `ip a` shows inet 192.168.1.42/24, your network is 192.168.1.0/24
sudo nmap -sn 192.168.1.0/24 # ping-scan all hosts on the /24
sudo ufw status
sudo ufw enable
sudo ufw allow 22 # allow SSH by port
sudo ufw allow OpenSSH # allow by service name
sudo ufw deny 80
sudo ufw delete allow 22 # remove a rule
ssh user@host
ssh -p 2222 user@host # custom port
ssh -i ~/.ssh/id_somekey user@host # specific key
# generate a modern key (ed25519 is smaller and faster than RSA)
ssh-keygen -t ed25519 -C 'you@example.com'
# copy public key to the server (adds to ~/.ssh/authorized_keys)
ssh-copy-id user@host
# manual setup on the server
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# append your public key to ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
Edit /etc/ssh/sshd_config:
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
Apply:
sudo systemctl reload ssh # Debian/Ubuntu (service is `ssh`)
sudo systemctl reload sshd # RHEL/Fedora (service is `sshd`)
# local forwarding — localhost:8080 here connects to the remote's localhost:80
ssh -L 8080:localhost:80 user@host
# remote forwarding — remote's :9090 connects back to this machine's localhost:3000
ssh -R 9090:localhost:3000 user@host
# X11 forwarding (run remote GUI apps locally)
ssh -Y user@host
~/.ssh/config:
Host myserver
HostName example.com
User yourname
Port 22
IdentityFile ~/.ssh/id_somekey
Host *.internal
ProxyJump bastion
ForwardAgent yes
Then: ssh myserver (no flags needed).
scp file user@host:/path/ # upload
scp user@host:/path/file . # download
scp -r dir user@host:/path/ # recursive
Modern OpenSSH defaults scp to use the SFTP protocol internally. For big or resumable transfers prefer rsync.
rsync -avz src/ dst/ # sync (trailing slash matters: "contents of src/")
rsync -avz --delete src/ dst/ # mirror — remove files in dst not in src
rsync -avzn src/ dst/ # dry run (no changes made)
rsync -avzP src/ dst/ # show progress + allow resume of partial files
# remote
rsync -avz src/ user@host:/dst/
rsync -avz --exclude='*.log' src/ user@host:/dst/
| Flag | Meaning |
|---|---|
| -a | archive mode: preserves perms, times, symlinks, etc. (most common) |
| -v | verbose |
| -z | compress during transfer |
| -n | dry run |
| -P | show progress + keep partial files for resume |
| --delete | delete files in dst that aren't in src (makes it a mirror) |
| --exclude=PAT | skip matching paths |
sftp user@host
sftp> ls # remote listing
sftp> lls # local listing
sftp> cd /path # remote cd
sftp> lcd /path # local cd
sftp> get file # download
sftp> put file # upload
sftp> get -r dir # recursive download
sudo apt install sshfs
mkdir -p ~/mnt/remote
sshfs user@host:/remote/path ~/mnt/remote
fusermount -u ~/mnt/remote # unmount
sudo apt update # refresh package index
sudo apt upgrade # upgrade installed packages
sudo apt install package
sudo apt remove package # remove, keep config
sudo apt purge package # remove + config
sudo apt autoremove # drop packages installed as deps that nothing needs anymore
sudo apt search keyword
apt show package # description + deps + version info
apt list --installed # what's installed
apt is for humans (pretty output, progress bars). For scripts, use apt-get — its output is stable across versions.
sudo dnf update
sudo dnf install package
sudo dnf remove package
sudo dnf search keyword
dnf info package
# create
tar -cvf archive.tar files # uncompressed
tar -czvf archive.tar.gz files # gzip
tar -cjvf archive.tar.bz2 files # bzip2
tar -cJvf archive.tar.xz files # xz (best compression, slower)
# extract
tar -xvf archive.tar
tar -xzvf archive.tar.gz
tar -xvf archive.tar -C /dest/ # extract into a specific directory
# list contents without extracting
tar -tvf archive.tar
# exclude while creating
tar -czvf archive.tar.gz --exclude='*.log' --exclude='node_modules' dir/
| Flag | Meaning |
|---|---|
| -c / -x / -t | create / extract / list |
| -v | verbose |
| -f FILE | archive file (use - for stdin/stdout) |
| -z / -j / -J | gzip / bzip2 / xz compression |
gzip file # compress, replaces original
gunzip file.gz # decompress
gzip -k file # keep the original as well
zcat file.gz # cat a gzipped file without decompressing to disk
zip -r archive.zip dir/ # zip (compatible with other OSes)
unzip archive.zip
unzip -l archive.zip # list contents
ccrypt -e file # encrypt → creates file.cpt, removes original
ccrypt -d file.cpt # decrypt
ccrypt -e -r directory/ # recursive encrypt
cryfs basedir mountdir # first call initializes; subsequent calls mount
cryfs-unmount mountdir # unmount
lsblk --fs # find encrypted volumes (FSTYPE: crypto_LUKS)
sudo cryptsetup luksChangeKey /dev/DEVICE
sudo cryptsetup luksOpen /dev/DEVICE name # unlock → /dev/mapper/name
sudo cryptsetup luksClose name # re-lock
journalctl # all logs
journalctl -f # follow in real time
journalctl -u SERVICE # a specific unit
journalctl -u SERVICE -fn 100 # follow, starting from the last 100 lines
journalctl --since '1 hour ago'
journalctl --since today --priority=err
journalctl -b # current boot only
journalctl -b -1 # previous boot
systemctl status SERVICE
systemctl start / stop / restart SERVICE
systemctl reload SERVICE # reload config without killing the process (if supported)
systemctl enable SERVICE # start on boot
systemctl disable SERVICE
systemctl daemon-reload # after editing unit files (`/etc/systemd/system/*.service`)
systemctl --user status SERVICE # per-user services (no sudo)
# find units
systemctl list-units --type=service
systemctl list-unit-files --state=enabled
python3 -m http.server 8000 # serve the current directory on :8000
python3 -m http.server 8000 --bind 127.0.0.1 # loopback only
sudo apt install samba
# edit /etc/samba/smb.conf, add a share:
# [share]
# path = /path/to/share
# read only = no
# browsable = yes
sudo smbpasswd -a username # set the Samba password for a user
sudo systemctl restart smbd
# connect:
# Windows: \\192.168.1.x\share
# Linux: smb://192.168.1.x/share (in a file manager)
Bash has no true block comment, but a heredoc that no one reads does the job:
: <<'COMMENT'
This block is
effectively a comment.
COMMENT
# watch a command every 2 seconds
watch -n 2 'df -h | grep /dev/sd'
# show the top 10 largest things in the current directory
du -sh * 2>/dev/null | sort -rh | head
# quick "did it succeed" wrapper
some_command && echo OK || echo FAIL
Bi-directional sync client for OneDrive — edit locally or in Office 365 web, changes sync both ways.
# remove any old apt version first
sudo apt remove onedrive
# add the official repo (replace "xUbuntu_24.04" with your base's codename)
echo 'deb https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_24.04/ ./' \
| sudo tee /etc/apt/sources.list.d/onedrive.list
# signing key
curl -fsSL https://download.opensuse.org/repositories/home:/npreining:/debian-ubuntu-onedrive/xUbuntu_24.04/Release.key \
| sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/onedrive.gpg
sudo apt update
sudo apt install onedrive
For other distros, see https://github.com/abraunegg/onedrive.
onedrive --synchronize
Opens a browser for Microsoft login; paste the redirect URL back. Files then sync to ~/OneDrive.
onedrive --synchronize # one-shot sync
onedrive --monitor # watch + sync continuously
onedrive --monitor --monitor-interval 60 # check every 60s (default 300s)
onedrive --display-sync-status
onedrive --reauth # re-authenticate
systemctl --user enable --now onedrive
systemctl --user status onedrive
Config file at ~/.config/onedrive/config:
sync_dir = "~/OneDrive"
monitor_interval = "300"
skip_file = "~*|.~*"
onedrive --display-config # show effective config
Changing sync_dir requires a resync:
mkdir -p ~/custom/OneDrive
# edit config: sync_dir = "~/custom/OneDrive"
onedrive --resync --synchronize # --resync rebuilds the local state DB
Create ~/.config/onedrive/sync_list with one path per line:
Documents
Photos/2024
man pages: man bash, man 7 signal, man find, man rsync. tldr — community-maintained examples: tldr find, tldr rsync (install tldr).