June 21, 2025
brew install lynx
Update ~/.zshrc
to alias lynx so it always loads a personal config (pick or make a folder to hold the Lynx configuration):
alias lynx='lynx -cfg=~/.dotfiles/.lynx/lynx.cfg'
Create Your Personal Lynx Config
cp /opt/homebrew/etc/lynx.cfg ~/.dotfiles/.lynx/lynx.cfg
Create your jump file
touch .lynx_jumps.html
<DT>?<DD><A HREF="file://ABSOLUTE_PATH_TO_FILE/.lynx_jumps.html">This Shortcut List</A>
<DT>gs<DD><A HREF="https://www.google.com/search?q=%s">Google Search</A>
<DT>ghs<DD><A HREF="https://github.com/search?q=%s">GitHub Search</A>
<DT>wiki<DD><A HREF="https://en.wikipedia.org/wiki/%s">Wikipedia Search</A>
<DT>hn<DD><A HREF="https://news.ycombinator.com">Hacker News</A>
<DT>reddit<DD><A HREF="https://www.reddit.com">Reddit</A>
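Lynx parses the jump file as HTML, so the <DT>/<DD> entries above belong inside a definition list. A minimal sketch that writes such a file from the shell (path and entries are examples; keep yours wherever JUMPFILE points):

```shell
# Write a minimal jump file; Lynx reads each <DT>shortcut<DD><A HREF> pair
# as a jump target (%s is replaced by whatever you type after the shortcut).
cat > .lynx_jumps.html <<'EOF'
<html><body><dl>
<DT>gs<DD><A HREF="https://www.google.com/search?q=%s">Google Search</A>
<DT>hn<DD><A HREF="https://news.ycombinator.com">Hacker News</A>
</dl></body></html>
EOF
grep -c '<DT>' .lynx_jumps.html   # number of shortcuts defined
```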
Find and update these settings in your lynx.cfg:
[…]
STARTFILE:https://news.ycombinator.com/
[…]
JUMPFILE:/ABSOLUTE_PATH_TO_FILE/.lynx_jumps.html
# Basic usage
lynx # Start with default page
lynx google.com # Go directly to a site
lynx -dump google.com # Get text output (great for scripts)
Press | to toggle line wrapping (useful for wide content).
lynx -dump URL # Get plain text of webpage
lynx -source URL # Get HTML source
lynx -width=120 URL # Set display width
lynx -anonymous # Restricted mode (safer browsing)
# Advanced dumping with formatting
lynx -dump -width=120 -nolist URL # Clean text, no link list
lynx -dump -with_backspaces URL | less # Formatted like man pages
lynx -crawl -traversal startpage.html # Spider entire site for indexing
lynx -source URL | grep "meta name" # Extract specific HTML elements
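The -source pipe above works with any text tool. A small sketch pulling a description meta tag out of saved source; the page here is a local stand-in, since in practice you would start from lynx -source URL > page.html:

```shell
# Local stand-in for `lynx -source URL > page.html`
cat > page.html <<'EOF'
<html><head>
<meta name="description" content="A text-mode browser">
</head><body>Hello</body></html>
EOF
# Extract the content attribute of the description meta tag
grep -o 'meta name="description" content="[^"]*"' page.html |
  sed 's/.*content="//; s/"$//'
```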
lynx -session=work.session # Save/restore browsing session
lynx -sessionin=saved.session # Resume from saved session
lynx -cmd_log=commands.log URL # Record all keystrokes
lynx -cmd_script=replay.txt # Replay recorded session
When browsing local directories (file://localhost/path/):
Press | to toggle line wrapping (with width options).
./bookmarks/work.html
Create a ~/.lynx_jump
file with shortcuts. Lynx expects the same HTML definition-list format shown above:
<DT>google<DD><A HREF="https://www.google.com/search?q=%s">Google Search</A>
<DT>wiki<DD><A HREF="https://en.wikipedia.org/wiki/%s">Wikipedia</A>
<DT>news<DD><A HREF="https://news.ycombinator.com">Hacker News</A>
Then press j followed by the shortcut name.
lynx -anonymous # Restricted anonymous mode
lynx -restrictions=all # Maximum restrictions
lynx -validate # Only allow HTTP URLs
lynx -noredir # Don't follow redirects
lynx -noreferer # Don't send referrer headers
lynx -trace -stderr URL 2> debug.log # Full HTTP trace to file
lynx -head URL # Get headers only
lynx -mime_header -source URL # Include MIME headers in source
lynx -connect_timeout=30 URL # Set connection timeout
# Web scraping pipeline (strip the numbering from the reference list first,
# otherwise each line read is "1. URL" rather than a bare URL)
lynx -dump -listonly URL | awk '/^[[:space:]]*[0-9]+\./ {print $2}' | \
while read -r url; do lynx -dump "$url" >> data.txt; done
# Monitor webpage changes
lynx -dump URL > current.txt
diff previous.txt current.txt
# Extract all links from a page
lynx -dump -listonly URL | grep "^[[:space:]]*[0-9]"
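The extraction pattern above relies on -dump -listonly printing an indented, numbered reference list (a References heading, then lines like "   1. URL"). Stripping the numbering yields bare URLs; the sample output below is simulated so the pipeline can be shown without a network:

```shell
# Simulated `lynx -dump -listonly URL` output
refs='References

   1. https://example.com/
   2. https://example.org/docs'
# Keep only the URL column of the numbered lines
printf '%s\n' "$refs" | awk '/^[[:space:]]*[0-9]+\./ {print $2}' > urls.txt
cat urls.txt
```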
Edit ~/.lynxrc or use the command line:
lynx -assume_charset=UTF-8 # Handle broken encoding
lynx -cookie_file=custom_cookies.txt # Custom cookie storage
lynx -display_charset=UTF-8 # Set display encoding
lynx -editor=vim # Set external editor
lynx -useragent="Custom Bot 1.0" # Custom user agent
Configure in lynx.cfg:
EXTERNAL:http:xdg-open %s:TRUE
PRINTER:Save to PDF:lpr -P PDF %s:TRUE
DOWNLOADER:wget:wget %s:TRUE
Use -width=200 for better table formatting.
lynx -cache=50 # Increase document cache
lynx -partial # Enable partial page display
lynx -partial_thres=200 # Set partial display threshold
lynx -stack_dump # Debug memory issues
# Daily news digest
lynx -dump news_site.com | head -50 | mail -s "Daily News" user@domain
#!/bin/bash
# Website monitoring script
URL="$1"
lynx -dump "$URL" | md5sum > current.md5
if [ -f previous.md5 ] && ! cmp -s current.md5 previous.md5; then
    echo "Website changed!" | mail -s "Change Alert" admin@domain
fi
mv current.md5 previous.md5
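The detection step in the script above is just a checksum comparison, which can be exercised without lynx or a network; a sketch with plain strings standing in for two dumps of the page:

```shell
# Two stand-in "dumps"; in the real script these come from lynx -dump
printf 'version one\n' | md5sum > previous.md5
printf 'version two\n' | md5sum > current.md5
# cmp -s is silent; a nonzero status means the checksums differ
if ! cmp -s current.md5 previous.md5; then
    echo "changed"
fi
mv current.md5 previous.md5
```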
# Bulk link checking
lynx -traversal -crawl startpage.html
grep "ERROR" *.html
Set your TERM variable for optimal display. Use tmux/screen for persistent sessions.
Remember: These advanced features make Lynx incredibly powerful for automation, debugging, and specialized workflows. Start with the basics, then add these techniques as your needs grow.