katolaz / cgit-70
A gopher interface forked from the popular cgit.
1228 Commits · 3 Branches · 43 Tags · 1.9 MiB
cgit-70 / robots.txt (4 lines, 47 B)
robots.txt: disallow access to snapshots

My dmesg is filled with the oom killer bringing down processes while the
Bingbot downloads every snapshot for every commit of the Linux kernel in
tar.xz format. Sure, I should be running with memory limits, and now I'm
using cgroups, but a more general solution is to prevent crawlers from
wasting resources like that in the first place.

Suggested-by: Natanael Copa <ncopa@alpinelinux.org>
Suggested-by: Julius Plenz <plenz@cis.fu-berlin.de>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
12 years ago
User-agent: *
Disallow: /*/snapshot/*
Allow: /
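A note on the rule above: the `*` wildcard in `Disallow: /*/snapshot/*` is a de-facto extension honored by major crawlers such as Bingbot and Googlebot, not part of the original robots.txt convention; Python's stdlib `urllib.robotparser`, for instance, treats the pattern as a literal path prefix. The sketch below is a minimal, hand-rolled matcher (the function name and sample paths are illustrative, not from cgit) showing which URLs the rule would block:

```python
import re

def rule_matches(pattern, path):
    # Translate a robots.txt pattern into a regex: each '*' matches any
    # run of characters; everything else is matched literally. The match
    # is anchored at the start of the path, as crawlers do.
    regex = ''.join('.*' if c == '*' else re.escape(c) for c in pattern)
    return re.match(regex, path) is not None

# The rule blocks snapshot downloads for any repository path...
assert rule_matches('/*/snapshot/*', '/linux/snapshot/linux-5.0.tar.xz')
# ...while ordinary pages (tree views, logs) stay crawlable.
assert not rule_matches('/*/snapshot/*', '/linux/tree/README')
```

Combined with `Allow: /`, this keeps the whole site crawlable except the CPU- and memory-hungry on-the-fly snapshot archives the commit message complains about.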