docs folder

Commit 76b346b45d (parent fe874389ab) by Shish, 2020-03-22 15:49:55 +00:00
7 changed files with 117 additions and 109 deletions

docs/CONFIG.md (new file, 36 lines)
# Custom Configuration
Various aspects of Shimmie can be configured to suit your site-specific needs
via the file `data/config/shimmie.conf.php` (created after installation).
Take a look at `core/sys_config.php` for the available options.
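As an illustration, a `data/config/shimmie.conf.php` might look like the
sketch below. The constant names are taken from the performance notes
elsewhere in these docs; check `core/sys_config.php` for the authoritative
list and the default values.

```php
<?php
// data/config/shimmie.conf.php -- example overrides only;
// see core/sys_config.php for the full list of settings
define("CACHE_DSN", "memcache://127.0.0.1:11211"); // serve hot data from memcached
define("SPEED_HAX", false);                        // trade correctness for speed (see SPEED.md)
```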
# Custom User Classes
User classes can be added to or altered by placing them in
`data/config/user-classes.conf.php`.
For example, one can override the default anonymous "allow nothing"
permissions like so:
```php
new UserClass("anonymous", "base", [
    Permissions::CREATE_COMMENT => true,
    Permissions::EDIT_IMAGE_TAG => true,
    Permissions::EDIT_IMAGE_SOURCE => true,
    Permissions::CREATE_IMAGE_REPORT => true,
]);
```
To add a moderator class (a regular user who can also delete images and comments):
```php
new UserClass("moderator", "user", [
    Permissions::DELETE_IMAGE => true,
    Permissions::DELETE_COMMENT => true,
]);
```
For the full list of permissions, see `core/permissions.php`.

docs/DEV.md (new file, 20 lines)
# Development Info
`ui-*` cookies are for the client-side scripts only; in some configurations
(eg with a varnish cache) they will be stripped before they reach the server.

`shm-*` CSS classes are for javascript to hook into; if you're customising
themes, be careful with these, and avoid styling them, eg:
- `shm-thumb` = outermost element of a thumbnail
  * `data-tags`
  * `data-post-id`
- `shm-toggler` = click this to toggle elements that match the selector
  * `data-toggle-sel`
- `shm-unlocker` = click this to unlock elements that match the selector
  * `data-unlock-sel`
- `shm-clink` = a link to a comment, flash the target element when clicked
  * `data-clink-sel`
Please tell me if these docs are lacking in any way, so that they can be
improved for the next person who uses them.
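For instance, a hypothetical theme fragment using `shm-toggler` might look
like this (the element ID and link text are invented for illustration; the
class and `data-toggle-sel` attribute are the hooks described above):

```php
// hypothetical theme fragment: clicking the link toggles
// visibility of whatever matches the data-toggle-sel selector
echo '<a class="shm-toggler" data-toggle-sel="#advanced-search">Advanced Search</a>';
echo '<div id="advanced-search">...</div>';
```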

docs/DOCKER.md (new file, 18 lines)
# Docker
If you just want to run Shimmie inside Docker, there's a pre-built image
on Docker Hub - `shish2k/shimmie2` - which can be used like:
```
docker run -p 8000:8000 -v /my/hard/drive:/app/data shish2k/shimmie2
```
If you want to build your own image from source:
```
docker build -t shimmie .
```
There are various options settable with environment variables:
- `UID` / `GID` - which user ID to run as (default 1000/1000)
- `INSTALL_DSN` - specify a data source to install into, to skip the installer screen, eg
  `-e INSTALL_DSN="pgsql:user=shimmie;password=6y5erdfg;host=127.0.0.1;dbname=shimmie"`
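Putting those options together, a full invocation might look like this
(the volume path and DSN are placeholders to adapt to your setup):

```
docker run -p 8000:8000 \
    -v /my/hard/drive:/app/data \
    -e UID=$(id -u) -e GID=$(id -g) \
    -e INSTALL_DSN="pgsql:user=shimmie;password=6y5erdfg;host=127.0.0.1;dbname=shimmie" \
    shish2k/shimmie2
```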

docs/INSTALL.md (new file, 27 lines)
# Requirements
- These are generally based on "whatever is in Debian Stable", because that's
conservative without being TOO painfully out of date, and is a nice target
for the unit test Docker build.
- A database: PostgreSQL 11+ / MariaDB 10.3+ / SQLite 3.27+
- [Stable PHP](https://en.wikipedia.org/wiki/PHP#Release_history) (7.3+ as of writing)
- GD or ImageMagick
# Get the Code
Two main options:
1. Via Git (allows easiest updates via `git pull`):
   * `git clone https://github.com/shish/shimmie2`
   * Install [Composer](https://getcomposer.org/), if you don't already have it.
   * Run `composer install` in the shimmie folder.
2. Via Stable Release:
   * Download the latest release from the [Releases](https://github.com/shish/shimmie2/releases) page.
# Install
1. Create a blank database
2. Visit the install folder with a web browser
3. Enter the location of the database
4. Click "install". Hopefully you'll end up at the welcome screen; if
not, you should be given instructions on how to fix any errors~
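For step 1, the exact command depends on which database you chose; as a
sketch (the database name `shimmie` is your choice):

```
createdb shimmie                                  # PostgreSQL
mysql -u root -p -e "CREATE DATABASE shimmie"     # MariaDB
```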

docs/SPEED.md (new file, 65 lines)
Notes for any sites which require extra performance
===================================================
Image Serving
-------------
Firstly, make sure your webserver is configured properly and nice URLs are
enabled, so that images will be served straight from disk by the webserver
instead of via PHP. If you're serving images via PHP, then your site might
melt under the load of 5 concurrent users...
Add a Cache
-----------
eg install memcached, then set
`define("CACHE_DSN", "memcache://127.0.0.1:11211")` - a bunch of stuff will
then get served from the high-speed cache instead of the SQL database.
`SPEED_HAX`
-----------
Setting this to true will make a bunch of changes which reduce the correctness
of the software and increase admin workload for the sake of speed. You almost
certainly don't want to set this, but if you do (eg you're trying to run a
site with 10,000 concurrent users on a single server), it can be a huge help.
Notable behaviour changes:
- Database schema upgrades are no longer automatic; you'll need to run
`php index.php db-upgrade` from the CLI each time you update the code.
- Mapping from Events to Extensions is cached - you'll need to delete
`data/cache/shm_event_listeners.php` after each code change, and after
enabling or disabling any extensions.
- Tag lists (eg alphabetic, popularity, map) are cached and you'll need
to delete them manually when you feel like it
- Anonymous users can only search for 3 tags at once
- We only show the first 500 pages of results for any query, except for
the most simple (no tags, or one positive tag)
- We only ever show the first 5,000 results for complex queries
- Only comments from the past 24 hours show up in /comment/list
- Web crawlers are blocked from creating too many nonsense searches
- The first 10 pages in the index get extra caching
- RSS is limited to 10 pages
- HTML for thumbnails is cached
`WH_SPLITS`
-----------
Store files as `images/ab/cd/...` instead of `images/ab/...`, which can
reduce filesystem load when you have millions of images.
Multiple Image Servers
----------------------
Image links don't have to be `/images/$hash.$ext` on the local server, they
can be full URLs, and include weighted random parts, eg:
`https://{fred=3,leo=1}.mysite.com/images/$hash.$ext` - the software will then
use consistent hashing to map 75% of the files to `fred.mysite.com` and 25% to
`leo.mysite.com` - then you can install Varnish or Squid or something as a
caching reverse-proxy.
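The host-picking can be thought of like the sketch below. This is an
illustration of the idea only, not Shimmie's actual implementation; the
function name and signature are invented.

```php
<?php
// illustration only -- not Shimmie's real code.
// Deterministically pick a host for a file hash, weighted fred=3, leo=1,
// so ~75% of files map to fred and ~25% to leo, and the same file
// always maps to the same host (keeping downstream caches warm).
function pick_host(string $hash, array $weights): string {
    $n = hexdec(substr(md5($hash), 0, 8)) % array_sum($weights);
    foreach ($weights as $host => $w) {
        if ($n < $w) {
            return $host;
        }
        $n -= $w;
    }
}

pick_host("deadbeef", ["fred" => 3, "leo" => 1]);
```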
Profiling
---------
`define()`'ing `TRACE_FILE` to a filename and `TRACE_THRESHOLD` to a number
of seconds will result in JSON event traces being dumped into that file
whenever a page takes longer than the threshold to load. These traces can
then be loaded into the chrome trace viewer (chrome://tracing/) and you'll
get a breakdown of page performance by extension, event, database, and cache
queries.
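For example (the filename and threshold here are arbitrary choices; the
constant names are the ones described above):

```php
// in data/config/shimmie.conf.php
define("TRACE_FILE", "data/trace.json"); // where to dump the JSON event traces
define("TRACE_THRESHOLD", 1.0);          // only trace pages slower than 1 second
```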

docs/UPGRADE.md (new file, 9 lines)
# Upgrade from earlier versions
I very much recommend going via each major release in turn (eg, 2.0.6
-> 2.1.3 -> 2.2.4 -> 2.3.0 rather than 2.0.6 -> 2.3.0).
While the basic database and file formats haven't changed *completely*,
they're different enough between versions to be a pain.