The tool supports only the username-and-password credentials workflow. The available arguments, from the online help:
```bash
-a, --archive-folder (mandatory) Specifies the local folder where all data files pulled from the server will be stored
-u, --username (mandatory) Username on your Friendica instance
-s, --server-name (mandatory) The server name for your instance (e.g. if the URL in your browser is "https://friendica.com/" then this would be "friendica.com")
-r, --resume-page The page from which to restart the downloading process. The tool will try to read in existing post and image archive data and resume downloading from there. If set to 0 it starts from scratch.
(defaults to "0")
-d, --delay Delay in milliseconds between requests, to avoid stressing the server (thousands of API calls can be made)
(defaults to "5000")
-m, --max-post-requests The maximum number of times to query for posts
(defaults to "1000000000")
-p, --items-per-page The requested number of items per page
[1, 5, 10, 20 (default), 50, 100]
-i, --[no-]download-images Whether to download images from posts when those images are stored on the server (not links to other sites)
(defaults to on)
```
Along with the mandatory arguments you will be required to provide a password.
There will be a prompt at which you can enter it. Alternatively, you can store the password
in a file and pass it in via stdin redirection.
The base archive folder must be created first, and it should be empty when starting an archival
from scratch.
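A sketch of those prerequisites follows. The executable name `friendica-archiver`, the username, the server name, and the file paths are all assumptions for illustration; substitute your actual binary and values.

```bash
# Create the base archive folder; it must exist (and be empty for a fresh run).
mkdir -p "$HOME/friendica-archive"

# Store the password in a file readable only by you, so it can be fed to the
# tool via stdin redirection instead of being typed at the prompt.
printf '%s\n' 'my-password' > "$HOME/.friendica-pw"
chmod 600 "$HOME/.friendica-pw"

# Hypothetical invocation ("friendica-archiver" is a placeholder name):
# friendica-archiver -a "$HOME/friendica-archive" -u alice -s friendica.example.com < "$HOME/.friendica-pw"
```

Keeping the password in a mode-600 file avoids exposing it in shell history or the process list, which an inline argument would.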
## Examples
### Basic Usage
Minimum arguments to start processing the archive from scratch: