Version 3.0.0
A rabbit hole you shouldn't enter; once entered, you can't get out.
Created by: N4O
Last Updated: 26/12/2021
Table of Contents:
- Information
- Requirements
- Setup
  - Setup Rclone
  - Setup YTArchive
- Configuration
- Running and Routes
  - Routes
- Auto Scheduler
  - Migration
- Accessing Protected Routes
- WebSockets
- Multi Workers
- Improvements
- Dataset
- License
The v3 version of VTHell is a big rewrite from the previous version: while the previous version used multiple scripts, this version is a single webserver that will automatically download/upload/archive your streams.
This program utilizes the Holodex API to fetch YouTube streams and information about them.
The program also uses a specific dataset to map upload paths; if it needs to be improved, feel free to open a new pull request.
This project uses Poetry to manage its dependencies; please follow these instructions to install Poetry.
After you have installed Poetry, run all of these commands:
1. `poetry install`
2. `cp .env.example .env`
This will install all the requirements and copy the example environment file into a proper `.env` file.
Install `rclone` by referring to their documentation: https://rclone.org/install/

A simple setup using Google Drive will be:

```
$ rclone config
Current remotes:

Name                 Type
====                 ====

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q>
```
- Type `n` to create a new remote

```
e/n/d/r/c/s/q> n
name> [enter whatever you want]
```
- After that you will be asked to enter the number/name of the storage. Find `Google Drive` and type the number beside it.
```
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[...]
12 / Google Cloud Storage (this is not Google Drive)
   \ "google cloud storage"
13 / Google Drive
   \ "drive"
14 / Google Photos
   \ "google photos"
[...]
Storage> 13
```
- When asked for `Google Application Client Id`, just press enter.
- When asked for `Google Application Client Secret`, just press enter.
- When asked for `scope`, press `1` and then enter.
- When asked for `root_folder_id`, just press enter.
- When asked for `service_account_file`, just press enter.
- When asked if you want to edit **advanced config**, press `n` and enter.
- When asked this:

```
Remote config
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine

y) Yes (default)
n) No
y/n>
```
Press `y` if you have GUI access, or `n` if you're using SSH/console only.
If you use SSH/console, you will be given a link; open it, authorize your account, and copy the `verification code` result back into the console.
If you use a GUI, it will open a browser and you can authorize it normally.
When asked `Configure this as a team drive?`, press `n` if you don't use a team drive, or `y` if you do.
```
[vthell]
type = drive
scope = drive
token = {"access_token":"REDACTED","token_type":"Bearer","refresh_token":"REDACTED","expiry":"2020-04-12T11:07:42.967625371Z"}

y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
```

- Press `y` to complete the setup or `e` if you want to edit it again.
- You can exit by typing `q` and enter after this.
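To verify the new remote works, you can list its top-level directories (a quick check, assuming you named the remote `vthell` like in the example above):

```sh
$ rclone lsd vthell:
```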
YTArchive is a tool to download a YouTube stream from the very beginning of the stream. For now, this tool works much better than Streamlink.
Put the `ytarchive` binary inside a folder named `bin` in the root folder of VTHell:

```bash
[agrius ~/vthell/bin] ls -alh
total 7.3M
drwx------  2 mizore mizore 4.0K Dec 14 21:57 .
drwxr-xr-x 11 mizore mizore 4.0K Dec 14 21:57 ..
-rwxr-xr-x  1 mizore mizore 7.3M Oct 20 23:58 ytarchive
```
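If you don't have the binary yet, here is a minimal sketch for grabbing a Linux build (the asset name is an assumption; check the ytarchive releases page for the current one):

```sh
mkdir -p ~/vthell/bin && cd ~/vthell/bin
# download and unpack a ytarchive release build (asset name may differ per version)
curl -LO https://github.com/Kethsar/ytarchive/releases/latest/download/ytarchive_linux_amd64.zip
unzip ytarchive_linux_amd64.zip && rm ytarchive_linux_amd64.zip
chmod +x ytarchive  # make sure the binary is executable
```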
VTHell v3 needs the following configuration:
```sh
PORT=12790
WEBSERVER_REVERSE_PROXY=false
WEBSERVER_REVERSE_PROXY_SECRET=this-is-a-very-secure-reverse-proxy-secret
WEBSERVER_PASSWORD=this-is-a-very-secure-web-password

VTHELL_DB=vth.db
VTHELL_LOOP_DOWNLOADER=60
VTHELL_LOOP_SCHEDULER=180
VTHELL_GRACE_PERIOD=120

HOLODEX_API_KEY=

RCLONE_BINARY=rclone
RCLONE_DISABLE=0
RCLONE_DRIVE_TARGET=
MKVMERGE_BINARY=mkvmerge
YTARCHIVE_BINARY=ytarchive

NOTIFICATION_DISCORD_WEBHOOK=
```
- `PORT` just means what port the server will run on (if you run the app file directly).
- `WEBSERVER_REVERSE_PROXY` enables the reverse proxy feature if you need it.
- `WEBSERVER_REVERSE_PROXY_SECRET` needs to be set if you enable the reverse proxy, learn more here. You can generate a random one with: `openssl rand -hex 32`
- `WEBSERVER_PASSWORD` will be your password to access protected resources.
- `VTHELL_DB` is your database filename.
- `VTHELL_LOOP_DOWNLOADER` will be your downloader timer, which means the downloader will run every x seconds specified (default 60 seconds).
- `VTHELL_LOOP_SCHEDULER` will be your auto-scheduler timer, which means the scheduler will run every x seconds specified (default 180 seconds). This one runs the auto scheduler that fetches and automatically adds new jobs to the database.
- `VTHELL_GRACE_PERIOD` is how long the program waits before it starts trying to download the stream (in seconds, default 2 minutes).
- `HOLODEX_API_KEY` will be your Holodex API key, which you can get from your profile page.
- `RCLONE_BINARY` will be the full path to your rclone binary (or you can add it to your system PATH).
- `RCLONE_DISABLE`, if set to `1`, will disable the rclone/upload step and save the data to your local disk at `streamdump/`.
- `RCLONE_DRIVE_TARGET` will be your target drive or the remote name that you set up in Setup Rclone.
- `MKVMERGE_BINARY` will be your mkvmerge path.
- `YTARCHIVE_BINARY` will be your ytarchive path; you can follow Setup YTArchive to get ytarchive up and running.
- `NOTIFICATION_DISCORD_WEBHOOK` will be used to announce any update to your scheduling. Must be a valid Discord webhook link.

After you configure it properly, you can start running with Uvicorn or by invoking the app.py file directly.
Via Uvicorn
```sh
poetry run uvicorn asgi:app
```
You can see more information here
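If you want to match the `PORT` value from your `.env`, you can pass the port explicitly (a sketch; Uvicorn otherwise defaults to port 8000):

```sh
poetry run uvicorn asgi:app --host 127.0.0.1 --port 12790
```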
Invoking directly

Make sure you have the `.env` file, then run `python3 app.py` to start the webserver.

POST `/api/schedule`, schedule a single video.
Returns 200 with the added video on success.
Authentication needed
On failure it will return JSON with an `error` field.
This route allows you to schedule a video manually. If the video is already scheduled, it will replace some fields but not everything.
This route accepts JSON data with this format:
```json
{
    "id": "abcdef12345"
}
```
`id` is the YouTube video ID that will be checked against the Holodex API to see if it's still live/upcoming.
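For example, with curl (assuming the default port and the password auth described in Accessing Protected Routes):

```sh
curl -X POST \
  -H "Authorization: Password SecretPassword123" \
  -H "Content-Type: application/json" \
  -d '{"id": "abcdef12345"}' \
  http://localhost:12790/api/schedule
```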
DELETE `/api/schedule`, delete a single scheduled video.
Returns 200 with the deleted video on success.
Authentication needed
On failure it will return JSON with an `error` field.

This route will delete a specific video and return the deleted video if found; the data looks like the following:
```json
{
    "id": "bFNvQFyTBx0",
    "title": "【ウマ娘】本気の謝罪ガチャをさせてください…【潤羽るしあ/ホロライブ】",
    "start_time": 1639559148,
    "channel_id": "UCl_gCybOJRIgOXw6Qb4qJzQ",
    "is_member": false,
    "status": "DOWNLOADING",
    "error": null
}
```
The deletion only works if the status is either:
- WAITING
- DONE
- CLEANUP
If it's anything else, it will return a 406 Not Acceptable status code.
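A sample request (assuming the same `{"id": ...}` JSON payload as when scheduling):

```sh
curl -X DELETE \
  -H "Authorization: Password SecretPassword123" \
  -H "Content-Type: application/json" \
  -d '{"id": "bFNvQFyTBx0"}' \
  http://localhost:12790/api/schedule
```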
GET `/api/status`, get the status of all scheduled videos.
Returns 200 with a list of scheduled videos on success.
This route accepts the following query parameters:
- `include_done`: adding this and setting it to `1` or `true` will include all scheduled videos, including the ones that have already finished.
```json
[
    {
        "id": "bFNvQFyTBx0",
        "title": "【ウマ娘】本気の謝罪ガチャをさせてください…【潤羽るしあ/ホロライブ】",
        "start_time": 1639559148,
        "channel_id": "UCl_gCybOJRIgOXw6Qb4qJzQ",
        "is_member": false,
        "status": "DOWNLOADING",
        "error": null
    }
]
```
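For example, to also list finished jobs:

```sh
curl "http://localhost:12790/api/status?include_done=1"
```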
All the data is self-explanatory; the `status` is one of this enum:
- `WAITING` means that it has not started yet
- `PREPARING` means the recording process has started and is now waiting for the stream to start
- `DOWNLOADING` means that the stream is being recorded
- `MUXING` means that the stream has finished downloading and is now being muxed into the `.mkv` format
- `UPLOAD` means that the stream is now being uploaded to the specified folder
- `CLEANING` means that the upload process is done and the program is now cleaning up the downloaded files
- `DONE` means that the job is finished
- `ERROR` means an error occurred, see the `error` field to learn more
- `CANCELLED` means the job was cancelled because of an unexpected error (members-only, private, and more)
GET `/api/status/:id`, get the status of a single job.
Returns 200 with the requested video on success.
On failure it will return JSON with an `error` key.
It does the same thing as the above route, but only for a single job, and it returns a dictionary instead of a list.
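For example:

```sh
curl http://localhost:12790/api/status/bFNvQFyTBx0
```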
The auto scheduler is a feature where the program checks the Holodex API every X seconds for ongoing/upcoming live streams and schedules anything that matches the criteria.
Routes
The following routes are available to add/remove/modify scheduler filters:

GET `/api/auto-scheduler`, fetch all the auto-scheduler filters.
Returns 200 on success with the following data:
```json
{
    "include": [
        {
            "id": 1,
            "type": "channel",
            "data": "UC1uv2Oq6kNxgATlCiez59hw",
            "chains": null
        },
        {
            "id": 2,
            "type": "word",
            "data": "ASMR",
            "chains": [
                {
                    "type": "group",
                    "data": "hololive"
                }
            ]
        }
    ],
    "exclude": [
        {
            "id": 3,
            "type": "word",
            "data": "(cover)",
            "chains": null
        }
    ]
}
```
The data format as seen above includes:
- `type`, which is the type of the data. It must be one of the following enum values:
  - `word`: check if a specific word exists in the title (case-insensitive)
  - `regex_word`: same as above, but it uses regex (case-insensitive)
  - `group`: check if it matches the organization or group (case-insensitive)
  - `channel`: check if the channel ID matches (case-sensitive)
- `data`: a string following the format of the specified type
- `chains`: a list of data to be chained with the original data check. If chains are defined, all of them must match for the video to be scheduled.
  - This only works on the following types: `word`, `regex_word`
  - This currently only works on `include` filters.
You can add a new scheduler filter by sending a POST request to the following route:
POST `/api/auto-scheduler`, add a new scheduler filter.
Returns 201 on success.
Authentication needed
On failure it will return JSON with an `error` field.
This route accepts JSON data with this format:

```json
{
    "type": "string-or-type-enum",
    "data": "string",
    "chains": null,
    "include": true
}
```
`type` must be the enum specified above, `data` must be a string, and `include` determines whether it should be included or excluded when processing the filters later.
Chains can be either a dictionary/map for a single chain, or a list for multiple chains. It can also be null if you don't need it.
Chains will be ignored automatically if `type` is not `word` or `regex_word`.
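A sample request adding an include filter (the filter values are just an illustration):

```sh
curl -X POST \
  -H "Authorization: Password SecretPassword123" \
  -H "Content-Type: application/json" \
  -d '{"type": "word", "data": "karaoke", "chains": null, "include": true}' \
  http://localhost:12790/api/auto-scheduler
```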
PATCH `/api/auto-scheduler/:id`, modify a specific scheduler filter.
Returns 204 on success.
Authentication needed
On failure it will return JSON with an `error` field.
This route accepts any of this JSON data:

```json
{
    "type": "string-or-type-enum",
    "data": "string",
    "chains": null,
    "include": true
}
```
All fields are optional, but you must specify at least one if you want to modify the filter.
`:id` can be found by using `GET /api/auto-scheduler`.
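For example, changing only the `data` field of filter `2`:

```sh
curl -X PATCH \
  -H "Authorization: Password SecretPassword123" \
  -H "Content-Type: application/json" \
  -d '{"data": "ASMR"}' \
  http://localhost:12790/api/auto-scheduler/2
```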
DELETE `/api/auto-scheduler/:id`, delete a specific scheduler filter.
Returns 200 on success with the deleted data.
Authentication needed
On failure it will return JSON with an `error` field.
`:id` can be found by using `GET /api/auto-scheduler`.
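A sample request deleting filter `3`:

```sh
curl -X DELETE \
  -H "Authorization: Password SecretPassword123" \
  http://localhost:12790/api/auto-scheduler/3
```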
The auto scheduler has now been rewritten; if you still have the old one, you might want to run the migration script.
```sh
$ python3 migrations/auto_scheduler.py
```
Make sure you have the `_auto_scheduler.json` file in the `dataset` folder, and make sure the webserver is running.
Some routes are protected with a password to make sure not everyone can use them. To access them, you need to set the `WEBSERVER_PASSWORD` and copy the value elsewhere.
After that, to access them you need to set one of the following headers:
- `Authorization`: you also need to prefix the value with `Password` (e.g. `Password 123`)
- `X-Auth-Token`: no extra prefix
- `X-Password`: no extra prefix

The program will first check the `Authorization` header, then both of the `X-*` headers.
Sample request

```sh
curl -X POST -H "Authorization: Password SecretPassword123" http://localhost:12790/api/add
```

```sh
curl -X POST -H "X-Password: SecretPassword123" http://localhost:12790/api/add
```

```sh
curl -X POST -H "X-Auth-Token: SecretPassword123" http://localhost:12790/api/add
```
Note

If you are running with Uvicorn or anything else, make sure to disable the ping timeout and ping interval. We have our own ping method that you need to answer, and the built-in ping method will break if you use an Nginx deployment or something like that.
VTHell v3 now has a WebSocket server ready to be connected to. To start, connect to the following route: `/api/event`
For example, in JS:

```js
const ws = new WebSocket("ws://127.0.0.1:12790/api/event");
```
The websocket messages have the following format:

```json
{
    "event": "event name",
    "data": "can be anything"
}
```
The raw data will be sent as a string, so you need to parse it as JSON first. The data can be a dictionary, list, string, or even null, so make sure you read the following section, which shows every event name with its data.
Event and Data:

`job_update` event
Will be emitted every time there is an update on a job status. It will broadcast the following data:
```json
{
    "id": "123",
    "title": "optional",
    "start_time": "optional",
    "channel_id": "optional",
    "is_member": "optional",
    "status": "DOWNLOADING",
    "error": "An error if possible"
}
```

or

```json
{
    "id": "123",
    "status": "DOWNLOADING",
    "error": "An error if possible"
}
```
The `error` field might not be present if the `status` is not `ERROR`.
The only data that will always be sent is `id` and `status`. If you get extra fields like `title`, it means someone called the `/api/schedule` API and the existing job data got replaced with new data. Please make sure you handle it properly!
`job_scheduled` event
This will be emitted every time the auto scheduler adds a new scheduled job automatically. It will contain the following data, for example:
```json
{
    "id": "bFNvQFyTBx0",
    "title": "【ウマ娘】本気の謝罪ガチャをさせてください…【潤羽るしあ/ホロライブ】",
    "start_time": 1639559148,
    "channel_id": "UCl_gCybOJRIgOXw6Qb4qJzQ",
    "is_member": false,
    "status": "DOWNLOADING"
}
```
`job_deleted` event
This will be emitted whenever a job is deleted from the database. It will contain the following data:
```json
{
    "id": "bFNvQFyTBx0"
}
```
`connect_job_init` event
This will be emitted as soon as you establish a connection with the Socket.IO server. It is used so you can store the current state without needing to use the API.
The data will be the same as requesting `/api/status` (without jobs that have the `DONE` status).
`ping` and `pong` events
This ping/pong packet or event is used to make sure the connection is alive and well.
The server will send a `ping` request with the following content:

```json
{
    "t": 1234567890,
    "sid": "user-id"
}
```
`t` will be the server unix millis; you will need to respond with the `pong` event with the same data.
If you don't answer within 30 seconds, the connection will be closed immediately.
When you connect to the socket, you will get the `ping` event immediately!
It is recommended to run it in direct mode if you want to use multiple workers. Although it is supported, it might do some unexpected things.
To run it in multiple-workers mode, just add the parameter `--workers` or `-W` when invoking the `app.py` file:
```sh
$ source .venv/bin/activate
(.venv) $ python3 app.py -W 4
```

The above command will run the server with 4 workers.
Version 3.0 of VTHell is very different from the original 2.x or 1.x versions. It includes a full web server to monitor your recordings externally, better task management that allows you to fire multiple downloads at once, and a Socket.IO feature to better monitor your data via websocket.
It also now uses the Holodex API rather than the Holotools API, since it supports many more VTubers.
Another change is moving from a JSON file to an SQLite3 database for all the jobs; this improves performance since we don't need to read/write to disk multiple times.
Oh, and I guess it now supports Windows, since it no longer relies on Linux-only features.
With v3, the dataset is now in its own repository; you can access it here: https://github.com/noaione/vthell-dataset
The dataset repo will be fetched every hour to see if the deployed hash has changed.
If you have suggestions for new datasets, removals, and more, please visit that repo and open a PR or issue there!
This project is licensed under the MIT License, learn more here
Bumps certifi from 2021.10.8 to 2022.12.7.
- 9e9e840 2022.12.07
- b81bdb2 2022.09.24
- 939a28f 2022.09.14
- aca828a 2022.06.15.2
- de0eae1 Only use importlib.resources's new files() / Traversable API on Python ≥3.11 ...
- b8eb5e9 2022.06.15.1
- 47fb7ab Fix deprecation warning on Python 3.11 (#199)
- b0b48e0 fixes #198 -- update link in license
- 9d514b4 2022.06.15
- 4151e88 Add py.typed to MANIFEST.in to package in sdist (#196)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Bumps dparse from 0.5.1 to 0.5.2.
Sourced from dparse's changelog.
0.5.2 (2022-08-09)
- Install pyyaml only when asked for with extras (conda extra)
- Add support for piptools requirements.in
- Use ConfigParser directly
- Removed a regex used in the index server validation, fixing a possible ReDos security issue
- 2bcf15b Fixing rst issue
- aa53997 Fixing travis file
- 13934e1 Release 0.5.2
- 125beef Adding pyyaml to tests, now pyyaml is an extra
- b18762f Adding conda extra, pyyaml won't be installed by default.
- 69ba6dc Updapting travis importlib_metadata
- 8c99017 Merge pull request #57 from pyupio/security/remove-intensive-regex
- d87364f Removing index server validation
- 3290bb5 Merge pull request #53 from oz123/local-import-yaml
- 5a3ad57 Local import YAML instead of top level

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Bumps sanic from 21.9.3 to 21.12.2.
Sourced from sanic's releases.
Version 21.12.2
Resolves #2477 and #2478 See also #2495 and https://github.com/sanic-org/sanic/security/advisories/GHSA-8cw9-5hmv-77w6
Full Changelog: https://github.com/sanic-org/sanic/compare/v21.12.1...v21.12.2
Version 21.12.1
- #2349 Only display MOTD on startup
- #2354 Add config.update support for all config values
- #2355 Ignore name argument in Python 3.7
Version 21.12.0
Features
- #2260 Allow early Blueprint registrations to still apply later added objects
- #2262 Noisy exceptions - force logging of all exceptions
- #2264 Optional `uvloop` by configuration
- #2270 Vhost support using multiple TLS certificates
- #2277 Change signal routing for increased consistency
  - BREAKING CHANGE: If you were manually routing signals there is a breaking change. The signal router's `get` is no longer 100% determinative. There is now an additional step to loop thru the returned signals for proper matching on the requirements. If signals are being dispatched using `app.dispatch` or `bp.dispatch`, there is no change.
- #2290 Add contextual exceptions
- #2291 Increase join concat performance
- #2295, #2316, #2331 Restructure of CLI and application state with new displays and more command parity with `app.run`
- #2302 Add route context at definition time
- #2304 Named tasks and new API for managing background tasks
- #2307 On app auto-reload, provide insight of changed files
- #2308 Auto extend application with Sanic Extensions if it is installed, and provide first class support for accessing the extensions
- #2309 Builtin signals changed to `Enum`
- #2313 Support additional config implementation use case
- #2321 Refactor environment variable hydration logic
- #2327 Prevent sending multiple or mixed responses on a single request
- #2330 Custom type casting on environment variables
- #2332 Make all deprecation notices consistent
- #2335 Allow underscore to start instance names
Bugfixes
- #2273 Replace assignation by typing for `websocket_handshake`
- #2285 Fix IPv6 display in startup logs
- #2299 Dispatch `http.lifecyle.response` from exception handler

Deprecations and Removals
- #2306 Removal of deprecated items
  - `Sanic` and `Blueprint` may no longer have arbitrary properties attached to them
  - `Sanic` and `Blueprint` forced to have compliant names
    - alphanumeric + `_` + `-`
    - must start with letter or `_`
  - `load_env` keyword argument of `Sanic`
  - `sanic.exceptions.abort`
  - `sanic.views.CompositionView`
  - `sanic.response.StreamingHTTPResponse`
    - NOTE: the `stream()` response method (where you pass a callable streaming function) has been deprecated and will be removed in v22.6. You should upgrade all streaming responses to the new style: https://sanicframework.org/en/guide/advanced/streaming.html#response-streaming
... (truncated)
- 0b75059 Version Bump
- 5b1686c Use path.parts instead of match (#2508)
- 86baaef Use pathlib for path resolution (#2506)
- 2b4b78d Fix dotted test
- ee6d8cf Prevent directory traversion with static files (#2495)
- c4da66b Update changelog
- d50d3b8 Bump version
- 313f97a Only display MOTD in ASGI on startup (#2349)
- a23547d Ignore name argument on Python 3.7 (#2355)
- 34d1dee Add config.update support for setters (#2354)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Bumps ujson from 5.0.0 to 5.4.0.
Sourced from ujson's releases.
5.4.0
Added
- Add support for arbitrary size integers (#548) @JustAnotherArchivist
Fixed
- CVE-2022-31116:
  - Replace `wchar_t` string decoding implementation with a `uint32_t`-based one (#555) @JustAnotherArchivist
  - Fix handling of surrogates on decoding (#550) @JustAnotherArchivist
- CVE-2022-31117: Potential double free of buffer during string decoding @JustAnotherArchivist
- Fix memory leak on encoding errors when the buffer was resized (#549) @JustAnotherArchivist
- Integer parsing: always detect overflows (#544) @NaN-git
- Fix handling of surrogates on encoding (#530) @JustAnotherArchivist
5.3.0
Added
Changed
- Benchmark refactor - argparse CLI (#533) @Erotemic

Fixed
- Fix segmentation faults when errors occur while handling unserialisable objects (#531) @JustAnotherArchivist
- Fix segmentation fault when an exception is raised while converting a dict key to a string (#526) @JustAnotherArchivist
- Fix memory leak dumping on non-string dict keys (#521) @JustAnotherArchivist
- Fix ref counting on repeated default function calls (#524) @JustAnotherArchivist
- Remove redundant `wheel` dependency from `pyproject.toml` (#535) @hugovk
5.2.0
Added
- Support parsing NaN, Infinity and -Infinity (#514) @Erotemic
- Support dynamically linking against system double-conversion library (#508) @musicinmybrain
- Add env var to control stripping debug info (#507) @musicinmybrain
- Add `JSONDecodeError` (#498) @JustAnotherArchivist

Fixed
- Fix buffer overflows (CVE-2021-45958) (#519) @JustAnotherArchivist
- Upgrade Black to fix Click (#515) @hugovk
- simplify exception handling on integer overflow (#510) @RouquinBlanc
- Remove dead code that used to handle the separate int type in Python 2 (#509) @JustAnotherArchivist
- Fix exceptions on encoding list or dict elements and non-overflow errors on int handling getting silenced (#505) @JustAnotherArchivist
5.1.0
Changed
... (truncated)
- 9c20de0 Merge pull request from GHSA-fm67-cv37-96ff
- b21da40 Fix double free on string decoding if realloc fails
- 67ec071 Merge pull request #555 from JustAnotherArchivist/fix-decode-surrogates-2
- bc7bdff Replace wchar_t string decoding implementation with a uint32_t-based one
- cc70119 Merge pull request #548 from JustAnotherArchivist/arbitrary-ints
- 4b5cccc Merge pull request #553 from bwoodsend/pypy-ci
- abe26fc Merge pull request #551 from bwoodsend/bye-bye-travis
- 3efb5cc Delete old TravisCI workflow and references.
- 404de1a xfail test_decode_surrogate_characters() on Windows PyPy.
- f7e66dc Switch to musl docker base images.

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Todo:
- [x] Implement base extractor class
- [x] Twitcasting extractor
- [x] Twitter spaces extractor
- [x] Twitch extractor
- [x] Mildom(?) extractor
- [x] All around ffmpeg downloader
- [x] ihateani.me API wrapper (for Twitch/Twitcasting/Twitter Spaces)

close #10

Explanation: The Twitcasting feature was available in the previous version; it is removed from v3 for now since I need to reimplement it. For Twitter Spaces, I will try to implement it properly, I guess?

Current Implementation: None, check version 2

Extra Context: None
This release is a rework of the whole VTHell system from separate scripts into one giant web server with multiple wrappers.
- Moved from `streamlink` to `ytarchive` for archiving YouTube streams
- Moved from `youtube-dl` to `yt-dlp` to download already finished streams/premieres
- Added the `regex` and `group` filter types
- Added `chains` to chain `word` and `regex` checking

Please see https://github.com/noaione/vthell-dataset
Since this release changes a lot of stuff, please see the new README on how to set things up again.
Some stuff is still missing, so this release is marked as "Pre-Release". The full v3 release will come after I finish some implementations, but this version is already ready to be used.
I also renamed the release from `N4O VTuber Recording Tools` to just `VTHell`.
Full Changelog: https://github.com/noaione/vthell/compare/v2.0...v3.0-rc1
Changelog:
- Introduce re-recording for streams that stopped recording because the playlist failed to reload.
- Determine resolution automatically
- The Auto Scheduler now has a separate dataset file for allow/ignore rules (moved)
- Fix missing Hololive members from `_youtube_mapping.json` and fix the HoloStars YouTube Channel IDs.
- (NEW) Dataset:
- Add new HoloStars Unit (TriNero)
- Add VOMS Project
Changelog:
- Introduce support for VTubers other than Hololive
- Support Nijisanji Main (except World) in the auto scheduler.
- Allow cookie usage, so you can record member-only streams
- Finally fix most of the "unrecognized command" error
- Remove vtup.sh
Disclaimer: I do not condone sharing your recorded member-only streams with anyone except yourself.
Changelog:
- Add simple Twitcasting support
Changelog:
- Rewrite schedule.py to support BiliBili (note: it doesn't work yet)
- Cleanup, patching, and fixing broken stuff.
Changelog:
- Introduce auto scheduling. (See README for more info)
- Some minor fixes