Compare commits

..

169 commits

Author SHA1 Message Date
ece6e42b68
Refactor post_embed function and simplify first_image extraction logic
All checks were successful
Test and build Docker image / docker (push) Successful in 22s
2026-03-18 05:35:09 +01:00
4836c2428b
Add default embed settings when creating the feed 2026-03-18 05:33:37 +01:00
d51ca2cced
Update uv sync command to specify directory for deployment
All checks were successful
Test and build Docker image / docker (push) Successful in 1m39s
2026-03-18 00:49:35 +01:00
b9d04358f3
Reduce max_instances for send_to_discord scheduler to 1
All checks were successful
Test and build Docker image / docker (push) Successful in 48s
2026-03-16 05:00:05 +01:00
4d8fd145cf
Update label for URL resolution option in webhook entries template 2026-03-16 04:53:39 +01:00
955b94456d
Add mass update functionality for feed URLs with preview
All checks were successful
Test and build Docker image / docker (push) Successful in 24s
2026-03-16 04:48:34 +01:00
bf94f3f3e4
Add new webhook detail view 2026-03-16 01:10:49 +01:00
94d5935b78
Refactor send_entry_to_discord function to require a reader parameter and update related tests 2026-03-15 20:47:41 +01:00
5323245cf6
Fix /post_entry 2026-03-15 20:45:00 +01:00
71695c2987
WIP 2026-03-15 19:37:55 +01:00
dfa6ea48e5
Refactor tag retrieval to use default values and remove missing_tags module 2026-03-15 15:50:04 +01:00
8805da33b6
Remove end-to-end test for git backup push 2026-03-15 15:39:15 +01:00
727057439e
Refactor reader dependency injection in FastAPI routes and tests 2026-03-15 15:39:05 +01:00
168f38b764
Add pytest option to run real git backup tests and skip by default 2026-03-15 15:13:32 +01:00
ed395a951c
Refactor URL change logic, mark all the new entries as read 2026-03-15 15:13:25 +01:00
b19927af0f
Preserve Discord timestamp tags in message
All checks were successful
Test and build Docker image / docker (push) Successful in 1m30s
2026-03-14 05:26:48 +01:00
f1d3204930
Add deployment step to build workflow for production server
All checks were successful
Test and build Docker image / docker (push) Successful in 21s
2026-03-07 23:12:00 +01:00
a025f29179
Rename .github to .forgejo
All checks were successful
Test and build Docker image / docker (push) Successful in 23s
2026-03-07 22:27:03 +01:00
1cce89c637
Refactor GitHub Actions workflow for self-hosted runner 2026-03-07 21:27:24 +01:00
a7e0213b1a
Refactor feed URL and update interval sections 2026-03-07 19:06:57 +01:00
5215b80643
Add endpoint to see all the entries for a webhook 2026-03-07 19:00:11 +01:00
f9882c7e43
Update pre-commit dependencies to latest versions 2026-03-07 15:30:34 +01:00
dcd86eff69
Randomize test order 2026-03-07 06:43:32 +01:00
d87341d729
Add feed URL change functionality and related tests 2026-03-07 06:29:12 +01:00
24d4d7a293
Add support for changing the update interval for feeds
Some checks failed
Test and build Docker image / docker (push) Has been cancelled
2026-03-07 05:50:20 +01:00
567273678e
Only show backup button in navbar if backups are enabled 2026-03-07 05:50:20 +01:00
renovate[bot]
82bcd27cdc
chore(deps): update docker/setup-qemu-action action to v4 (#424)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-03-07 01:25:05 +00:00
renovate[bot]
3b034c15f5
chore(deps): update docker/metadata-action action to v6 (#426)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-03-07 01:04:31 +00:00
renovate[bot]
cae5619915
chore(deps): update docker/login-action action to v4 (#423)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-03-07 01:01:13 +00:00
renovate[bot]
8e5d3170d7
chore(deps): update docker/setup-buildx-action action to v4 (#425)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-03-07 00:51:31 +00:00
renovate[bot]
e21449c09e
chore(deps): update docker/build-push-action action to v7 (#427)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2026-03-07 00:03:33 +00:00
e8bd528def
Add git backup functionality
Fixes: https://github.com/TheLovinator1/discord-rss-bot/issues/421
Merges: https://github.com/TheLovinator1/discord-rss-bot/pull/422
2026-03-07 01:01:09 +01:00
9378dac0fa
Unescape HTML entities in summary and content before markdown conversion 2025-12-08 17:47:45 +01:00
renovate[bot]
86cbad98b0
chore(deps): update actions/checkout action to v6 (#419)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-11-20 17:38:56 +00:00
renovate[bot]
ab733cde5e
chore(deps): update dependency python (#401)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-15 03:12:19 +00:00
6419b977c1 chore: remove unused requirements.txt COPY from Dockerfile 2025-10-15 05:09:50 +02:00
7f961be4dc chore: remove generated lockfiles, simplify deps, update pre-commit and ignore .python-version
- delete requirements.txt and uv.lock (remove exported/locked artefacts)
- remove .python-version file and add .python-version / uv.lock to .gitignore
- simplify pyproject.toml dependency pins (use package names instead of explicit version specifiers)
- update pre-commit hooks revisions and remove uv-pre-commit hooks (rev bumps for add-trailing-comma, pyupgrade, ruff-pre-commit, actionlint)
2025-10-15 04:59:44 +02:00
4d4791955f
Merge pull request #418 from mirusu400/master 2025-10-15 04:47:06 +02:00
e0894779d3 fix: classify Reddit feeds only when URL contains both "reddit.com" and ".rss" 2025-10-15 04:43:39 +02:00
mirusu400
ccb55e0ad4 fix: Use regex for detailed handling reddit feeds 2025-10-15 11:21:12 +09:00
renovate[bot]
ec18e9c978
Update dependency pydantic to v2.12.2 (#417)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-14 18:09:28 +00:00
renovate[bot]
be3912b8e8
Update dependency pydantic-core to v2.41.4 (#416)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-14 13:15:08 +00:00
renovate[bot]
f20d8763c8
Update dependency charset-normalizer to v3.4.4 (#414)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-14 05:30:37 +00:00
renovate[bot]
d834f9220d
Update dependency pydantic-core to v2.41.3 (#413)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-14 02:16:23 +00:00
renovate[bot]
d88eaf859c
Update dependency pydantic_core to v2.41.3 (#412)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-14 02:15:56 +00:00
renovate[bot]
ee0f221277
Update dependency pydantic to v2.12.1 (#411)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-13 22:00:56 +00:00
renovate[bot]
2a03bf0f54
Update dependency idna to v3.11 (#410)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-12 16:47:50 +00:00
renovate[bot]
e88400cc9e
Update dependency fastapi to v0.119.0 (#409)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-11 21:28:56 +00:00
renovate[bot]
e051a7c53d
Update dependency fastapi to v0.118.3 (#408)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-10 10:57:30 +00:00
renovate[bot]
7533ae4d1f
Update dependency sentry-sdk to v2.41.0 (#407)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-09 17:05:43 +00:00
renovate[bot]
7a4cf17a93
Update dependency platformdirs to v4.5.0 (#406)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-09 01:08:05 +00:00
renovate[bot]
7bdbf8bc9b
Update dependency filelock to v3.20.0 (#405)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-08 20:27:20 +00:00
renovate[bot]
7e18de0d78
Update dependency fastapi to v0.118.2 (#404)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-08 15:30:40 +00:00
renovate[bot]
61a94a5bca
Update dependency fastapi to v0.118.1 (#403)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-08 11:50:52 +00:00
renovate[bot]
7148264bfe
Update astral-sh/setup-uv action to v7 (#402)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-08 00:55:53 +00:00
renovate[bot]
21bbab8620
Update dependency pydantic to v2.12.0 (#400)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-07 20:01:08 +00:00
renovate[bot]
949d750294
Update dependency pydantic-core to v2.41.1 (#399)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-06 21:33:40 +00:00
renovate[bot]
c48390956e
Update dependency pydantic_core to v2.41.1 (#398)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-06 21:33:19 +00:00
renovate[bot]
88174ddb98
Update dependency sentry-sdk to v2.40.0 (#397)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-06 14:47:14 +00:00
renovate[bot]
369a3b3c40
Update dependency certifi to v2025.10.5 (#396)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-05 04:37:28 +00:00
renovate[bot]
aafebea33d
Update dependency pydantic to v2.11.10 (#395)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-04 12:37:50 +00:00
renovate[bot]
715851e58a
Update dependency pydantic-core to v2.40.1 (#394)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-02 22:27:52 +00:00
renovate[bot]
8bd594f59b
Update dependency pydantic_core to v2.40.1 (#393)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-02 22:27:22 +00:00
renovate[bot]
699bb83fc3
Update dependency reader to v3.19 (#392)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-02 00:52:20 +00:00
renovate[bot]
8317f8c7d3
Update dependency pydantic-core to v2.40.0 (#357)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-01 20:36:09 +00:00
renovate[bot]
2e9c99efa6
Update dependency typing-inspection to v0.4.2 (#391)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-10-01 03:32:54 +00:00
renovate[bot]
267fb673bf
Update dependency beautifulsoup4 to v4.14.2 (#390)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-29 14:49:10 +00:00
renovate[bot]
89c10edb4e
Update dependency fastapi to v0.118.0 (#389)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-29 04:33:33 +00:00
renovate[bot]
2a4b98a2fc
Update dependency beautifulsoup4 to v4.14.0 (#388)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-28 02:05:06 +00:00
renovate[bot]
8f7cdda720
Update dependency markupsafe to v3.0.3 (#387)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-27 21:44:00 +00:00
renovate[bot]
2ae04e6f56
Update dependency pyyaml to v6.0.3 (#386)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-25 21:40:57 +00:00
renovate[bot]
73a591950f
Update dependency sentry-sdk to v2.39.0 (#385)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-25 15:32:50 +00:00
renovate[bot]
2237e8afe2
Update dependency uvicorn to v0.37.0 (#384)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-23 13:54:17 +00:00
renovate[bot]
02a5215fe3
Update dependency anyio to v4.11.0 (#383)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-23 10:02:56 +00:00
renovate[bot]
bc2bea8d73
Update dependency lxml to v6.0.2 (#382)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-22 04:48:28 +00:00
renovate[bot]
405d2a3d34
Update dependency fastapi to v0.117.1 (#381)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-20 21:45:00 +00:00
renovate[bot]
988649712e
Update dependency uvicorn to v0.36.0 (#380)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-20 05:01:54 +00:00
renovate[bot]
a447103809
Update dependency regex to v2025.9.18 (#379)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-19 05:09:13 +00:00
renovate[bot]
4ad0ed5b71
Update dependency click to v8.3.0 (#378)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-18 21:16:07 +00:00
renovate[bot]
fc479f67d0
Update dependency fastapi to v0.116.2 (#377)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-16 21:53:53 +00:00
renovate[bot]
6ed0806c4b
Update dependency sentry-sdk to v2.38.0 (#376)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-15 23:42:52 +00:00
renovate[bot]
639c2f79de
Update dependency starlette to v0.48.0 (#375)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-14 02:27:13 +00:00
renovate[bot]
046eacc040
Update dependency pydantic to v2.11.9 (#374)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-13 20:54:40 +00:00
renovate[bot]
bf7320a688
Update dependency feedparser to v6.0.12 (#373)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-10 14:37:10 +00:00
renovate[bot]
f906932660
Update dependency sentry-sdk to v2.37.1 (#372)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-09 14:56:51 +00:00
renovate[bot]
6c14f27aa8
Update dependency sentry-sdk to v2.37.0 (#371)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-05 12:43:45 +00:00
renovate[bot]
d08c8eb175
Update dependency pytest to v8.4.2 (#370)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-04 17:41:50 +00:00
renovate[bot]
e355e8adb3
Update dependency sentry-sdk to v2.36.0 (#369)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-04 09:51:21 +00:00
renovate[bot]
2e8efc3aea
Update actions/setup-python action to v6 (#368)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-04 04:53:44 +00:00
renovate[bot]
d6fd1def64
Update dependency regex to v2025.9.1 (#367)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-02 01:04:56 +00:00
renovate[bot]
a00ce8006f
Update dependency sentry-sdk to v2.35.2 (#366)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-09-02 00:09:46 +00:00
renovate[bot]
79bf0c8124
Update dependency regex to v2025.8.29 (#365)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-31 09:23:42 +00:00
fb4910430a Refactor search response to use named parameters and update search context test 2025-08-29 16:41:34 +02:00
afd2a0dd78 Improve search page 2025-08-29 03:02:32 +02:00
renovate[bot]
7ba078719d
Update dependency soupsieve to v2.8 (#364)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-27 17:01:10 +00:00
renovate[bot]
5e07de1f69
Update dependency platformdirs to v4.4.0 (#363)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-26 21:36:55 +00:00
renovate[bot]
357273b88a
Update dependency sentry-sdk to v2.35.1 (#362)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-26 14:17:45 +00:00
renovate[bot]
8dd02d59fa
Update dependency typing-extensions to v4.15.0 (#361)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-25 14:39:26 +00:00
renovate[bot]
09fad4ed49
Update dependency beautifulsoup4 to v4.13.5 (#360)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-24 17:11:11 +00:00
renovate[bot]
c8d789a565
Update dependency starlette to v0.47.3 (#359)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-24 14:12:52 +00:00
renovate[bot]
8e7092588a
Update dependency lxml to v6.0.1 (#358)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-22 13:55:43 +00:00
66b59c7691 Merge branch 'master' of https://github.com/TheLovinator1/discord-rss-bot 2025-08-21 17:52:26 +02:00
renovate[bot]
2374f7ef08
Update dependency requests to v2.32.5 (#356)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-18 21:41:38 +00:00
renovate[bot]
09cac86b17
Update dependency sentry-sdk to v2.35.0 (#355)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-14 20:28:46 +00:00
renovate[bot]
d8fc1a5086
Update dependency filelock to v3.19.1 (#354)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-14 18:35:40 +00:00
be520dda12 Update uv.lock 2025-08-13 05:20:10 +02:00
renovate[bot]
da92c8473c
Update actions/checkout action to v5 (#353)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 18:03:16 +00:00
renovate[bot]
b364af137b
Update dependency pydantic-core to v2.39.0 (#352)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 14:42:57 +00:00
renovate[bot]
d17a299f14
Update dependency pydantic_core to v2.39.0 (#351)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-11 14:42:26 +00:00
renovate[bot]
743588066e
Update dependency markdownify to v1.2.0 (#350)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-09 20:54:36 +00:00
renovate[bot]
fba88f4799
Update dependency charset-normalizer to v3.4.3 (#349)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-09 09:08:31 +00:00
renovate[bot]
e7cb161e82
Update dependency pydantic-core to v2.38.0 (#348)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 17:05:24 +00:00
renovate[bot]
b4b5e42d80
Update dependency pydantic_core to v2.38.0 (#347)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 17:04:20 +00:00
renovate[bot]
9956b97c70
Update dependency anyio to v4.10.0 (#346)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-04 15:29:30 +00:00
renovate[bot]
ca18898a89
Update dependency certifi to v2025.8.3 (#345)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-03 03:27:17 +00:00
renovate[bot]
9dc73b1118
Update dependency click to v8.2.2 (#344)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-08-02 02:56:14 +00:00
renovate[bot]
782323238f
Update dependency sentry-sdk to v2.34.1 (#343)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-30 16:49:46 +00:00
renovate[bot]
95d5f379e7
Update dependency sentry-sdk to v2.34.0 (#342)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-29 15:10:59 +00:00
renovate[bot]
61ecfc631a
Update dependency pydantic-core to v2.37.2 (#341)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-26 13:49:27 +00:00
renovate[bot]
4fb8b9268b
Update dependency pydantic_core to v2.37.2 (#340)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-26 13:48:51 +00:00
renovate[bot]
967fadecde
Update dependency pydantic-core to v2.37.1 (#339)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-25 19:58:42 +00:00
renovate[bot]
de56c94afc
Update dependency pydantic_core to v2.37.1 (#338)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-25 19:58:23 +00:00
renovate[bot]
3d56b04620
Update dependency pydantic-core to v2.36.0 (#337)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-24 00:35:36 +00:00
renovate[bot]
a15d51e83b
Update dependency pydantic_core to v2.36.0 (#336)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-24 00:35:13 +00:00
renovate[bot]
3ba65aa011
Update dependency sentry-sdk to v2.33.2 (#335)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-22 10:55:04 +00:00
renovate[bot]
22555331f0
Update dependency sentry-sdk to v2.33.1 (#334)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-21 14:08:33 +00:00
renovate[bot]
d972eb48e7
Update dependency starlette to v0.47.2 (#333)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-20 18:03:05 +00:00
renovate[bot]
79599acfa2
Update dependency sentry-sdk to v2.33.0 (#332)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-15 12:11:50 +00:00
renovate[bot]
3c5e1a27ff
Update dependency certifi to v2025.7.14 (#331)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-14 07:28:37 +00:00
renovate[bot]
d9092a170d
Update dependency fastapi to v0.116.1 (#321)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-11 23:09:52 +00:00
renovate[bot]
925f3ff6f8
Update dependency certifi to v2025.7.9 (#330)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-09 06:49:59 +00:00
24c9296560 Fix formatting in installation instructions and update email link 2025-07-08 01:10:58 +02:00
renovate[bot]
fd990268b1
Update astral-sh/setup-uv action to v6 (#329)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-07 23:08:39 +00:00
renovate[bot]
be4dac38c3
Update dependency pydantic-core to v2.35.2 (#325)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-07 21:44:43 +00:00
renovate[bot]
aef117e26b
Update dependency starlette to v0.47.1 (#326)
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-07-07 21:20:33 +00:00
8be7e245e3 Merge branch 'master' of https://github.com/TheLovinator1/discord-rss-bot 2025-07-07 23:08:24 +02:00
f70f870a7b Use GitHub Actions instead of Gitea Actions 2025-07-07 23:07:47 +02:00
bda9e424e5 Merge branch 'master' of https://git.lovinator.space/TheLovinator/discord-rss-bot 2025-07-07 22:58:55 +02:00
efe7cd2e95 Update uv.lock 2025-07-07 22:58:45 +02:00
2e0157ff6c Add logging for entry sending and improve scheduler configuration 2025-07-07 22:56:56 +02:00
a2df0a84b3 Fix line endings? 2025-06-25 06:06:42 +02:00
fdd8e19aa3 Update pre-commit hook versions for uv-pre-commit and ruff 2025-06-25 05:41:13 +02:00
2ef03c8b45 Fix formatting of tags in /custom 2025-06-25 05:38:47 +02:00
4be0617a01
Add additional words to cSpell configuration in settings.json 2025-06-07 06:05:14 +02:00
1d16e7e24e
Change log level to debug and update port to 3000 2025-06-07 05:54:55 +02:00
b3197a77e3
Update README 2025-06-07 05:54:47 +02:00
8095e54464
Add uv-pre-commit hooks
- Added hooks for uv-pre-commit to ensure the lockfile is up-to-date and autoexport uv.lock to requirements.txt.
- Added "autoexport" to the cSpell words in the VSCode settings for spell checking.
2025-06-07 05:53:29 +02:00
aeee40820d
Add uv stuff 2025-06-07 05:04:10 +02:00
de665817d0
Update .gitignore 2025-06-07 05:00:26 +02:00
a010815de0
Send a separate message with the Discord quest 2025-06-07 04:56:49 +02:00
799446d727
Update pre-commit hook versions 2025-06-07 04:55:19 +02:00
f9e4f109d5
Add checks for paused or deleted feeds before sending entries to Discord 2025-06-07 04:54:35 +02:00
44f50a4a98
Remove test for updating an existing feed 2025-05-17 04:07:13 +02:00
2a6dbd33dd
Add button for manually updating feed 2025-05-17 03:58:08 +02:00
96bcd81191
Use ATX headers instead of SETEXT 2025-05-17 03:53:15 +02:00
901d6cb1a6
Honor 429 Too Many Requests and 503 Service Unavailable responses 2025-05-05 01:19:52 +02:00
7f9c934d08
Also use custom feed stuff if sent from send_to_discord 2025-05-04 16:50:29 +02:00
c3a11f55b0
Update Docker healthcheck 2025-05-04 05:28:37 +02:00
d8247fec01
Replace GitHub Actions build workflow with Gitea workflow 2025-05-04 04:08:39 +02:00
ffd6f2f9f2
Add Hoyolab API integration 2025-05-04 03:48:22 +02:00
544ef6dca3
Update ruff-pre-commit to version 0.11.8 2025-05-03 19:42:20 +02:00
e33b331564
Update ruff-pre-commit to version 0.11.5 2025-04-16 13:33:56 +02:00
cd0f63d59a
Add tldextract for improved domain extraction and add new tests for extract_domain function 2025-04-16 13:32:31 +02:00
8b50003eda
Group feeds by domain 2025-04-03 16:47:53 +02:00
97d06ddb43
Embed YouTube videos in /feed HTML. Strong code, many bananas! 🦍🦍🦍🦍 2025-04-03 06:20:01 +02:00
ac63041b28
Add regex support to blacklist and whitelist filters. Strong code, many bananas! 🦍🦍🦍🦍 2025-04-03 05:44:50 +02:00
84e39c9f79
Add .gitattributes to set Jinja as the language for HTML files 2025-04-01 22:58:42 +02:00
8408db9afd
Enhance YouTube feed display in index.html with username and channel ID formatting 2025-04-01 22:56:54 +02:00
6dfc72d3b0
Add discord_rss_bot directory to Dockerfile 2025-02-10 05:17:46 +01:00
53 changed files with 6115 additions and 926 deletions

.env.example — new file, 19 lines

@@ -0,0 +1,19 @@
# You can optionally store backups of your bot's configuration in a git repository.
# This allows you to track changes by subscribing to the repository or using an RSS feed.
# Local path for the backup git repository (e.g., /data/backup or /home/user/backups/discord-rss-bot)
# When set, the bot will initialize a git repo here and commit state.json after every configuration change
# GIT_BACKUP_PATH=

# Remote URL for pushing backup commits (e.g., git@github.com:username/private-config.git)
# Optional - only set if you want automatic pushes to a remote repository
# Leave empty to keep git history local only
# GIT_BACKUP_REMOTE=

# Sentry Configuration (Optional)
# Sentry DSN for error tracking and monitoring
# Leave empty to disable Sentry integration
# SENTRY_DSN=

# Testing Configuration
# Discord webhook URL used for testing (optional, only needed when running tests)
# TEST_WEBHOOK_URL=


@@ -1,6 +1,8 @@
 {
   "$schema": "https://docs.renovatebot.com/renovate-schema.json",
-  "extends": ["config:recommended"],
+  "extends": [
+    "config:recommended"
+  ],
   "automerge": true,
   "configMigration": true,
   "dependencyDashboard": false,


@@ -0,0 +1,100 @@
---
# Required setup for self-hosted runner:
# 1. Install dependencies:
#    sudo pacman -S qemu-user-static qemu-user-static-binfmt docker docker-buildx
# 2. Add runner to docker group:
#    sudo usermod -aG docker forgejo-runner
# 3. Restart runner service to apply group membership:
#    sudo systemctl restart forgejo-runner
# 4. Install uv and ruff for the runner user
# 5. Login to GitHub Container Registry:
#    echo "ghp_YOUR_TOKEN_HERE" | sudo -u forgejo-runner docker login ghcr.io -u TheLovinator1 --password-stdin
# 6. Configure sudoers for deployment (sudo EDITOR=nvim visudo):
#    forgejo-runner ALL=(discord-rss) NOPASSWD: /usr/bin/git -C /home/discord-rss/discord-rss-bot pull
#    forgejo-runner ALL=(discord-rss) NOPASSWD: /usr/bin/uv sync -U --directory /home/discord-rss/discord-rss-bot
#    forgejo-runner ALL=(root) NOPASSWD: /bin/systemctl restart discord-rss-bot
name: Test and build Docker image
on:
  push:
    branches:
      - master
  pull_request:
  workflow_dispatch:
  schedule:
    - cron: "0 0 1 * *"
jobs:
  docker:
    runs-on: self-hosted
    steps:
      # Download the latest commit from the master branch
      - uses: actions/checkout@v6
      # Verify local tools are available on the self-hosted runner
      - name: Check local toolchain
        run: |
          python --version
          uv --version
          ruff --version
          docker version
      # Bootstrap a local Buildx builder for multi-arch builds
      # (requires qemu-user-static and qemu-user-static-binfmt installed via pacman)
      - name: Configure local buildx for multi-arch
        run: |
          docker buildx inspect local-multiarch-builder >/dev/null 2>&1 || \
            docker buildx create --name local-multiarch-builder --driver docker-container
          docker buildx use local-multiarch-builder
          docker buildx inspect --bootstrap
      - name: Lint Python code
        run: ruff check --exit-non-zero-on-fix --verbose
      - name: Check Python formatting
        run: ruff format --check --verbose
      - name: Lint Dockerfile
        run: docker build --check .
      - name: Install dependencies
        run: uv sync --all-extras --all-groups
      - name: Run tests
        run: uv run pytest
      - id: tags
        name: Compute image tags
        run: |
          IMAGE="ghcr.io/thelovinator1/discord-rss-bot"
          if [ "${FORGEJO_REF}" = "refs/heads/master" ]; then
            echo "tags=${IMAGE}:latest,${IMAGE}:master" >> "$FORGEJO_OUTPUT"
          else
            SHORT_SHA="$(echo "$FORGEJO_SHA" | cut -c1-12)"
            echo "tags=${IMAGE}:sha-${SHORT_SHA}" >> "$FORGEJO_OUTPUT"
          fi
      # Build (and optionally push) Docker image
      - name: Build and push Docker image
        env:
          TAGS: ${{ steps.tags.outputs.tags }}
        run: |
          IFS=',' read -r -a tag_array <<< "$TAGS"
          tag_args=()
          for tag in "${tag_array[@]}"; do
            tag_args+=( -t "$tag" )
          done
          if [ "${{ forge.event_name }}" = "pull_request" ]; then
            docker buildx build --platform linux/amd64,linux/arm64 "${tag_args[@]}" --load .
          else
            docker buildx build --platform linux/amd64,linux/arm64 "${tag_args[@]}" --push .
          fi
      # Deploy to production server
      - name: Deploy to Server
        if: success() && forge.ref == 'refs/heads/master'
        run: |
          sudo -u discord-rss git -C /home/discord-rss/discord-rss-bot pull
          sudo -u discord-rss uv sync -U --directory /home/discord-rss/discord-rss-bot
          sudo systemctl restart discord-rss-bot
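The "Compute image tags" step above branches on the ref: master builds get `latest` and `master` tags, while every other ref gets a single 12-character short-SHA tag. A sketch of that same logic in Python, assuming only what the shell step shows (the function name `compute_image_tags` is illustrative):

```python
def compute_image_tags(ref: str, sha: str) -> str:
    """Mirror the workflow's tag step: master -> latest+master, otherwise a short-SHA tag."""
    image = "ghcr.io/thelovinator1/discord-rss-bot"
    if ref == "refs/heads/master":
        return f"{image}:latest,{image}:master"
    # Equivalent to the step's: echo "$FORGEJO_SHA" | cut -c1-12
    return f"{image}:sha-{sha[:12]}"


print(compute_image_tags("refs/heads/master", "0" * 40))
# ghcr.io/thelovinator1/discord-rss-bot:latest,ghcr.io/thelovinator1/discord-rss-bot:master
print(compute_image_tags("refs/heads/feature", "ece6e42b68" + "0" * 30))
# ghcr.io/thelovinator1/discord-rss-bot:sha-ece6e42b6800
```

The comma-joined string matches what the later build step splits with `IFS=','` into one `-t` flag per tag.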

.gitattributes (vendored) — new file, 1 line

@@ -0,0 +1 @@
*.html linguist-language=jinja


@ -1,64 +0,0 @@
---
name: Test and build Docker image
on:
push:
pull_request:
workflow_dispatch:
schedule:
- cron: "0 6 * * *"
env:
TEST_WEBHOOK_URL: ${{ secrets.TEST_WEBHOOK_URL }}
jobs:
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: 3.12
- uses: astral-sh/setup-uv@v5
with:
version: "latest"
- run: uv sync --all-extras --all-groups
- run: uv run pytest
ruff:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: astral-sh/ruff-action@v3
with:
version: "latest"
- run: ruff check --exit-non-zero-on-fix --verbose
- run: ruff format --check --verbose
build:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
if: github.event_name != 'pull_request'
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
needs: [test, ruff]
steps:
- uses: actions/checkout@v4
- uses: docker/setup-qemu-action@v3
with:
platforms: all
- uses: docker/setup-buildx-action@v3
- uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- uses: docker/build-push-action@v6
with:
context: .
platforms: linux/amd64, linux/arm64
push: ${{ github.event_name != 'pull_request' }}
tags: |
ghcr.io/thelovinator1/discord-rss-bot:latest
ghcr.io/thelovinator1/discord-rss-bot:master

.gitignore (vendored) — 29 lines changed

@@ -92,7 +92,7 @@ ipython_config.py
 # However, in case of collaboration, if having platform-specific dependencies or dependencies
 # having no cross-platform support, pipenv may install dependencies that don't work, or not
 # install all needed dependencies.
-Pipfile.lock
+# Pipfile.lock

 # UV
 # Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
@@ -105,11 +105,12 @@ uv.lock
 # This is especially recommended for binary packages to ensure reproducibility, and is more
 # commonly ignored for libraries.
 # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
-poetry.lock
+# poetry.lock
+# poetry.toml

 # pdm
 # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
-#pdm.lock
+# pdm.lock
 # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
 # in version control.
 # https://pdm.fming.dev/latest/usage/project/#working-with-version-control
@@ -165,7 +166,20 @@ cython_debug/
 # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
 # and can be added to the global gitignore or merged into this file. For a more nuclear
 # option (not recommended) you can uncomment the following to ignore the entire idea folder.
-#.idea/
+# .idea/
+
+# Abstra
+# Abstra is an AI-powered process automation framework.
+# Ignore directories containing user credentials, local state, and settings.
+# Learn more at https://abstra.io/docs
+.abstra/
+
+# Visual Studio Code
+# Visual Studio Code specific template is maintained in a separate VisualStudioCode.gitignore
+# that can be found at https://github.com/github/gitignore/blob/main/Global/VisualStudioCode.gitignore
+# and can be added to the global gitignore or merged into this file. However, if you prefer,
+# you could uncomment the following to ignore the entire vscode folder
+# .vscode/

 # Ruff stuff:
 .ruff_cache/
@@ -173,6 +187,13 @@ cython_debug/
 # PyPI configuration file
 .pypirc

+# Cursor
+# Cursor is an AI-powered code editor. `.cursorignore` specifies files/directories to
+# exclude from AI features like autocomplete and code analysis. Recommended for sensitive data
+# refer to https://docs.cursor.com/context/ignore-files
+.cursorignore
+.cursorindexingignore

 # Database stuff
 *.sqlite
 *.sqlite-shm

.pre-commit-config.yaml

@@ -1,13 +1,13 @@
 repos:
   # Automatically add trailing commas to calls and literals.
   - repo: https://github.com/asottile/add-trailing-comma
-    rev: v3.1.0
+    rev: v4.0.0
     hooks:
       - id: add-trailing-comma
   # Some out-of-the-box hooks for pre-commit.
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v5.0.0
+    rev: v6.0.0
     hooks:
       - id: check-added-large-files
       - id: check-ast
@@ -31,14 +31,14 @@ repos:
   # Run Pyupgrade on all Python files. This will upgrade the code to Python 3.12.
   - repo: https://github.com/asottile/pyupgrade
-    rev: v3.19.1
+    rev: v3.21.2
     hooks:
       - id: pyupgrade
        args: ["--py312-plus"]
   # An extremely fast Python linter and formatter.
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.9.5
+    rev: v0.15.5
     hooks:
       - id: ruff-format
       - id: ruff
@@ -46,6 +46,6 @@ repos:
   # Static checker for GitHub Actions workflow files.
   - repo: https://github.com/rhysd/actionlint
-    rev: v1.7.7
+    rev: v1.7.11
     hooks:
       - id: actionlint

.vscode/launch.json vendored

@@ -8,7 +8,11 @@
       "module": "uvicorn",
       "args": [
         "discord_rss_bot.main:app",
-        "--reload"
+        "--reload",
+        "--host",
+        "0.0.0.0",
+        "--port",
+        "3000",
       ],
       "jinja": true,
       "justMyCode": true

.vscode/settings.json

@@ -1,13 +1,19 @@
 {
   "cSpell.words": [
+    "autoexport",
     "botuser",
     "Genshins",
+    "healthcheck",
+    "Hoyolab",
     "levelname",
     "Lovinator",
     "markdownified",
     "markdownify",
     "pipx",
-    "thead"
+    "pyproject",
+    "thead",
+    "thelovinator",
+    "uvicorn"
   ],
   "python.analysis.typeCheckingMode": "basic"
 }

Dockerfile

@@ -1,14 +1,15 @@
-FROM python:3.13-slim
+FROM python:3.14-slim
 COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
 RUN useradd --create-home botuser && \
     mkdir -p /home/botuser/discord-rss-bot/ /home/botuser/.local/share/discord_rss_bot/ && \
     chown -R botuser:botuser /home/botuser/
 USER botuser
 WORKDIR /home/botuser/discord-rss-bot
-COPY --chown=botuser:botuser requirements.txt /home/botuser/discord-rss-bot/
 RUN --mount=type=cache,target=/root/.cache/uv \
     --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
     uv sync --no-install-project
+COPY --chown=botuser:botuser discord_rss_bot/ /home/botuser/discord-rss-bot/discord_rss_bot/
 EXPOSE 5000
 VOLUME ["/home/botuser/.local/share/discord_rss_bot/"]
+HEALTHCHECK --interval=10m --timeout=5s CMD ["uv", "run", "./discord_rss_bot/healthcheck.py"]
 CMD ["uv", "run", "uvicorn", "discord_rss_bot.main:app", "--host=0.0.0.0", "--port=5000", "--proxy-headers", "--forwarded-allow-ips='*'", "--log-level", "debug"]

README.md

@@ -2,8 +2,25 @@
 Subscribe to RSS feeds and get updates to a Discord webhook.

-> [!NOTE]
-> You should look at [MonitoRSS](https://github.com/synzen/monitorss) for a more feature-rich project.
+Email: [tlovinator@gmail.com](mailto:tlovinator@gmail.com)
+Discord: TheLovinator#9276
+
+## Features
+
+- Subscribe to RSS feeds and get updates to a Discord webhook.
+- Web interface to manage subscriptions.
+- Customizable message format for each feed.
+- Choose between Discord embed or plain text.
+- Regex filters for RSS feeds.
+- Blacklist/whitelist words in the title/description/author/etc.
+- Set different update frequencies for each feed or use a global default.
+- Gets extra information from APIs if available, currently for:
+  - [https://feeds.c3kay.de/](https://feeds.c3kay.de/)
+    - Genshin Impact News
+    - Honkai Impact 3rd News
+    - Honkai Starrail News
+    - Zenless Zone Zero News

 ## Installation
@@ -13,9 +30,7 @@ or [install directly on your computer](#install-directly-on-your-computer).
 ### Docker

 - Open a terminal in the repository folder.
-  - Windows 10: <kbd>Shift</kbd> + <kbd>right-click</kbd> in the folder and select `Open PowerShell window here`
-  - Windows 11: <kbd>Shift</kbd> + <kbd>right-click</kbd> in the folder and Show more options
-    and `Open PowerShell window here`
+  - <kbd>Shift</kbd> + <kbd>right-click</kbd> in the folder and `Open PowerShell window here`
 - Run the Docker Compose file:
   - `docker-compose up`
 - You can stop the bot with <kbd>Ctrl</kbd> + <kbd>c</kbd>.
@@ -29,34 +44,68 @@ or [install directly on your computer](#install-directly-on-your-computer).
 ### Install directly on your computer

-This is not recommended if you don't have an init system (e.g., systemd)
-
-- Install the latest version of needed software:
-  - [Python](https://www.python.org/)
-    - You should use the latest version.
-    - You want to add Python to your PATH.
-    - Windows: Find `App execution aliases` and disable python.exe and python3.exe
-  - [Poetry](https://python-poetry.org/docs/master/#installation)
-    - Windows: You have to add `%appdata%\Python\Scripts` to your PATH for Poetry to work.
+- Install the latest of [uv](https://docs.astral.sh/uv/#installation):
+  - `powershell -ExecutionPolicy ByPass -c "irm <https://astral.sh/uv/install.ps1> | iex"`
 - Download the project from GitHub with Git or download
   the [ZIP](https://github.com/TheLovinator1/discord-rss-bot/archive/refs/heads/master.zip).
 - If you want to update the bot, you can run `git pull` in the project folder or download the ZIP again.
 - Open a terminal in the repository folder.
-  - Windows 10: <kbd>Shift</kbd> + <kbd>right-click</kbd> in the folder and select `Open PowerShell window here`
-  - Windows 11: <kbd>Shift</kbd> + <kbd>right-click</kbd> in the folder and Show more options
-    and `Open PowerShell window here`
-- Install requirements:
-  - Type `poetry install` into the PowerShell window. Make sure you are
-    in the repository folder where the [pyproject.toml](pyproject.toml) file is located.
-  - (You may have to restart your terminal if it can't find the `poetry` command. Also double check it is in
-    your PATH.)
+  - <kbd>Shift</kbd> + <kbd>right-click</kbd> in the folder and `Open PowerShell window here`
 - Start the bot:
-  - Type `poetry run python discord_rss_bot/main.py` into the PowerShell window.
+  - Type `uv run discord_rss_bot/main.py` into the PowerShell window.
   - You can stop the bot with <kbd>Ctrl</kbd> + <kbd>c</kbd>.
+- Bot is now running on port 3000.
+- You should run this bot behind a reverse proxy like [Caddy](https://caddyserver.com/)
+  or [Nginx](https://www.nginx.com/) if you want to access it from the internet. Remember to add authentication.
+- You can access the web interface at `http://localhost:3000/`.

-Note: You will need to run `poetry install` again if [poetry.lock](poetry.lock) has been modified.
+- To run automatically on boot:
+  - Use [Windows Task Scheduler](https://en.wikipedia.org/wiki/Windows_Task_Scheduler).
+  - Or add a shortcut to `%userprofile%\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup`.

-## Contact
+## Git Backup (State Version Control)

-Email: [mailto:tlovinator@gmail.com](tlovinator@gmail.com)
-Discord: TheLovinator#9276
+The bot can commit every configuration change (adding/removing feeds, webhook
+changes, blacklist/whitelist updates) to a separate private Git repository so
+you get a full, auditable history of state changes — similar to `etckeeper`.
+
+### Configuration
+
+Set the following environment variables (e.g. in `docker-compose.yml` or a
+`.env` file):
+
+| Variable            | Required | Description                                                                                                                         |
+| ------------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------- |
+| `GIT_BACKUP_PATH`   | Yes      | Local path where the backup git repository is stored. The bot will initialise it automatically if it does not yet exist.             |
+| `GIT_BACKUP_REMOTE` | No       | Remote URL to push to after each commit (e.g. `git@github.com:you/private-config.git`). Leave unset to keep the history local only.  |
+
+### What is backed up
+
+After every relevant change a `state.json` file is written and committed.
+The file contains:
+
+- All feed URLs together with their webhook URL, custom message, embed
+  settings, and any blacklist/whitelist filters.
+- The global list of Discord webhooks.
+
+### Docker example
+
+```yaml
+services:
+  discord-rss-bot:
+    image: ghcr.io/thelovinator1/discord-rss-bot:latest
+    volumes:
+      - ./data:/data
+    environment:
+      - GIT_BACKUP_PATH=/data/backup
+      - GIT_BACKUP_REMOTE=git@github.com:you/private-config.git
+```
+
+For SSH-based remotes mount your SSH key into the container and make sure the
+host key is trusted, e.g.:
+
+```yaml
+volumes:
+  - ./data:/data
+  - ~/.ssh:/root/.ssh:ro
+```


@@ -4,15 +4,14 @@ import urllib.parse
 from functools import lru_cache
 from typing import TYPE_CHECKING

-from discord_rss_bot.filter.blacklist import entry_should_be_skipped, feed_has_blacklist_tags
-from discord_rss_bot.filter.whitelist import has_white_tags, should_be_sent
-from discord_rss_bot.settings import get_reader
+from discord_rss_bot.filter.blacklist import entry_should_be_skipped
+from discord_rss_bot.filter.blacklist import feed_has_blacklist_tags
+from discord_rss_bot.filter.whitelist import has_white_tags
+from discord_rss_bot.filter.whitelist import should_be_sent

 if TYPE_CHECKING:
-    from reader import Entry, Reader
+    from reader import Entry
+    from reader import Reader

-# Our reader
-reader: Reader = get_reader()

 @lru_cache
@@ -31,11 +30,12 @@ def encode_url(url_to_quote: str) -> str:
     return urllib.parse.quote(string=url_to_quote) if url_to_quote else ""

-def entry_is_whitelisted(entry_to_check: Entry) -> bool:
+def entry_is_whitelisted(entry_to_check: Entry, reader: Reader) -> bool:
     """Check if the entry is whitelisted.

     Args:
         entry_to_check: The feed to check.
+        reader: Custom Reader instance.

     Returns:
         bool: True if the feed is whitelisted, False otherwise.
@@ -44,11 +44,12 @@ def entry_is_whitelisted(entry_to_check: Entry) -> bool:
     return bool(has_white_tags(reader, entry_to_check.feed) and should_be_sent(reader, entry_to_check))

-def entry_is_blacklisted(entry_to_check: Entry) -> bool:
+def entry_is_blacklisted(entry_to_check: Entry, reader: Reader) -> bool:
     """Check if the entry is blacklisted.

     Args:
         entry_to_check: The feed to check.
+        reader: Custom Reader instance.

     Returns:
         bool: True if the feed is blacklisted, False otherwise.
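The refactor above removes the module-level `reader: Reader = get_reader()` singleton and threads a `reader` argument through each function instead. One practical payoff is testability: a test can pass a minimal stub in place of a real `reader.Reader`. A sketch of the pattern; `FakeReader` is hypothetical test scaffolding, not part of the codebase, and `get_custom_message` here just mirrors the refactored signature seen in this diff:

```python
class FakeReader:
    """Minimal stand-in for reader.Reader, good enough for functions
    that only call get_tag(resource, key, default)."""

    def __init__(self, tags: dict[str, str]) -> None:
        self._tags = tags

    def get_tag(self, resource: object, key: str, default: str = "") -> str:
        return self._tags.get(key, default)


def get_custom_message(reader, feed) -> str:
    # Mirrors the refactored function: the reader is injected, not fetched
    # from a module-level global, so this stub can be swapped in.
    try:
        return str(reader.get_tag(feed, "custom_message", ""))
    except ValueError:
        return ""
```

With the old module-level `reader = get_reader()`, exercising these functions in isolation required patching module state; with injection, a two-line fake suffices.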

discord_rss_bot/custom_message.py

@@ -1,18 +1,27 @@
 from __future__ import annotations

+import html
 import json
 import logging
+import re
 from dataclasses import dataclass
+from typing import TYPE_CHECKING

-from bs4 import BeautifulSoup, Tag
+from bs4 import BeautifulSoup
+from bs4 import Tag
 from markdownify import markdownify
-from reader import Entry, Feed, Reader, TagNotFoundError

 from discord_rss_bot.is_url_valid import is_url_valid
-from discord_rss_bot.settings import get_reader
+
+if TYPE_CHECKING:
+    from reader import Entry
+    from reader import Feed
+    from reader import Reader

 logger: logging.Logger = logging.getLogger(__name__)

+DISCORD_TIMESTAMP_TAG_RE: re.Pattern[str] = re.compile(r"<t:\d+(?::[tTdDfFrRsS])?>")

 @dataclass(slots=True)
 class CustomEmbed:
@@ -46,18 +55,80 @@ def try_to_replace(custom_message: str, template: str, replace_with: str) -> str
     return custom_message

-def replace_tags_in_text_message(entry: Entry) -> str:
+def _preserve_discord_timestamp_tags(text: str) -> tuple[str, dict[str, str]]:
+    """Replace Discord timestamp tags with placeholders before markdown conversion.
+
+    Args:
+        text: The text to replace tags in.
+
+    Returns:
+        The text with Discord timestamp tags replaced by placeholders and a mapping of placeholders to original tags.
+    """
+    replacements: dict[str, str] = {}
+
+    def replace_match(match: re.Match[str]) -> str:
+        placeholder: str = f"DISCORDTIMESTAMPPLACEHOLDER{len(replacements)}"
+        replacements[placeholder] = match.group(0)
+        return placeholder
+
+    return DISCORD_TIMESTAMP_TAG_RE.sub(replace_match, text), replacements
+
+
+def _restore_discord_timestamp_tags(text: str, replacements: dict[str, str]) -> str:
+    """Restore preserved Discord timestamp tags after markdown conversion.
+
+    Args:
+        text: The text to restore tags in.
+        replacements: A mapping of placeholders to original Discord timestamp tags.
+
+    Returns:
+        The text with placeholders replaced by the original Discord timestamp tags.
+    """
+    for placeholder, original_value in replacements.items():
+        text = text.replace(placeholder, original_value)
+    return text
+
+
+def format_entry_html_for_discord(text: str) -> str:
+    """Convert entry HTML to Discord-friendly markdown while preserving Discord timestamp tags.
+
+    Args:
+        text: The HTML text to format.
+
+    Returns:
+        The formatted text with Discord timestamp tags preserved.
+    """
+    if not text:
+        return ""
+
+    unescaped_text: str = html.unescape(text)
+    protected_text, replacements = _preserve_discord_timestamp_tags(unescaped_text)
+
+    formatted_text: str = markdownify(
+        html=protected_text,
+        strip=["img", "table", "td", "tr", "tbody", "thead"],
+        escape_misc=False,
+        heading_style="ATX",
+    )
+
+    if "[https://" in formatted_text or "[https://www." in formatted_text:
+        formatted_text = formatted_text.replace("[https://", "[")
+        formatted_text = formatted_text.replace("[https://www.", "[")
+
+    return _restore_discord_timestamp_tags(formatted_text, replacements)
+
+
+def replace_tags_in_text_message(entry: Entry, reader: Reader) -> str:
     """Replace tags in custom_message.

     Args:
         entry: The entry to get the tags from.
+        reader: Custom Reader instance.

     Returns:
         Returns the custom_message with the tags replaced.
     """
     feed: Feed = entry.feed
-    custom_reader: Reader = get_reader()
-    custom_message: str = get_custom_message(feed=feed, custom_reader=custom_reader)
+    custom_message: str = get_custom_message(feed=feed, reader=reader)

     content = ""
     if entry.content:
@@ -68,16 +139,8 @@ def replace_tags_in_text_message(entry: Entry) -> str:
     first_image: str = get_first_image(summary, content)

-    summary = markdownify(html=summary, strip=["img", "table", "td", "tr", "tbody", "thead"], escape_misc=False)
-    content = markdownify(html=content, strip=["img", "table", "td", "tr", "tbody", "thead"], escape_misc=False)
-
-    if "[https://" in content or "[https://www." in content:
-        content = content.replace("[https://", "[")
-        content = content.replace("[https://www.", "[")
-    if "[https://" in summary or "[https://www." in summary:
-        summary = summary.replace("[https://", "[")
-        summary = summary.replace("[https://www.", "[")
+    summary = format_entry_html_for_discord(summary)
+    content = format_entry_html_for_discord(content)

     feed_added: str = feed.added.strftime("%Y-%m-%d %H:%M:%S") if feed.added else "Never"
     feed_last_exception: str = feed.last_exception.value_str if feed.last_exception else ""
@@ -152,13 +215,6 @@ def get_first_image(summary: str | None, content: str | None) -> str:
             logger.warning("Invalid URL: %s", src)
             continue

-        # Genshins first image is a divider, so we ignore it.
-        # https://hyl-static-res-prod.hoyolab.com/divider_config/PC/line_3.png
-        skip_images: list[str] = [
-            "https://img-os-static.hoyolab.com/divider_config/",
-            "https://hyl-static-res-prod.hoyolab.com/divider_config/",
-        ]
-        if not str(image.attrs["src"]).startswith(tuple(skip_images)):
-            return str(image.attrs["src"])
+        return str(image.attrs["src"])

     if summary and (images := BeautifulSoup(summary, features="lxml").find_all("img")):
         for image in images:
@@ -170,24 +226,22 @@ def get_first_image(summary: str | None, content: str | None) -> str:
             logger.warning("Invalid URL: %s", image.attrs["src"])
             continue

-        # Genshins first image is a divider, so we ignore it.
-        if not str(image.attrs["src"]).startswith("https://img-os-static.hoyolab.com/divider_config"):
-            return str(image.attrs["src"])
+        return str(image.attrs["src"])

     return ""

-def replace_tags_in_embed(feed: Feed, entry: Entry) -> CustomEmbed:
+def replace_tags_in_embed(feed: Feed, entry: Entry, reader: Reader) -> CustomEmbed:
     """Replace tags in embed.

     Args:
         feed: The feed to get the tags from.
         entry: The entry to get the tags from.
+        reader: Custom Reader instance.

     Returns:
         Returns the embed with the tags replaced.
     """
-    custom_reader: Reader = get_reader()
-    embed: CustomEmbed = get_embed(feed=feed, custom_reader=custom_reader)
+    embed: CustomEmbed = get_embed(feed=feed, reader=reader)

     content = ""
     if entry.content:
@@ -198,16 +252,8 @@ def replace_tags_in_embed(feed: Feed, entry: Entry) -> CustomEmbed:
     first_image: str = get_first_image(summary, content)

-    summary = markdownify(html=summary, strip=["img", "table", "td", "tr", "tbody", "thead"], escape_misc=False)
-    content = markdownify(html=content, strip=["img", "table", "td", "tr", "tbody", "thead"], escape_misc=False)
-
-    if "[https://" in content or "[https://www." in content:
-        content = content.replace("[https://", "[")
-        content = content.replace("[https://www.", "[")
-    if "[https://" in summary or "[https://www." in summary:
-        summary = summary.replace("[https://", "[")
-        summary = summary.replace("[https://www.", "[")
+    summary = format_entry_html_for_discord(summary)
+    content = format_entry_html_for_discord(content)

     feed_added: str = feed.added.strftime("%Y-%m-%d %H:%M:%S") if feed.added else "Never"
     feed_last_updated: str = feed.last_updated.strftime("%Y-%m-%d %H:%M:%S") if feed.last_updated else "Never"
@@ -286,31 +332,29 @@ def _replace_embed_tags(embed: CustomEmbed, template: str, replace_with: str) ->
     embed.footer_icon_url = try_to_replace(embed.footer_icon_url, template, replace_with)

-def get_custom_message(custom_reader: Reader, feed: Feed) -> str:
+def get_custom_message(reader: Reader, feed: Feed) -> str:
     """Get custom_message tag from feed.

     Args:
-        custom_reader: What Reader to use.
+        reader: What Reader to use.
         feed: The feed to get the tag from.

     Returns:
         Returns the contents from the custom_message tag.
     """
     try:
-        custom_message: str = str(custom_reader.get_tag(feed, "custom_message"))
-    except TagNotFoundError:
-        custom_message = ""
+        custom_message: str = str(reader.get_tag(feed, "custom_message", ""))
     except ValueError:
         custom_message = ""

     return custom_message

-def save_embed(custom_reader: Reader, feed: Feed, embed: CustomEmbed) -> None:
+def save_embed(reader: Reader, feed: Feed, embed: CustomEmbed) -> None:
     """Set embed tag in feed.

     Args:
-        custom_reader: What Reader to use.
+        reader: What Reader to use.
         feed: The feed to set the tag in.
         embed: The embed to set.
     """
@@ -326,20 +370,20 @@ def save_embed(reader: Reader, feed: Feed, embed: CustomEmbed) -> None:
         "footer_text": embed.footer_text,
         "footer_icon_url": embed.footer_icon_url,
     }
-    custom_reader.set_tag(feed, "embed", json.dumps(embed_dict))  # pyright: ignore[reportArgumentType]
+    reader.set_tag(feed, "embed", json.dumps(embed_dict))  # pyright: ignore[reportArgumentType]

-def get_embed(custom_reader: Reader, feed: Feed) -> CustomEmbed:
+def get_embed(reader: Reader, feed: Feed) -> CustomEmbed:
     """Get embed tag from feed.

     Args:
-        custom_reader: What Reader to use.
+        reader: What Reader to use.
         feed: The feed to get the tag from.

     Returns:
         Returns the contents from the embed tag.
     """
-    embed = custom_reader.get_tag(feed, "embed", "")
+    embed = reader.get_tag(feed, "embed", "")
     if embed:
         if not isinstance(embed, str):
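The placeholder technique behind `_preserve_discord_timestamp_tags` / `_restore_discord_timestamp_tags` can be exercised on its own. This sketch copies the regex and the two helpers from the diff (renamed `preserve` / `restore` for brevity) and skips the `markdownify` call that would normally run between them, so it needs no third-party packages:

```python
import re

# Same pattern as in the diff: matches Discord timestamp tags
# like <t:1700000000> or <t:1700000000:R>.
DISCORD_TIMESTAMP_TAG_RE = re.compile(r"<t:\d+(?::[tTdDfFrRsS])?>")


def preserve(text: str) -> tuple[str, dict[str, str]]:
    """Swap timestamp tags for inert placeholders before HTML conversion."""
    replacements: dict[str, str] = {}

    def replace_match(match: re.Match[str]) -> str:
        placeholder = f"DISCORDTIMESTAMPPLACEHOLDER{len(replacements)}"
        replacements[placeholder] = match.group(0)
        return placeholder

    return DISCORD_TIMESTAMP_TAG_RE.sub(replace_match, text), replacements


def restore(text: str, replacements: dict[str, str]) -> str:
    """Put the original timestamp tags back after conversion."""
    for placeholder, original in replacements.items():
        text = text.replace(placeholder, original)
    return text


protected, mapping = preserve("<p>Event starts <t:1700000000:R>!</p>")
# markdownify(protected) would run here; the placeholder survives it
# because it contains no angle brackets for the HTML parser to eat.
result = restore(protected, mapping)
```

The point of the round-trip: an HTML-to-markdown converter would otherwise treat `<t:1700000000:R>` as an unknown HTML tag and strip it, so the tag is hidden as plain text during conversion and reinstated afterwards.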


@ -1,25 +1,45 @@
from __future__ import annotations from __future__ import annotations
import datetime import datetime
import json
import logging import logging
import os
import pprint import pprint
import re
from typing import TYPE_CHECKING from typing import TYPE_CHECKING
from typing import Any
from urllib.parse import ParseResult
from urllib.parse import urlparse
from discord_webhook import DiscordEmbed, DiscordWebhook import tldextract
from discord_webhook import DiscordEmbed
from discord_webhook import DiscordWebhook
from fastapi import HTTPException from fastapi import HTTPException
from reader import Entry, EntryNotFoundError, Feed, FeedExistsError, Reader, ReaderError, StorageError, TagNotFoundError from markdownify import markdownify
from reader import Entry
from reader import EntryNotFoundError
from reader import Feed
from reader import FeedExistsError
from reader import FeedNotFoundError
from reader import Reader
from reader import ReaderError
from reader import StorageError
from discord_rss_bot.custom_message import ( from discord_rss_bot.custom_message import CustomEmbed
CustomEmbed, from discord_rss_bot.custom_message import get_custom_message
get_custom_message, from discord_rss_bot.custom_message import replace_tags_in_embed
replace_tags_in_embed, from discord_rss_bot.custom_message import replace_tags_in_text_message
replace_tags_in_text_message,
)
from discord_rss_bot.filter.blacklist import entry_should_be_skipped from discord_rss_bot.filter.blacklist import entry_should_be_skipped
from discord_rss_bot.filter.whitelist import has_white_tags, should_be_sent from discord_rss_bot.filter.whitelist import has_white_tags
from discord_rss_bot.filter.whitelist import should_be_sent
from discord_rss_bot.hoyolab_api import create_hoyolab_webhook
from discord_rss_bot.hoyolab_api import extract_post_id_from_hoyolab_url
from discord_rss_bot.hoyolab_api import fetch_hoyolab_post
from discord_rss_bot.hoyolab_api import is_c3kay_feed
from discord_rss_bot.is_url_valid import is_url_valid from discord_rss_bot.is_url_valid import is_url_valid
from discord_rss_bot.missing_tags import add_missing_tags from discord_rss_bot.settings import default_custom_embed
from discord_rss_bot.settings import default_custom_message, get_reader from discord_rss_bot.settings import default_custom_message
from discord_rss_bot.settings import get_reader
if TYPE_CHECKING: if TYPE_CHECKING:
from collections.abc import Iterable from collections.abc import Iterable
@ -29,53 +49,159 @@ if TYPE_CHECKING:
logger: logging.Logger = logging.getLogger(__name__) logger: logging.Logger = logging.getLogger(__name__)
def send_entry_to_discord(entry: Entry, custom_reader: Reader | None = None) -> str | None: def extract_domain(url: str) -> str: # noqa: PLR0911
"""Extract the domain name from a URL.
Args:
url: The URL to extract the domain from.
Returns:
str: The domain name, formatted for display.
"""
# Check for empty URL first
if not url:
return "Other"
try:
# Special handling for YouTube feeds
if "youtube.com/feeds/videos.xml" in url:
return "YouTube"
# Special handling for Reddit feeds
if "reddit.com" in url and ".rss" in url:
return "Reddit"
# Parse the URL and extract the domain
parsed_url: ParseResult = urlparse(url)
domain: str = parsed_url.netloc
# If we couldn't extract a domain, return "Other"
if not domain:
return "Other"
# Remove www. prefix if present
domain = re.sub(r"^www\.", "", domain)
# Special handling for common domains
domain_mapping: dict[str, str] = {"github.com": "GitHub"}
if domain in domain_mapping:
return domain_mapping[domain]
# Use tldextract to get the domain (SLD)
ext = tldextract.extract(url)
if ext.domain:
return ext.domain.capitalize()
return domain.capitalize()
except (ValueError, AttributeError, TypeError) as e:
logger.warning("Error extracting domain from %s: %s", url, e)
return "Other"
def send_entry_to_discord(entry: Entry, reader: Reader) -> str | None: # noqa: C901
"""Send a single entry to Discord. """Send a single entry to Discord.
Args: Args:
entry: The entry to send to Discord. entry: The entry to send to Discord.
custom_reader: The reader to use. If None, the default reader will be used. reader: The reader to use.
Returns: Returns:
str | None: The error message if there was an error, otherwise None. str | None: The error message if there was an error, otherwise None.
""" """
# Get the default reader if we didn't get a custom one.
reader: Reader = get_reader() if custom_reader is None else custom_reader
# Get the webhook URL for the entry. # Get the webhook URL for the entry.
webhook_url: str = str(reader.get_tag(entry.feed_url, "webhook", "")) webhook_url: str = str(reader.get_tag(entry.feed_url, "webhook", ""))
if not webhook_url: if not webhook_url:
return "No webhook URL found." return "No webhook URL found."
# If https://discord.com/quests/<quest_id> is in the URL, send a separate message with the URL.
send_discord_quest_notification(entry, webhook_url, reader=reader)
# Check if this is a c3kay feed
if is_c3kay_feed(entry.feed.url):
entry_link: str | None = entry.link
if entry_link:
post_id: str | None = extract_post_id_from_hoyolab_url(entry_link)
if post_id:
post_data: dict[str, Any] | None = fetch_hoyolab_post(post_id)
if post_data:
webhook = create_hoyolab_webhook(webhook_url, entry, post_data)
execute_webhook(webhook, entry, reader=reader)
return None
logger.warning(
"Failed to create Hoyolab webhook for feed %s, falling back to regular processing",
entry.feed.url,
)
else:
logger.warning("No entry link found for feed %s, falling back to regular processing", entry.feed.url)
webhook_message: str = ""
# Try to get the custom message for the feed. If the user has none, we will use the default message.
# This has to be a string for some reason so don't change it to "not custom_message.get_custom_message()"
if get_custom_message(reader, entry.feed) != "":  # noqa: PLC1901
webhook_message: str = replace_tags_in_text_message(entry=entry, reader=reader)
if not webhook_message:
webhook_message = "No message found."
# Create the webhook.
try:
should_send_embed = bool(reader.get_tag(entry.feed, "should_send_embed", True))
except StorageError:
logger.exception("Error getting should_send_embed tag for feed: %s", entry.feed.url)
should_send_embed = True
# YouTube feeds should never use embeds
if is_youtube_feed(entry.feed.url):
should_send_embed = False
if should_send_embed:
webhook = create_embed_webhook(webhook_url, entry, reader=reader)
else:
webhook: DiscordWebhook = DiscordWebhook(url=webhook_url, content=webhook_message, rate_limit_retry=True)
execute_webhook(webhook, entry, reader=reader)
return None
def send_discord_quest_notification(entry: Entry, webhook_url: str, reader: Reader) -> None:
"""Send a separate message to Discord if the entry is a quest notification."""
quest_regex: re.Pattern[str] = re.compile(r"https://discord\.com/quests/\d+")
def send_notification(quest_url: str) -> None:
"""Helper function to send quest notification to Discord."""
logger.info("Sending quest notification to Discord: %s", quest_url)
webhook = DiscordWebhook(
url=webhook_url,
content=quest_url,
rate_limit_retry=True,
)
execute_webhook(webhook, entry, reader=reader)
# Iterate through the content of the entry
for content in entry.content:
if content.type == "text" and content.value:
match = quest_regex.search(content.value)
if match:
send_notification(match.group(0))
return
elif content.type == "text/html" and content.value:
# Convert HTML to text and check for quest links
text_value = markdownify(
html=content.value,
strip=["img", "table", "td", "tr", "tbody", "thead"],
escape_misc=False,
heading_style="ATX",
)
match: re.Match[str] | None = quest_regex.search(text_value)
if match:
send_notification(match.group(0))
return
logger.info("No quest notification found in entry: %s", entry.id)
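The quest detection above hinges on a single pattern; a minimal standalone sketch of the matching step (the sample text and quest ID are made up):

```python
import re

# Same pattern as in send_discord_quest_notification.
quest_regex: re.Pattern[str] = re.compile(r"https://discord\.com/quests/\d+")

text = "New quest is live: https://discord.com/quests/123456789 - check it out!"
match = quest_regex.search(text)
print(match.group(0) if match else None)  # https://discord.com/quests/123456789
```

Because `\d+` only consumes digits, any trailing punctuation or query noise after the quest ID is left out of the extracted URL.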
def set_description(custom_embed: CustomEmbed, discord_embed: DiscordEmbed) -> None:
"""Set the description of the embed.
@@ -108,12 +234,17 @@ def set_title(custom_embed: CustomEmbed, discord_embed: DiscordEmbed) -> None:
discord_embed.set_title(embed_title) if embed_title else None
def create_embed_webhook(  # noqa: C901
webhook_url: str,
entry: Entry,
reader: Reader,
) -> DiscordWebhook:
"""Create a webhook with an embed.
Args:
webhook_url (str): The webhook URL.
entry (Entry): The entry to send to Discord.
reader (Reader): The Reader instance to use for getting embed data.
Returns:
DiscordWebhook: The webhook with the embed.
@@ -122,7 +253,7 @@ def create_embed_webhook(webhook_url: str, entry: Entry) -> DiscordWebhook:
feed: Feed = entry.feed
# Get the embed data from the database.
custom_embed: CustomEmbed = replace_tags_in_embed(feed=feed, entry=entry, reader=reader)
discord_embed: DiscordEmbed = DiscordEmbed()
@@ -184,13 +315,14 @@ def get_webhook_url(reader: Reader, entry: Entry) -> str:
str: The webhook URL.
"""
try:
webhook_url: str = str(reader.get_tag(entry.feed_url, "webhook", ""))
except StorageError:
logger.exception("Storage error getting webhook URL for feed: %s", entry.feed.url)
return ""
if not webhook_url:
logger.error("No webhook URL found for feed: %s", entry.feed.url)
return ""
return webhook_url
@@ -209,44 +341,53 @@ def set_entry_as_read(reader: Reader, entry: Entry) -> None:
logger.exception("Error setting entry to read: %s", entry.id)
def send_to_discord(reader: Reader | None = None, feed: Feed | None = None, *, do_once: bool = False) -> None:  # noqa: C901, PLR0912
"""Send entries to Discord.
If response was not ok, we will log the error and mark the entry as unread, so it will be sent again next time.
Args:
reader: If we should use a custom reader instead of the default one.
feed: The feed to send to Discord.
do_once: If we should only send one entry. This is used in the test.
"""
logger.info("Starting to send entries to Discord.")
# Get the default reader if we didn't get a custom one.
effective_reader: Reader = get_reader() if reader is None else reader
# Check for new entries for every feed.
effective_reader.update_feeds(
scheduled=True,
workers=os.cpu_count() or 1,
)
# Loop through the unread entries.
entries: Iterable[Entry] = effective_reader.get_entries(feed=feed, read=False)
for entry in entries:
set_entry_as_read(effective_reader, entry)
if entry.added < datetime.datetime.now(tz=entry.added.tzinfo) - datetime.timedelta(days=1):
logger.info("Entry is older than 24 hours: %s from %s", entry.id, entry.feed.url)
continue
webhook_url: str = get_webhook_url(effective_reader, entry)
if not webhook_url:
logger.info("No webhook URL found for feed: %s", entry.feed.url)
continue
should_send_embed: bool = should_send_embed_check(effective_reader, entry)
if should_send_embed:
webhook = create_embed_webhook(webhook_url, entry, reader=effective_reader)
else:
# If the user has set the custom message to an empty string, we will use the default message, otherwise we
# will use the custom message.
if get_custom_message(effective_reader, entry.feed) != "":  # noqa: PLC1901
webhook_message = replace_tags_in_text_message(entry, reader=effective_reader)
else:
webhook_message: str = str(default_custom_message)
@@ -256,19 +397,35 @@ def send_to_discord(custom_reader: Reader | None = None, feed: Feed | None = Non
webhook: DiscordWebhook = DiscordWebhook(url=webhook_url, content=webhook_message, rate_limit_retry=True)
# Check if the entry is blacklisted, and if it is, we will skip it.
if entry_should_be_skipped(effective_reader, entry):
logger.info("Entry was blacklisted: %s", entry.id)
continue
# Check if the feed has a whitelist, and if it does, check if the entry is whitelisted.
if has_white_tags(effective_reader, entry.feed) and not should_be_sent(effective_reader, entry):
logger.info("Entry was not whitelisted: %s", entry.id)
continue
# Use a custom webhook for Hoyolab feeds.
if is_c3kay_feed(entry.feed.url):
entry_link: str | None = entry.link
if entry_link:
post_id: str | None = extract_post_id_from_hoyolab_url(entry_link)
if post_id:
post_data: dict[str, Any] | None = fetch_hoyolab_post(post_id)
if post_data:
webhook = create_hoyolab_webhook(webhook_url, entry, post_data)
execute_webhook(webhook, entry, reader=effective_reader)
return
logger.warning(
"Failed to create Hoyolab webhook for feed %s, falling back to regular processing",
entry.feed.url,
)
else:
logger.warning("No entry link found for feed %s, falling back to regular processing", entry.feed.url)
# Send the entry to Discord as it is not blacklisted or feed has a whitelist.
execute_webhook(webhook, entry, reader=effective_reader)
# If we only want to send one entry, we will break the loop. This is used when testing this function.
if do_once:
@@ -276,14 +433,27 @@ def send_to_discord(custom_reader: Reader | None = None, feed: Feed | None = Non
break
def execute_webhook(webhook: DiscordWebhook, entry: Entry, reader: Reader) -> None:
"""Execute the webhook.
Args:
webhook (DiscordWebhook): The webhook to execute.
entry (Entry): The entry to send to Discord.
reader (Reader): The Reader instance to use for checking feed status.
"""
# If the feed has been paused or deleted, we will not send the entry to Discord.
entry_feed: Feed = entry.feed
if entry_feed.updates_enabled is False:
logger.warning("Feed is paused, not sending entry to Discord: %s", entry_feed.url)
return
try:
reader.get_feed(entry_feed.url)
except FeedNotFoundError:
logger.warning("Feed not found in reader, not sending entry to Discord: %s", entry_feed.url)
return
response: Response = webhook.execute()
if response.status_code not in {200, 204}:
msg: str = f"Error sending entry to Discord: {response.text}\n{pprint.pformat(webhook.json)}"
@@ -295,6 +465,18 @@ def execute_webhook(webhook: DiscordWebhook, entry: Entry) -> None:
logger.info("Sent entry to Discord: %s", entry.id)
def is_youtube_feed(feed_url: str) -> bool:
"""Check if the feed is a YouTube feed.
Args:
feed_url: The feed URL to check.
Returns:
bool: True if the feed is a YouTube feed, False otherwise.
"""
return "youtube.com/feeds/videos.xml" in feed_url
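The check is a plain substring test on the canonical feed path; reproduced standalone (the example URLs are illustrative):

```python
def is_youtube_feed(feed_url: str) -> bool:
    # YouTube channel/playlist feeds all live under this fixed path.
    return "youtube.com/feeds/videos.xml" in feed_url


print(is_youtube_feed("https://www.youtube.com/feeds/videos.xml?channel_id=UCabc123"))  # True
print(is_youtube_feed("https://example.com/feed.rss"))  # False
```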
def should_send_embed_check(reader: Reader, entry: Entry) -> bool:
"""Check if we should send an embed to Discord.
@@ -305,11 +487,12 @@ def should_send_embed_check(reader: Reader, entry: Entry) -> bool:
Returns:
bool: True if we should send an embed, False otherwise.
"""
# YouTube feeds should never use embeds - only links
if is_youtube_feed(entry.feed.url):
return False
try:
should_send_embed = bool(reader.get_tag(entry.feed, "should_send_embed", True))
except ReaderError:
logger.exception("Error getting should_send_embed tag for feed: %s", entry.feed.url)
should_send_embed = True
@@ -333,7 +516,7 @@ def truncate_webhook_message(webhook_message: str) -> str:
return webhook_message
def create_feed(reader: Reader, feed_url: str, webhook_dropdown: str) -> None:  # noqa: C901
"""Add a new feed, update it and mark every entry as read.
Args:
@@ -364,9 +547,7 @@ def create_feed(reader: Reader, feed_url: str, webhook_dropdown: str) -> None:
reader.add_feed(clean_feed_url)
except FeedExistsError:
# Add the webhook to an already added feed if it doesn't have a webhook instead of trying to create a new.
if not reader.get_tag(clean_feed_url, "webhook", ""):
reader.set_tag(clean_feed_url, "webhook", webhook_url)  # pyright: ignore[reportArgumentType]
except ReaderError as e:
raise HTTPException(status_code=404, detail=f"Error adding feed: {e}") from e
@@ -391,7 +572,8 @@ def create_feed(reader: Reader, feed_url: str, webhook_dropdown: str) -> None:
# This is the default message that will be sent to Discord.
reader.set_tag(clean_feed_url, "custom_message", default_custom_message)  # pyright: ignore[reportArgumentType]
# Set the default embed tag when creating the feed
reader.set_tag(clean_feed_url, "embed", json.dumps(default_custom_embed))
# Update the full-text search index so our new feed is searchable.
reader.update_search()
@@ -2,59 +2,119 @@ from __future__ import annotations
from typing import TYPE_CHECKING
from discord_rss_bot.filter.utils import is_regex_match
from discord_rss_bot.filter.utils import is_word_in_text
if TYPE_CHECKING:
from reader import Entry
from reader import Feed
from reader import Reader
def feed_has_blacklist_tags(reader: Reader, feed: Feed) -> bool:
"""Return True if the feed has blacklist tags.
The following tags are checked:
- blacklist_author
- blacklist_content
- blacklist_summary
- blacklist_title
- regex_blacklist_author
- regex_blacklist_content
- regex_blacklist_summary
- regex_blacklist_title
Args:
reader: The reader.
feed: The feed to check.
Returns:
bool: If the feed has any of the tags.
"""
blacklist_author: str = str(reader.get_tag(feed, "blacklist_author", "")).strip()
blacklist_content: str = str(reader.get_tag(feed, "blacklist_content", "")).strip()
blacklist_summary: str = str(reader.get_tag(feed, "blacklist_summary", "")).strip()
blacklist_title: str = str(reader.get_tag(feed, "blacklist_title", "")).strip()
regex_blacklist_author: str = str(reader.get_tag(feed, "regex_blacklist_author", "")).strip()
regex_blacklist_content: str = str(reader.get_tag(feed, "regex_blacklist_content", "")).strip()
regex_blacklist_summary: str = str(reader.get_tag(feed, "regex_blacklist_summary", "")).strip()
regex_blacklist_title: str = str(reader.get_tag(feed, "regex_blacklist_title", "")).strip()
return bool(
blacklist_title
or blacklist_author
or blacklist_content
or blacklist_summary
or regex_blacklist_author
or regex_blacklist_content
or regex_blacklist_summary
or regex_blacklist_title,
)
def entry_should_be_skipped(reader: Reader, entry: Entry) -> bool:  # noqa: PLR0911
"""Return True if the entry is in the blacklist.
Args:
reader: The reader.
entry: The entry to check.
Returns:
bool: If the entry is in the blacklist.
"""
feed = entry.feed
blacklist_title: str = str(reader.get_tag(feed, "blacklist_title", "")).strip()
blacklist_summary: str = str(reader.get_tag(feed, "blacklist_summary", "")).strip()
blacklist_content: str = str(reader.get_tag(feed, "blacklist_content", "")).strip()
blacklist_author: str = str(reader.get_tag(feed, "blacklist_author", "")).strip()
regex_blacklist_title: str = str(reader.get_tag(feed, "regex_blacklist_title", "")).strip()
regex_blacklist_summary: str = str(reader.get_tag(feed, "regex_blacklist_summary", "")).strip()
regex_blacklist_content: str = str(reader.get_tag(feed, "regex_blacklist_content", "")).strip()
regex_blacklist_author: str = str(reader.get_tag(feed, "regex_blacklist_author", "")).strip()
# TODO(TheLovinator): Also add support for entry_text and more.
# Check regular blacklist
if entry.title and blacklist_title and is_word_in_text(blacklist_title, entry.title):
return True
if entry.summary and blacklist_summary and is_word_in_text(blacklist_summary, entry.summary):
return True
if entry.author and blacklist_author and is_word_in_text(blacklist_author, entry.author):
return True
if (
entry.content
and entry.content[0].value
and blacklist_content
and is_word_in_text(blacklist_content, entry.content[0].value)
):
return True
# Check regex blacklist
if entry.title and regex_blacklist_title and is_regex_match(regex_blacklist_title, entry.title):
return True
if entry.summary and regex_blacklist_summary and is_regex_match(regex_blacklist_summary, entry.summary):
return True
if (
entry.content
and entry.content[0].value
and regex_blacklist_content
and is_regex_match(regex_blacklist_content, entry.content[0].value)
):
return True
if entry.author and regex_blacklist_author and is_regex_match(regex_blacklist_author, entry.author):
return True
return bool(
entry.content
and entry.content[0].value
and regex_blacklist_content
and is_regex_match(regex_blacklist_content, entry.content[0].value),
)
@@ -1,7 +1,10 @@
from __future__ import annotations
import logging
import re
logger: logging.Logger = logging.getLogger(__name__)
def is_word_in_text(word_string: str, text: str) -> bool:
"""Check if any of the words are in the text.
@@ -20,3 +23,50 @@ def is_word_in_text(word_string: str, text: str) -> bool:
# Check if any pattern matches the text.
return any(pattern.search(text) for pattern in patterns)
def is_regex_match(regex_string: str, text: str) -> bool:
"""Check if any of the regex patterns match the text.
Args:
regex_string: A string containing regex patterns, separated by newlines or commas.
text: The text to search in.
Returns:
bool: True if any regex pattern matches the text, otherwise False.
"""
if not regex_string or not text:
return False
# Split by newlines first, then by commas (for backward compatibility)
regex_list: list[str] = []
# First split by newlines
lines: list[str] = regex_string.split("\n")
for line in lines:
stripped_line: str = line.strip()
if stripped_line:
# For backward compatibility, also split by commas if there are any
if "," in stripped_line:
regex_list.extend([part.strip() for part in stripped_line.split(",") if part.strip()])
else:
regex_list.append(stripped_line)
# Attempt to compile and apply each regex pattern
for pattern_str in regex_list:
if not pattern_str:
logger.warning("Empty regex pattern found in the list.")
continue
try:
pattern: re.Pattern[str] = re.compile(pattern_str, re.IGNORECASE)
if pattern.search(text):
logger.info("Regex pattern matched: %s", pattern_str)
return True
except re.error:
logger.warning("Invalid regex pattern: %s", pattern_str)
continue
logger.info("No regex patterns matched.")
return False
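The pattern-splitting half of `is_regex_match` (newlines first, commas within each line for backward compatibility) can be isolated and exercised on its own; `split_patterns` is a hypothetical helper name introduced here for illustration:

```python
def split_patterns(regex_string: str) -> list[str]:
    """Split a pattern string: newlines are the primary separator, commas per line are honoured for backward compatibility."""
    patterns: list[str] = []
    for line in regex_string.split("\n"):
        stripped = line.strip()
        if not stripped:
            continue  # Skip blank lines entirely.
        if "," in stripped:
            patterns.extend(part.strip() for part in stripped.split(",") if part.strip())
        else:
            patterns.append(stripped)
    return patterns


print(split_patterns("foo\nbar, baz"))  # ['foo', 'bar', 'baz']
```

Note that a comma inside a regex character class (e.g. `[a,b]`) would also be split by this scheme, which is why newline separation is the preferred input format.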
@@ -2,59 +2,105 @@ from __future__ import annotations
from typing import TYPE_CHECKING
from discord_rss_bot.filter.utils import is_regex_match
from discord_rss_bot.filter.utils import is_word_in_text
if TYPE_CHECKING:
from reader import Entry
from reader import Feed
from reader import Reader
def has_white_tags(reader: Reader, feed: Feed) -> bool:
"""Return True if the feed has whitelist tags.
The following tags are checked:
- regex_whitelist_author
- regex_whitelist_content
- regex_whitelist_summary
- regex_whitelist_title
- whitelist_author
- whitelist_content
- whitelist_summary
- whitelist_title
Args:
reader: The reader.
feed: The feed to check.
Returns:
bool: If the feed has any of the tags.
"""
whitelist_title: str = str(reader.get_tag(feed, "whitelist_title", "")).strip()
whitelist_summary: str = str(reader.get_tag(feed, "whitelist_summary", "")).strip()
whitelist_content: str = str(reader.get_tag(feed, "whitelist_content", "")).strip()
whitelist_author: str = str(reader.get_tag(feed, "whitelist_author", "")).strip()
regex_whitelist_title: str = str(reader.get_tag(feed, "regex_whitelist_title", "")).strip()
regex_whitelist_summary: str = str(reader.get_tag(feed, "regex_whitelist_summary", "")).strip()
regex_whitelist_content: str = str(reader.get_tag(feed, "regex_whitelist_content", "")).strip()
regex_whitelist_author: str = str(reader.get_tag(feed, "regex_whitelist_author", "")).strip()
return bool(
whitelist_title
or whitelist_author
or whitelist_content
or whitelist_summary
or regex_whitelist_author
or regex_whitelist_content
or regex_whitelist_summary
or regex_whitelist_title,
)
def should_be_sent(reader: Reader, entry: Entry) -> bool:  # noqa: PLR0911
"""Return True if the entry is in the whitelist.
Args:
reader: The reader.
entry: The entry to check.
Returns:
bool: If the entry is in the whitelist.
"""
feed: Feed = entry.feed
# Regular whitelist tags
whitelist_title: str = str(reader.get_tag(feed, "whitelist_title", "")).strip()
whitelist_summary: str = str(reader.get_tag(feed, "whitelist_summary", "")).strip()
whitelist_content: str = str(reader.get_tag(feed, "whitelist_content", "")).strip()
whitelist_author: str = str(reader.get_tag(feed, "whitelist_author", "")).strip()
# Regex whitelist tags
regex_whitelist_title: str = str(reader.get_tag(feed, "regex_whitelist_title", "")).strip()
regex_whitelist_summary: str = str(reader.get_tag(feed, "regex_whitelist_summary", "")).strip()
regex_whitelist_content: str = str(reader.get_tag(feed, "regex_whitelist_content", "")).strip()
regex_whitelist_author: str = str(reader.get_tag(feed, "regex_whitelist_author", "")).strip()
# Check regular whitelist
if entry.title and whitelist_title and is_word_in_text(whitelist_title, entry.title):
return True
if entry.summary and whitelist_summary and is_word_in_text(whitelist_summary, entry.summary):
return True
if entry.author and whitelist_author and is_word_in_text(whitelist_author, entry.author):
return True
if (
entry.content
and entry.content[0].value
and whitelist_content
and is_word_in_text(whitelist_content, entry.content[0].value)
):
return True
# Check regex whitelist
if entry.title and regex_whitelist_title and is_regex_match(regex_whitelist_title, entry.title):
return True
if entry.summary and regex_whitelist_summary and is_regex_match(regex_whitelist_summary, entry.summary):
return True
if entry.author and regex_whitelist_author and is_regex_match(regex_whitelist_author, entry.author):
return True
return bool(
entry.content
and entry.content[0].value
and regex_whitelist_content
and is_regex_match(regex_whitelist_content, entry.content[0].value),
)
@@ -0,0 +1,243 @@
"""Git backup module for committing bot state changes to a private repository.
Configure the backup by setting these environment variables:
- ``GIT_BACKUP_PATH``: Local filesystem path for the backup git repository.
When set, the bot will initialise a git repo there (if one doesn't exist)
and commit an export of its state after every relevant change.
- ``GIT_BACKUP_REMOTE``: Optional remote URL (e.g. ``git@github.com:you/private-repo.git``).
When set, every commit is followed by a ``git push`` to this remote.
The exported state is written as ``state.json`` inside the backup repo. It
contains the list of feeds together with their webhook URL, filter settings
(blacklist / whitelist, regex variants), custom messages and embed settings.
Global webhooks are also included.
Example docker-compose snippet::
environment:
- GIT_BACKUP_PATH=/data/backup
- GIT_BACKUP_REMOTE=git@github.com:you/private-config.git
"""
from __future__ import annotations
import json
import logging
import os
import shutil
import subprocess # noqa: S404
from pathlib import Path
from typing import TYPE_CHECKING
from typing import Any
if TYPE_CHECKING:
from reader import Reader
logger: logging.Logger = logging.getLogger(__name__)
GIT_EXECUTABLE: str = shutil.which("git") or "git"
type TAG_VALUE = (
dict[str, str | int | float | bool | dict[str, Any] | list[Any] | None]
| list[str | int | float | bool | dict[str, Any] | list[Any] | None]
| None
)
# Tags that are exported per-feed (empty values are omitted).
_FEED_TAGS: tuple[str, ...] = (
"webhook",
"custom_message",
"should_send_embed",
"embed",
"blacklist_title",
"blacklist_summary",
"blacklist_content",
"blacklist_author",
"regex_blacklist_title",
"regex_blacklist_summary",
"regex_blacklist_content",
"regex_blacklist_author",
"whitelist_title",
"whitelist_summary",
"whitelist_content",
"whitelist_author",
"regex_whitelist_title",
"regex_whitelist_summary",
"regex_whitelist_content",
"regex_whitelist_author",
".reader.update",
)
def get_backup_path() -> Path | None:
"""Return the configured backup path, or *None* if not configured.
Returns:
Path to the backup repository, or None if ``GIT_BACKUP_PATH`` is unset.
"""
raw: str = os.environ.get("GIT_BACKUP_PATH", "").strip()
return Path(raw) if raw else None
def get_backup_remote() -> str:
"""Return the configured remote URL, or an empty string if not set.
Returns:
The remote URL string from ``GIT_BACKUP_REMOTE``, or ``""`` if unset.
"""
return os.environ.get("GIT_BACKUP_REMOTE", "").strip()
def setup_backup_repo(backup_path: Path) -> bool:
"""Ensure the backup directory exists and contains a git repository.
If the directory does not yet contain a ``.git`` folder a new repository is
initialised. A basic git identity is configured locally so that commits
succeed even in environments where a global ``~/.gitconfig`` is absent.
Args:
backup_path: Local path for the backup repository.
Returns:
``True`` if the repository is ready, ``False`` on any error.
"""
try:
backup_path.mkdir(parents=True, exist_ok=True)
git_dir: Path = backup_path / ".git"
if not git_dir.exists():
subprocess.run([GIT_EXECUTABLE, "init", str(backup_path)], check=True, capture_output=True) # noqa: S603
logger.info("Initialised git backup repository at %s", backup_path)
# Ensure a local identity exists so that `git commit` always works.
for key, value in (("user.email", "discord-rss-bot@localhost"), ("user.name", "discord-rss-bot")):
result: subprocess.CompletedProcess[bytes] = subprocess.run( # noqa: S603
[GIT_EXECUTABLE, "-C", str(backup_path), "config", "--local", key],
check=False,
capture_output=True,
)
if result.returncode != 0:
subprocess.run( # noqa: S603
[GIT_EXECUTABLE, "-C", str(backup_path), "config", "--local", key, value],
check=True,
capture_output=True,
)
# Configure the remote if GIT_BACKUP_REMOTE is set.
remote_url: str = get_backup_remote()
if remote_url:
# Check if remote "origin" already exists.
check_remote: subprocess.CompletedProcess[bytes] = subprocess.run( # noqa: S603
[GIT_EXECUTABLE, "-C", str(backup_path), "remote", "get-url", "origin"],
check=False,
capture_output=True,
)
if check_remote.returncode != 0:
# Remote doesn't exist, add it.
subprocess.run( # noqa: S603
[GIT_EXECUTABLE, "-C", str(backup_path), "remote", "add", "origin", remote_url],
check=True,
capture_output=True,
)
logger.info("Added remote 'origin' with URL: %s", remote_url)
else:
# Remote exists, update it if the URL has changed.
current_url: str = check_remote.stdout.decode().strip()
if current_url != remote_url:
subprocess.run( # noqa: S603
[GIT_EXECUTABLE, "-C", str(backup_path), "remote", "set-url", "origin", remote_url],
check=True,
capture_output=True,
)
logger.info("Updated remote 'origin' URL from %s to %s", current_url, remote_url)
except Exception:
logger.exception("Failed to set up git backup repository at %s", backup_path)
return False
return True
def export_state(reader: Reader, backup_path: Path) -> None:
"""Serialise the current bot state to ``state.json`` inside *backup_path*.
Args:
reader: The :class:`reader.Reader` instance to read state from.
backup_path: Destination directory for the exported ``state.json``.
"""
feeds_state: list[dict] = []
for feed in reader.get_feeds():
feed_data: dict = {"url": feed.url}
for tag in _FEED_TAGS:
try:
value: TAG_VALUE = reader.get_tag(feed, tag, None)
if value is not None and value != "": # noqa: PLC1901
feed_data[tag] = value
except Exception:
logger.exception("Failed to read tag '%s' for feed '%s' during state export", tag, feed.url)
feeds_state.append(feed_data)
webhooks: list[str | int | float | bool | dict[str, Any] | list[Any] | None] = list(
reader.get_tag((), "webhooks", []),
)
# Export global update interval if set
global_update_interval: dict[str, Any] | None = None
global_update_config = reader.get_tag((), ".reader.update", None)
if isinstance(global_update_config, dict):
global_update_interval = global_update_config
state: dict = {"feeds": feeds_state, "webhooks": webhooks}
if global_update_interval is not None:
state["global_update_interval"] = global_update_interval
state_file: Path = backup_path / "state.json"
state_file.write_text(json.dumps(state, indent=2, default=str), encoding="utf-8")
def commit_state_change(reader: Reader, message: str) -> None:
"""Export current state and commit it to the backup repository.
This is a no-op when ``GIT_BACKUP_PATH`` is not configured. Errors are
logged but never raised so that a backup failure never interrupts normal
bot operation.
Args:
reader: The :class:`reader.Reader` instance to read state from.
message: Commit message describing the change (e.g. ``"Add feed example.com/rss.xml"``).
"""
backup_path: Path | None = get_backup_path()
if backup_path is None:
return
if not setup_backup_repo(backup_path):
return
try:
export_state(reader, backup_path)
subprocess.run([GIT_EXECUTABLE, "-C", str(backup_path), "add", "-A"], check=True, capture_output=True) # noqa: S603
# Only create a commit if there are staged changes.
diff_result: subprocess.CompletedProcess[bytes] = subprocess.run( # noqa: S603
[GIT_EXECUTABLE, "-C", str(backup_path), "diff", "--cached", "--exit-code"],
check=False,
capture_output=True,
)
if diff_result.returncode == 0:
logger.debug("No state changes to commit for: %s", message)
return
subprocess.run( # noqa: S603
[GIT_EXECUTABLE, "-C", str(backup_path), "commit", "-m", message],
check=True,
capture_output=True,
)
logger.info("Committed state change to backup repo: %s", message)
# Push to remote if configured.
if get_backup_remote():
subprocess.run( # noqa: S603
[GIT_EXECUTABLE, "-C", str(backup_path), "push", "origin", "HEAD"],
check=True,
capture_output=True,
)
logger.info("Pushed state change to remote 'origin': %s", message)
except Exception:
logger.exception("Failed to commit state change '%s' to backup repo", message)


@@ -0,0 +1,195 @@
from __future__ import annotations
import contextlib
import json
import logging
import re
from typing import TYPE_CHECKING
from typing import Any
import requests
from discord_webhook import DiscordEmbed
from discord_webhook import DiscordWebhook
if TYPE_CHECKING:
from reader import Entry
logger: logging.Logger = logging.getLogger(__name__)
def is_c3kay_feed(feed_url: str) -> bool:
"""Check if the feed is from c3kay.de.
Args:
feed_url: The feed URL to check.
Returns:
bool: True if the feed is from c3kay.de, False otherwise.
"""
return "feeds.c3kay.de" in feed_url
def extract_post_id_from_hoyolab_url(url: str) -> str | None:
"""Extract the post ID from a Hoyolab URL.
Args:
url: The Hoyolab URL to extract the post ID from.
For example: https://www.hoyolab.com/article/38588239
Returns:
str | None: The post ID if found, None otherwise.
"""
try:
match: re.Match[str] | None = re.search(r"/article/(\d+)", url)
if match:
return match.group(1)
except (ValueError, AttributeError, TypeError) as e:
logger.warning("Error extracting post ID from Hoyolab URL %s: %s", url, e)
return None
def fetch_hoyolab_post(post_id: str) -> dict[str, Any] | None:
"""Fetch post data from the Hoyolab API.
Args:
post_id: The post ID to fetch.
Returns:
dict[str, Any] | None: The post data if successful, None otherwise.
"""
if not post_id:
return None
http_ok = 200
try:
url: str = f"https://bbs-api-os.hoyolab.com/community/post/wapi/getPostFull?post_id={post_id}"
response: requests.Response = requests.get(url, timeout=10)
if response.status_code == http_ok:
data: dict[str, Any] = response.json()
if data.get("retcode") == 0 and "data" in data and "post" in data["data"]:
return data["data"]["post"]
logger.warning("Failed to fetch Hoyolab post %s: %s", post_id, response.text)
except (requests.RequestException, ValueError):
logger.exception("Error fetching Hoyolab post %s", post_id)
return None
def create_hoyolab_webhook(webhook_url: str, entry: Entry, post_data: dict[str, Any]) -> DiscordWebhook: # noqa: C901, PLR0912, PLR0914, PLR0915
"""Create a webhook with data from the Hoyolab API.
Args:
webhook_url: The webhook URL.
entry: The entry to send to Discord.
post_data: The post data from the Hoyolab API.
Returns:
DiscordWebhook: The webhook with the embed.
"""
entry_link: str = entry.link or entry.feed.url
webhook = DiscordWebhook(url=webhook_url, rate_limit_retry=True)
# Extract relevant data from the post
post: dict[str, Any] = post_data.get("post", {})
subject: str = post.get("subject", "")
content: str = post.get("content", "{}")
logger.debug("Post subject: %s", subject)
logger.debug("Post content: %s", content)
content_data: dict[str, str] = {}
with contextlib.suppress(json.JSONDecodeError, ValueError):
content_data = json.loads(content)
logger.debug("Content data: %s", content_data)
description: str = content_data.get("describe", "")
if not description:
description = post.get("desc", "")
# Create the embed
discord_embed = DiscordEmbed()
# Set title and description
discord_embed.set_title(subject)
discord_embed.set_url(entry_link)
# Get post.image_list
image_list: list[dict[str, Any]] = post_data.get("image_list", [])
if image_list:
image_url: str = str(image_list[0].get("url", ""))
image_height: int = int(image_list[0].get("height", 1080))
image_width: int = int(image_list[0].get("width", 1920))
logger.debug("Image URL: %s, Height: %s, Width: %s", image_url, image_height, image_width)
discord_embed.set_image(url=image_url, height=image_height, width=image_width)
video: dict[str, str | int | bool] = post_data.get("video", {})
if video and video.get("url"):
video_url: str = str(video.get("url", ""))
logger.debug("Video URL: %s", video_url)
with contextlib.suppress(requests.RequestException):
video_response: requests.Response = requests.get(video_url, stream=True, timeout=10)
if video_response.ok:
webhook.add_file(
file=video_response.content,
filename=f"{entry.id}.mp4",
)
game = post_data.get("game", {})
if game and game.get("color"):
game_color = str(game.get("color", ""))
discord_embed.set_color(game_color.removeprefix("#"))
user: dict[str, str | int | bool] = post_data.get("user", {})
author_name: str = str(user.get("nickname", ""))
avatar_url: str = str(user.get("avatar_url", ""))
if author_name:
webhook.avatar_url = avatar_url
webhook.username = author_name
classification = post_data.get("classification", {})
if classification and classification.get("name"):
footer = str(classification.get("name", ""))
discord_embed.set_footer(text=footer)
webhook.add_embed(discord_embed)
# Only show Youtube URL if available
structured_content: str = post.get("structured_content", "")
if structured_content: # noqa: PLR1702
try:
structured_content_data: list[dict[str, Any]] = json.loads(structured_content)
for item in structured_content_data:
if item.get("insert") and isinstance(item["insert"], dict):
video_url: str = str(item["insert"].get("video", ""))
if video_url:
video_id_match: re.Match[str] | None = re.search(r"embed/([a-zA-Z0-9_-]+)", video_url)
if video_id_match:
video_id: str = video_id_match.group(1)
logger.debug("Video ID: %s", video_id)
webhook.content = f"https://www.youtube.com/watch?v={video_id}"
webhook.remove_embeds()
except (json.JSONDecodeError, ValueError) as e:
logger.warning("Error parsing structured content: %s", e)
event_start_date: str = post.get("event_start_date", "")
if event_start_date and event_start_date != "0":
discord_embed.add_embed_field(name="Start", value=f"<t:{event_start_date}:R>")
event_end_date: str = post.get("event_end_date", "")
if event_end_date and event_end_date != "0":
discord_embed.add_embed_field(name="End", value=f"<t:{event_end_date}:R>")
created_at: str = post.get("created_at", "")
if created_at and created_at != "0":
discord_embed.set_timestamp(timestamp=created_at)
return webhook


@@ -1,6 +1,7 @@
 from __future__ import annotations
-from urllib.parse import ParseResult, urlparse
+from urllib.parse import ParseResult
+from urllib.parse import urlparse
 def is_url_valid(url: str) -> bool:


@@ -7,48 +7,65 @@ import typing
 import urllib.parse
 from contextlib import asynccontextmanager
 from dataclasses import dataclass
-from datetime import UTC, datetime
+from datetime import UTC
+from datetime import datetime
 from functools import lru_cache
-from typing import TYPE_CHECKING, Annotated, cast
+from typing import TYPE_CHECKING
+from typing import Annotated
+from typing import Any
+from typing import cast
 import httpx
 import sentry_sdk
 import uvicorn
 from apscheduler.schedulers.asyncio import AsyncIOScheduler
-from fastapi import FastAPI, Form, HTTPException, Request
+from fastapi import Depends
+from fastapi import FastAPI
+from fastapi import Form
+from fastapi import HTTPException
+from fastapi import Request
 from fastapi.responses import HTMLResponse
 from fastapi.staticfiles import StaticFiles
 from fastapi.templating import Jinja2Templates
 from httpx import Response
 from markdownify import markdownify
-from reader import Entry, EntryNotFoundError, Feed, FeedNotFoundError, Reader, TagNotFoundError
+from reader import Entry
+from reader import EntryNotFoundError
+from reader import Feed
+from reader import FeedExistsError
+from reader import FeedNotFoundError
+from reader import Reader
+from reader import ReaderError
+from reader import TagNotFoundError
 from starlette.responses import RedirectResponse
 from discord_rss_bot import settings
-from discord_rss_bot.custom_filters import (
-    entry_is_blacklisted,
-    entry_is_whitelisted,
-)
-from discord_rss_bot.custom_message import (
-    CustomEmbed,
-    get_custom_message,
-    get_embed,
-    get_first_image,
-    replace_tags_in_text_message,
-    save_embed,
-)
-from discord_rss_bot.feeds import create_feed, send_entry_to_discord, send_to_discord
-from discord_rss_bot.missing_tags import add_missing_tags
-from discord_rss_bot.search import create_html_for_search_results
+from discord_rss_bot.custom_filters import entry_is_blacklisted
+from discord_rss_bot.custom_filters import entry_is_whitelisted
+from discord_rss_bot.custom_message import CustomEmbed
+from discord_rss_bot.custom_message import get_custom_message
+from discord_rss_bot.custom_message import get_embed
+from discord_rss_bot.custom_message import get_first_image
+from discord_rss_bot.custom_message import replace_tags_in_text_message
+from discord_rss_bot.custom_message import save_embed
+from discord_rss_bot.feeds import create_feed
+from discord_rss_bot.feeds import extract_domain
+from discord_rss_bot.feeds import send_entry_to_discord
+from discord_rss_bot.feeds import send_to_discord
+from discord_rss_bot.git_backup import commit_state_change
+from discord_rss_bot.git_backup import get_backup_path
+from discord_rss_bot.is_url_valid import is_url_valid
+from discord_rss_bot.search import create_search_context
 from discord_rss_bot.settings import get_reader
 if TYPE_CHECKING:
+    from collections.abc import AsyncGenerator
     from collections.abc import Iterable
     from reader.types import JSONType
-LOGGING_CONFIG = {
+LOGGING_CONFIG: dict[str, Any] = {
     "version": 1,
     "disable_existing_loggers": True,
     "formatters": {
@@ -84,18 +101,71 @@ LOGGING_CONFIG = {
 logging.config.dictConfig(LOGGING_CONFIG)
 logger: logging.Logger = logging.getLogger(__name__)
-reader: Reader = get_reader()
+def get_reader_dependency() -> Reader:
+    """Provide the app Reader instance as a FastAPI dependency.
+    Returns:
+        Reader: The shared Reader instance.
+    """
+    return get_reader()
+# Time constants for relative time formatting
+SECONDS_PER_MINUTE = 60
+SECONDS_PER_HOUR = 3600
+SECONDS_PER_DAY = 86400
+def relative_time(dt: datetime | None) -> str:
+    """Convert a datetime to a relative time string (e.g., '2 hours ago', 'in 5 minutes').
+    Args:
+        dt: The datetime to convert (should be timezone-aware).
+    Returns:
+        A human-readable relative time string.
+    """
+    if dt is None:
+        return "Never"
+    now = datetime.now(tz=UTC)
+    diff = dt - now
+    seconds = int(abs(diff.total_seconds()))
+    is_future = diff.total_seconds() > 0
+    # Determine the appropriate unit and value
+    if seconds < SECONDS_PER_MINUTE:
+        value = seconds
+        unit = "s"
+    elif seconds < SECONDS_PER_HOUR:
+        value = seconds // SECONDS_PER_MINUTE
+        unit = "m"
+    elif seconds < SECONDS_PER_DAY:
+        value = seconds // SECONDS_PER_HOUR
+        unit = "h"
+    else:
+        value = seconds // SECONDS_PER_DAY
+        unit = "d"
+    # Format based on future or past
+    return f"in {value}{unit}" if is_future else f"{value}{unit} ago"
 @asynccontextmanager
-async def lifespan(app: FastAPI) -> typing.AsyncGenerator[None]:
-    """This is needed for the ASGI server to run."""
-    add_missing_tags(reader)
-    scheduler: AsyncIOScheduler = AsyncIOScheduler()
-    # Update all feeds every 15 minutes.
-    # TODO(TheLovinator): Make this configurable.
-    scheduler.add_job(send_to_discord, "interval", minutes=15, next_run_time=datetime.now(tz=UTC))
+async def lifespan(app: FastAPI) -> AsyncGenerator[None]:
+    """Lifespan function for the FastAPI app."""
+    reader: Reader = get_reader()
+    scheduler: AsyncIOScheduler = AsyncIOScheduler(timezone=UTC)
+    scheduler.add_job(
+        func=send_to_discord,
+        trigger="interval",
+        minutes=1,
+        id="send_to_discord",
+        max_instances=1,
+        next_run_time=datetime.now(tz=UTC),
+    )
     scheduler.start()
     logger.info("Scheduler started.")
     yield
@@ -110,27 +180,29 @@ templates: Jinja2Templates = Jinja2Templates(directory="discord_rss_bot/template
 # Add the filters to the Jinja2 environment so they can be used in html templates.
 templates.env.filters["encode_url"] = lambda url: urllib.parse.quote(url) if url else ""
-templates.env.filters["entry_is_whitelisted"] = entry_is_whitelisted
-templates.env.filters["entry_is_blacklisted"] = entry_is_blacklisted
 templates.env.filters["discord_markdown"] = markdownify
+templates.env.filters["relative_time"] = relative_time
+templates.env.globals["get_backup_path"] = get_backup_path
 @app.post("/add_webhook")
 async def post_add_webhook(
     webhook_name: Annotated[str, Form()],
     webhook_url: Annotated[str, Form()],
+    reader: Annotated[Reader, Depends(get_reader_dependency)],
 ) -> RedirectResponse:
     """Add a feed to the database.
     Args:
         webhook_name: The name of the webhook.
         webhook_url: The url of the webhook.
+        reader: The Reader instance.
+    Raises:
+        HTTPException: If the webhook already exists.
     Returns:
         RedirectResponse: Redirect to the index page.
-    Raises:
-        HTTPException: If the webhook already exists.
     """
     # Get current webhooks from the database if they exist otherwise use an empty list.
     webhooks = list(reader.get_tag((), "webhooks", []))
@@ -147,6 +219,8 @@ async def post_add_webhook(
     reader.set_tag((), "webhooks", webhooks)  # pyright: ignore[reportArgumentType]
+    commit_state_change(reader, f"Add webhook {webhook_name.strip()}")
     return RedirectResponse(url="/", status_code=303)
 # TODO(TheLovinator): Show this error on the page.
@@ -155,17 +229,22 @@ async def post_add_webhook(
 @app.post("/delete_webhook")
-async def post_delete_webhook(webhook_url: Annotated[str, Form()]) -> RedirectResponse:
+async def post_delete_webhook(
+    webhook_url: Annotated[str, Form()],
+    reader: Annotated[Reader, Depends(get_reader_dependency)],
+) -> RedirectResponse:
     """Delete a webhook from the database.
     Args:
         webhook_url: The url of the webhook.
+        reader: The Reader instance.
+    Returns:
+        RedirectResponse: Redirect to the index page.
     Raises:
         HTTPException: If the webhook could not be deleted
-    Returns:
-        RedirectResponse: Redirect to the index page.
     """
     # TODO(TheLovinator): Check if the webhook is in use by any feeds before deleting it.
     # TODO(TheLovinator): Replace HTTPException with a custom exception for both of these.
@@ -192,6 +271,8 @@ async def post_delete_webhook(webhook_url: Annotated[str, Form()]) -> RedirectRe
     # Add our new list of webhooks to the database.
     reader.set_tag((), "webhooks", webhooks)  # pyright: ignore[reportArgumentType]
+    commit_state_change(reader, f"Delete webhook {webhook_url.strip()}")
     return RedirectResponse(url="/", status_code=303)
@@ -199,27 +280,34 @@ async def post_delete_webhook(webhook_url: Annotated[str, Form()]) -> RedirectRe
 async def post_create_feed(
     feed_url: Annotated[str, Form()],
     webhook_dropdown: Annotated[str, Form()],
+    reader: Annotated[Reader, Depends(get_reader_dependency)],
 ) -> RedirectResponse:
     """Add a feed to the database.
     Args:
         feed_url: The feed to add.
         webhook_dropdown: The webhook to use.
+        reader: The Reader instance.
     Returns:
         RedirectResponse: Redirect to the feed page.
     """
     clean_feed_url: str = feed_url.strip()
     create_feed(reader, feed_url, webhook_dropdown)
+    commit_state_change(reader, f"Add feed {clean_feed_url}")
     return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_feed_url)}", status_code=303)
 @app.post("/pause")
-async def post_pause_feed(feed_url: Annotated[str, Form()]) -> RedirectResponse:
+async def post_pause_feed(
+    feed_url: Annotated[str, Form()],
+    reader: Annotated[Reader, Depends(get_reader_dependency)],
+) -> RedirectResponse:
     """Pause a feed.
     Args:
         feed_url: The feed to pause.
+        reader: The Reader instance.
     Returns:
         RedirectResponse: Redirect to the feed page.
@@ -230,11 +318,15 @@ async def post_pause_feed(feed_url: Annotated[str, Form()]) -> RedirectResponse:
 @app.post("/unpause")
-async def post_unpause_feed(feed_url: Annotated[str, Form()]) -> RedirectResponse:
+async def post_unpause_feed(
+    feed_url: Annotated[str, Form()],
+    reader: Annotated[Reader, Depends(get_reader_dependency)],
+) -> RedirectResponse:
     """Unpause a feed.
     Args:
         feed_url: The Feed to unpause.
+        reader: The Reader instance.
     Returns:
         RedirectResponse: Redirect to the feed page.
@@ -246,10 +338,15 @@ async def post_unpause_feed(feed_url: Annotated[str, Form()]) -> RedirectRespons
 @app.post("/whitelist")
 async def post_set_whitelist(
+    reader: Annotated[Reader, Depends(get_reader_dependency)],
     whitelist_title: Annotated[str, Form()] = "",
     whitelist_summary: Annotated[str, Form()] = "",
     whitelist_content: Annotated[str, Form()] = "",
     whitelist_author: Annotated[str, Form()] = "",
+    regex_whitelist_title: Annotated[str, Form()] = "",
+    regex_whitelist_summary: Annotated[str, Form()] = "",
+    regex_whitelist_content: Annotated[str, Form()] = "",
+    regex_whitelist_author: Annotated[str, Form()] = "",
     feed_url: Annotated[str, Form()] = "",
 ) -> RedirectResponse:
     """Set what the whitelist should be sent, if you have this set only words in the whitelist will be sent.
@@ -259,7 +356,12 @@ async def post_set_whitelist(
         whitelist_summary: Whitelisted words for when checking the summary.
         whitelist_content: Whitelisted words for when checking the content.
         whitelist_author: Whitelisted words for when checking the author.
+        regex_whitelist_title: Whitelisted regex for when checking the title.
+        regex_whitelist_summary: Whitelisted regex for when checking the summary.
+        regex_whitelist_content: Whitelisted regex for when checking the content.
+        regex_whitelist_author: Whitelisted regex for when checking the author.
         feed_url: The feed we should set the whitelist for.
+        reader: The Reader instance.
     Returns:
         RedirectResponse: Redirect to the feed page.
@@ -269,17 +371,28 @@ async def post_set_whitelist(
     reader.set_tag(clean_feed_url, "whitelist_summary", whitelist_summary)  # pyright: ignore[reportArgumentType][call-overload]
     reader.set_tag(clean_feed_url, "whitelist_content", whitelist_content)  # pyright: ignore[reportArgumentType][call-overload]
     reader.set_tag(clean_feed_url, "whitelist_author", whitelist_author)  # pyright: ignore[reportArgumentType][call-overload]
+    reader.set_tag(clean_feed_url, "regex_whitelist_title", regex_whitelist_title)  # pyright: ignore[reportArgumentType][call-overload]
+    reader.set_tag(clean_feed_url, "regex_whitelist_summary", regex_whitelist_summary)  # pyright: ignore[reportArgumentType][call-overload]
+    reader.set_tag(clean_feed_url, "regex_whitelist_content", regex_whitelist_content)  # pyright: ignore[reportArgumentType][call-overload]
+    reader.set_tag(clean_feed_url, "regex_whitelist_author", regex_whitelist_author)  # pyright: ignore[reportArgumentType][call-overload]
+    commit_state_change(reader, f"Update whitelist for {clean_feed_url}")
     return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_feed_url)}", status_code=303)
 @app.get("/whitelist", response_class=HTMLResponse)
-async def get_whitelist(feed_url: str, request: Request):
+async def get_whitelist(
+    feed_url: str,
+    request: Request,
+    reader: Annotated[Reader, Depends(get_reader_dependency)],
+):
     """Get the whitelist.
     Args:
         feed_url: What feed we should get the whitelist for.
         request: The request object.
+        reader: The Reader instance.
     Returns:
         HTMLResponse: The whitelist page.
@@ -287,11 +400,14 @@ async def get_whitelist(feed_url: str, request: Request):
     clean_feed_url: str = feed_url.strip()
     feed: Feed = reader.get_feed(urllib.parse.unquote(clean_feed_url))
-    # Get previous data, this is used when creating the form.
     whitelist_title: str = str(reader.get_tag(feed, "whitelist_title", ""))
     whitelist_summary: str = str(reader.get_tag(feed, "whitelist_summary", ""))
     whitelist_content: str = str(reader.get_tag(feed, "whitelist_content", ""))
     whitelist_author: str = str(reader.get_tag(feed, "whitelist_author", ""))
+    regex_whitelist_title: str = str(reader.get_tag(feed, "regex_whitelist_title", ""))
+    regex_whitelist_summary: str = str(reader.get_tag(feed, "regex_whitelist_summary", ""))
+    regex_whitelist_content: str = str(reader.get_tag(feed, "regex_whitelist_content", ""))
+    regex_whitelist_author: str = str(reader.get_tag(feed, "regex_whitelist_author", ""))
     context = {
         "request": request,
@@ -300,16 +416,25 @@ async def get_whitelist(feed_url: str, request: Request):
         "whitelist_summary": whitelist_summary,
         "whitelist_content": whitelist_content,
         "whitelist_author": whitelist_author,
+        "regex_whitelist_title": regex_whitelist_title,
+        "regex_whitelist_summary": regex_whitelist_summary,
+        "regex_whitelist_content": regex_whitelist_content,
+        "regex_whitelist_author": regex_whitelist_author,
     }
     return templates.TemplateResponse(request=request, name="whitelist.html", context=context)
 @app.post("/blacklist")
 async def post_set_blacklist(
+    reader: Annotated[Reader, Depends(get_reader_dependency)],
     blacklist_title: Annotated[str, Form()] = "",
     blacklist_summary: Annotated[str, Form()] = "",
     blacklist_content: Annotated[str, Form()] = "",
     blacklist_author: Annotated[str, Form()] = "",
+    regex_blacklist_title: Annotated[str, Form()] = "",
+    regex_blacklist_summary: Annotated[str, Form()] = "",
+    regex_blacklist_content: Annotated[str, Form()] = "",
+    regex_blacklist_author: Annotated[str, Form()] = "",
     feed_url: Annotated[str, Form()] = "",
 ) -> RedirectResponse:
     """Set the blacklist.
@@ -322,7 +447,12 @@ async def post_set_blacklist(
         blacklist_summary: Blacklisted words for when checking the summary.
         blacklist_content: Blacklisted words for when checking the content.
         blacklist_author: Blacklisted words for when checking the author.
+        regex_blacklist_title: Blacklisted regex for when checking the title.
+        regex_blacklist_summary: Blacklisted regex for when checking the summary.
+        regex_blacklist_content: Blacklisted regex for when checking the content.
+        regex_blacklist_author: Blacklisted regex for when checking the author.
         feed_url: What feed we should set the blacklist for.
+        reader: The Reader instance.
     Returns:
         RedirectResponse: Redirect to the feed page.
@ -332,28 +462,40 @@ async def post_set_blacklist(
reader.set_tag(clean_feed_url, "blacklist_summary", blacklist_summary) # pyright: ignore[reportArgumentType][call-overload] reader.set_tag(clean_feed_url, "blacklist_summary", blacklist_summary) # pyright: ignore[reportArgumentType][call-overload]
reader.set_tag(clean_feed_url, "blacklist_content", blacklist_content) # pyright: ignore[reportArgumentType][call-overload] reader.set_tag(clean_feed_url, "blacklist_content", blacklist_content) # pyright: ignore[reportArgumentType][call-overload]
reader.set_tag(clean_feed_url, "blacklist_author", blacklist_author) # pyright: ignore[reportArgumentType][call-overload] reader.set_tag(clean_feed_url, "blacklist_author", blacklist_author) # pyright: ignore[reportArgumentType][call-overload]
reader.set_tag(clean_feed_url, "regex_blacklist_title", regex_blacklist_title) # pyright: ignore[reportArgumentType][call-overload]
reader.set_tag(clean_feed_url, "regex_blacklist_summary", regex_blacklist_summary) # pyright: ignore[reportArgumentType][call-overload]
reader.set_tag(clean_feed_url, "regex_blacklist_content", regex_blacklist_content) # pyright: ignore[reportArgumentType][call-overload]
reader.set_tag(clean_feed_url, "regex_blacklist_author", regex_blacklist_author) # pyright: ignore[reportArgumentType][call-overload]
commit_state_change(reader, f"Update blacklist for {clean_feed_url}")
return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_feed_url)}", status_code=303) return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_feed_url)}", status_code=303)
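The handler above only stores the regex blacklist tags; how they are matched against entries happens elsewhere in the codebase and is not part of this diff. A minimal sketch of how such a tag might be evaluated, assuming one pattern per line and the hypothetical helper name `entry_matches_regex_blacklist`:

```python
import re


def entry_matches_regex_blacklist(field_text: str, regex_blacklist: str) -> bool:
    """Return True if any non-empty regex line matches the given field text.

    Invalid patterns are skipped instead of raising, so one bad
    user-supplied regex does not break filtering for the whole feed.
    """
    for line in regex_blacklist.splitlines():
        pattern: str = line.strip()
        if not pattern:
            continue
        try:
            if re.search(pattern, field_text):
                return True
        except re.error:
            continue  # Skip invalid regex patterns.
    return False
```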
@app.get("/blacklist", response_class=HTMLResponse)
async def get_blacklist(
    feed_url: str,
    request: Request,
    reader: Annotated[Reader, Depends(get_reader_dependency)],
):
    """Get the blacklist.

    Args:
        feed_url: The feed to get the blacklist for.
        request: The request object.
        reader: The Reader instance.

    Returns:
        HTMLResponse: The blacklist page.
    """
    feed: Feed = reader.get_feed(urllib.parse.unquote(feed_url))

    # Get the previous data; this is used when creating the form.
    blacklist_title: str = str(reader.get_tag(feed, "blacklist_title", ""))
    blacklist_summary: str = str(reader.get_tag(feed, "blacklist_summary", ""))
    blacklist_content: str = str(reader.get_tag(feed, "blacklist_content", ""))
    blacklist_author: str = str(reader.get_tag(feed, "blacklist_author", ""))
    regex_blacklist_title: str = str(reader.get_tag(feed, "regex_blacklist_title", ""))
    regex_blacklist_summary: str = str(reader.get_tag(feed, "regex_blacklist_summary", ""))
    regex_blacklist_content: str = str(reader.get_tag(feed, "regex_blacklist_content", ""))
    regex_blacklist_author: str = str(reader.get_tag(feed, "regex_blacklist_author", ""))

    context = {
        "request": request,
@@ -362,6 +504,10 @@ async def get_blacklist(feed_url: str, request: Request):
        "blacklist_summary": blacklist_summary,
        "blacklist_content": blacklist_content,
        "blacklist_author": blacklist_author,
        "regex_blacklist_title": regex_blacklist_title,
        "regex_blacklist_summary": regex_blacklist_summary,
        "regex_blacklist_content": regex_blacklist_content,
        "regex_blacklist_author": regex_blacklist_author,
    }
    return templates.TemplateResponse(request=request, name="blacklist.html", context=context)

@@ -369,6 +515,7 @@ async def get_blacklist(feed_url: str, request: Request):
@app.post("/custom")
async def post_set_custom(
    feed_url: Annotated[str, Form()],
    reader: Annotated[Reader, Depends(get_reader_dependency)],
    custom_message: Annotated[str, Form()] = "",
) -> RedirectResponse:
    """Set the custom message. This is used when sending the message.

@@ -376,6 +523,7 @@ async def post_set_custom(
    Args:
        custom_message: The custom message.
        feed_url: The feed to set the custom message for.
        reader: The Reader instance.

    Returns:
        RedirectResponse: Redirect to the feed page.
@@ -392,16 +540,22 @@ async def post_set_custom(
        reader.set_tag(feed_url, "custom_message", default_custom_message)

    clean_feed_url: str = feed_url.strip()
    commit_state_change(reader, f"Update custom message for {clean_feed_url}")

    return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_feed_url)}", status_code=303)

@app.get("/custom", response_class=HTMLResponse)
async def get_custom(
    feed_url: str,
    request: Request,
    reader: Annotated[Reader, Depends(get_reader_dependency)],
):
    """Get the custom message. This is used when sending the message to Discord.

    Args:
        feed_url: The feed to get the custom message for.
        request: The request object.
        reader: The Reader instance.

    Returns:
        HTMLResponse: The custom message page.
@@ -422,12 +576,17 @@ async def get_custom(feed_url: str, request: Request):
@app.get("/embed", response_class=HTMLResponse)
async def get_embed_page(
    feed_url: str,
    request: Request,
    reader: Annotated[Reader, Depends(get_reader_dependency)],
):
    """Get the embed settings. These are used when sending the message to Discord as an embed.

    Args:
        feed_url: The feed to get the embed settings for.
        request: The request object.
        reader: The Reader instance.

    Returns:
        HTMLResponse: The embed page.
@@ -461,8 +620,9 @@ async def get_embed_page(feed_url: str, request: Request):
@app.post("/embed", response_class=HTMLResponse)
async def post_embed(  # noqa: C901
    feed_url: Annotated[str, Form()],
    reader: Annotated[Reader, Depends(get_reader_dependency)],
    title: Annotated[str, Form()] = "",
    description: Annotated[str, Form()] = "",
    color: Annotated[str, Form()] = "",
@@ -488,7 +648,7 @@ async def post_embed(  # noqa: PLR0913, PLR0917
        author_icon_url: The author icon URL of the embed.
        footer_text: The footer text of the embed.
        footer_icon_url: The footer icon URL of the embed.
        reader: The Reader instance.

    Returns:
        RedirectResponse: Redirect to the embed page.
@@ -497,59 +657,245 @@ async def post_embed(  # noqa: PLR0913, PLR0917
    feed: Feed = reader.get_feed(urllib.parse.unquote(clean_feed_url))

    custom_embed: CustomEmbed = get_embed(reader, feed)

    # Only overwrite fields that the user provided. This prevents accidental
    # clearing of previously saved embed data when the form submits empty
    # values for fields the user did not change.
    if title:
        custom_embed.title = title
    if description:
        custom_embed.description = description
    if color:
        custom_embed.color = color
    if image_url:
        custom_embed.image_url = image_url
    if thumbnail_url:
        custom_embed.thumbnail_url = thumbnail_url
    if author_name:
        custom_embed.author_name = author_name
    if author_url:
        custom_embed.author_url = author_url
    if author_icon_url:
        custom_embed.author_icon_url = author_icon_url
    if footer_text:
        custom_embed.footer_text = footer_text
    if footer_icon_url:
        custom_embed.footer_icon_url = footer_icon_url

    # Save the data.
    save_embed(reader, feed, custom_embed)
    commit_state_change(reader, f"Update embed settings for {clean_feed_url}")

    return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_feed_url)}", status_code=303)

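The chain of `if` statements above implements one merge rule: an empty form value means "keep the saved value". The same rule can be expressed generically; this is an illustrative sketch only (the `apply_non_empty_updates` helper and `FakeEmbed` class are hypothetical, not part of the diff):

```python
def apply_non_empty_updates(target: object, updates: dict[str, str]) -> None:
    """Copy only truthy values onto target, leaving saved fields untouched."""
    for name, value in updates.items():
        if value:
            setattr(target, name, value)


class FakeEmbed:
    """Stand-in for CustomEmbed with two previously saved fields."""

    title = "old title"
    color = "#ff0000"


embed = FakeEmbed()
# The empty color simulates a form field the user left unchanged.
apply_non_empty_updates(embed, {"title": "new title", "color": ""})
# embed.title is replaced; embed.color keeps its saved value.
```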
@app.post("/use_embed")
async def post_use_embed(
    feed_url: Annotated[str, Form()],
    reader: Annotated[Reader, Depends(get_reader_dependency)],
) -> RedirectResponse:
    """Use embed instead of text.

    Args:
        feed_url: The feed to change.
        reader: The Reader instance.

    Returns:
        RedirectResponse: Redirect to the feed page.
    """
    clean_feed_url: str = feed_url.strip()
    reader.set_tag(clean_feed_url, "should_send_embed", True)  # pyright: ignore[reportArgumentType]
    commit_state_change(reader, f"Enable embed mode for {clean_feed_url}")
    return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_feed_url)}", status_code=303)


@app.post("/use_text")
async def post_use_text(
    feed_url: Annotated[str, Form()],
    reader: Annotated[Reader, Depends(get_reader_dependency)],
) -> RedirectResponse:
    """Use text instead of embed.

    Args:
        feed_url: The feed to change.
        reader: The Reader instance.

    Returns:
        RedirectResponse: Redirect to the feed page.
    """
    clean_feed_url: str = feed_url.strip()
    reader.set_tag(clean_feed_url, "should_send_embed", False)  # pyright: ignore[reportArgumentType]
    commit_state_change(reader, f"Disable embed mode for {clean_feed_url}")
    return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_feed_url)}", status_code=303)

@app.post("/set_update_interval")
async def post_set_update_interval(
    feed_url: Annotated[str, Form()],
    reader: Annotated[Reader, Depends(get_reader_dependency)],
    interval_minutes: Annotated[int | None, Form()] = None,
    redirect_to: Annotated[str, Form()] = "",
) -> RedirectResponse:
    """Set the update interval for a feed.

    Args:
        feed_url: The feed to change.
        interval_minutes: The update interval in minutes (None to reset to the global default).
        redirect_to: Optional redirect URL (defaults to the feed page).
        reader: The Reader instance.

    Returns:
        RedirectResponse: Redirect to the specified page or the feed page.
    """
    clean_feed_url: str = feed_url.strip()

    # If no interval is specified, reset to the global default.
    if interval_minutes is None:
        try:
            reader.delete_tag(clean_feed_url, ".reader.update")
            commit_state_change(reader, f"Reset update interval to default for {clean_feed_url}")
        except TagNotFoundError:
            pass
    else:
        # Validate the interval (minimum 1 minute, no maximum).
        interval_minutes = max(interval_minutes, 1)
        reader.set_tag(clean_feed_url, ".reader.update", {"interval": interval_minutes})  # pyright: ignore[reportArgumentType]
        commit_state_change(reader, f"Set update interval to {interval_minutes} minutes for {clean_feed_url}")

    # Update the feed immediately so update_after is recalculated with the new interval.
    try:
        reader.update_feed(clean_feed_url)
        logger.info("Updated feed after interval change: %s", clean_feed_url)
    except Exception:
        logger.exception("Failed to update feed after interval change: %s", clean_feed_url)

    if redirect_to:
        return RedirectResponse(url=redirect_to, status_code=303)
    return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_feed_url)}", status_code=303)

@app.post("/change_feed_url")
async def post_change_feed_url(
    old_feed_url: Annotated[str, Form()],
    new_feed_url: Annotated[str, Form()],
    reader: Annotated[Reader, Depends(get_reader_dependency)],
) -> RedirectResponse:
    """Change the URL of an existing feed.

    Args:
        old_feed_url: The current feed URL.
        new_feed_url: The new feed URL to change to.
        reader: The Reader instance.

    Returns:
        RedirectResponse: Redirect to the feed page for the resulting URL.

    Raises:
        HTTPException: If the old feed is not found, the new URL already exists, or the change fails.
    """
    clean_old_feed_url: str = old_feed_url.strip()
    clean_new_feed_url: str = new_feed_url.strip()

    if not clean_old_feed_url or not clean_new_feed_url:
        raise HTTPException(status_code=400, detail="Feed URLs cannot be empty")

    if clean_old_feed_url == clean_new_feed_url:
        return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_old_feed_url)}", status_code=303)

    try:
        reader.change_feed_url(clean_old_feed_url, clean_new_feed_url)
    except FeedNotFoundError as e:
        raise HTTPException(status_code=404, detail=f"Feed not found: {clean_old_feed_url}") from e
    except FeedExistsError as e:
        raise HTTPException(status_code=409, detail=f"Feed already exists: {clean_new_feed_url}") from e
    except ReaderError as e:
        raise HTTPException(status_code=400, detail=f"Failed to change feed URL: {e}") from e

    # Update the feed with the new URL so we can discover what entries it returns,
    # then mark all unread entries as read so the scheduler doesn't resend them.
    try:
        reader.update_feed(clean_new_feed_url)
    except Exception:
        logger.exception("Failed to update feed after URL change: %s", clean_new_feed_url)

    for entry in reader.get_entries(feed=clean_new_feed_url, read=False):
        try:
            reader.set_entry_read(entry, True)
        except Exception:
            logger.exception("Failed to mark entry as read after URL change: %s", entry.id)

    commit_state_change(reader, f"Change feed URL from {clean_old_feed_url} to {clean_new_feed_url}")
    return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_new_feed_url)}", status_code=303)

@app.post("/reset_update_interval")
async def post_reset_update_interval(
    feed_url: Annotated[str, Form()],
    reader: Annotated[Reader, Depends(get_reader_dependency)],
    redirect_to: Annotated[str, Form()] = "",
) -> RedirectResponse:
    """Reset the update interval for a feed to the global default.

    Args:
        feed_url: The feed to change.
        redirect_to: Optional redirect URL (defaults to the feed page).
        reader: The Reader instance.

    Returns:
        RedirectResponse: Redirect to the specified page or the feed page.
    """
    clean_feed_url: str = feed_url.strip()

    try:
        reader.delete_tag(clean_feed_url, ".reader.update")
        commit_state_change(reader, f"Reset update interval to default for {clean_feed_url}")
    except TagNotFoundError:
        # The tag doesn't exist, which is fine.
        pass

    # Update the feed immediately so update_after is recalculated with the new interval.
    try:
        reader.update_feed(clean_feed_url)
        logger.info("Updated feed after interval reset: %s", clean_feed_url)
    except Exception:
        logger.exception("Failed to update feed after interval reset: %s", clean_feed_url)

    if redirect_to:
        return RedirectResponse(url=redirect_to, status_code=303)
    return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(clean_feed_url)}", status_code=303)

@app.post("/set_global_update_interval")
async def post_set_global_update_interval(
    interval_minutes: Annotated[int, Form()],
    reader: Annotated[Reader, Depends(get_reader_dependency)],
) -> RedirectResponse:
    """Set the global default update interval.

    Args:
        interval_minutes: The update interval in minutes.
        reader: The Reader instance.

    Returns:
        RedirectResponse: Redirect to the settings page.
    """
    # Validate the interval (minimum 1 minute, no maximum).
    interval_minutes = max(interval_minutes, 1)
    reader.set_tag((), ".reader.update", {"interval": interval_minutes})  # pyright: ignore[reportArgumentType]
    commit_state_change(reader, f"Set global update interval to {interval_minutes} minutes")
    return RedirectResponse(url="/settings", status_code=303)

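The `.reader.update` tag is read in several handlers with the same defensive `isinstance` checks: feed scope first, then the global `()` scope, then a 60-minute default. That resolution order can be captured in one pure function; `resolve_update_interval` is a hypothetical helper name for this sketch, not part of the diff:

```python
def resolve_update_interval(
    feed_config: object,
    global_config: object,
    default_minutes: int = 60,
) -> int:
    """Return the effective interval in minutes: feed tag, else global tag, else default.

    Mirrors the diff's checks: the tag value must be a dict whose
    "interval" key holds an int, otherwise it is ignored.
    """
    for config in (feed_config, global_config):
        if isinstance(config, dict) and isinstance(config.get("interval"), int):
            return config["interval"]
    return default_minutes
```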
@app.get("/add", response_class=HTMLResponse)
def get_add(
    request: Request,
    reader: Annotated[Reader, Depends(get_reader_dependency)],
):
    """Page for adding a new feed.

    Args:
        request: The request object.
        reader: The Reader instance.

    Returns:
        HTMLResponse: The add feed page.
@@ -562,19 +908,25 @@ def get_add(request: Request):
@app.get("/feed", response_class=HTMLResponse)
async def get_feed(  # noqa: C901, PLR0912, PLR0914, PLR0915
    feed_url: str,
    request: Request,
    reader: Annotated[Reader, Depends(get_reader_dependency)],
    starting_after: str = "",
):
    """Get a feed by URL.

    Args:
        feed_url: The feed to get.
        request: The request object.
        starting_after: The entry to start after. Used for pagination.
        reader: The Reader instance.

    Raises:
        HTTPException: If the feed is not found.

    Returns:
        HTMLResponse: The feed page.
    """
    entries_per_page: int = 20
@@ -587,7 +939,7 @@ async def get_feed(feed_url: str, request: Request, starting_after: str = ""):
    # Only show the button if there are more entries than fit on one page.
    total_entries: int = reader.get_entry_counts(feed=feed).total or 0
    is_show_more_entries_button_visible: bool = total_entries > entries_per_page

    # Get entries from the feed.
    if starting_after:
@@ -598,7 +950,22 @@ async def get_feed(feed_url: str, request: Request, starting_after: str = ""):
        except EntryNotFoundError as e:
            current_entries = list(reader.get_entries(feed=clean_feed_url))
            msg: str = f"{e}\n\n{[entry.id for entry in current_entries]}"

            html: str = create_html_for_feed(reader=reader, entries=current_entries, current_feed_url=clean_feed_url)

            # Get the feed and global intervals for the error case too.
            feed_interval: int | None = None
            feed_update_config = reader.get_tag(feed, ".reader.update", None)
            if isinstance(feed_update_config, dict) and "interval" in feed_update_config:
                interval_value = feed_update_config["interval"]
                if isinstance(interval_value, int):
                    feed_interval = interval_value

            global_interval: int = 60
            global_update_config = reader.get_tag((), ".reader.update", None)
            if isinstance(global_update_config, dict) and "interval" in global_update_config:
                interval_value = global_update_config["interval"]
                if isinstance(interval_value, int):
                    global_interval = interval_value

            context = {
                "request": request,
@@ -609,8 +976,10 @@ async def get_feed(feed_url: str, request: Request, starting_after: str = ""):
                "should_send_embed": False,
                "last_entry": None,
                "messages": msg,
                "is_show_more_entries_button_visible": is_show_more_entries_button_visible,
                "total_entries": total_entries,
                "feed_interval": feed_interval,
                "global_interval": global_interval,
            }
            return templates.TemplateResponse(request=request, name="feed.html", context=context)
@@ -631,13 +1000,25 @@ async def get_feed(feed_url: str, request: Request, starting_after: str = ""):
        last_entry = entries[-1]

    # Create the HTML for the entries.
    html: str = create_html_for_feed(reader=reader, entries=entries, current_feed_url=clean_feed_url)
    should_send_embed: bool = bool(reader.get_tag(feed, "should_send_embed", True))

    # Get the update interval for this feed.
    feed_interval: int | None = None
    feed_update_config = reader.get_tag(feed, ".reader.update", None)
    if isinstance(feed_update_config, dict) and "interval" in feed_update_config:
        interval_value = feed_update_config["interval"]
        if isinstance(interval_value, int):
            feed_interval = interval_value

    # Get the global default update interval.
    global_interval: int = 60  # Default to 60 minutes if not set.
    global_update_config = reader.get_tag((), ".reader.update", None)
    if isinstance(global_update_config, dict) and "interval" in global_update_config:
        interval_value = global_update_config["interval"]
        if isinstance(interval_value, int):
            global_interval = interval_value

    context = {
        "request": request,
@@ -647,17 +1028,25 @@ async def get_feed(feed_url: str, request: Request, starting_after: str = ""):
        "html": html,
        "should_send_embed": should_send_embed,
        "last_entry": last_entry,
        "is_show_more_entries_button_visible": is_show_more_entries_button_visible,
        "total_entries": total_entries,
        "feed_interval": feed_interval,
        "global_interval": global_interval,
    }
    return templates.TemplateResponse(request=request, name="feed.html", context=context)

def create_html_for_feed(  # noqa: C901, PLR0914
    reader: Reader,
    entries: Iterable[Entry],
    current_feed_url: str = "",
) -> str:
    """Create HTML for the given feed entries.

    Args:
        reader: The Reader instance to use.
        entries: The entries to create HTML for.
        current_feed_url: The feed URL currently being viewed in /feed.

    Returns:
        str: The HTML for the entries.
@@ -673,31 +1062,75 @@ def create_html_for_feed(entries: Iterable[Entry]) -> str:
        first_image = get_first_image(summary, content)

        text: str = replace_tags_in_text_message(entry, reader=reader) or (
            "<div class='text-muted'>No content available.</div>"
        )

        published = ""
        if entry.published:
            published: str = entry.published.strftime("%Y-%m-%d %H:%M:%S")

        blacklisted: str = ""
        if entry_is_blacklisted(entry, reader=reader):
            blacklisted = "<span class='badge bg-danger'>Blacklisted</span>"

        whitelisted: str = ""
        if entry_is_whitelisted(entry, reader=reader):
            whitelisted = "<span class='badge bg-success'>Whitelisted</span>"

        source_feed_url: str = getattr(entry, "original_feed_url", None) or entry.feed.url
        from_another_feed: str = ""
        if current_feed_url and source_feed_url != current_feed_url:
            from_another_feed = f"<span class='badge bg-warning text-dark'>From another feed: {source_feed_url}</span>"

        # Add a feed link when viewing from webhook_entries or aggregated views.
        feed_link: str = ""
        if not current_feed_url or source_feed_url != current_feed_url:
            encoded_feed_url: str = urllib.parse.quote(source_feed_url)
            feed_title: str = entry.feed.title if hasattr(entry.feed, "title") and entry.feed.title else source_feed_url
            feed_link = (
                f"<a class='text-muted' style='font-size: 0.85em;' "
                f"href='/feed?feed_url={encoded_feed_url}'>{feed_title}</a><br>"
            )

        entry_id: str = urllib.parse.quote(entry.id)
        encoded_source_feed_url: str = urllib.parse.quote(source_feed_url)
        to_discord_html: str = (
            f"<a class='text-muted' href='/post_entry?entry_id={entry_id}&feed_url={encoded_source_feed_url}'>"
            "Send to Discord</a>"
        )

        # Check if this is a YouTube feed entry and the entry has a link.
        is_youtube_feed = "youtube.com/feeds/videos.xml" in entry.feed.url
        video_embed_html = ""
        if is_youtube_feed and entry.link:
            # Extract the video ID and create an embed if possible.
            video_id: str | None = extract_youtube_video_id(entry.link)
            if video_id:
                video_embed_html: str = f"""
                <div class="ratio ratio-16x9 mt-3 mb-3">
                    <iframe src="https://www.youtube.com/embed/{video_id}"
                            title="{entry.title}"
                            allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
                            allowfullscreen>
                    </iframe>
                </div>
                """
                # Don't use the first image if we have a video embed.
                first_image = ""

        image_html: str = f"<img src='{first_image}' class='img-fluid'>" if first_image else ""

        html += f"""<div class="p-2 mb-2 border border-dark">
{blacklisted}{whitelisted}{from_another_feed}<a class="text-muted text-decoration-none" href="{entry.link}"><h2>{entry.title}</h2></a>
{feed_link}{f"By {entry.author} @" if entry.author else ""}{published} - {to_discord_html}
{text}
{video_embed_html}
{image_html}
</div>
"""  # noqa: E501

    return html.strip()

@@ -736,6 +1169,7 @@ def get_data_from_hook_url(hook_name: str, hook_url: str) -> WebhookInfo:
        hook_name (str): The webhook name.
        hook_url (str): The webhook URL.

    Returns:
        WebhookInfo: The webhook username, avatar, guild id, etc.
    """
@@ -756,12 +1190,64 @@ def get_data_from_hook_url(hook_name: str, hook_url: str) -> WebhookInfo:
    return our_hook

@app.get("/settings", response_class=HTMLResponse)
async def get_settings(
    request: Request,
    reader: Annotated[Reader, Depends(get_reader_dependency)],
):
    """Settings page.

    Args:
        request: The request object.
        reader: The Reader instance.

    Returns:
        HTMLResponse: The settings page.
    """
    # Get the global default update interval.
    global_interval: int = 60  # Default to 60 minutes if not set.
    global_update_config = reader.get_tag((), ".reader.update", None)
    if isinstance(global_update_config, dict) and "interval" in global_update_config:
        interval_value = global_update_config["interval"]
        if isinstance(interval_value, int):
            global_interval = interval_value

    # Get all feeds with their intervals.
    feeds: Iterable[Feed] = reader.get_feeds()
    feed_intervals = []
    for feed in feeds:
        feed_interval: int | None = None
        feed_update_config = reader.get_tag(feed, ".reader.update", None)
        if isinstance(feed_update_config, dict) and "interval" in feed_update_config:
            interval_value = feed_update_config["interval"]
            if isinstance(interval_value, int):
                feed_interval = interval_value
        feed_intervals.append({
            "feed": feed,
            "interval": feed_interval,
            "effective_interval": feed_interval or global_interval,
            "domain": extract_domain(feed.url),
        })

    context = {
        "request": request,
        "global_interval": global_interval,
        "feed_intervals": feed_intervals,
    }
    return templates.TemplateResponse(request=request, name="settings.html", context=context)

@app.get("/webhooks", response_class=HTMLResponse)
async def get_webhooks(
    request: Request,
    reader: Annotated[Reader, Depends(get_reader_dependency)],
):
    """Page for adding a new webhook.

    Args:
        request: The request object.
        reader: The Reader instance.

    Returns:
        HTMLResponse: The add webhook page.
@@ -782,136 +1268,241 @@ async def get_webhooks(request: Request):
@app.get("/", response_class=HTMLResponse)
def get_index(
    request: Request,
    reader: Annotated[Reader, Depends(get_reader_dependency)],
    message: str = "",
):
    """The root of the website.

    Args:
        request: The request object.
        message: Optional message to display to the user.
        reader: The Reader instance.

    Returns:
        HTMLResponse: The index page.
    """
    return templates.TemplateResponse(
        request=request,
        name="index.html",
        context=make_context_index(request, message, reader),
    )

def make_context_index(request: Request, message: str = "", reader: Reader | None = None):
    """Create the context needed for the index page.

    Args:
        request: The request object.
        message: Optional message to display to the user.
        reader: The Reader instance.

    Returns:
        dict: The context for the index page.
    """
    effective_reader: Reader = reader or get_reader_dependency()
    hooks: list[dict[str, str]] = cast("list[dict[str, str]]", list(effective_reader.get_tag((), "webhooks", [])))

    feed_list: list[dict[str, JSONType | Feed | str]] = []
    broken_feeds: list[Feed] = []
    feeds_without_attached_webhook: list[Feed] = []

    # Get all feeds and organize them.
    feeds: Iterable[Feed] = effective_reader.get_feeds()
    for feed in feeds:
        webhook: str = str(effective_reader.get_tag(feed.url, "webhook", ""))
        if not webhook:
            broken_feeds.append(feed)
            continue

        feed_list.append({"feed": feed, "webhook": webhook, "domain": extract_domain(feed.url)})

        webhook_list: list[str] = [hook["url"] for hook in hooks]
        if webhook not in webhook_list:
            feeds_without_attached_webhook.append(feed)

    return {
        "request": request,
        "feeds": feed_list,
        "feed_count": effective_reader.get_feed_counts(),
        "entry_count": effective_reader.get_entry_counts(),
        "webhooks": hooks,
        "broken_feeds": broken_feeds,
        "feeds_without_attached_webhook": feeds_without_attached_webhook,
        "messages": message or None,
    }

@app.post("/remove", response_class=HTMLResponse) @app.post("/remove", response_class=HTMLResponse)
async def remove_feed(feed_url: Annotated[str, Form()]): async def remove_feed(
feed_url: Annotated[str, Form()],
reader: Annotated[Reader, Depends(get_reader_dependency)],
):
"""Get a feed by URL. """Get a feed by URL.
Args: Args:
feed_url: The feed to add. feed_url: The feed to add.
reader: The Reader instance.
Raises:
HTTPException: Feed not found
Returns: Returns:
RedirectResponse: Redirect to the index page. RedirectResponse: Redirect to the index page.
Raises:
HTTPException: Feed not found
""" """
try:
reader.delete_feed(urllib.parse.unquote(feed_url))
except FeedNotFoundError as e:
raise HTTPException(status_code=404, detail="Feed not found") from e
commit_state_change(reader, f"Remove feed {urllib.parse.unquote(feed_url)}")
return RedirectResponse(url="/", status_code=303)
@app.get("/update", response_class=HTMLResponse)
async def update_feed(
request: Request,
feed_url: str,
reader: Annotated[Reader, Depends(get_reader_dependency)],
):
"""Update a feed.
Args:
request: The request object.
feed_url: The feed URL to update.
reader: The Reader instance.
Returns:
RedirectResponse: Redirect to the feed page.
Raises:
HTTPException: If the feed is not found.
"""
try:
reader.update_feed(urllib.parse.unquote(feed_url))
except FeedNotFoundError as e:
raise HTTPException(status_code=404, detail="Feed not found") from e
logger.info("Manually updated feed: %s", feed_url)
return RedirectResponse(url="/feed?feed_url=" + urllib.parse.quote(feed_url), status_code=303)
@app.post("/backup")
async def manual_backup(
request: Request,
reader: Annotated[Reader, Depends(get_reader_dependency)],
) -> RedirectResponse:
"""Manually trigger a git backup of the current state.
Args:
request: The request object.
reader: The Reader instance.
Returns:
RedirectResponse: Redirect to the index page with a success or error message.
"""
backup_path = get_backup_path()
if backup_path is None:
message = "Git backup is not configured. Set GIT_BACKUP_PATH environment variable to enable backups."
logger.warning("Manual git backup attempted but GIT_BACKUP_PATH is not configured")
return RedirectResponse(url=f"/?message={urllib.parse.quote(message)}", status_code=303)
try:
commit_state_change(reader, "Manual backup triggered from web UI")
message = "Successfully created git backup!"
logger.info("Manual git backup completed successfully")
except Exception as e:
message = f"Failed to create git backup: {e}"
logger.exception("Manual git backup failed")
return RedirectResponse(url=f"/?message={urllib.parse.quote(message)}", status_code=303)
@app.get("/search", response_class=HTMLResponse) @app.get("/search", response_class=HTMLResponse)
async def search(request: Request, query: str): async def search(
request: Request,
query: str,
reader: Annotated[Reader, Depends(get_reader_dependency)],
):
"""Get entries matching a full-text search query. """Get entries matching a full-text search query.
Args: Args:
query: The query to search for. query: The query to search for.
request: The request object. request: The request object.
reader: The Reader instance.
Returns: Returns:
HTMLResponse: The search page. HTMLResponse: The search page.
""" """
reader.update_search()
context = create_search_context(query, reader=reader)
return templates.TemplateResponse(request=request, name="search.html", context={"request": request, **context})
@app.get("/post_entry", response_class=HTMLResponse) @app.get("/post_entry", response_class=HTMLResponse)
async def post_entry(entry_id: str): async def post_entry(
entry_id: str,
reader: Annotated[Reader, Depends(get_reader_dependency)],
feed_url: str = "",
):
"""Send single entry to Discord. """Send single entry to Discord.
Args: Args:
entry_id: The entry to send. entry_id: The entry to send.
feed_url: Optional feed URL used to disambiguate entries with identical IDs.
reader: The Reader instance.
Returns: Returns:
RedirectResponse: Redirect to the feed page. RedirectResponse: Redirect to the feed page.
""" """
unquoted_entry_id: str = urllib.parse.unquote(entry_id)
clean_feed_url: str = urllib.parse.unquote(feed_url.strip()) if feed_url else ""
# Prefer feed-scoped lookup when feed_url is provided. This avoids ambiguity when
# multiple feeds contain entries with the same ID.
entry: Entry | None = None
if clean_feed_url:
entry = next(
(entry for entry in reader.get_entries(feed=clean_feed_url) if entry.id == unquoted_entry_id),
None,
)
else:
entry = next((entry for entry in reader.get_entries() if entry.id == unquoted_entry_id), None)
if entry is None:
return HTMLResponse(status_code=404, content=f"Entry '{entry_id}' not found.")
if result := send_entry_to_discord(entry=entry, reader=reader):
return result
# Redirect to the feed page.
redirect_feed_url: str = entry.feed.url.strip()
return RedirectResponse(url=f"/feed?feed_url={urllib.parse.quote(redirect_feed_url)}", status_code=303)
@app.post("/modify_webhook", response_class=HTMLResponse) @app.post("/modify_webhook", response_class=HTMLResponse)
def modify_webhook(old_hook: Annotated[str, Form()], new_hook: Annotated[str, Form()]): def modify_webhook(
old_hook: Annotated[str, Form()],
new_hook: Annotated[str, Form()],
reader: Annotated[Reader, Depends(get_reader_dependency)],
redirect_to: Annotated[str, Form()] = "",
):
"""Modify a webhook. """Modify a webhook.
Args: Args:
old_hook: The webhook to modify. old_hook: The webhook to modify.
new_hook: The new webhook. new_hook: The new webhook.
redirect_to: Optional redirect URL after the update.
reader: The Reader instance.
Returns:
RedirectResponse: Redirect to the webhook page.
Raises: Raises:
HTTPException: Webhook could not be modified. HTTPException: Webhook could not be modified.
Returns:
RedirectResponse: Redirect to the webhook page.
""" """
# Get current webhooks from the database if they exist, otherwise use an empty list.
webhooks = list(reader.get_tag((), "webhooks", []))
@@ -919,15 +1510,20 @@ def modify_webhook(old_hook: Annotated[str, Form()], new_hook: Annotated[str, Fo
# Webhooks are stored as a list of dictionaries.
# Example: [{"name": "webhook_name", "url": "webhook_url"}]
webhooks = cast("list[dict[str, str]]", webhooks)
old_hook_clean: str = old_hook.strip()
new_hook_clean: str = new_hook.strip()
webhook_modified: bool = False
for hook in webhooks:
if hook["url"] in old_hook_clean:
hook["url"] = new_hook_clean
# Check if it has been modified.
if hook["url"] != new_hook_clean:
raise HTTPException(status_code=500, detail="Webhook could not be modified")
webhook_modified = True
# Add our new list of webhooks to the database.
reader.set_tag((), "webhooks", webhooks)  # pyright: ignore[reportArgumentType]
@@ -935,16 +1531,506 @@ def modify_webhook(old_hook: Annotated[str, Form()], new_hook: Annotated[str, Fo
# matches the old one.
feeds: Iterable[Feed] = reader.get_feeds()
for feed in feeds:
webhook: str = str(reader.get_tag(feed, "webhook", ""))
if webhook == old_hook_clean:
reader.set_tag(feed.url, "webhook", new_hook_clean) # pyright: ignore[reportArgumentType]
if webhook_modified and old_hook_clean != new_hook_clean:
commit_state_change(reader, f"Modify webhook URL from {old_hook_clean} to {new_hook_clean}")
redirect_url: str = redirect_to.strip() or "/webhooks"
if redirect_to:
redirect_url = redirect_url.replace(urllib.parse.quote(old_hook_clean), urllib.parse.quote(new_hook_clean))
redirect_url = redirect_url.replace(old_hook_clean, new_hook_clean)
# Redirect to the requested page.
return RedirectResponse(url=redirect_url, status_code=303)
def extract_youtube_video_id(url: str) -> str | None:
"""Extract YouTube video ID from a YouTube video URL.
Args:
url: The YouTube video URL.
Returns:
The video ID if found, None otherwise.
"""
if not url:
return None
# Handle standard YouTube URLs (youtube.com/watch?v=VIDEO_ID)
if "youtube.com/watch" in url and "v=" in url:
return url.split("v=")[1].split("&", maxsplit=1)[0]
# Handle shortened YouTube URLs (youtu.be/VIDEO_ID)
if "youtu.be/" in url:
return url.split("youtu.be/")[1].split("?", maxsplit=1)[0]
return None
def resolve_final_feed_url(url: str) -> tuple[str, str | None]:
"""Resolve a feed URL by following redirects.
Args:
url: The feed URL to resolve.
Returns:
tuple[str, str | None]: A tuple with (resolved_url, error_message).
error_message is None when resolution succeeded.
"""
clean_url: str = url.strip()
if not clean_url:
return "", "URL is empty"
if not is_url_valid(clean_url):
return clean_url, "URL is invalid"
try:
response: Response = httpx.get(clean_url, follow_redirects=True, timeout=10.0)
except httpx.HTTPError as e:
return clean_url, str(e)
if not response.is_success:
return clean_url, f"HTTP {response.status_code}"
return str(response.url), None
def create_webhook_feed_url_preview(
webhook_feeds: list[Feed],
replace_from: str,
replace_to: str,
resolve_urls: bool, # noqa: FBT001
force_update: bool = False, # noqa: FBT001, FBT002
existing_feed_urls: set[str] | None = None,
) -> list[dict[str, str | bool | None]]:
"""Create preview rows for bulk feed URL replacement.
Args:
webhook_feeds: Feeds attached to a webhook.
replace_from: Text to replace in each URL.
replace_to: Replacement text.
resolve_urls: Whether to resolve resulting URLs via HTTP redirects.
force_update: Whether conflicts should be marked as force-overwritable.
existing_feed_urls: Optional set of all tracked feed URLs used for conflict detection.
Returns:
list[dict[str, str | bool | None]]: Rows used in the preview table.
"""
known_feed_urls: set[str] = existing_feed_urls or {feed.url for feed in webhook_feeds}
preview_rows: list[dict[str, str | bool | None]] = []
for feed in webhook_feeds:
old_url: str = feed.url
has_match: bool = bool(replace_from and replace_from in old_url)
candidate_url: str = old_url
if has_match:
candidate_url = old_url.replace(replace_from, replace_to)
resolved_url: str = candidate_url
resolution_error: str | None = None
if has_match and candidate_url != old_url and resolve_urls:
resolved_url, resolution_error = resolve_final_feed_url(candidate_url)
will_force_ignore_errors: bool = bool(
force_update and bool(resolution_error) and has_match and old_url != candidate_url,
)
target_exists: bool = bool(
has_match and not resolution_error and resolved_url != old_url and resolved_url in known_feed_urls,
)
will_force_overwrite: bool = bool(target_exists and force_update)
will_change: bool = bool(
has_match
and old_url != (candidate_url if will_force_ignore_errors else resolved_url)
and (not target_exists or will_force_overwrite)
and (not resolution_error or will_force_ignore_errors),
)
preview_rows.append({
"old_url": old_url,
"candidate_url": candidate_url,
"resolved_url": resolved_url,
"has_match": has_match,
"will_change": will_change,
"target_exists": target_exists,
"will_force_overwrite": will_force_overwrite,
"will_force_ignore_errors": will_force_ignore_errors,
"resolution_error": resolution_error,
})
return preview_rows
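Stripped of HTTP resolution and the force flags, the core of the preview logic above is: match, substitute, then flag collisions against the set of known feed URLs. A simplified sketch of just that core (field names shortened; this is not the full row schema):

```python
def preview(urls: list[str], replace_from: str, replace_to: str, known: set[str]) -> list[dict]:
    rows = []
    for old in urls:
        has_match = bool(replace_from and replace_from in old)
        candidate = old.replace(replace_from, replace_to) if has_match else old
        # A "conflict" means the rewritten URL collides with a feed we already track.
        conflict = has_match and candidate != old and candidate in known
        will_change = has_match and candidate != old and not conflict
        rows.append({"old": old, "new": candidate, "conflict": conflict, "will_change": will_change})
    return rows

urls = ["http://old.example/a.rss", "http://old.example/b.rss", "http://other.example/c.rss"]
known = set(urls) | {"http://new.example/b.rss"}  # b's target is already tracked
rows = preview(urls, "old.example", "new.example", known)
assert rows[0]["will_change"] is True                       # clean rename
assert rows[1]["conflict"] is True and rows[1]["will_change"] is False  # collision
assert rows[2]["will_change"] is False                      # no match, untouched
```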
def build_webhook_mass_update_context(
webhook_feeds: list[Feed],
all_feeds: list[Feed],
replace_from: str,
replace_to: str,
resolve_urls: bool, # noqa: FBT001
force_update: bool = False, # noqa: FBT001, FBT002
) -> dict[str, str | bool | int | list[dict[str, str | bool | None]] | dict[str, int]]:
"""Build context data used by the webhook mass URL update preview UI.
Args:
webhook_feeds: Feeds attached to the selected webhook.
all_feeds: All tracked feeds.
replace_from: Text to replace in URLs.
replace_to: Replacement text.
resolve_urls: Whether to resolve resulting URLs.
force_update: Whether to allow overwriting existing target URLs.
Returns:
dict[str, ...]: Context values for rendering preview controls and table.
"""
clean_replace_from: str = replace_from.strip()
clean_replace_to: str = replace_to.strip()
preview_rows: list[dict[str, str | bool | None]] = []
if clean_replace_from:
preview_rows = create_webhook_feed_url_preview(
webhook_feeds=webhook_feeds,
replace_from=clean_replace_from,
replace_to=clean_replace_to,
resolve_urls=resolve_urls,
force_update=force_update,
existing_feed_urls={feed.url for feed in all_feeds},
)
preview_summary: dict[str, int] = {
"total": len(preview_rows),
"matched": sum(1 for row in preview_rows if row["has_match"]),
"will_update": sum(1 for row in preview_rows if row["will_change"]),
"conflicts": sum(1 for row in preview_rows if row["target_exists"] and not row["will_force_overwrite"]),
"force_overwrite": sum(1 for row in preview_rows if row["will_force_overwrite"]),
"force_ignore_errors": sum(1 for row in preview_rows if row["will_force_ignore_errors"]),
"resolve_errors": sum(1 for row in preview_rows if row["resolution_error"]),
}
preview_summary["no_match"] = preview_summary["total"] - preview_summary["matched"]
preview_summary["no_change"] = sum(
1 for row in preview_rows if row["has_match"] and not row["resolution_error"] and not row["will_change"]
)
return {
"replace_from": clean_replace_from,
"replace_to": clean_replace_to,
"resolve_urls": resolve_urls,
"force_update": force_update,
"preview_rows": preview_rows,
"preview_summary": preview_summary,
"preview_change_count": preview_summary["will_update"],
}
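The summary counters above are all `sum(1 for …)` passes over the preview rows; a reduced sketch with only a few of the fields shows the pattern, including the derived `no_match` count:

```python
# Hypothetical preview rows; only the keys the counters below need.
rows = [
    {"has_match": True, "will_change": True, "resolution_error": None},
    {"has_match": True, "will_change": False, "resolution_error": "HTTP 404"},
    {"has_match": False, "will_change": False, "resolution_error": None},
]
summary = {
    "total": len(rows),
    "matched": sum(1 for r in rows if r["has_match"]),
    "will_update": sum(1 for r in rows if r["will_change"]),
    "resolve_errors": sum(1 for r in rows if r["resolution_error"]),
}
summary["no_match"] = summary["total"] - summary["matched"]
assert summary == {"total": 3, "matched": 2, "will_update": 1, "resolve_errors": 1, "no_match": 1}
```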
@app.get("/webhook_entries_mass_update_preview", response_class=HTMLResponse)
async def get_webhook_entries_mass_update_preview(
webhook_url: str,
request: Request,
reader: Annotated[Reader, Depends(get_reader_dependency)],
replace_from: str = "",
replace_to: str = "",
resolve_urls: bool = True, # noqa: FBT001, FBT002
force_update: bool = False, # noqa: FBT001, FBT002
) -> HTMLResponse:
"""Render the mass-update preview fragment for a webhook using HTMX.
Args:
webhook_url: Webhook URL whose feeds are being updated.
request: The request object.
reader: The Reader instance.
replace_from: Text to find in URLs.
replace_to: Replacement text.
resolve_urls: Whether to resolve resulting URLs.
force_update: Whether to allow overwriting existing target URLs.
Returns:
HTMLResponse: Rendered partial template containing summary + preview table.
"""
clean_webhook_url: str = urllib.parse.unquote(webhook_url.strip())
all_feeds: list[Feed] = list(reader.get_feeds())
webhook_feeds: list[Feed] = [
feed for feed in all_feeds if str(reader.get_tag(feed.url, "webhook", "")) == clean_webhook_url
]
context = {
"request": request,
"webhook_url": clean_webhook_url,
**build_webhook_mass_update_context(
webhook_feeds=webhook_feeds,
all_feeds=all_feeds,
replace_from=replace_from,
replace_to=replace_to,
resolve_urls=resolve_urls,
force_update=force_update,
),
}
return templates.TemplateResponse(request=request, name="_webhook_mass_update_preview.html", context=context)
@app.get("/webhook_entries", response_class=HTMLResponse)
async def get_webhook_entries( # noqa: C901, PLR0914
webhook_url: str,
request: Request,
reader: Annotated[Reader, Depends(get_reader_dependency)],
starting_after: str = "",
replace_from: str = "",
replace_to: str = "",
resolve_urls: bool = True, # noqa: FBT001, FBT002
force_update: bool = False, # noqa: FBT001, FBT002
message: str = "",
) -> HTMLResponse:
"""Get all latest entries from all feeds for a specific webhook.
Args:
webhook_url: The webhook URL to get entries for.
request: The request object.
starting_after: The entry to start after. Used for pagination.
replace_from: Optional URL substring to find for bulk URL replacement preview.
replace_to: Optional replacement substring used in bulk URL replacement preview.
resolve_urls: Whether to resolve replaced URLs by following redirects.
force_update: Whether to allow overwriting existing target URLs during apply.
message: Optional status message shown in the UI.
reader: The Reader instance.
Returns:
HTMLResponse: The webhook entries page.
Raises:
HTTPException: If no feeds are found for this webhook or webhook doesn't exist.
"""
entries_per_page: int = 20
clean_webhook_url: str = urllib.parse.unquote(webhook_url.strip())
# Get the webhook name from the webhooks list
webhooks: list[dict[str, str]] = cast("list[dict[str, str]]", list(reader.get_tag((), "webhooks", [])))
webhook_name: str = ""
for hook in webhooks:
if hook["url"] == clean_webhook_url:
webhook_name = hook["name"]
break
if not webhook_name:
raise HTTPException(status_code=404, detail=f"Webhook not found: {clean_webhook_url}")
hook_info: WebhookInfo = get_data_from_hook_url(hook_name=webhook_name, hook_url=clean_webhook_url)
# Get all feeds associated with this webhook
all_feeds: list[Feed] = list(reader.get_feeds())
webhook_feeds: list[Feed] = []
for feed in all_feeds:
feed_webhook: str = str(reader.get_tag(feed.url, "webhook", ""))
if feed_webhook == clean_webhook_url:
webhook_feeds.append(feed)
# Get all entries from all feeds for this webhook, sorted by published date
all_entries: list[Entry] = [entry for feed in webhook_feeds for entry in reader.get_entries(feed=feed)]
# Sort entries by published date (newest first), with undated entries last.
all_entries.sort(
key=lambda e: (
e.published is not None,
e.published or datetime.min.replace(tzinfo=UTC),
),
reverse=True,
)
# Handle pagination
if starting_after:
try:
start_after_entry: Entry | None = reader.get_entry((
starting_after.split("|", maxsplit=1)[0],
starting_after.split("|")[1],
))
except (FeedNotFoundError, EntryNotFoundError):
start_after_entry = None
else:
start_after_entry = None
# Find the index of the starting entry
start_index: int = 0
if start_after_entry:
for idx, entry in enumerate(all_entries):
if entry.id == start_after_entry.id and entry.feed.url == start_after_entry.feed.url:
start_index = idx + 1
break
# Get the page of entries
paginated_entries: list[Entry] = all_entries[start_index : start_index + entries_per_page]
# Get the last entry for pagination
last_entry: Entry | None = None
if paginated_entries:
last_entry = paginated_entries[-1]
# Create the html for the entries
html: str = create_html_for_feed(reader=reader, entries=paginated_entries)
mass_update_context = build_webhook_mass_update_context(
webhook_feeds=webhook_feeds,
all_feeds=all_feeds,
replace_from=replace_from,
replace_to=replace_to,
resolve_urls=resolve_urls,
force_update=force_update,
)
# Check if there are more entries available
total_entries: int = len(all_entries)
is_show_more_entries_button_visible: bool = (start_index + entries_per_page) < total_entries
context = {
"request": request,
"hook_info": hook_info,
"webhook_name": webhook_name,
"webhook_url": clean_webhook_url,
"webhook_feeds": webhook_feeds,
"entries": paginated_entries,
"html": html,
"last_entry": last_entry,
"is_show_more_entries_button_visible": is_show_more_entries_button_visible,
"total_entries": total_entries,
"feeds_count": len(webhook_feeds),
"message": urllib.parse.unquote(message) if message else "",
**mass_update_context,
}
return templates.TemplateResponse(request=request, name="webhook_entries.html", context=context)
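Two pieces of the route above are worth isolating: the sort key that puts the newest entries first while pushing undated entries to the end, and the slice-based pagination. A sketch with tuples standing in for entries (using `timezone.utc` rather than `datetime.UTC` so it also runs on Python < 3.11):

```python
from datetime import datetime, timezone

entries = [
    ("undated", None),
    ("old", datetime(2024, 1, 1, tzinfo=timezone.utc)),
    ("new", datetime(2025, 6, 1, tzinfo=timezone.utc)),
]
# (published is not None, published-or-min) descending: dated entries sort
# ahead of undated ones, and among dated entries newest wins.
entries.sort(
    key=lambda e: (e[1] is not None, e[1] or datetime.min.replace(tzinfo=timezone.utc)),
    reverse=True,
)
assert [name for name, _ in entries] == ["new", "old", "undated"]

# Slice-based pagination as in the route: a page begins at start_index.
per_page = 2
start_index = 0
page = entries[start_index : start_index + per_page]
has_more = (start_index + per_page) < len(entries)
assert [name for name, _ in page] == ["new", "old"]
assert has_more is True
```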
@app.post("/bulk_change_feed_urls", response_class=HTMLResponse)
async def post_bulk_change_feed_urls( # noqa: C901, PLR0914, PLR0912, PLR0915
webhook_url: Annotated[str, Form()],
replace_from: Annotated[str, Form()],
reader: Annotated[Reader, Depends(get_reader_dependency)],
replace_to: Annotated[str, Form()] = "",
resolve_urls: Annotated[bool, Form()] = True, # noqa: FBT002
force_update: Annotated[bool, Form()] = False, # noqa: FBT002
) -> RedirectResponse:
"""Bulk-change feed URLs attached to a webhook.
Args:
webhook_url: The webhook URL whose feeds should be updated.
replace_from: Text to find in each URL.
replace_to: Text to replace with.
resolve_urls: Whether to resolve resulting URLs via redirects.
force_update: Whether existing target feed URLs should be overwritten.
reader: The Reader instance.
Returns:
RedirectResponse: Redirect to webhook detail with status message.
Raises:
HTTPException: If webhook is missing or replace_from is empty.
"""
clean_webhook_url: str = urllib.parse.unquote(webhook_url.strip())
clean_replace_from: str = replace_from.strip()
clean_replace_to: str = replace_to.strip()
if not clean_replace_from:
raise HTTPException(status_code=400, detail="replace_from cannot be empty")
webhooks: list[dict[str, str]] = cast("list[dict[str, str]]", list(reader.get_tag((), "webhooks", [])))
if not any(hook["url"] == clean_webhook_url for hook in webhooks):
raise HTTPException(status_code=404, detail=f"Webhook not found: {clean_webhook_url}")
all_feeds: list[Feed] = list(reader.get_feeds())
webhook_feeds: list[Feed] = []
for feed in all_feeds:
feed_webhook: str = str(reader.get_tag(feed.url, "webhook", ""))
if feed_webhook == clean_webhook_url:
webhook_feeds.append(feed)
preview_rows: list[dict[str, str | bool | None]] = create_webhook_feed_url_preview(
webhook_feeds=webhook_feeds,
replace_from=clean_replace_from,
replace_to=clean_replace_to,
resolve_urls=resolve_urls,
force_update=force_update,
existing_feed_urls={feed.url for feed in all_feeds},
)
changed_count: int = 0
skipped_count: int = 0
failed_count: int = 0
conflict_count: int = 0
force_overwrite_count: int = 0
for row in preview_rows:
if not row["has_match"]:
continue
if row["resolution_error"] and not force_update:
skipped_count += 1
continue
if row["target_exists"] and not force_update:
conflict_count += 1
skipped_count += 1
continue
old_url: str = str(row["old_url"])
new_url: str = str(row["candidate_url"] if row["will_force_ignore_errors"] else row["resolved_url"])
if old_url == new_url:
skipped_count += 1
continue
if row["target_exists"] and force_update:
try:
reader.delete_feed(new_url)
force_overwrite_count += 1
except FeedNotFoundError:
pass
except ReaderError:
failed_count += 1
continue
try:
reader.change_feed_url(old_url, new_url)
except FeedExistsError:
skipped_count += 1
continue
except FeedNotFoundError:
skipped_count += 1
continue
except ReaderError:
failed_count += 1
continue
try:
reader.update_feed(new_url)
except Exception:
logger.exception("Failed to update feed after URL change: %s", new_url)
for entry in reader.get_entries(feed=new_url, read=False):
try:
reader.set_entry_read(entry, True)
except Exception:
logger.exception("Failed to mark entry as read after URL change: %s", entry.id)
changed_count += 1
if changed_count > 0:
commit_state_change(
reader,
f"Bulk change {changed_count} feed URL(s) for webhook {clean_webhook_url}",
)
status_message: str = (
f"Updated {changed_count} feed URL(s). "
f"Force overwrote {force_overwrite_count}. "
f"Conflicts {conflict_count}. "
f"Skipped {skipped_count}. "
f"Failed {failed_count}."
)
redirect_url: str = (
f"/webhook_entries?webhook_url={urllib.parse.quote(clean_webhook_url)}"
f"&message={urllib.parse.quote(status_message)}"
)
return RedirectResponse(url=redirect_url, status_code=303)
if __name__ == "__main__":
@@ -957,9 +2043,9 @@ if __name__ == "__main__":
uvicorn.run(
"main:app",
log_level="debug",
host="0.0.0.0",  # noqa: S104
port=3000,
proxy_headers=True,
forwarded_allow_ips="*",
)
@@ -1,106 +0,0 @@
from __future__ import annotations
from reader import Feed, Reader, TagNotFoundError
from discord_rss_bot.settings import default_custom_embed, default_custom_message
def add_custom_message(reader: Reader, feed: Feed) -> None:
"""Add the custom message tag to the feed if it doesn't exist.
Args:
reader: What Reader to use.
feed: The feed to add the tag to.
"""
try:
reader.get_tag(feed, "custom_message")
except TagNotFoundError:
reader.set_tag(feed.url, "custom_message", default_custom_message) # pyright: ignore[reportArgumentType]
reader.set_tag(feed.url, "has_custom_message", True) # pyright: ignore[reportArgumentType]
def add_has_custom_message(reader: Reader, feed: Feed) -> None:
"""Add the has_custom_message tag to the feed if it doesn't exist.
Args:
reader: What Reader to use.
feed: The feed to add the tag to.
"""
try:
reader.get_tag(feed, "has_custom_message")
except TagNotFoundError:
if reader.get_tag(feed, "custom_message") == default_custom_message:
reader.set_tag(feed.url, "has_custom_message", False) # pyright: ignore[reportArgumentType]
else:
reader.set_tag(feed.url, "has_custom_message", True) # pyright: ignore[reportArgumentType]
def add_if_embed(reader: Reader, feed: Feed) -> None:
"""Add the if_embed tag to the feed if it doesn't exist.
Args:
reader: What Reader to use.
feed: The feed to add the tag to.
"""
try:
reader.get_tag(feed, "if_embed")
except TagNotFoundError:
reader.set_tag(feed.url, "if_embed", True) # pyright: ignore[reportArgumentType]
def add_custom_embed(reader: Reader, feed: Feed) -> None:
"""Add the custom embed tag to the feed if it doesn't exist.
Args:
reader: What Reader to use.
feed: The feed to add the tag to.
"""
try:
reader.get_tag(feed, "embed")
except TagNotFoundError:
reader.set_tag(feed.url, "embed", default_custom_embed) # pyright: ignore[reportArgumentType]
reader.set_tag(feed.url, "has_custom_embed", True) # pyright: ignore[reportArgumentType]
def add_has_custom_embed(reader: Reader, feed: Feed) -> None:
"""Add the has_custom_embed tag to the feed if it doesn't exist.
Args:
reader: What Reader to use.
feed: The feed to add the tag to.
"""
try:
reader.get_tag(feed, "has_custom_embed")
except TagNotFoundError:
if reader.get_tag(feed, "embed") == default_custom_embed:
reader.set_tag(feed.url, "has_custom_embed", False) # pyright: ignore[reportArgumentType]
else:
reader.set_tag(feed.url, "has_custom_embed", True) # pyright: ignore[reportArgumentType]
def add_should_send_embed(reader: Reader, feed: Feed) -> None:
"""Add the should_send_embed tag to the feed if it doesn't exist.
Args:
reader: What Reader to use.
feed: The feed to add the tag to.
"""
try:
reader.get_tag(feed, "should_send_embed")
except TagNotFoundError:
reader.set_tag(feed.url, "should_send_embed", True) # pyright: ignore[reportArgumentType]
def add_missing_tags(reader: Reader) -> None:
"""Add missing tags to feeds.
Args:
reader: What Reader to use.
"""
for feed in reader.get_feeds():
add_custom_message(reader, feed)
add_has_custom_message(reader, feed)
add_if_embed(reader, feed)
add_custom_embed(reader, feed)
add_has_custom_embed(reader, feed)
add_should_send_embed(reader, feed)
@@ -3,66 +3,78 @@ from __future__ import annotations
import urllib.parse
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from collections.abc import Iterable
from reader import EntrySearchResult
from reader import Feed
from reader import HighlightedString
from reader import Reader
def create_search_context(query: str, reader: Reader) -> dict:
"""Build context for search.html template.
Args:
query (str): The search query.
reader (Reader): Custom Reader instance.
Returns:
dict: Context dictionary for rendering the search results.
"""
search_results: Iterable[EntrySearchResult] = reader.search_entries(query)
results: list[dict] = []
for result in search_results:
feed: Feed = reader.get_feed(result.feed_url)
feed_url: str = urllib.parse.quote(feed.url)
# Prefer summary, fall back to content
if ".summary" in result.content:
highlighted = result.content[".summary"]
else:
content_keys = [k for k in result.content if k.startswith(".content")]
highlighted = result.content[content_keys[0]] if content_keys else None
summary: str = add_spans(highlighted) if highlighted else "(no preview available)"
results.append({
"title": add_spans(result.metadata.get(".title")),
"summary": summary,
"feed_url": feed_url,
})
return {
"query": query,
"search_amount": {"total": len(results)},
"results": results,
}
def add_span_with_slice(highlighted_string: HighlightedString) -> str: def add_spans(highlighted_string: HighlightedString | None) -> str:
"""Add span tags to the string to highlight the search results. """Wrap all highlighted parts with <span> tags.
Args: Args:
highlighted_string: The highlighted string. highlighted_string (HighlightedString | None): The highlighted string to process.
Returns: Returns:
str: The string with added <span> tags. str: The processed string with <span> tags around highlighted parts.
""" """
# TODO(TheLovinator): We are looping through the highlights and only using the last one. We should use all of them. if highlighted_string is None:
before_span, span_part, after_span = "", "", "" return ""
value: str = highlighted_string.value
parts: list[str] = []
last_index = 0
for txt_slice in highlighted_string.highlights: for txt_slice in highlighted_string.highlights:
before_span: str = f"{highlighted_string.value[: txt_slice.start]}" parts.extend((
span_part: str = f"<span class='bg-warning'>{highlighted_string.value[txt_slice.start : txt_slice.stop]}</span>" value[last_index : txt_slice.start],
after_span: str = f"{highlighted_string.value[txt_slice.stop :]}" f"<span class='bg-warning'>{value[txt_slice.start : txt_slice.stop]}</span>",
))
last_index = txt_slice.stop
return f"{before_span}{span_part}{after_span}" # add any trailing text
parts.append(value[last_index:])
return "".join(parts)
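The highlight-wrapping logic above can be exercised without the reader database. A minimal sketch, using a stand-in dataclass instead of `reader.HighlightedString` (an assumption: the real class also exposes `.value` and a list of `slice` objects in `.highlights`):

```python
from dataclasses import dataclass, field

@dataclass
class FakeHighlighted:
    """Stand-in for reader.HighlightedString: raw text plus highlight slices."""
    value: str
    highlights: list = field(default_factory=list)

def add_spans(hs) -> str:
    """Same logic as above: wrap every highlighted slice in a <span> tag."""
    if hs is None:
        return ""
    parts, last = [], 0
    for s in hs.highlights:
        parts.extend((
            hs.value[last:s.start],
            f"<span class='bg-warning'>{hs.value[s.start:s.stop]}</span>",
        ))
        last = s.stop
    parts.append(hs.value[last:])  # trailing text after the last highlight
    return "".join(parts)

# "discord" (characters 0..7) gets wrapped; the rest passes through untouched.
print(add_spans(FakeHighlighted("discord rss bot", [slice(0, 7)])))
```

Note the `None` guard mirrors `add_spans` returning `""` for entries without a title or summary, so the template never sees a `None`.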

View file

@@ -1,16 +1,23 @@
from __future__ import annotations

import os
import typing
from functools import lru_cache
from pathlib import Path

from platformdirs import user_data_dir
from reader import Reader
from reader import make_reader

if typing.TYPE_CHECKING:
    from reader.types import JSONType

data_dir: str = os.getenv("DISCORD_RSS_BOT_DATA_DIR", "").strip() or user_data_dir(
    appname="discord_rss_bot",
    appauthor="TheLovinator",
    roaming=True,
    ensure_exists=True,
)

# TODO(TheLovinator): Add default things to the database and make them editable.
@@ -24,7 +31,7 @@ default_custom_embed: dict[str, str] = {
}

@lru_cache(maxsize=1)
def get_reader(custom_location: Path | None = None) -> Reader:
    """Get the reader.
@@ -35,5 +42,13 @@ def get_reader(custom_location: Path | None = None) -> Reader:
        The reader.
    """
    db_location: Path = custom_location or Path(data_dir) / "db.sqlite"
    reader: Reader = make_reader(url=str(db_location))

    # https://reader.readthedocs.io/en/latest/api.html#reader.types.UpdateConfig
    # Set the default update interval to 15 minutes if not already configured.
    # Users can change this via the Settings page or per feed on the feed page.
    if reader.get_tag((), ".reader.update", None) is None:
        reader.set_tag((), ".reader.update", {"interval": 15})
    return reader
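The `os.getenv(...).strip() or default` pattern used for `data_dir` is worth isolating: the `.strip()` ensures a variable set to whitespace-only text still falls through to the platformdirs default. A small sketch (`resolve_data_dir` is a hypothetical helper name, not in the source):

```python
import os

def resolve_data_dir(default: str) -> str:
    """Return DISCORD_RSS_BOT_DATA_DIR if set and non-blank, else the default.

    Without .strip(), an env var set to "   " would be truthy and would
    override the default with unusable whitespace.
    """
    return os.getenv("DISCORD_RSS_BOT_DATA_DIR", "").strip() or default

os.environ["DISCORD_RSS_BOT_DATA_DIR"] = "   "
print(resolve_data_dir("/data/default"))  # blank-only value falls through to the default
```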

View file

@@ -13,3 +13,7 @@ body {
.form-text {
    color: #acabab;
}
.interval-input {
max-width: 120px;
}

View file

@@ -0,0 +1,73 @@
{% if preview_rows %}
<p class="small text-muted mb-1">
{{ preview_change_count }} feed URL{{ 's' if preview_change_count != 1 else '' }} ready to update.
</p>
<div class="small text-muted mb-2 d-flex flex-wrap gap-2">
<span class="badge bg-secondary">Total: {{ preview_summary.total }}</span>
<span class="badge bg-info text-dark">Matched: {{ preview_summary.matched }}</span>
<span class="badge bg-success">Will update: {{ preview_summary.will_update }}</span>
<span class="badge bg-warning text-dark">Conflicts: {{ preview_summary.conflicts }}</span>
<span class="badge bg-warning">Force overwrite: {{ preview_summary.force_overwrite }}</span>
<span class="badge bg-warning text-dark">Force ignore errors: {{ preview_summary.force_ignore_errors }}</span>
<span class="badge bg-danger">Resolve errors: {{ preview_summary.resolve_errors }}</span>
<span class="badge bg-secondary">No change: {{ preview_summary.no_change }}</span>
<span class="badge bg-secondary">No match: {{ preview_summary.no_match }}</span>
</div>
<form action="/bulk_change_feed_urls" method="post" class="mb-2">
<input type="hidden" name="webhook_url" value="{{ webhook_url }}" />
<input type="hidden" name="replace_from" value="{{ replace_from }}" />
<input type="hidden" name="replace_to" value="{{ replace_to }}" />
<input type="hidden"
name="resolve_urls"
value="{{ 'true' if resolve_urls else 'false' }}" />
<input type="hidden"
name="force_update"
value="{{ 'true' if force_update else 'false' }}" />
<button type="submit"
class="btn btn-warning w-100"
{% if preview_change_count == 0 %}disabled{% endif %}
onclick="return confirm('Apply these feed URL updates?');">Apply mass update</button>
</form>
<div class="table-responsive mt-2">
<table class="table table-sm table-dark table-striped align-middle mb-0">
<thead>
<tr>
<th scope="col">Old URL</th>
<th scope="col">New URL</th>
<th scope="col">Status</th>
</tr>
</thead>
<tbody>
{% for row in preview_rows %}
<tr>
<td>
<code>{{ row.old_url }}</code>
</td>
<td>
<code>{{ row.resolved_url if resolve_urls else row.candidate_url }}</code>
</td>
<td>
{% if not row.has_match %}
<span class="badge bg-secondary">No match</span>
{% elif row.will_force_ignore_errors %}
<span class="badge bg-warning text-dark">Will force update (ignore resolve error)</span>
{% elif row.resolution_error %}
<span class="badge bg-danger">{{ row.resolution_error }}</span>
{% elif row.will_force_overwrite %}
<span class="badge bg-warning">Will force overwrite</span>
{% elif row.target_exists %}
<span class="badge bg-warning text-dark">Conflict: target URL exists</span>
{% elif row.will_change %}
<span class="badge bg-success">Will update</span>
{% else %}
<span class="badge bg-secondary">No change</span>
{% endif %}
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% elif replace_from %}
<p class="small text-muted mb-0">No preview rows found for that replacement pattern.</p>
{% endif %}
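The preview table above is rendered from rows computed server-side. A sketch of how the old-to-new replacement and conflict detection might look; `build_preview_rows` and its flag names are assumptions chosen to match the template fields, not the project's actual function:

```python
def build_preview_rows(feed_urls: list[str], replace_from: str, replace_to: str) -> list[dict]:
    """For each feed URL, compute the candidate URL and status flags.

    Mirrors the statuses the template distinguishes: no match,
    conflict (target URL already exists), will update, or no change.
    """
    existing = set(feed_urls)
    rows = []
    for old_url in feed_urls:
        has_match = replace_from in old_url
        candidate = old_url.replace(replace_from, replace_to) if has_match else old_url
        changed = has_match and candidate != old_url
        rows.append({
            "old_url": old_url,
            "candidate_url": candidate,
            "has_match": has_match,
            # Conflict: the rewritten URL collides with an existing feed.
            "target_exists": changed and candidate in existing,
            "will_change": changed and candidate not in existing,
        })
    return rows

rows = build_preview_rows(
    ["http://a.example/feed", "https://a.example/feed", "https://b.example/feed"],
    "http://",
    "https://",
)
```

Here the first row is flagged as a conflict, since rewriting `http://a.example/feed` would collide with the existing `https://a.example/feed`.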

View file

@@ -1,6 +1,5 @@
<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8" />
        <meta name="viewport" content="width=device-width, initial-scale=1" />
@@ -18,7 +17,6 @@
        {% block head %}
        {% endblock head %}
    </head>
    <body class="text-white-50">
        {% include "nav.html" %}
        <div class="p-2 mb-2">
@@ -27,10 +25,12 @@
            {% if messages %}
                <div class="alert alert-warning alert-dismissible fade show" role="alert">
                    <pre>{{ messages }}</pre>
                    <button type="button"
                            class="btn-close"
                            data-bs-dismiss="alert"
                            aria-label="Close"></button>
                </div>
            {% endif %}
            {% block content %}
            {% endblock content %}
            <footer class="d-flex flex-wrap justify-content-between align-items-center py-3 my-4 border-top">
@@ -52,7 +52,9 @@
                </div>
            </div>
        </div>
        <script src="https://cdn.jsdelivr.net/npm/htmx.org@2.0.8/dist/htmx.min.js"
                integrity="sha384-/TgkGk7p307TH7EXJDuUlgG3Ce1UVolAOFopFekQkkXihi5u/6OCvVKyz1W+idaz"
                crossorigin="anonymous"></script>
        <script src="/static/bootstrap.min.js" defer></script>
    </body>
</html>

View file

@@ -42,6 +42,49 @@
<label for="blacklist_author" class="col-sm-6 col-form-label">Blacklist - Author</label>
<input name="blacklist_author" type="text" class="form-control bg-dark border-dark text-muted"
       id="blacklist_author" value="{%- if blacklist_author -%}{{ blacklist_author }}{%- endif -%}" />
<div class="mt-4">
<div class="form-text">
<ul class="list-inline">
<li>
Regular expression patterns for advanced filtering. Each pattern should be on a new
line.
</li>
<li>Patterns are case-insensitive.</li>
<li>
Examples:
<code>
<pre>
^New Release:.*
\b(update|version|patch)\s+\d+\.\d+
.*\[(important|notice)\].*
</pre>
</code>
</li>
</ul>
</div>
<label for="regex_blacklist_title" class="col-sm-6 col-form-label">Regex Blacklist - Title</label>
<textarea name="regex_blacklist_title" class="form-control bg-dark border-dark text-muted"
id="regex_blacklist_title"
rows="3">{%- if regex_blacklist_title -%}{{ regex_blacklist_title }}{%- endif -%}</textarea>
<label for="regex_blacklist_summary" class="col-sm-6 col-form-label">Regex Blacklist -
Summary</label>
<textarea name="regex_blacklist_summary" class="form-control bg-dark border-dark text-muted"
id="regex_blacklist_summary"
rows="3">{%- if regex_blacklist_summary -%}{{ regex_blacklist_summary }}{%- endif -%}</textarea>
<label for="regex_blacklist_content" class="col-sm-6 col-form-label">Regex Blacklist -
Content</label>
<textarea name="regex_blacklist_content" class="form-control bg-dark border-dark text-muted"
id="regex_blacklist_content"
rows="3">{%- if regex_blacklist_content -%}{{ regex_blacklist_content }}{%- endif -%}</textarea>
<label for="regex_blacklist_author" class="col-sm-6 col-form-label">Regex Blacklist - Author</label>
<textarea name="regex_blacklist_author" class="form-control bg-dark border-dark text-muted"
id="regex_blacklist_author"
rows="3">{%- if regex_blacklist_author -%}{{ regex_blacklist_author }}{%- endif -%}</textarea>
</div>
    </div>
</div>
<!-- Add a hidden feed_url field to the form -->

View file

@@ -14,90 +14,90 @@
<li>You can use \n to create a new line.</li>
<li>
    You can remove the embed from links by adding < and > around the link. (For example <
    {% raw %} {{entry_link}} {% endraw %}>)
</li>
<br />
<li>
    <code>
        {% raw %}
        {{feed_author}}
        {% endraw %}
    </code>{{ feed.author }}
</li>
<li>
    <code>
        {% raw %}
        {{feed_added}}
        {% endraw %}
    </code>{{ feed.added }}
</li>
<li>
    <code>
        {% raw %}
        {{feed_last_exception}}
        {% endraw %}
    </code>{{ feed.last_exception }}
</li>
<li>
    <code>
        {% raw %}
        {{feed_last_updated}}
        {% endraw %}
    </code>{{ feed.last_updated }}
</li>
<li>
    <code>
        {% raw %}
        {{feed_link}}
        {% endraw %}
    </code>{{ feed.link }}
</li>
<li>
    <code>
        {% raw %}
        {{feed_subtitle}}
        {% endraw %}
    </code>{{ feed.subtitle }}
</li>
<li>
    <code>
        {% raw %}
        {{feed_title}}
        {% endraw %}
    </code>{{ feed.title }}
</li>
<li>
    <code>
        {% raw %}
        {{feed_updated}}
        {% endraw %}
    </code>{{ feed.updated }}
</li>
<li>
    <code>
        {% raw %}
        {{feed_updates_enabled}}
        {% endraw %}
    </code>{{ feed.updates_enabled }}
</li>
<li>
    <code>
        {% raw %}
        {{feed_url}}
        {% endraw %}
    </code>{{ feed.url }}
</li>
<li>
    <code>
        {% raw %}
        {{feed_user_title}}
        {% endraw %}
    </code>{{ feed.user_title }}
</li>
<li>
    <code>
        {% raw %}
        {{feed_version}}
        {% endraw %}
    </code>{{ feed.version }}
</li>
@@ -106,14 +106,14 @@
<li>
    <code>
        {% raw %}
        {{entry_added}}
        {% endraw %}
    </code>{{ entry.added }}
</li>
<li>
    <code>
        {% raw %}
        {{entry_author}}
        {% endraw %}
    </code>{{ entry.author }}
</li>
@@ -121,14 +121,14 @@
<li>
    <code>
        {% raw %}
        {{entry_content}}
        {% endraw %}
    </code>{{ entry.content[0].value|discord_markdown }}
</li>
<li>
    <code>
        {% raw %}
        {{entry_content_raw}}
        {% endraw %}
    </code>{{ entry.content[0].value }}
</li>
@@ -136,42 +136,42 @@
<li>
    <code>
        {% raw %}
        {{entry_id}}
        {% endraw %}
    </code>{{ entry.id }}
</li>
<li>
    <code>
        {% raw %}
        {{entry_important}}
        {% endraw %}
    </code>{{ entry.important }}
</li>
<li>
    <code>
        {% raw %}
        {{entry_link}}
        {% endraw %}
    </code>{{ entry.link }}
</li>
<li>
    <code>
        {% raw %}
        {{entry_published}}
        {% endraw %}
    </code>{{ entry.published }}
</li>
<li>
    <code>
        {% raw %}
        {{entry_read}}
        {% endraw %}
    </code>{{ entry.read }}
</li>
<li>
    <code>
        {% raw %}
        {{entry_read_modified}}
        {% endraw %}
    </code>{{ entry.read_modified }}
</li>
@@ -179,14 +179,14 @@
<li>
    <code>
        {% raw %}
        {{entry_summary}}
        {% endraw %}
    </code>{{ entry.summary|discord_markdown }}
</li>
<li>
    <code>
        {% raw %}
        {{entry_summary_raw}}
        {% endraw %}
    </code>{{ entry.summary }}
</li>
@@ -194,21 +194,21 @@
<li>
    <code>
        {% raw %}
        {{entry_title}}
        {% endraw %}
    </code>{{ entry.title }}
</li>
<li>
    <code>
        {% raw %}
        {{entry_text}}
        {% endraw %}
    </code> Same as entry_content if it exists, otherwise entry_summary
</li>
<li>
    <code>
        {% raw %}
        {{entry_updated}}
        {% endraw %}
    </code>{{ entry.updated }}
</li>
@@ -216,7 +216,7 @@
<li>
    <code>
        {% raw %}
        {{image_1}}
        {% endraw %}
    </code>First image in the entry if it exists
</li>
@@ -226,7 +226,7 @@
<li>
    <code>
        {% raw %}
        {{feed_title}}\n{{entry_content}}
        {% endraw %}
    </code>
</li>

View file

@@ -1,84 +1,172 @@
{% extends "base.html" %}
{% block title %}
    | {{ feed.title }}
{% endblock title %}
{% block content %}
    <div class="card mb-3 border border-dark p-3 text-light">
        <!-- Feed Title -->
        <h2>
            <a class="text-muted" href="{{ feed.url }}">{{ feed.title }}</a> ({{ total_entries }} entries)
        </h2>
        {% if not feed.updates_enabled %}<span class="badge bg-danger">Disabled</span>{% endif %}
        {% if feed.last_exception %}
            <div class="mt-3">
                <h5 class="text-danger">{{ feed.last_exception.type_name }}:</h5>
                <code class="d-block">{{ feed.last_exception.value_str }}</code>
                <button class="btn btn-secondary btn-sm mt-2"
                        type="button"
                        data-bs-toggle="collapse"
                        data-bs-target="#exceptionDetails"
                        aria-expanded="false"
                        aria-controls="exceptionDetails">Show Traceback</button>
                <div class="collapse" id="exceptionDetails">
                    <pre><code>{{ feed.last_exception.traceback_str }}</code></pre>
                </div>
            </div>
        {% endif %}
        <!-- Feed Actions -->
        <div class="mt-3 d-flex flex-wrap gap-2">
            <a href="/update?feed_url={{ feed.url|encode_url }}" class="btn btn-primary btn-sm">Update</a>
            <form action="/remove" method="post" class="d-inline">
                <button class="btn btn-danger btn-sm"
                        name="feed_url"
                        value="{{ feed.url }}"
                        onclick="return confirm('Are you sure you want to delete this feed?')">Remove</button>
            </form>
            {% if not feed.updates_enabled %}
                <form action="/unpause" method="post" class="d-inline">
                    <button class="btn btn-secondary btn-sm" name="feed_url" value="{{ feed.url }}">Unpause</button>
                </form>
            {% else %}
                <form action="/pause" method="post" class="d-inline">
                    <button class="btn btn-danger btn-sm" name="feed_url" value="{{ feed.url }}">Pause</button>
                </form>
            {% endif %}
            {% if not "youtube.com/feeds/videos.xml" in feed.url %}
                {% if should_send_embed %}
                    <form action="/use_text" method="post" class="d-inline">
                        <button class="btn btn-dark btn-sm" name="feed_url" value="{{ feed.url }}">Send text message instead of embed</button>
                    </form>
                {% else %}
                    <form action="/use_embed" method="post" class="d-inline">
                        <button class="btn btn-dark btn-sm" name="feed_url" value="{{ feed.url }}">Send embed instead of text message</button>
                    </form>
                {% endif %}
            {% endif %}
        </div>
        <!-- Additional Links -->
        <div class="mt-3">
            <a class="text-muted d-block" href="/whitelist?feed_url={{ feed.url|encode_url }}">Whitelist</a>
            <a class="text-muted d-block" href="/blacklist?feed_url={{ feed.url|encode_url }}">Blacklist</a>
            <a class="text-muted d-block" href="/custom?feed_url={{ feed.url|encode_url }}">
                Customize message
                {% if not should_send_embed %}(Currently active){% endif %}
            </a>
            {% if not "youtube.com/feeds/videos.xml" in feed.url %}
                <a class="text-muted d-block" href="/embed?feed_url={{ feed.url|encode_url }}">
                    Customize embed
                    {% if should_send_embed %}(Currently active){% endif %}
                </a>
            {% endif %}
        </div>
        <!-- Feed URL Configuration -->
        <div class="mt-4 border-top border-secondary pt-3">
            <h5 class="mb-3">Feed URL</h5>
            <form action="/change_feed_url" method="post" class="mb-2">
                <input type="hidden" name="old_feed_url" value="{{ feed.url }}" />
                <div class="input-group input-group-sm mb-2">
                    <input type="url"
                           class="form-control form-control-sm"
                           name="new_feed_url"
                           value="{{ feed.url }}"
                           required />
                    <button class="btn btn-warning" type="submit">Update URL</button>
                </div>
            </form>
        </div>
        <!-- Feed Metadata -->
        <div class="mt-4 border-top border-secondary pt-3">
            <h5 class="mb-3">Feed Information</h5>
            <div class="row text-muted">
                <div class="col-md-6 mb-2">
                    <small><strong>Added:</strong> {{ feed.added | relative_time }}</small>
                </div>
                <div class="col-md-6 mb-2">
                    <small><strong>Last Updated:</strong> {{ feed.last_updated | relative_time }}</small>
                </div>
                <div class="col-md-6 mb-2">
                    <small><strong>Last Retrieved:</strong> {{ feed.last_retrieved | relative_time }}</small>
                </div>
                <div class="col-md-6 mb-2">
                    <small><strong>Next Update:</strong> {{ feed.update_after | relative_time }}</small>
                </div>
                <div class="col-md-6 mb-2">
                    <small><strong>Updates:</strong> <span class="badge {{ 'bg-success' if feed.updates_enabled else 'bg-danger' }}">{{ 'Enabled' if feed.updates_enabled else 'Disabled' }}</span></small>
                </div>
            </div>
        </div>
        <!-- Update Interval Configuration -->
        <div class="mt-4 border-top border-secondary pt-3">
            <h5 class="mb-3">
                Update Interval
                <span class="badge {% if feed_interval %}bg-info{% else %}bg-secondary{% endif %}">
                    {% if feed_interval %}
                        Custom
                    {% else %}
                        Using global default
                    {% endif %}
                </span>
            </h5>
            <div class="d-flex align-items-center gap-2 flex-wrap">
                <span class="text-muted">Current: <strong>
                    {% if feed_interval %}
                        {{ feed_interval }}
                        {% if feed_interval >= 60 %}({{ (feed_interval / 60) | round(1) }} hours){% endif %}
                    {% else %}
                        {{ global_interval }}
                        {% if global_interval >= 60 %}({{ (global_interval / 60) | round(1) }} hours){% endif %}
                    {% endif %}
                minutes</strong></span>
                <form action="/set_update_interval"
                      method="post"
                      class="d-inline-flex gap-2 align-items-center">
                    <input type="hidden" name="feed_url" value="{{ feed.url }}" />
                    <input type="number"
                           class="form-control form-control-sm interval-input"
                           style="width: 100px"
                           name="interval_minutes"
                           placeholder="Minutes"
                           min="1"
                           value="{{ feed_interval if feed_interval else global_interval }}"
                           required />
                    <button class="btn btn-primary btn-sm" type="submit">Set Interval</button>
                </form>
                {% if feed_interval %}
                    <form action="/reset_update_interval" method="post" class="d-inline">
                        <input type="hidden" name="feed_url" value="{{ feed.url }}" />
                        <button class="btn btn-secondary btn-sm" type="submit">Reset to Global Default</button>
                    </form>
                {% endif %}
            </div>
        </div>
    </div>
    {# Rendered HTML content #}
    <pre>{{ html|safe }}</pre>
    {% if is_show_more_entries_button_visible %}
        <a class="btn btn-dark mt-3"
           href="/feed?feed_url={{ feed.url|encode_url }}&starting_after={{ last_entry.id|encode_url }}">
            Show more entries
        </a>
    {% endif %}
{% endblock content %}

View file

@@ -1,7 +1,7 @@
{% extends "base.html" %}
{% block content %}
    <!-- List all feeds -->
    <ul>
        <!-- Check if any feeds -->
        {% if feeds %}
            <p>
@@ -28,41 +28,77 @@
                {{ entry_count.averages[2]|round(1) }})
                </abbr>
            </p>
            <!-- Loop through the webhooks and add the feeds grouped by domain -->
            {% for hook_from_context in webhooks %}
                <div class="p-2 mb-3 border border-dark">
                    <div class="d-flex justify-content-between align-items-center mb-3">
                        <h2 class="h5 mb-0">{{ hook_from_context.name }}</h2>
                        <a class="text-muted fs-6 btn btn-outline-light btn-sm ms-auto me-2"
                           href="/webhook_entries?webhook_url={{ hook_from_context.url|encode_url }}">Settings</a>
                        <a class="text-muted fs-6 btn btn-outline-light btn-sm"
                           href="/webhook_entries?webhook_url={{ hook_from_context.url|encode_url }}">View Latest Entries</a>
                    </div>
                    <!-- Group feeds by domain within each webhook -->
                    {% set feeds_for_hook = [] %}
                    {% for feed_webhook in feeds %}
                        {% if hook_from_context.url == feed_webhook.webhook %}
                            {% set _ = feeds_for_hook.append(feed_webhook) %}
                        {% endif %}
                    {% endfor %}
                    {% if feeds_for_hook %}
                        <!-- Create a dictionary to hold feeds grouped by domain -->
                        {% set domains = {} %}
                        {% for feed_item in feeds_for_hook %}
                            {% set feed = feed_item.feed %}
                            {% set domain = feed_item.domain %}
                            {% if domain not in domains %}
                                {% set _ = domains.update({domain: []}) %}
                            {% endif %}
                            {% set _ = domains[domain].append(feed) %}
                        {% endfor %}
                        <!-- Display domains and their feeds -->
                        {% for domain, domain_feeds in domains.items() %}
                            <div class="card bg-dark border border-dark mb-2">
                                <div class="card-header">
                                    <h3 class="h6 mb-0 text-white-50">{{ domain }} ({{ domain_feeds|length }})</h3>
                                </div>
                                <div class="card-body p-2">
                                    <ul class="list-group list-unstyled mb-0">
                                        {% for feed in domain_feeds %}
                                            <li>
                                                <a class="text-muted" href="/feed?feed_url={{ feed.url|encode_url }}">
                                                    {% if feed.title %}
                                                        {{ feed.title }}
                                                    {% else %}
                                                        {{ feed.url }}
                                                    {% endif %}
                                                </a>
                                                {% if not feed.updates_enabled %}<span class="text-warning">Disabled</span>{% endif %}
                                                {% if feed.last_exception %}<span class="text-danger">({{ feed.last_exception.value_str }})</span>{% endif %}
                                            </li>
                                        {% endfor %}
                                    </ul>
                                </div>
                            </div>
                        {% endfor %}
                    {% else %}
                        <p class="text-muted">No feeds associated with this webhook.</p>
                    {% endif %}
                </div>
            {% endfor %}
        {% else %}
            <p>
                Hello there!
                <br />
                <br />
                You need to add a webhook <a class="text-muted" href="/add_webhook">here</a> to get started. After that, you can
                add feeds <a class="text-muted" href="/add">here</a>. You can find both of these links in the navigation bar
                above.
                <br />
                <br />
                If you have any questions or suggestions, feel free to contact me on <a class="text-muted" href="mailto:tlovinator@gmail.com">tlovinator@gmail.com</a> or TheLovinator#9276 on Discord.
                <br />
                <br />
                Thanks!
            </p>
        {% endif %}
@@ -72,7 +108,21 @@
            <ul class="list-group text-danger">
                Feeds without webhook:
                {% for broken_feed in broken_feeds %}
                    <a class="text-muted" href="/feed?feed_url={{ broken_feed.url|encode_url }}">
                        {# Display username@youtube for YouTube feeds #}
                        {% if "youtube.com/feeds/videos.xml" in broken_feed.url %}
                            {% if "user=" in broken_feed.url %}
                                {{ broken_feed.url.split("user=")[1] }}@youtube
                            {% elif "channel_id=" in broken_feed.url %}
                                {{ broken_feed.title if broken_feed.title else broken_feed.url.split("channel_id=")[1] }}@youtube
                            {% else %}
                                {{ broken_feed.url }}
                            {% endif %}
                        {% else %}
                            {{ broken_feed.url }}
                        {% endif %}
                    </a>
                {% endfor %}
            </ul>
        </div>
@@ -83,10 +133,23 @@
            <ul class="list-group text-danger">
                Feeds without attached webhook:
                {% for feed in feeds_without_attached_webhook %}
                    <a class="text-muted" href="/feed?feed_url={{ feed.url|encode_url }}">
                        {# Display username@youtube for YouTube feeds #}
                        {% if "youtube.com/feeds/videos.xml" in feed.url %}
                            {% if "user=" in feed.url %}
                                {{ feed.url.split("user=")[1] }}@youtube
                            {% elif "channel_id=" in feed.url %}
                                {{ feed.title if feed.title else feed.url.split("channel_id=")[1] }}@youtube
                            {% else %}
                                {{ feed.url }}
                            {% endif %}
                        {% else %}
                            {{ feed.url }}
                        {% endif %}
                    </a>
                {% endfor %}
            </ul>
        </div>
    {% endif %}
    </ul>
{% endblock content %}
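The domain grouping done in Jinja above (collect feeds for one webhook, then bucket them by domain) can be sketched equivalently in Python. A hypothetical helper; feed items are assumed to be dicts with `feed`, `webhook`, and `domain` keys matching the template's fields:

```python
from collections import defaultdict

def group_feeds_by_domain(feeds: list[dict], webhook_url: str) -> dict[str, list]:
    """Collect the feeds attached to one webhook, keyed by their domain."""
    domains: dict[str, list] = defaultdict(list)
    for item in feeds:
        if item["webhook"] == webhook_url:
            domains[item["domain"]].append(item["feed"])
    return dict(domains)

feeds = [
    {"feed": "f1", "webhook": "w1", "domain": "example.com"},
    {"feed": "f2", "webhook": "w1", "domain": "example.com"},
    {"feed": "f3", "webhook": "w2", "domain": "other.com"},
]
print(group_feeds_by_domain(feeds, "w1"))
```

Doing this server-side and passing the grouped structure into the template would avoid the `{% set %}` mutation tricks the Jinja version relies on.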

View file

@@ -1,6 +1,9 @@
<nav class="navbar navbar-expand-md navbar-dark p-2 mb-3 border-bottom border-warning">
    <div class="container-fluid">
        <button class="navbar-toggler ms-auto"
                type="button"
                data-bs-toggle="collapse"
                data-bs-target="#collapseNavbar">
            <span class="navbar-toggler-icon"></span>
        </button>
        <div class="navbar-collapse collapse" id="collapseNavbar">
@@ -16,10 +19,28 @@
            <li class="nav-item">
                <a class="nav-link" href="/webhooks">Webhooks</a>
            </li>
            <li class="nav-item nav-link d-none d-md-block">|</li>
            <li class="nav-item">
                <a class="nav-link" href="/settings">Settings</a>
            </li>
            {% if get_backup_path() %}
                <li class="nav-item nav-link d-none d-md-block">|</li>
                <li class="nav-item">
                    <form action="/backup" method="post" class="d-inline">
                        <button type="submit"
                                class="nav-link btn btn-link text-decoration-none"
                                onclick="return confirm('Create a manual git backup of the current state?');">
                            Backup
                        </button>
                    </form>
                </li>
            {% endif %}
        </ul>
        {# Search #}
        <form action="/search" method="get" class="ms-auto w-50 input-group">
            <input name="query"
                   class="form-control bg-dark border-dark text-muted"
                   type="search"
                   placeholder="Search" />
        </form>
        {# Donate button #}

View file

@@ -1,10 +1,18 @@
{% extends "base.html" %} {% extends "base.html" %}
{% block title %} {% block title %}
| Search | Search
{% endblock title %} {% endblock title %}
{% block content %} {% block content %}
<div class="p-2 border border-dark text-muted"> <div class="p-2 border border-dark text-muted">
Your search for "{{- query -}}" returned {{- search_amount.total -}} results. Your search for "{{ query }}" returned {{ search_amount.total }} results.
</div> </div>
{{- search_html | safe -}} {% for result in results %}
<div class="p-2 mb-2 border border-dark">
<a class="text-muted text-decoration-none"
href="/feed?feed_url={{ result.feed_url }}">
<h2>{{ result.title|safe }}</h2>
</a>
<div class="text-muted">{{ result.summary|safe }}</div>
</div>
{% endfor %}
{% endblock content %} {% endblock content %}

View file

@@ -0,0 +1,122 @@
{% extends "base.html" %}
{% block title %}
| Settings
{% endblock title %}
{% block content %}
<section>
<div class="text-light">
<div class="d-flex flex-wrap justify-content-between align-items-center gap-2">
<h2 class="mb-0">Global Settings</h2>
</div>
<p class="text-muted mt-2 mb-4">
Set a default interval for all feeds. Individual feeds can still override this value.
</p>
<div class="mb-4">
<div>
Current default is {{ global_interval }} min.
Even though we check ETags and Last-Modified headers, choosing a very low interval may cause issues with some feeds or cause excessive load on the server hosting the feed. Remember to be kind.
</div>
</div>
</div>
<form action="/set_global_update_interval" method="post" class="mb-2">
<div class="settings-form-row mb-2">
<label for="interval_minutes" class="form-label mb-1">Default interval (minutes)</label>
<div class="input-group input-group-lg">
<input id="interval_minutes"
type="number"
class="form-control settings-input"
name="interval_minutes"
placeholder="Minutes"
min="1"
value="{{ global_interval }}"
required />
<button class="btn btn-primary px-4" type="submit">Save</button>
</div>
</div>
</form>
</section>
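The settings text above mentions that the bot already checks ETags and Last-Modified headers between updates (the `reader` library handles this for the bot). A minimal stdlib sketch of the underlying HTTP conditional-request idea — the helper names here are illustrative, not this repository's API:

```python
from __future__ import annotations

import urllib.error
import urllib.request


def conditional_headers(etag: str | None, last_modified: str | None) -> dict[str, str]:
    """Build conditional-request headers for a feed we have fetched before."""
    headers: dict[str, str] = {}
    if etag:
        headers["If-None-Match"] = etag
    if last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers


def fetch_if_changed(url: str, etag: str | None = None, last_modified: str | None = None) -> bytes | None:
    """Return the feed body, or None when the server answers 304 Not Modified."""
    request = urllib.request.Request(url, headers=conditional_headers(etag, last_modified))
    try:
        with urllib.request.urlopen(request) as response:
            return response.read()
    except urllib.error.HTTPError as exc:
        if exc.code == 304:  # Unchanged since the last fetch: nothing to download.
            return None
        raise
```

A 304 response carries no body, so even a very short interval only costs a small request — but the server still has to answer it, which is why the settings page asks you to be kind.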
<section class="mt-5">
<div class="text-light">
<div class="d-flex flex-wrap justify-content-between align-items-center gap-2">
<h2 class="mb-0">Feed Update Intervals</h2>
</div>
<p class="text-muted mt-2 mb-4">
Customize the update interval for individual feeds. Leave empty or reset to use the global default.
</p>
</div>
{% if feed_intervals %}
<div class="table-responsive">
<table class="table table-dark table-hover">
<thead>
<tr>
<th>Feed</th>
<th>Domain</th>
<th>Status</th>
<th>Interval</th>
<th>Last Updated</th>
<th>Next Update</th>
<th>Set Interval (min)</th>
<th>Actions</th>
</tr>
</thead>
<tbody>
{% for item in feed_intervals %}
<tr>
<td>
<a href="/feed?feed_url={{ item.feed.url|encode_url }}"
class="text-light text-decoration-none">{{ item.feed.title }}</a>
</td>
<td>
<span class="text-muted small">{{ item.domain }}</span>
</td>
<td>
<span class="badge {{ 'bg-success' if item.feed.updates_enabled else 'bg-danger' }}">
{{ 'Enabled' if item.feed.updates_enabled else 'Disabled' }}
</span>
</td>
<td>
<span>{{ item.effective_interval }} min</span>
{% if item.interval %}
<span class="badge bg-info ms-1">Custom</span>
{% else %}
<span class="badge bg-secondary ms-1">Global</span>
{% endif %}
</td>
<td>
<small class="text-muted">{{ item.feed.last_updated | relative_time }}</small>
</td>
<td>
<small class="text-muted">{{ item.feed.update_after | relative_time }}</small>
</td>
<td>
<form action="/set_update_interval" method="post" class="d-flex gap-2">
<input type="hidden" name="feed_url" value="{{ item.feed.url }}" />
<input type="hidden" name="redirect_to" value="/settings" />
<input type="number"
class="form-control form-control-sm interval-input"
name="interval_minutes"
placeholder="Minutes"
min="1"
value="{{ item.interval if item.interval else global_interval }}" />
<button class="btn btn-primary btn-sm" type="submit">Set</button>
</form>
</td>
<td>
{% if item.interval %}
<form action="/reset_update_interval" method="post" class="d-inline">
<input type="hidden" name="feed_url" value="{{ item.feed.url }}" />
<input type="hidden" name="redirect_to" value="/settings" />
<button class="btn btn-outline-secondary btn-sm" type="submit">Reset</button>
</form>
{% endif %}
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
{% else %}
<p class="text-muted">No feeds added yet.</p>
{% endif %}
</section>
{% endblock content %}
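The Domain column in the intervals table groups feeds by site, and the pyproject.toml change below adds `tldextract`, which resolves registered domains via the Public Suffix List. A naive stdlib sketch of the idea — `feed_domain` is an illustrative helper, not code from this repository:

```python
from __future__ import annotations

from urllib.parse import urlparse


def feed_domain(feed_url: str) -> str:
    """Naive registered-domain guess: the last two labels of the hostname.

    tldextract, the dependency this change adds, does this properly and
    also handles multi-part suffixes such as .co.uk.
    """
    host = urlparse(feed_url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host
```

For example, `feed_domain("https://www.youtube.com/feeds/videos.xml?user=x")` yields `youtube.com`, so every YouTube feed lands in one group regardless of subdomain.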

View file

@@ -0,0 +1,167 @@
{% extends "base.html" %}
{% block title %}
| {{ webhook_name }}
{% endblock title %}
{% block content %}
{% if message %}<div class="alert alert-info" role="alert">{{ message }}</div>{% endif %}
<div class="card mb-3 border border-dark p-3 text-light">
<div class="d-flex flex-column flex-md-row justify-content-between gap-3">
<div>
<h2 class="mb-2">{{ webhook_name }}</h2>
<p class="text-muted mb-1">
{{ total_entries }} total from {{ feeds_count }} feed{{ 's' if feeds_count != 1 else '' }}
</p>
<p class="text-muted mb-0">
<code>{{ webhook_url }}</code>
</p>
</div>
<div class="d-flex gap-2 align-items-start">
<a class="btn btn-outline-light btn-sm" href="/">Back to dashboard</a>
<a class="btn btn-outline-info btn-sm" href="/webhooks">All webhooks</a>
</div>
</div>
</div>
<div class="row g-3 mb-3">
<div class="col-lg-5">
<div class="card border border-dark p-3 text-light h-100">
<h3 class="h5">Settings</h3>
<ul class="list-unstyled text-muted mb-3">
<li>
<strong>Custom name:</strong> {{ hook_info.custom_name }}
</li>
<li>
<strong>Discord name:</strong> {{ hook_info.name or 'Unavailable' }}
</li>
<li>
<strong>Webhook:</strong>
<a class="text-muted" href="{{ hook_info.url }}">{{ hook_info.url | replace('https://discord.com/api/webhooks', '') }}</a>
</li>
</ul>
<form action="/modify_webhook" method="post" class="row g-3 mb-3">
<input type="hidden" name="old_hook" value="{{ webhook_url }}" />
<input type="hidden"
name="redirect_to"
value="/webhook_entries?webhook_url={{ webhook_url|encode_url }}" />
<div class="col-12">
<label for="new_hook" class="form-label">Modify Webhook</label>
<input type="text"
name="new_hook"
id="new_hook"
class="form-control border text-muted bg-dark"
placeholder="Enter new webhook URL" />
</div>
<div class="col-12">
<button type="submit" class="btn btn-primary w-100">Save Webhook URL</button>
</div>
</form>
<form action="/delete_webhook" method="post">
<input type="hidden" name="webhook_url" value="{{ webhook_url }}" />
<button type="submit"
class="btn btn-danger w-100"
onclick="return confirm('Are you sure you want to delete this webhook?');">
Delete Webhook
</button>
</form>
<hr class="border-secondary my-3" />
<h3 class="h6">Mass update feed URLs</h3>
<p class="text-muted small mb-2">Replace part of feed URLs for all feeds attached to this webhook.</p>
<form action="/webhook_entries"
method="get"
class="row g-2 mb-2"
hx-get="/webhook_entries_mass_update_preview"
hx-target="#mass-update-preview"
hx-swap="innerHTML">
<input type="hidden" name="webhook_url" value="{{ webhook_url|encode_url }}" />
<div class="col-12">
<label for="replace_from" class="form-label small">Replace this</label>
<input type="text"
name="replace_from"
id="replace_from"
class="form-control border text-muted bg-dark"
value="{{ replace_from }}"
placeholder="https://old-domain.example" />
</div>
<div class="col-12">
<label for="replace_to" class="form-label small">With this</label>
<input type="text"
name="replace_to"
id="replace_to"
class="form-control border text-muted bg-dark"
value="{{ replace_to }}"
placeholder="https://new-domain.example" />
</div>
<div class="col-12 form-check ms-1">
<input class="form-check-input"
type="checkbox"
value="true"
id="resolve_urls"
name="resolve_urls"
{% if resolve_urls %}checked{% endif %} />
<label class="form-check-label small" for="resolve_urls">Resolve final URL with redirects</label>
</div>
<div class="col-12 form-check ms-1">
<input class="form-check-input"
type="checkbox"
value="true"
id="force_update"
name="force_update"
{% if force_update %}checked{% endif %} />
<label class="form-check-label small" for="force_update">Force update (overwrite conflicting target feed URLs)</label>
</div>
<div class="col-12">
<button type="submit" class="btn btn-outline-warning w-100">Preview changes</button>
</div>
</form>
<div id="mass-update-preview">{% include "_webhook_mass_update_preview.html" %}</div>
</div>
</div>
<div class="col-lg-7">
<div class="card border border-dark p-3 text-light h-100">
<h3 class="h5">Attached feeds</h3>
{% if webhook_feeds %}
<ul class="list-group list-unstyled mb-0">
{% for feed in webhook_feeds %}
<li class="mb-2">
<a class="text-muted" href="/feed?feed_url={{ feed.url|encode_url }}">
{% if feed.title %}
{{ feed.title }}
{% else %}
{{ feed.url }}
{% endif %}
</a>
{% if feed.title %}<span class="text-muted">- {{ feed.url }}</span>{% endif %}
{% if not feed.updates_enabled %}<span class="text-warning">Disabled</span>{% endif %}
{% if feed.last_exception %}<span class="text-danger">({{ feed.last_exception.value_str }})</span>{% endif %}
</li>
{% endfor %}
</ul>
{% else %}
<p class="text-muted mb-0">No feeds are attached to this webhook yet.</p>
{% endif %}
</div>
</div>
</div>
{# Rendered HTML content #}
{% if entries %}
<h3 class="h5 text-light">Latest entries</h3>
<pre>{{ html|safe }}</pre>
{% if is_show_more_entries_button_visible and last_entry %}
<a class="btn btn-dark mt-3"
href="/webhook_entries?webhook_url={{ webhook_url|encode_url }}&starting_after={{ last_entry.feed.url|encode_url }}|{{ last_entry.id|encode_url }}">
Show more entries
</a>
{% endif %}
{% elif feeds_count == 0 %}
<div>
<p>
No feeds found for {{ webhook_name }}. <a href="/add" class="alert-link">Add feeds</a> to this webhook to see entries here.
</p>
</div>
{% else %}
<div>
<p>
No entries found for {{ webhook_name }}. <a href="/settings" class="alert-link">Update feeds</a> to fetch new entries.
</p>
</div>
{% endif %}
{% endblock content %}
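The mass-update form above previews a substring replacement across every feed URL attached to the webhook before anything is written. A sketch of what such a preview could compute — the function name is hypothetical, and redirect resolution (`resolve_urls`) and conflict overwriting (`force_update`) are left out:

```python
from __future__ import annotations


def preview_mass_update(feed_urls: list[str], replace_from: str, replace_to: str) -> list[tuple[str, str]]:
    """Return (old_url, new_url) pairs for feeds the replacement would change."""
    if not replace_from:
        return []  # An empty needle would "match" everywhere; treat it as no-op.
    changes: list[tuple[str, str]] = []
    for url in feed_urls:
        new_url = url.replace(replace_from, replace_to)
        if new_url != url:
            changes.append((url, new_url))
    return changes
```

Only the changed pairs are shown, which is what makes a dry-run preview safe: feeds that the pattern does not touch never appear, and `force_update` would then decide what happens when a `new_url` collides with a feed that already exists.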

View file

@@ -1,9 +1,9 @@
{% extends "base.html" %} {% extends "base.html" %}
{% block title %} {% block title %}
| Webhooks | Webhooks
{% endblock title %} {% endblock title %}
{% block content %} {% block content %}
<div class="container my-4 text-light"> <div class="container my-4 text-light">
{% for hook in hooks_with_data %} {% for hook in hooks_with_data %}
<div class="border border-dark mb-4 shadow-sm p-3"> <div class="border border-dark mb-4 shadow-sm p-3">
<div class="text-muted"> <div class="text-muted">
@@ -17,29 +17,37 @@
</li> </li>
<li> <li>
<strong>Webhook:</strong> <strong>Webhook:</strong>
<a class="text-muted" <a class="text-muted" href="{{ hook.url }}">{{ hook.url | replace('https://discord.com/api/webhooks', '') }}</a>
href="{{ hook.url }}">{{ hook.url | replace("https://discord.com/api/webhooks", "") }}</a>
</li> </li>
</ul> </ul>
<hr> <hr />
<form action="/modify_webhook" method="post" class="row g-3"> <form action="/modify_webhook" method="post" class="row g-3">
<input type="hidden" name="old_hook" value="{{ hook.url }}" /> <input type="hidden" name="old_hook" value="{{ hook.url }}" />
<div class="col-md-8"> <div class="col-md-8">
<label for="new_hook" class="form-label">Modify Webhook</label> <label for="new_hook" class="form-label">Modify Webhook</label>
<input type="text" name="new_hook" id="new_hook" class="form-control border text-muted bg-dark" <input type="text"
name="new_hook"
id="new_hook"
class="form-control border text-muted bg-dark"
placeholder="Enter new webhook URL" /> placeholder="Enter new webhook URL" />
</div> </div>
<div class="col-md-4 d-flex align-items-end"> <div class="col-md-4 d-flex align-items-end">
<button type="submit" class="btn btn-primary w-100">Modify</button> <button type="submit" class="btn btn-primary w-100">Modify</button>
</div> </div>
</form> </form>
</div> </div>
<div class="d-flex justify-content-between mt-2"> <div class="d-flex justify-content-between mt-2 gap-2">
<div>
<a href="/webhook_entries?webhook_url={{ hook.url|encode_url }}"
class="btn btn-info btn-sm">View Latest Entries</a>
</div>
<form action="/delete_webhook" method="post"> <form action="/delete_webhook" method="post">
<input type="hidden" name="webhook_url" value="{{ hook.url }}" /> <input type="hidden" name="webhook_url" value="{{ hook.url }}" />
<button type="submit" class="btn btn-danger" <button type="submit"
onclick="return confirm('Are you sure you want to delete this webhook?');">Delete</button> class="btn btn-danger"
onclick="return confirm('Are you sure you want to delete this webhook?');">
Delete
</button>
</form> </form>
</div> </div>
</div> </div>
@@ -47,9 +55,9 @@
<div class="border border-dark p-3"> <div class="border border-dark p-3">
You can append <code>?thread_id=THREAD_ID</code> to the URL to send messages to a thread. You can append <code>?thread_id=THREAD_ID</code> to the URL to send messages to a thread.
</div> </div>
<br> <br />
<div class="text-end"> <div class="text-end">
<a class="btn btn-primary mb-3" href="/add_webhook">Add New Webhook</a> <a class="btn btn-primary mb-3" href="/add_webhook">Add New Webhook</a>
</div> </div>
</div> </div>
{% endblock content %} {% endblock content %}

View file

@@ -1,6 +1,6 @@
{% extends "base.html" %} {% extends "base.html" %}
{% block title %} {% block title %}
| Blacklist | Whitelist
{% endblock title %} {% endblock title %}
{% block content %} {% block content %}
<div class="p-2 border border-dark"> <div class="p-2 border border-dark">
@@ -42,6 +42,49 @@
<label for="whitelist_author" class="col-sm-6 col-form-label">Whitelist - Author</label> <label for="whitelist_author" class="col-sm-6 col-form-label">Whitelist - Author</label>
<input name="whitelist_author" type="text" class="form-control bg-dark border-dark text-muted" <input name="whitelist_author" type="text" class="form-control bg-dark border-dark text-muted"
id="whitelist_author" value="{%- if whitelist_author -%} {{ whitelist_author }} {%- endif -%}" /> id="whitelist_author" value="{%- if whitelist_author -%} {{ whitelist_author }} {%- endif -%}" />
<div class="mt-4">
<div class="form-text">
<ul class="list-inline">
<li>
Regular expression patterns for advanced filtering. Each pattern should be on a new
line.
</li>
<li>Patterns are case-insensitive.</li>
<li>
Examples:
<code>
<pre>
^New Release:.*
\b(update|version|patch)\s+\d+\.\d+
.*\[(important|notice)\].*
</pre>
</code>
</li>
</ul>
</div>
<label for="regex_whitelist_title" class="col-sm-6 col-form-label">Regex Whitelist - Title</label>
<textarea name="regex_whitelist_title" class="form-control bg-dark border-dark text-muted"
id="regex_whitelist_title"
rows="3">{%- if regex_whitelist_title -%}{{ regex_whitelist_title }}{%- endif -%}</textarea>
<label for="regex_whitelist_summary" class="col-sm-6 col-form-label">Regex Whitelist -
Summary</label>
<textarea name="regex_whitelist_summary" class="form-control bg-dark border-dark text-muted"
id="regex_whitelist_summary"
rows="3">{%- if regex_whitelist_summary -%}{{ regex_whitelist_summary }}{%- endif -%}</textarea>
<label for="regex_whitelist_content" class="col-sm-6 col-form-label">Regex Whitelist -
Content</label>
<textarea name="regex_whitelist_content" class="form-control bg-dark border-dark text-muted"
id="regex_whitelist_content"
rows="3">{%- if regex_whitelist_content -%}{{ regex_whitelist_content }}{%- endif -%}</textarea>
<label for="regex_whitelist_author" class="col-sm-6 col-form-label">Regex Whitelist - Author</label>
<textarea name="regex_whitelist_author" class="form-control bg-dark border-dark text-muted"
id="regex_whitelist_author"
rows="3">{%- if regex_whitelist_author -%}{{ regex_whitelist_author }}{%- endif -%}</textarea>
</div>
</div> </div>
</div> </div>
<!-- Add a hidden feed_url field to the form --> <!-- Add a hidden feed_url field to the form -->
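The help text in the template above describes the regex-field contract: one pattern per line, case-insensitive matching, and (per the tests later in this diff) comma-separated patterns and invalid regexes tolerated. A sketch of how those fields could be evaluated — the helper name is illustrative, not this repository's exact function:

```python
from __future__ import annotations

import re


def matches_any_pattern(text: str, raw_patterns: str) -> bool:
    """Return True if any non-empty pattern in `raw_patterns` matches `text`.

    Patterns are split on newlines and commas, matched case-insensitively,
    and a pattern that fails to compile is skipped instead of raising.
    """
    patterns = [p.strip() for chunk in raw_patterns.splitlines() for p in chunk.split(",")]
    for pattern in (p for p in patterns if p):
        try:
            if re.search(pattern, text, re.IGNORECASE):
                return True
        except re.error:
            continue  # Ignore invalid regex instead of failing the whole feed.
    return False
```

Swallowing `re.error` is the design choice the tests below pin down: a user typo like `[incomplete` should quietly match nothing rather than break entry delivery.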

View file

@@ -10,7 +10,7 @@ services:
# - /Docker/Bots/discord-rss-bot:/home/botuser/.local/share/discord_rss_bot/ # - /Docker/Bots/discord-rss-bot:/home/botuser/.local/share/discord_rss_bot/
- data:/home/botuser/.local/share/discord_rss_bot/ - data:/home/botuser/.local/share/discord_rss_bot/
healthcheck: healthcheck:
test: ["CMD", "python", "discord_rss_bot/healthcheck.py"] test: [ "CMD", "uv", "run", "./discord_rss_bot/healthcheck.py" ]
interval: 1m interval: 1m
timeout: 10s timeout: 10s
retries: 3 retries: 3

View file

@@ -5,7 +5,7 @@ description = "RSS bot for Discord"
readme = "README.md" readme = "README.md"
requires-python = ">=3.12" requires-python = ">=3.12"
dependencies = [ dependencies = [
"apscheduler", "apscheduler>=3.11.0",
"discord-webhook", "discord-webhook",
"fastapi", "fastapi",
"httpx", "httpx",
@@ -17,54 +17,30 @@ dependencies = [
"python-multipart", "python-multipart",
"reader", "reader",
"sentry-sdk[fastapi]", "sentry-sdk[fastapi]",
"tldextract",
"uvicorn", "uvicorn",
] ]
[dependency-groups] [dependency-groups]
dev = ["pytest"] dev = ["djlint", "pytest", "pytest-randomly", "pytest-xdist"]
[tool.poetry]
name = "discord-rss-bot"
version = "1.0.0"
description = "RSS bot for Discord"
authors = ["Joakim Hellsén <tlovinator@gmail.com>"]
[tool.poetry.dependencies]
python = "^3.12"
apscheduler = "*"
discord-webhook = "*"
fastapi = "*"
httpx = "*"
jinja2 = "*"
lxml = "*"
markdownify = "*"
platformdirs = "*"
python-dotenv = "*"
python-multipart = "*"
reader = "*"
sentry-sdk = {version = "*", extras = ["fastapi"]}
uvicorn = "*"
[tool.poetry.group.dev.dependencies]
pytest = "*"
[build-system] [build-system]
requires = ["poetry-core>=1.0.0"] requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api" build-backend = "poetry.core.masonry.api"
[tool.djlint]
ignore = "D004,D018,J018,T001,J004"
profile = "jinja"
max_line_length = 120
format_attribute_template_tags = true
[tool.ruff] [tool.ruff]
preview = true preview = true
unsafe-fixes = true
fix = true
line-length = 120 line-length = 120
lint.select = ["ALL"] lint.select = ["ALL"]
lint.unfixable = ["F841"] # Don't automatically remove unused variables
lint.pydocstyle.convention = "google" lint.pydocstyle.convention = "google"
lint.isort.required-imports = ["from __future__ import annotations"] lint.isort.required-imports = ["from __future__ import annotations"]
lint.pycodestyle.ignore-overlong-task-comments = true lint.isort.force-single-line = true
lint.ignore = [ lint.ignore = [
"ANN201", # Checks that public functions and methods have return type annotations. "ANN201", # Checks that public functions and methods have return type annotations.
@@ -86,6 +62,8 @@ lint.ignore = [
"PLR6301", # Checks for the presence of unused self parameter in methods definitions. "PLR6301", # Checks for the presence of unused self parameter in methods definitions.
"RUF029", # Checks for functions declared async that do not await or otherwise use features requiring the function to be declared async. "RUF029", # Checks for functions declared async that do not await or otherwise use features requiring the function to be declared async.
"TD003", # Checks that a TODO comment is associated with a link to a relevant issue or ticket. "TD003", # Checks that a TODO comment is associated with a link to a relevant issue or ticket.
"PLR0913", # Checks for function definitions that include too many arguments.
"PLR0917", # Checks for function definitions that include too many positional arguments.
# Conflicting lint rules when using Ruff's formatter # Conflicting lint rules when using Ruff's formatter
# https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules # https://docs.astral.sh/ruff/formatter/#conflicting-lint-rules
@@ -108,15 +86,8 @@ lint.ignore = [
[tool.ruff.lint.per-file-ignores] [tool.ruff.lint.per-file-ignores]
"tests/*" = ["S101", "D103", "PLR2004"] "tests/*" = ["S101", "D103", "PLR2004"]
[tool.ruff.lint.mccabe]
max-complexity = 15 # Don't judge lol
[tool.pytest.ini_options] [tool.pytest.ini_options]
python_files = ["test_*.py"] addopts = "-n 5 --dist loadfile"
log_cli = true
log_cli_level = "DEBUG"
log_cli_format = "%(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)"
log_cli_date_format = "%Y-%m-%d %H:%M:%S"
filterwarnings = [ filterwarnings = [
"ignore::bs4.GuessedAtParserWarning", "ignore::bs4.GuessedAtParserWarning",
"ignore:functools\\.partial will be a method descriptor in future Python versions; wrap it in staticmethod\\(\\) if you want to preserve the old behavior:FutureWarning", "ignore:functools\\.partial will be a method descriptor in future Python versions; wrap it in staticmethod\\(\\) if you want to preserve the old behavior:FutureWarning",

View file

@@ -1,13 +0,0 @@
apscheduler
discord-webhook
fastapi
httpx
jinja2
lxml
markdownify
platformdirs
python-dotenv
python-multipart
reader
sentry-sdk[fastapi]
uvicorn

68
tests/conftest.py Normal file
View file

@@ -0,0 +1,68 @@
from __future__ import annotations
import os
import shutil
import sys
import tempfile
import warnings
from contextlib import suppress
from pathlib import Path
from typing import TYPE_CHECKING
from typing import Any
from bs4 import MarkupResemblesLocatorWarning
if TYPE_CHECKING:
import pytest
def pytest_addoption(parser: pytest.Parser) -> None:
"""Register custom command-line options for optional integration tests."""
parser.addoption(
"--run-real-git-backup-tests",
action="store_true",
default=False,
help="Run tests that push git backup state to a real repository.",
)
def pytest_sessionstart(session: pytest.Session) -> None:
"""Isolate persistent app state per xdist worker to avoid cross-worker test interference."""
worker_id: str = os.environ.get("PYTEST_XDIST_WORKER", "gw0")
worker_data_dir: Path = Path(tempfile.gettempdir()) / "discord-rss-bot-tests" / worker_id
# Start each worker from a clean state.
shutil.rmtree(worker_data_dir, ignore_errors=True)
worker_data_dir.mkdir(parents=True, exist_ok=True)
os.environ["DISCORD_RSS_BOT_DATA_DIR"] = str(worker_data_dir)
# Tests call markdownify which may invoke BeautifulSoup on strings that look
# like URLs; that triggers MarkupResemblesLocatorWarning from bs4. Silence
# that warning during tests to avoid noisy output.
warnings.filterwarnings("ignore", category=MarkupResemblesLocatorWarning)
# If modules were imported before this hook (unlikely), force them to use
# the worker-specific location.
settings_module: Any = sys.modules.get("discord_rss_bot.settings")
if settings_module is not None:
settings_module.data_dir = str(worker_data_dir)
get_reader: Any = getattr(settings_module, "get_reader", None)
if get_reader is not None and hasattr(get_reader, "cache_clear"):
get_reader.cache_clear()
main_module: Any = sys.modules.get("discord_rss_bot.main")
if main_module is not None and settings_module is not None:
with suppress(Exception):
current_reader = getattr(main_module, "reader", None)
if current_reader is not None:
current_reader.close()
get_reader: Any = getattr(settings_module, "get_reader", None)
if callable(get_reader):
get_reader()
def pytest_collection_modifyitems(config: pytest.Config, items: list[pytest.Item]) -> None:
"""Skip real git-repo push tests unless explicitly requested."""
if config.getoption("--run-real-git-backup-tests"):
return
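The collection hook is cut off in this view after the early `return`; the usual completion of this opt-in pattern is to attach a skip marker to the matching test items. A generic sketch with an assumed `real_git_backup` marker name (not necessarily what the repository uses):

```python
from __future__ import annotations

import pytest


def skip_real_git_backup_items(items: list, run_real: bool) -> int:
    """Attach a skip marker to opt-in integration tests; return how many were skipped."""
    if run_real:
        return 0  # Flag given on the command line: run everything as-is.
    skip = pytest.mark.skip(reason="need --run-real-git-backup-tests option to run")
    skipped = 0
    for item in items:
        # item.keywords contains the markers applied to each collected test.
        if "real_git_backup" in getattr(item, "keywords", {}):
            item.add_marker(skip)
            skipped += 1
    return skipped
```

Skipping (rather than deselecting) keeps the tests visible in the report as `s`, so it is obvious the real-repository push tests exist but were not exercised.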

View file

@@ -4,9 +4,13 @@ import tempfile
from pathlib import Path from pathlib import Path
from typing import TYPE_CHECKING from typing import TYPE_CHECKING
from reader import Entry, Feed, Reader, make_reader from reader import Entry
from reader import Feed
from reader import Reader
from reader import make_reader
from discord_rss_bot.filter.blacklist import entry_should_be_skipped, feed_has_blacklist_tags from discord_rss_bot.filter.blacklist import entry_should_be_skipped
from discord_rss_bot.filter.blacklist import feed_has_blacklist_tags
if TYPE_CHECKING: if TYPE_CHECKING:
from collections.abc import Iterable from collections.abc import Iterable
@@ -34,11 +38,18 @@ def test_has_black_tags() -> None:
# Test feed without any blacklist tags # Test feed without any blacklist tags
assert_msg: str = "Feed should not have any blacklist tags" assert_msg: str = "Feed should not have any blacklist tags"
assert feed_has_blacklist_tags(custom_reader=get_reader(), feed=feed) is False, assert_msg assert feed_has_blacklist_tags(reader=get_reader(), feed=feed) is False, assert_msg
check_if_has_tag(reader, feed, "blacklist_title") check_if_has_tag(reader, feed, "blacklist_title")
check_if_has_tag(reader, feed, "blacklist_summary") check_if_has_tag(reader, feed, "blacklist_summary")
check_if_has_tag(reader, feed, "blacklist_content") check_if_has_tag(reader, feed, "blacklist_content")
check_if_has_tag(reader, feed, "blacklist_author")
# Test regex blacklist tags
check_if_has_tag(reader, feed, "regex_blacklist_title")
check_if_has_tag(reader, feed, "regex_blacklist_summary")
check_if_has_tag(reader, feed, "regex_blacklist_content")
check_if_has_tag(reader, feed, "regex_blacklist_author")
# Clean up # Clean up
reader.delete_feed(feed_url) reader.delete_feed(feed_url)
@@ -47,11 +58,11 @@ def test_has_black_tags() -> None:
def check_if_has_tag(reader: Reader, feed: Feed, blacklist_name: str) -> None: def check_if_has_tag(reader: Reader, feed: Feed, blacklist_name: str) -> None:
reader.set_tag(feed, blacklist_name, "a") # pyright: ignore[reportArgumentType] reader.set_tag(feed, blacklist_name, "a") # pyright: ignore[reportArgumentType]
assert_msg: str = f"Feed should have blacklist tags: {blacklist_name}" assert_msg: str = f"Feed should have blacklist tags: {blacklist_name}"
assert feed_has_blacklist_tags(custom_reader=reader, feed=feed) is True, assert_msg assert feed_has_blacklist_tags(reader=reader, feed=feed) is True, assert_msg
asset_msg: str = f"Feed should not have any blacklist tags: {blacklist_name}" asset_msg: str = f"Feed should not have any blacklist tags: {blacklist_name}"
reader.delete_tag(feed, blacklist_name) reader.delete_tag(feed, blacklist_name)
assert feed_has_blacklist_tags(custom_reader=reader, feed=feed) is False, asset_msg assert feed_has_blacklist_tags(reader=reader, feed=feed) is False, asset_msg
def test_should_be_skipped() -> None: def test_should_be_skipped() -> None:
@@ -74,6 +85,7 @@ def test_should_be_skipped() -> None:
# Test entry without any blacklists # Test entry without any blacklists
assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}" assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}"
# Test standard blacklist functionality
reader.set_tag(feed, "blacklist_title", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType] reader.set_tag(feed, "blacklist_title", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
assert entry_should_be_skipped(reader, first_entry[0]) is True, f"Entry should be skipped: {first_entry[0]}" assert entry_should_be_skipped(reader, first_entry[0]) is True, f"Entry should be skipped: {first_entry[0]}"
reader.delete_tag(feed, "blacklist_title") reader.delete_tag(feed, "blacklist_title")
@@ -113,3 +125,81 @@
assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}" assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}"
reader.delete_tag(feed, "blacklist_author") reader.delete_tag(feed, "blacklist_author")
assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}" assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}"
def test_regex_should_be_skipped() -> None:
"""Test the regex filtering functionality for blacklist."""
reader: Reader = get_reader()
# Add feed and update entries
reader.add_feed(feed_url)
feed: Feed = reader.get_feed(feed_url)
reader.update_feeds()
# Get first entry
first_entry: list[Entry] = []
entries: Iterable[Entry] = reader.get_entries(feed=feed)
assert entries is not None, f"Entries should not be None: {entries}"
for entry in entries:
first_entry.append(entry)
break
assert len(first_entry) == 1, f"First entry should be added: {first_entry}"
# Test entry without any regex blacklists
assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}"
# Test regex blacklist for title
reader.set_tag(feed, "regex_blacklist_title", r"fvnnn\w+") # pyright: ignore[reportArgumentType]
assert entry_should_be_skipped(reader, first_entry[0]) is True, (
f"Entry should be skipped with regex title match: {first_entry[0]}"
)
reader.delete_tag(feed, "regex_blacklist_title")
assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}"
# Test regex blacklist for summary
reader.set_tag(feed, "regex_blacklist_summary", r"ffdnfdn\w+") # pyright: ignore[reportArgumentType]
assert entry_should_be_skipped(reader, first_entry[0]) is True, (
f"Entry should be skipped with regex summary match: {first_entry[0]}"
)
reader.delete_tag(feed, "regex_blacklist_summary")
assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}"
# Test regex blacklist for content
reader.set_tag(feed, "regex_blacklist_content", r"ffdnfdnfdn\w+") # pyright: ignore[reportArgumentType]
assert entry_should_be_skipped(reader, first_entry[0]) is True, (
f"Entry should be skipped with regex content match: {first_entry[0]}"
)
reader.delete_tag(feed, "regex_blacklist_content")
assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}"
# Test regex blacklist for author
reader.set_tag(feed, "regex_blacklist_author", r"TheLovinator\d*") # pyright: ignore[reportArgumentType]
assert entry_should_be_skipped(reader, first_entry[0]) is True, (
f"Entry should be skipped with regex author match: {first_entry[0]}"
)
reader.delete_tag(feed, "regex_blacklist_author")
assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}"
# Test invalid regex pattern (should not raise an exception)
reader.set_tag(feed, "regex_blacklist_title", r"[incomplete") # pyright: ignore[reportArgumentType]
assert entry_should_be_skipped(reader, first_entry[0]) is False, (
f"Entry should not be skipped with invalid regex: {first_entry[0]}"
)
reader.delete_tag(feed, "regex_blacklist_title")
# Test multiple regex patterns separated by commas
reader.set_tag(feed, "regex_blacklist_author", r"pattern1,TheLovinator\d*,pattern3") # pyright: ignore[reportArgumentType]
assert entry_should_be_skipped(reader, first_entry[0]) is True, (
f"Entry should be skipped with one matching pattern in list: {first_entry[0]}"
)
reader.delete_tag(feed, "regex_blacklist_author")
assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}"
# Test newline-separated regex patterns
newline_patterns = "pattern1\nTheLovinator\\d*\npattern3"
reader.set_tag(feed, "regex_blacklist_author", newline_patterns) # pyright: ignore[reportArgumentType]
assert entry_should_be_skipped(reader, first_entry[0]) is True, (
f"Entry should be skipped with newline-separated patterns: {first_entry[0]}"
)
reader.delete_tag(feed, "regex_blacklist_author")
assert entry_should_be_skipped(reader, first_entry[0]) is False, f"Entry should not be skipped: {first_entry[0]}"
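The comma- and newline-separated pattern handling exercised above can be sketched as a small helper. The name `matches_any_pattern` and the split rule are illustrative, not the project's actual implementation; the behavior (split on commas or newlines, silently skip invalid regexes like `[incomplete`) mirrors what the tests assert:

```python
import re

def matches_any_pattern(patterns: str, text: str) -> bool:
    """Return True if any comma- or newline-separated regex matches text."""
    for raw in re.split(r"[,\n]", patterns):
        pattern = raw.strip()
        if not pattern:
            continue
        try:
            if re.search(pattern, text):
                return True
        except re.error:
            # An invalid pattern such as "[incomplete" must not raise;
            # it simply never matches.
            continue
    return False

print(matches_any_pattern(r"pattern1,TheLovinator\d*,pattern3", "TheLovinator9000"))  # True
print(matches_any_pattern("[incomplete", "anything"))  # False
```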

View file

@@ -5,7 +5,9 @@ import tempfile
 from pathlib import Path
 from typing import TYPE_CHECKING
-from discord_rss_bot.custom_filters import encode_url, entry_is_blacklisted, entry_is_whitelisted
+from discord_rss_bot.custom_filters import encode_url
+from discord_rss_bot.custom_filters import entry_is_blacklisted
+from discord_rss_bot.custom_filters import entry_is_whitelisted
 from discord_rss_bot.settings import get_reader

 if TYPE_CHECKING:
@@ -43,39 +45,39 @@ def test_entry_is_whitelisted() -> None:
     Path.mkdir(Path(temp_dir), exist_ok=True)
     custom_loc: pathlib.Path = pathlib.Path(temp_dir, "custom_loc_db.sqlite")
-    custom_reader: Reader = get_reader(custom_location=str(custom_loc))
+    reader: Reader = get_reader(custom_location=str(custom_loc))

     # Add a feed to the database.
-    custom_reader.add_feed("https://lovinator.space/rss_test.xml")
-    custom_reader.update_feed("https://lovinator.space/rss_test.xml")
+    reader.add_feed("https://lovinator.space/rss_test.xml")
+    reader.update_feed("https://lovinator.space/rss_test.xml")

     # whitelist_title
-    custom_reader.set_tag("https://lovinator.space/rss_test.xml", "whitelist_title", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
-    for entry in custom_reader.get_entries():
-        if entry_is_whitelisted(entry) is True:
+    reader.set_tag("https://lovinator.space/rss_test.xml", "whitelist_title", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
+    for entry in reader.get_entries():
+        if entry_is_whitelisted(entry, reader=reader) is True:
             assert entry.title == "fvnnnfnfdnfdnfd", f"Expected: fvnnnfnfdnfdnfd, Got: {entry.title}"
             break
-    custom_reader.delete_tag("https://lovinator.space/rss_test.xml", "whitelist_title")
+    reader.delete_tag("https://lovinator.space/rss_test.xml", "whitelist_title")

     # whitelist_summary
-    custom_reader.set_tag("https://lovinator.space/rss_test.xml", "whitelist_summary", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
-    for entry in custom_reader.get_entries():
-        if entry_is_whitelisted(entry) is True:
+    reader.set_tag("https://lovinator.space/rss_test.xml", "whitelist_summary", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
+    for entry in reader.get_entries():
+        if entry_is_whitelisted(entry, reader=reader) is True:
             assert entry.summary == "fvnnnfnfdnfdnfd", f"Expected: fvnnnfnfdnfdnfd, Got: {entry.summary}"
             break
-    custom_reader.delete_tag("https://lovinator.space/rss_test.xml", "whitelist_summary")
+    reader.delete_tag("https://lovinator.space/rss_test.xml", "whitelist_summary")

     # whitelist_content
-    custom_reader.set_tag("https://lovinator.space/rss_test.xml", "whitelist_content", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
-    for entry in custom_reader.get_entries():
-        if entry_is_whitelisted(entry) is True:
+    reader.set_tag("https://lovinator.space/rss_test.xml", "whitelist_content", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
+    for entry in reader.get_entries():
+        if entry_is_whitelisted(entry, reader=reader) is True:
             assert_msg = f"Expected: <p>ffdnfdnfdnfdnfdndfn</p>, Got: {entry.content[0].value}"
             assert entry.content[0].value == "<p>ffdnfdnfdnfdnfdndfn</p>", assert_msg
             break
-    custom_reader.delete_tag("https://lovinator.space/rss_test.xml", "whitelist_content")
+    reader.delete_tag("https://lovinator.space/rss_test.xml", "whitelist_content")

     # Close the reader, so we can delete the directory.
-    custom_reader.close()
+    reader.close()

 def test_entry_is_blacklisted() -> None:
@@ -85,36 +87,36 @@ def test_entry_is_blacklisted() -> None:
     Path.mkdir(Path(temp_dir), exist_ok=True)
     custom_loc: pathlib.Path = pathlib.Path(temp_dir, "custom_loc_db.sqlite")
-    custom_reader: Reader = get_reader(custom_location=str(custom_loc))
+    reader: Reader = get_reader(custom_location=str(custom_loc))

     # Add a feed to the database.
-    custom_reader.add_feed("https://lovinator.space/rss_test.xml")
-    custom_reader.update_feed("https://lovinator.space/rss_test.xml")
+    reader.add_feed("https://lovinator.space/rss_test.xml")
+    reader.update_feed("https://lovinator.space/rss_test.xml")

     # blacklist_title
-    custom_reader.set_tag("https://lovinator.space/rss_test.xml", "blacklist_title", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
-    for entry in custom_reader.get_entries():
-        if entry_is_blacklisted(entry) is True:
+    reader.set_tag("https://lovinator.space/rss_test.xml", "blacklist_title", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
+    for entry in reader.get_entries():
+        if entry_is_blacklisted(entry, reader=reader) is True:
             assert entry.title == "fvnnnfnfdnfdnfd", f"Expected: fvnnnfnfdnfdnfd, Got: {entry.title}"
             break
-    custom_reader.delete_tag("https://lovinator.space/rss_test.xml", "blacklist_title")
+    reader.delete_tag("https://lovinator.space/rss_test.xml", "blacklist_title")

     # blacklist_summary
-    custom_reader.set_tag("https://lovinator.space/rss_test.xml", "blacklist_summary", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
-    for entry in custom_reader.get_entries():
-        if entry_is_blacklisted(entry) is True:
+    reader.set_tag("https://lovinator.space/rss_test.xml", "blacklist_summary", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
+    for entry in reader.get_entries():
+        if entry_is_blacklisted(entry, reader=reader) is True:
             assert entry.summary == "fvnnnfnfdnfdnfd", f"Expected: fvnnnfnfdnfdnfd, Got: {entry.summary}"
             break
-    custom_reader.delete_tag("https://lovinator.space/rss_test.xml", "blacklist_summary")
+    reader.delete_tag("https://lovinator.space/rss_test.xml", "blacklist_summary")

     # blacklist_content
-    custom_reader.set_tag("https://lovinator.space/rss_test.xml", "blacklist_content", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
-    for entry in custom_reader.get_entries():
-        if entry_is_blacklisted(entry) is True:
+    reader.set_tag("https://lovinator.space/rss_test.xml", "blacklist_content", "fvnnnfnfdnfdnfd") # pyright: ignore[reportArgumentType]
+    for entry in reader.get_entries():
+        if entry_is_blacklisted(entry, reader=reader) is True:
             assert_msg = f"Expected: <p>ffdnfdnfdnfdnfdndfn</p>, Got: {entry.content[0].value}"
             assert entry.content[0].value == "<p>ffdnfdnfdnfdnfdndfn</p>", assert_msg
             break
-    custom_reader.delete_tag("https://lovinator.space/rss_test.xml", "blacklist_content")
+    reader.delete_tag("https://lovinator.space/rss_test.xml", "blacklist_content")

     # Close the reader, so we can delete the directory.
-    custom_reader.close()
+    reader.close()
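The `custom_reader` → `reader` rename in this file goes with the wider refactor in this range: the filter functions now take the reader explicitly instead of resolving a global one, so tests can hand in a stub rather than patching `get_reader`. A minimal sketch of that injected-dependency shape, with an illustrative substring-match filter and stub reader (neither is the project's actual code):

```python
from types import SimpleNamespace

def entry_is_whitelisted(entry, reader) -> bool:
    # The reader is an explicit parameter, so callers (and tests) control it.
    pattern = str(reader.get_tag(entry.feed_url, "whitelist_title", "") or "")
    return bool(pattern) and pattern.lower() in (entry.title or "").lower()

class StubReader:
    """Minimal stand-in for reader.Reader, good enough for this sketch."""

    def __init__(self, tags: dict) -> None:
        self._tags = tags

    def get_tag(self, resource, key, default=None):
        return self._tags.get(key, default)

entry = SimpleNamespace(feed_url="https://lovinator.space/rss_test.xml", title="fvnnnfnfdnfdnfd")
print(entry_is_whitelisted(entry, StubReader({"whitelist_title": "fvnnnfnfdnfdnfd"})))  # True
print(entry_is_whitelisted(entry, StubReader({})))  # False
```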

View file

@@ -0,0 +1,140 @@
from __future__ import annotations
import typing
from types import SimpleNamespace
from unittest.mock import MagicMock
from unittest.mock import patch
import pytest
from discord_rss_bot.custom_message import CustomEmbed
from discord_rss_bot.custom_message import format_entry_html_for_discord
from discord_rss_bot.custom_message import replace_tags_in_embed
from discord_rss_bot.custom_message import replace_tags_in_text_message
if typing.TYPE_CHECKING:
from reader import Entry
# https://discord.com/developers/docs/reference#message-formatting
TIMESTAMP_FORMATS: tuple[str, ...] = (
"<t:1773461490>",
"<t:1773461490:F>",
"<t:1773461490:f>",
"<t:1773461490:D>",
"<t:1773461490:d>",
"<t:1773461490:t>",
"<t:1773461490:T>",
"<t:1773461490:R>",
"<t:1773461490:s>",
"<t:1773461490:S>",
)
def make_feed() -> SimpleNamespace:
return SimpleNamespace(
added=None,
author="Feed Author",
last_exception=None,
last_updated=None,
link="https://example.com/feed",
subtitle="",
title="Example Feed",
updated=None,
updates_enabled=True,
url="https://example.com/feed.xml",
user_title="",
version="atom10",
)
def make_entry(summary: str) -> SimpleNamespace:
feed: SimpleNamespace = make_feed()
return SimpleNamespace(
added=None,
author="Entry Author",
content=[],
feed=feed,
feed_url=feed.url,
id="entry-1",
important=False,
link="https://example.com/entry-1",
published=None,
read=False,
read_modified=None,
summary=summary,
title="Entry Title",
updated=None,
)
@pytest.mark.parametrize("timestamp_tag", TIMESTAMP_FORMATS)
def test_format_entry_html_for_discord_preserves_timestamp_tags(timestamp_tag: str) -> None:
escaped_timestamp_tag: str = timestamp_tag.replace("<", "&lt;").replace(">", "&gt;")
html_summary: str = f"<p>Starts: 2026-03-13 23:30 UTC ({escaped_timestamp_tag})</p>"
rendered: str = format_entry_html_for_discord(html_summary)
assert timestamp_tag in rendered
assert "DISCORDTIMESTAMPPLACEHOLDER" not in rendered
def test_format_entry_html_for_discord_empty_text_returns_empty_string() -> None:
rendered: str = format_entry_html_for_discord("")
assert not rendered
def test_format_entry_html_for_discord_cleans_markdownified_https_link_text() -> None:
html_summary: str = "[https://example.com](https://example.com)"
rendered: str = format_entry_html_for_discord(html_summary)
assert "[example.com](https://example.com)" in rendered
assert "[https://example.com]" not in rendered
def test_format_entry_html_for_discord_does_not_preserve_invalid_timestamp_style() -> None:
invalid_timestamp: str = "<t:1773461490:Z>"
html_summary: str = f"<p>Invalid style ({invalid_timestamp.replace('<', '&lt;').replace('>', '&gt;')})</p>"
rendered: str = format_entry_html_for_discord(html_summary)
assert invalid_timestamp not in rendered
@patch("discord_rss_bot.custom_message.get_custom_message")
def test_replace_tags_in_text_message_preserves_timestamp_tags(
mock_get_custom_message: MagicMock,
) -> None:
mock_reader = MagicMock()
mock_get_custom_message.return_value = "{{entry_summary}}"
summary_parts: list[str] = [
f"<p>Format {index}: ({timestamp_tag.replace('<', '&lt;').replace('>', '&gt;')})</p>"
for index, timestamp_tag in enumerate(TIMESTAMP_FORMATS, start=1)
]
entry_ns: SimpleNamespace = make_entry("".join(summary_parts))
entry: Entry = typing.cast("Entry", entry_ns)
rendered: str = replace_tags_in_text_message(entry, reader=mock_reader)
for timestamp_tag in TIMESTAMP_FORMATS:
assert timestamp_tag in rendered
@patch("discord_rss_bot.custom_message.get_embed")
def test_replace_tags_in_embed_preserves_timestamp_tags(
mock_get_embed: MagicMock,
) -> None:
mock_reader = MagicMock()
mock_get_embed.return_value = CustomEmbed(description="{{entry_summary}}")
summary_parts: list[str] = [
f"<p>Format {index}: ({timestamp_tag.replace('<', '&lt;').replace('>', '&gt;')})</p>"
for index, timestamp_tag in enumerate(TIMESTAMP_FORMATS, start=1)
]
entry_ns: SimpleNamespace = make_entry("".join(summary_parts))
entry: Entry = typing.cast("Entry", entry_ns)
embed: CustomEmbed = replace_tags_in_embed(entry_ns.feed, entry, reader=mock_reader)
for timestamp_tag in TIMESTAMP_FORMATS:
assert timestamp_tag in embed.description
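The placeholder round-trip these tests probe (note the `DISCORDTIMESTAMPPLACEHOLDER` assertion above) can be sketched as a protect/restore pair. The regex, the accepted style letters, and the helper names mirror the expectations in the tests rather than the actual `format_entry_html_for_discord` internals:

```python
import re

# Styles the tests above accept; anything else (e.g. ":Z") is left unprotected.
_TIMESTAMP_RE = re.compile(r"<t:\d+(?::[tTdDfFRsS])?>")

def protect_timestamps(text: str) -> tuple[str, list[str]]:
    """Swap Discord timestamp tags for placeholders before markdown conversion."""
    found: list[str] = []

    def _swap(match: re.Match) -> str:
        found.append(match.group(0))
        return f"DISCORDTIMESTAMPPLACEHOLDER{len(found) - 1}"

    return _TIMESTAMP_RE.sub(_swap, text), found

def restore_timestamps(text: str, found: list[str]) -> str:
    """Put the original tags back after the HTML has been converted."""
    for index, tag in enumerate(found):
        text = text.replace(f"DISCORDTIMESTAMPPLACEHOLDER{index}", tag)
    return text

protected, tags = protect_timestamps("Starts at <t:1773461490:F>")
# ... an HTML-to-markdown pass would run on `protected` here ...
print(restore_timestamps(protected, tags))  # Starts at <t:1773461490:F>
```

Because the placeholder carries no angle brackets, a markdown converter passes it through untouched, which is exactly what the `assert timestamp_tag in rendered` checks rely on.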

View file

@@ -4,12 +4,20 @@ import os
 import tempfile
 from pathlib import Path
 from typing import LiteralString
+from unittest.mock import MagicMock
+from unittest.mock import patch

 import pytest
-from reader import Feed, Reader, make_reader
+from reader import Feed
+from reader import Reader
+from reader import make_reader

-from discord_rss_bot.feeds import send_to_discord, truncate_webhook_message
-from discord_rss_bot.missing_tags import add_missing_tags
+from discord_rss_bot.feeds import extract_domain
+from discord_rss_bot.feeds import is_youtube_feed
+from discord_rss_bot.feeds import send_entry_to_discord
+from discord_rss_bot.feeds import send_to_discord
+from discord_rss_bot.feeds import should_send_embed_check
+from discord_rss_bot.feeds import truncate_webhook_message

 def test_send_to_discord() -> None:
@@ -26,8 +34,6 @@ def test_send_to_discord() -> None:
     # Add a feed to the reader.
     reader.add_feed("https://www.reddit.com/r/Python/.rss")
-    add_missing_tags(reader)

     # Update the feed to get the entries.
     reader.update_feeds()
@@ -49,7 +55,7 @@ def test_send_to_discord() -> None:
     assert reader.get_tag(feed, "webhook") == webhook_url, f"The webhook URL should be '{webhook_url}'."

     # Send the feed to Discord.
-    send_to_discord(custom_reader=reader, feed=feed, do_once=True)
+    send_to_discord(reader=reader, feed=feed, do_once=True)

     # Close the reader, so we can delete the directory.
     reader.close()
@@ -85,3 +91,186 @@ def test_truncate_webhook_message_long_message():
     # Test the end of the message
     assert_msg = "The end of the truncated message should be '...' to indicate truncation."
     assert truncated_message[-half_length:] == "A" * half_length, assert_msg
def test_is_youtube_feed():
"""Test the is_youtube_feed function."""
# YouTube feed URLs
assert is_youtube_feed("https://www.youtube.com/feeds/videos.xml?channel_id=123456") is True
assert is_youtube_feed("https://www.youtube.com/feeds/videos.xml?user=username") is True
# Non-YouTube feed URLs
assert is_youtube_feed("https://www.example.com/feed.xml") is False
assert is_youtube_feed("https://www.youtube.com/watch?v=123456") is False
assert is_youtube_feed("https://www.reddit.com/r/Python/.rss") is False
@patch("discord_rss_bot.feeds.logger")
def test_should_send_embed_check_youtube_feeds(mock_logger: MagicMock) -> None:
"""Test should_send_embed_check returns False for YouTube feeds regardless of settings."""
# Create mocks
mock_reader = MagicMock()
mock_entry = MagicMock()
# Configure a YouTube feed
mock_entry.feed.url = "https://www.youtube.com/feeds/videos.xml?channel_id=123456"
# Set reader to return True for should_send_embed (would normally create an embed)
mock_reader.get_tag.return_value = True
# Result should be False, overriding the feed settings
result = should_send_embed_check(mock_reader, mock_entry)
assert result is False, "YouTube feeds should never use embeds"
# Function should not even call get_tag for YouTube feeds
mock_reader.get_tag.assert_not_called()
@patch("discord_rss_bot.feeds.logger")
def test_should_send_embed_check_normal_feeds(mock_logger: MagicMock) -> None:
"""Test should_send_embed_check returns feed settings for non-YouTube feeds."""
# Create mocks
mock_reader = MagicMock()
mock_entry = MagicMock()
# Configure a normal feed
mock_entry.feed.url = "https://www.example.com/feed.xml"
# Test with should_send_embed set to True
mock_reader.get_tag.return_value = True
result = should_send_embed_check(mock_reader, mock_entry)
assert result is True, "Normal feeds should use embeds when enabled"
# Test with should_send_embed set to False
mock_reader.get_tag.return_value = False
result = should_send_embed_check(mock_reader, mock_entry)
assert result is False, "Normal feeds should not use embeds when disabled"
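The two tests above pin down the precedence: the YouTube check wins before the feed tag is even consulted (hence `get_tag.assert_not_called()`). A plausible shape for the pair of functions, with signatures inferred from the calls above rather than copied from `discord_rss_bot.feeds`:

```python
from types import SimpleNamespace

def is_youtube_feed(feed_url: str) -> bool:
    # Only the feed endpoint counts; watch URLs are ordinary links.
    return "youtube.com/feeds/videos.xml" in feed_url

def should_send_embed_check(reader, entry) -> bool:
    if is_youtube_feed(entry.feed.url):
        # Bare YouTube links get Discord's native player, so never build
        # a custom embed and never read the feed's tag.
        return False
    return bool(reader.get_tag(entry.feed, "should_send_embed", True))

class StubReader:
    def __init__(self, value: bool) -> None:
        self.value = value
        self.calls = 0

    def get_tag(self, feed, key, default=None):
        self.calls += 1
        return self.value

yt_entry = SimpleNamespace(feed=SimpleNamespace(url="https://www.youtube.com/feeds/videos.xml?channel_id=123456"))
stub = StubReader(value=True)
print(should_send_embed_check(stub, yt_entry))  # False
print(stub.calls)  # 0
```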
@patch("discord_rss_bot.feeds.get_reader")
@patch("discord_rss_bot.feeds.get_custom_message")
@patch("discord_rss_bot.feeds.replace_tags_in_text_message")
@patch("discord_rss_bot.feeds.create_embed_webhook")
@patch("discord_rss_bot.feeds.DiscordWebhook")
@patch("discord_rss_bot.feeds.execute_webhook")
def test_send_entry_to_discord_youtube_feed(
mock_execute_webhook: MagicMock,
mock_discord_webhook: MagicMock,
mock_create_embed: MagicMock,
mock_replace_tags: MagicMock,
mock_get_custom_message: MagicMock,
mock_get_reader: MagicMock,
):
"""Test send_entry_to_discord function with YouTube feeds."""
# Set up mocks
mock_reader = MagicMock()
mock_get_reader.return_value = mock_reader
mock_entry = MagicMock()
mock_feed = MagicMock()
# Configure a YouTube feed
mock_entry.feed = mock_feed
mock_entry.feed.url = "https://www.youtube.com/feeds/videos.xml?channel_id=123456"
mock_entry.feed_url = "https://www.youtube.com/feeds/videos.xml?channel_id=123456"
# Mock the tags
mock_reader.get_tag.side_effect = lambda feed, tag, default=None: { # noqa: ARG005
"webhook": "https://discord.com/api/webhooks/123/abc",
"should_send_embed": True, # This should be ignored for YouTube feeds
}.get(tag, default)
# Mock custom message
mock_get_custom_message.return_value = "Custom message"
mock_replace_tags.return_value = "Formatted message with {{entry_link}}"
# Mock webhook
mock_webhook = MagicMock()
mock_discord_webhook.return_value = mock_webhook
# Call the function
send_entry_to_discord(mock_entry, mock_reader)
# Assertions
mock_create_embed.assert_not_called()
mock_discord_webhook.assert_called_once()
# Check webhook was created with the right message
webhook_call_kwargs = mock_discord_webhook.call_args[1]
assert "content" in webhook_call_kwargs, "Webhook should have content"
assert webhook_call_kwargs["url"] == "https://discord.com/api/webhooks/123/abc"
# Verify execute_webhook was called
mock_execute_webhook.assert_called_once_with(mock_webhook, mock_entry, reader=mock_reader)
def test_extract_domain_youtube_feed() -> None:
"""Test extract_domain for YouTube feeds."""
url: str = "https://www.youtube.com/feeds/videos.xml?channel_id=123456"
assert extract_domain(url) == "YouTube", "YouTube feeds should return 'YouTube' as the domain."
def test_extract_domain_reddit_feed() -> None:
"""Test extract_domain for Reddit feeds."""
url: str = "https://www.reddit.com/r/Python/.rss"
assert extract_domain(url) == "Reddit", "Reddit feeds should return 'Reddit' as the domain."
def test_extract_domain_github_feed() -> None:
"""Test extract_domain for GitHub feeds."""
url: str = "https://www.github.com/user/repo"
assert extract_domain(url) == "GitHub", "GitHub feeds should return 'GitHub' as the domain."
def test_extract_domain_custom_domain() -> None:
"""Test extract_domain for custom domains."""
url: str = "https://www.example.com/feed"
assert extract_domain(url) == "Example", "Custom domains should return the capitalized first part of the domain."
def test_extract_domain_no_www_prefix() -> None:
"""Test extract_domain removes 'www.' prefix."""
url: str = "https://www.example.com/feed"
assert extract_domain(url) == "Example", "The 'www.' prefix should be removed from the domain."
def test_extract_domain_no_tld() -> None:
"""Test extract_domain for domains without a TLD."""
url: str = "https://localhost/feed"
assert extract_domain(url) == "Localhost", "Domains without a TLD should return the capitalized domain."
def test_extract_domain_invalid_url() -> None:
"""Test extract_domain for invalid URLs."""
url: str = "not-a-valid-url"
assert extract_domain(url) == "Other", "Invalid URLs should return 'Other' as the domain."
def test_extract_domain_empty_url() -> None:
"""Test extract_domain for empty URLs."""
url: str = ""
assert extract_domain(url) == "Other", "Empty URLs should return 'Other' as the domain."
def test_extract_domain_special_characters() -> None:
"""Test extract_domain for URLs with special characters."""
url: str = "https://www.ex-ample.com/feed"
assert extract_domain(url) == "Ex-ample", "Domains with special characters should return the capitalized domain."
@pytest.mark.parametrize(
argnames=("url", "expected"),
argvalues=[
("https://blog.something.com", "Something"),
("https://www.something.com", "Something"),
("https://subdomain.example.co.uk", "Example"),
("https://github.com/user/repo", "GitHub"),
("https://youtube.com/feeds/videos.xml?channel_id=abc", "YouTube"),
("https://reddit.com/r/python/.rss", "Reddit"),
("", "Other"),
("not a url", "Other"),
("https://www.example.com", "Example"),
("https://foo.bar.baz.com", "Baz"),
],
)
def test_extract_domain(url: str, expected: str) -> None:
assert extract_domain(url) == expected
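One way to satisfy every case in this parametrized table is a small suffix-aware helper. The `SECOND_LEVEL` set and the pretty-name map below are assumptions chosen to reproduce the expectations above, not the function's real implementation:

```python
from urllib.parse import urlparse

PRETTY_NAMES = {"youtube": "YouTube", "github": "GitHub", "reddit": "Reddit"}
SECOND_LEVEL = {"co", "com", "net", "org", "ac", "gov"}  # e.g. example.co.uk

def extract_domain(url: str) -> str:
    host = urlparse(url).hostname or ""
    if not host:
        # Covers "" and strings that do not parse to a hostname.
        return "Other"
    labels = host.split(".")
    if labels[0] == "www":
        labels = labels[1:]
    # Treat "co.uk"-style compound suffixes as one TLD before picking
    # the registrable label.
    if len(labels) >= 3 and labels[-2] in SECOND_LEVEL:
        labels = labels[:-1]
    name = labels[-2] if len(labels) >= 2 else labels[0]
    return PRETTY_NAMES.get(name.lower(), name.capitalize())

print(extract_domain("https://subdomain.example.co.uk"))  # Example
print(extract_domain("https://foo.bar.baz.com"))          # Baz
```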

tests/test_git_backup.py (new file, 475 additions)
View file

@@ -0,0 +1,475 @@
from __future__ import annotations
import contextlib
import json
import shutil
import subprocess # noqa: S404
from pathlib import Path
from typing import TYPE_CHECKING
from typing import Any
from unittest.mock import MagicMock
from unittest.mock import patch
import pytest
from fastapi.testclient import TestClient
from discord_rss_bot.git_backup import commit_state_change
from discord_rss_bot.git_backup import export_state
from discord_rss_bot.git_backup import get_backup_path
from discord_rss_bot.git_backup import get_backup_remote
from discord_rss_bot.git_backup import setup_backup_repo
from discord_rss_bot.main import app
if TYPE_CHECKING:
from pathlib import Path
SKIP_IF_NO_GIT: pytest.MarkDecorator = pytest.mark.skipif(
shutil.which("git") is None,
reason="git executable not found",
)
def test_get_backup_path_unset(monkeypatch: pytest.MonkeyPatch) -> None:
"""get_backup_path returns None when GIT_BACKUP_PATH is not set."""
monkeypatch.delenv("GIT_BACKUP_PATH", raising=False)
assert get_backup_path() is None
def test_get_backup_path_set(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""get_backup_path returns a Path when GIT_BACKUP_PATH is set."""
monkeypatch.setenv("GIT_BACKUP_PATH", str(tmp_path))
result: Path | None = get_backup_path()
assert result == tmp_path
def test_get_backup_path_strips_whitespace(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""get_backup_path strips surrounding whitespace from the env var value."""
monkeypatch.setenv("GIT_BACKUP_PATH", f" {tmp_path} ")
result: Path | None = get_backup_path()
assert result == tmp_path
def test_get_backup_remote_unset(monkeypatch: pytest.MonkeyPatch) -> None:
"""get_backup_remote returns empty string when GIT_BACKUP_REMOTE is not set."""
monkeypatch.delenv("GIT_BACKUP_REMOTE", raising=False)
assert not get_backup_remote()
def test_get_backup_remote_set(monkeypatch: pytest.MonkeyPatch) -> None:
"""get_backup_remote returns the configured remote URL."""
monkeypatch.setenv("GIT_BACKUP_REMOTE", "git@github.com:user/repo.git")
assert get_backup_remote() == "git@github.com:user/repo.git"
@SKIP_IF_NO_GIT
def test_setup_backup_repo_creates_git_repo(tmp_path: Path) -> None:
"""setup_backup_repo initialises a git repo in a fresh directory."""
backup_path: Path = tmp_path / "backup"
result: bool = setup_backup_repo(backup_path)
assert result is True
assert (backup_path / ".git").exists()
@SKIP_IF_NO_GIT
def test_setup_backup_repo_idempotent(tmp_path: Path) -> None:
"""setup_backup_repo does not fail when called on an existing repo."""
backup_path: Path = tmp_path / "backup"
assert setup_backup_repo(backup_path) is True
assert setup_backup_repo(backup_path) is True
def test_setup_backup_repo_adds_origin_remote(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""setup_backup_repo adds remote 'origin' when GIT_BACKUP_REMOTE is set."""
backup_path: Path = tmp_path / "backup"
monkeypatch.setenv("GIT_BACKUP_REMOTE", "git@github.com:user/private.git")
with patch("discord_rss_bot.git_backup.subprocess.run") as mock_run:
# git config --local queries fail initially so setup writes defaults.
mock_run.side_effect = [
MagicMock(returncode=0), # git init
MagicMock(returncode=1), # config user.email read
MagicMock(returncode=0), # config user.email write
MagicMock(returncode=1), # config user.name read
MagicMock(returncode=0), # config user.name write
MagicMock(returncode=1), # remote get-url origin (missing)
MagicMock(returncode=0), # remote add origin <url>
]
assert setup_backup_repo(backup_path) is True
called_commands: list[list[str]] = [call.args[0] for call in mock_run.call_args_list]
assert ["remote", "add", "origin", "git@github.com:user/private.git"] in [
cmd[-4:] for cmd in called_commands if len(cmd) >= 4
]
def test_setup_backup_repo_updates_origin_remote(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""setup_backup_repo updates existing origin when URL differs."""
backup_path: Path = tmp_path / "backup"
monkeypatch.setenv("GIT_BACKUP_REMOTE", "git@github.com:user/new-private.git")
with patch("discord_rss_bot.git_backup.subprocess.run") as mock_run:
# Existing repo path: no git init call.
(backup_path / ".git").mkdir(parents=True)
mock_run.side_effect = [
MagicMock(returncode=0), # config user.email read
MagicMock(returncode=0), # config user.name read
MagicMock(returncode=0, stdout=b"git@github.com:user/old-private.git\n"), # remote get-url origin
MagicMock(returncode=0), # remote set-url origin <new>
]
assert setup_backup_repo(backup_path) is True
called_commands: list[list[str]] = [call.args[0] for call in mock_run.call_args_list]
assert ["remote", "set-url", "origin", "git@github.com:user/new-private.git"] in [
cmd[-4:] for cmd in called_commands if len(cmd) >= 4
]
def test_export_state_creates_state_json(tmp_path: Path) -> None:
"""export_state writes a valid state.json to the backup directory."""
mock_reader = MagicMock()
# Feeds
feed1 = MagicMock()
feed1.url = "https://example.com/feed.rss"
mock_reader.get_feeds.return_value = [feed1]
# Tag values: webhook present, everything else absent (returns None)
def get_tag_side_effect(
feed_or_key: tuple | str,
tag: str | None = None,
default: str | None = None,
) -> list[Any] | str | None:
if feed_or_key == () and tag is None:
# Called for global webhooks list
return []
if tag == "webhook":
return "https://discord.com/api/webhooks/123/abc"
return default
mock_reader.get_tag.side_effect = get_tag_side_effect
backup_path: Path = tmp_path / "backup"
backup_path.mkdir()
export_state(mock_reader, backup_path)
state_file: Path = backup_path / "state.json"
assert state_file.exists(), "state.json should be created by export_state"
data: dict[str, Any] = json.loads(state_file.read_text(encoding="utf-8"))
assert "feeds" in data
assert "webhooks" in data
assert data["feeds"][0]["url"] == "https://example.com/feed.rss"
assert data["feeds"][0]["webhook"] == "https://discord.com/api/webhooks/123/abc"
def test_export_state_omits_empty_tags(tmp_path: Path) -> None:
"""export_state does not include tags with empty-string or None values."""
mock_reader = MagicMock()
feed1 = MagicMock()
feed1.url = "https://example.com/feed.rss"
mock_reader.get_feeds.return_value = [feed1]
def get_tag_side_effect(
feed_or_key: tuple | str,
tag: str | None = None,
default: str | None = None,
) -> list[Any] | str | None:
if feed_or_key == ():
return []
# Return empty string for all tags
return default # default is None
mock_reader.get_tag.side_effect = get_tag_side_effect
backup_path: Path = tmp_path / "backup"
backup_path.mkdir()
export_state(mock_reader, backup_path)
data: dict[str, Any] = json.loads((backup_path / "state.json").read_text())
# Only "url" key should be present (no empty-value tags)
assert list(data["feeds"][0].keys()) == ["url"]
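The omit-empty rule these two export tests check can be sketched per feed; the tag list below is an illustrative subset, not the exporter's actual tag inventory:

```python
from unittest.mock import MagicMock

def export_feed(reader, feed) -> dict:
    """Write only tags that actually carry a value; None and "" are dropped."""
    data = {"url": feed.url}
    for tag in ("webhook", "custom_message", "should_send_embed"):  # illustrative subset
        value = reader.get_tag(feed, tag, None)
        if value not in (None, ""):
            data[tag] = value
    return data

feed = MagicMock()
feed.url = "https://example.com/feed.rss"
reader = MagicMock()
reader.get_tag.side_effect = lambda f, tag, default=None: (
    "https://discord.com/api/webhooks/123/abc" if tag == "webhook" else default
)
print(export_feed(reader, feed))
# {'url': 'https://example.com/feed.rss', 'webhook': 'https://discord.com/api/webhooks/123/abc'}
```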
def test_commit_state_change_noop_when_not_configured(monkeypatch: pytest.MonkeyPatch) -> None:
"""commit_state_change does nothing when GIT_BACKUP_PATH is not set."""
monkeypatch.delenv("GIT_BACKUP_PATH", raising=False)
mock_reader = MagicMock()
# Should not raise and should not call reader methods for export
commit_state_change(mock_reader, "Add feed example.com/rss")
mock_reader.get_feeds.assert_not_called()
@SKIP_IF_NO_GIT
def test_commit_state_change_commits(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""commit_state_change creates a commit in the backup repo."""
backup_path: Path = tmp_path / "backup"
monkeypatch.setenv("GIT_BACKUP_PATH", str(backup_path))
monkeypatch.delenv("GIT_BACKUP_REMOTE", raising=False)
mock_reader = MagicMock()
mock_reader.get_feeds.return_value = []
mock_reader.get_tag.return_value = []
commit_state_change(mock_reader, "Add feed https://example.com/rss")
# Verify a commit was created in the backup repo
git_executable: str | None = shutil.which("git")
assert git_executable is not None, "git executable not found"
result: subprocess.CompletedProcess[str] = subprocess.run( # noqa: S603
[git_executable, "-C", str(backup_path), "log", "--oneline"],
capture_output=True,
text=True,
check=False,
)
assert result.returncode == 0
assert "Add feed https://example.com/rss" in result.stdout
@SKIP_IF_NO_GIT
def test_commit_state_change_no_double_commit(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""commit_state_change does not create a commit when state has not changed."""
backup_path: Path = tmp_path / "backup"
monkeypatch.setenv("GIT_BACKUP_PATH", str(backup_path))
monkeypatch.delenv("GIT_BACKUP_REMOTE", raising=False)
mock_reader = MagicMock()
mock_reader.get_feeds.return_value = []
mock_reader.get_tag.return_value = []
commit_state_change(mock_reader, "First commit")
commit_state_change(mock_reader, "Should not appear")
git_executable: str | None = shutil.which("git")
assert git_executable is not None, "git executable not found"
result: subprocess.CompletedProcess[str] = subprocess.run( # noqa: S603
[git_executable, "-C", str(backup_path), "log", "--oneline"],
capture_output=True,
text=True,
check=False,
)
assert result.returncode == 0
assert "First commit" in result.stdout
assert "Should not appear" not in result.stdout
def test_commit_state_change_push_when_remote_set(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""commit_state_change calls git push when GIT_BACKUP_REMOTE is configured."""
backup_path: Path = tmp_path / "backup"
monkeypatch.setenv("GIT_BACKUP_PATH", str(backup_path))
monkeypatch.setenv("GIT_BACKUP_REMOTE", "git@github.com:user/private.git")
mock_reader = MagicMock()
mock_reader.get_feeds.return_value = []
mock_reader.get_tag.return_value = []
with patch("discord_rss_bot.git_backup.subprocess.run") as mock_run:
# Make all subprocess calls succeed
mock_run.return_value = MagicMock(returncode=1) # returncode=1 means staged changes exist
commit_state_change(mock_reader, "Add feed https://example.com/rss")
called_commands: list[list[str]] = [call.args[0] for call in mock_run.call_args_list]
push_calls: list[list[str]] = [cmd for cmd in called_commands if "push" in cmd]
assert push_calls, "git push should have been called when GIT_BACKUP_REMOTE is set"
assert any(cmd[-3:] == ["push", "origin", "HEAD"] for cmd in called_commands), (
"git push should target configured remote name 'origin'"
)
def test_commit_state_change_no_push_when_remote_unset(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""commit_state_change does not call git push when GIT_BACKUP_REMOTE is not set."""
backup_path: Path = tmp_path / "backup"
monkeypatch.setenv("GIT_BACKUP_PATH", str(backup_path))
monkeypatch.delenv("GIT_BACKUP_REMOTE", raising=False)
mock_reader = MagicMock()
mock_reader.get_feeds.return_value = []
mock_reader.get_tag.return_value = []
with patch("discord_rss_bot.git_backup.subprocess.run") as mock_run:
mock_run.return_value = MagicMock(returncode=1)
commit_state_change(mock_reader, "Add feed https://example.com/rss")
called_commands: list[list[str]] = [call.args[0] for call in mock_run.call_args_list]
push_calls: list[list[str]] = [cmd for cmd in called_commands if "push" in cmd]
assert not push_calls, "git push should NOT be called when GIT_BACKUP_REMOTE is not set"
client: TestClient = TestClient(app)
test_webhook_name: str = "Test Backup Webhook"
test_webhook_url: str = "https://discord.com/api/webhooks/999999999/testbackupwebhook"
test_feed_url: str = "https://lovinator.space/rss_test.xml"
def setup_test_feed() -> None:
"""Set up a test webhook and feed for endpoint tests."""
# Clean up existing test data
with contextlib.suppress(Exception):
client.post(url="/remove", data={"feed_url": test_feed_url})
with contextlib.suppress(Exception):
client.post(url="/delete_webhook", data={"webhook_url": test_webhook_url})
# Create webhook and feed
client.post(
url="/add_webhook",
data={"webhook_name": test_webhook_name, "webhook_url": test_webhook_url},
)
client.post(url="/add", data={"feed_url": test_feed_url, "webhook_dropdown": test_webhook_name})
def test_post_embed_triggers_backup(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""Posting to /embed should trigger a git backup with appropriate message."""
# Set up git backup
backup_path: Path = tmp_path / "backup"
monkeypatch.setenv("GIT_BACKUP_PATH", str(backup_path))
monkeypatch.delenv("GIT_BACKUP_REMOTE", raising=False)
setup_test_feed()
with patch("discord_rss_bot.main.commit_state_change") as mock_commit:
response = client.post(
url="/embed",
data={
"feed_url": test_feed_url,
"title": "Custom Title",
"description": "Custom Description",
"color": "#FF5733",
},
)
assert response.status_code == 200, f"Failed to post embed: {response.text}"
mock_commit.assert_called_once()
# Verify the commit message contains the feed URL
call_args = mock_commit.call_args
assert call_args is not None
commit_message: str = call_args[0][1]
assert "Update embed settings" in commit_message
assert test_feed_url in commit_message
def test_post_use_embed_triggers_backup(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""Posting to /use_embed should trigger a git backup."""
backup_path: Path = tmp_path / "backup"
monkeypatch.setenv("GIT_BACKUP_PATH", str(backup_path))
monkeypatch.delenv("GIT_BACKUP_REMOTE", raising=False)
setup_test_feed()
with patch("discord_rss_bot.main.commit_state_change") as mock_commit:
response = client.post(url="/use_embed", data={"feed_url": test_feed_url})
assert response.status_code == 200, f"Failed to enable embed: {response.text}"
mock_commit.assert_called_once()
# Verify the commit message
call_args = mock_commit.call_args
assert call_args is not None
commit_message: str = call_args[0][1]
assert "Enable embed mode" in commit_message
assert test_feed_url in commit_message
def test_post_use_text_triggers_backup(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""Posting to /use_text should trigger a git backup."""
backup_path: Path = tmp_path / "backup"
monkeypatch.setenv("GIT_BACKUP_PATH", str(backup_path))
monkeypatch.delenv("GIT_BACKUP_REMOTE", raising=False)
setup_test_feed()
with patch("discord_rss_bot.main.commit_state_change") as mock_commit:
response = client.post(url="/use_text", data={"feed_url": test_feed_url})
assert response.status_code == 200, f"Failed to disable embed: {response.text}"
mock_commit.assert_called_once()
# Verify the commit message
call_args = mock_commit.call_args
assert call_args is not None
commit_message: str = call_args[0][1]
assert "Disable embed mode" in commit_message
assert test_feed_url in commit_message
def test_post_custom_message_triggers_backup(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""Posting to /custom should trigger a git backup."""
backup_path: Path = tmp_path / "backup"
monkeypatch.setenv("GIT_BACKUP_PATH", str(backup_path))
monkeypatch.delenv("GIT_BACKUP_REMOTE", raising=False)
setup_test_feed()
with patch("discord_rss_bot.main.commit_state_change") as mock_commit:
response = client.post(
url="/custom",
data={
"feed_url": test_feed_url,
"custom_message": "Check out this entry: {entry.title}",
},
)
assert response.status_code == 200, f"Failed to set custom message: {response.text}"
mock_commit.assert_called_once()
# Verify the commit message
call_args = mock_commit.call_args
assert call_args is not None
commit_message: str = call_args[0][1]
assert "Update custom message" in commit_message
assert test_feed_url in commit_message
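The four backup-trigger tests above repeat the same three-step verification (called once, message names the action, message names the feed). If the pattern keeps growing, the check could be pulled into a shared helper, sketched here as a hypothetical refactor not present in the codebase:

```python
from unittest.mock import MagicMock


def assert_backup_commit(mock_commit: MagicMock, expected_fragment: str, feed_url: str) -> None:
    """Assert commit_state_change was called exactly once with a message
    containing both the action description and the affected feed URL."""
    mock_commit.assert_called_once()
    # commit_state_change(reader, message): the message is the second positional arg.
    commit_message: str = mock_commit.call_args[0][1]
    assert expected_fragment in commit_message, f"{expected_fragment!r} not in {commit_message!r}"
    assert feed_url in commit_message, f"{feed_url!r} not in {commit_message!r}"
```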
@SKIP_IF_NO_GIT
def test_embed_backup_end_to_end(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""End-to-end test: customizing embed creates a real commit in the backup repo."""
git_executable: str | None = shutil.which("git")
assert git_executable is not None, "git executable not found"
backup_path: Path = tmp_path / "backup"
monkeypatch.setenv("GIT_BACKUP_PATH", str(backup_path))
monkeypatch.delenv("GIT_BACKUP_REMOTE", raising=False)
setup_test_feed()
# Post embed customization
response = client.post(
url="/embed",
data={
"feed_url": test_feed_url,
"title": "{entry.title}",
"description": "{entry.summary}",
"color": "#0099FF",
"image_url": "{entry.image}",
},
)
assert response.status_code == 200, f"Failed to customize embed: {response.text}"
# Verify a commit was created
result: subprocess.CompletedProcess[str] = subprocess.run( # noqa: S603
[git_executable, "-C", str(backup_path), "log", "--oneline"],
capture_output=True,
text=True,
check=False,
)
assert result.returncode == 0, f"Failed to read git log: {result.stderr}"
assert "Update embed settings" in result.stdout, f"Commit not found in log: {result.stdout}"
# Verify state.json contains embed data
state_file: Path = backup_path / "state.json"
assert state_file.exists(), "state.json should exist in backup repo"
state_data: dict[str, Any] = json.loads(state_file.read_text(encoding="utf-8"))
# Find our test feed in the state
test_feed_data = next((feed for feed in state_data["feeds"] if feed["url"] == test_feed_url), None)
assert test_feed_data is not None, f"Test feed not found in state.json: {state_data}"
# The embed settings are stored as a nested dict under custom_embed tag
# This verifies the embed customization was persisted
assert "webhook" in test_feed_data, "Feed should have webhook set"

tests/test_hoyolab_api.py (new file, 39 lines)
@@ -0,0 +1,39 @@
from __future__ import annotations
from discord_rss_bot.hoyolab_api import extract_post_id_from_hoyolab_url
class TestExtractPostIdFromHoyolabUrl:
def test_extract_post_id_from_article_url(self) -> None:
"""Test extracting post ID from a direct article URL."""
test_cases: list[str] = [
"https://www.hoyolab.com/article/38588239",
"http://hoyolab.com/article/12345",
"https://www.hoyolab.com/article/987654321/comments",
]
expected_ids: list[str] = ["38588239", "12345", "987654321"]
for url, expected_id in zip(test_cases, expected_ids, strict=True):
assert extract_post_id_from_hoyolab_url(url) == expected_id
def test_url_without_post_id(self) -> None:
"""Test with a URL that doesn't have a post ID."""
test_cases: list[str] = [
"https://www.hoyolab.com/community",
]
for url in test_cases:
assert extract_post_id_from_hoyolab_url(url) is None
def test_edge_cases(self) -> None:
"""Test edge cases like None, empty string, and malformed URLs."""
test_cases: list[str | None] = [
None,
"",
"not_a_url",
"http:/", # Malformed URL
]
for url in test_cases:
assert extract_post_id_from_hoyolab_url(url) is None # type: ignore

@@ -1,14 +1,29 @@
from __future__ import annotations
import re
import urllib.parse
from dataclasses import dataclass
from dataclasses import field
from datetime import UTC
from datetime import datetime
from typing import TYPE_CHECKING
from typing import cast
from unittest.mock import MagicMock
from unittest.mock import patch
from fastapi.testclient import TestClient
import discord_rss_bot.main as main_module
from discord_rss_bot.main import app
from discord_rss_bot.main import create_html_for_feed
from discord_rss_bot.main import get_reader_dependency
if TYPE_CHECKING:
    from pathlib import Path
    import pytest
    from httpx import Response
    from reader import Entry
client: TestClient = TestClient(app)
webhook_name: str = "Hello, I am a webhook!"
@@ -45,7 +60,7 @@ def test_search() -> None:
# Check that the feed was added.
response = client.get(url="/")
assert response.status_code == 200, f"Failed to get /: {response.text}"
assert encoded_feed_url(feed_url) in response.text, f"Feed not found in /: {response.text}"
# Search for an entry.
response: Response = client.get(url="/search/?query=a")
@@ -72,6 +87,14 @@ def test_add_webhook() -> None:
def test_create_feed() -> None:
"""Test the /create_feed page."""
# Ensure webhook exists for this test regardless of test order.
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove the feed if it already exists before we run the test.
feeds: Response = client.get(url="/")
if feed_url in feeds.text:
@@ -85,11 +108,19 @@ def test_create_feed() -> None:
# Check that the feed was added.
response = client.get(url="/")
assert response.status_code == 200, f"Failed to get /: {response.text}"
assert encoded_feed_url(feed_url) in response.text, f"Feed not found in /: {response.text}"
def test_get() -> None:
"""Test the GET pages."""
# Ensure webhook exists for this test regardless of test order.
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove the feed if it already exists before we run the test.
feeds: Response = client.get("/")
if feed_url in feeds.text:
@@ -103,7 +134,7 @@ def test_get() -> None:
# Check that the feed was added.
response = client.get("/")
assert response.status_code == 200, f"Failed to get /: {response.text}"
assert encoded_feed_url(feed_url) in response.text, f"Feed not found in /: {response.text}"
response: Response = client.get(url="/add")
assert response.status_code == 200, f"/add failed: {response.text}"
@@ -129,12 +160,23 @@ def test_get() -> None:
response: Response = client.get(url="/webhooks")
assert response.status_code == 200, f"/webhooks failed: {response.text}"
response = client.get(url="/webhook_entries", params={"webhook_url": webhook_url})
assert response.status_code == 200, f"/webhook_entries failed: {response.text}"
response: Response = client.get(url="/whitelist", params={"feed_url": encoded_feed_url(feed_url)})
assert response.status_code == 200, f"/whitelist failed: {response.text}"
def test_pause_feed() -> None:
"""Test the /pause_feed page."""
# Ensure webhook exists for this test regardless of test order.
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove the feed if it already exists before we run the test.
feeds: Response = client.get(url="/")
if feed_url in feeds.text:
@@ -143,6 +185,7 @@ def test_pause_feed() -> None:
# Add the feed.
response: Response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Unpause the feed if it is paused.
feeds: Response = client.get(url="/")
@@ -157,11 +200,19 @@ def test_pause_feed() -> None:
# Check that the feed was paused.
response = client.get(url="/")
assert response.status_code == 200, f"Failed to get /: {response.text}"
assert encoded_feed_url(feed_url) in response.text, f"Feed not found in /: {response.text}"
def test_unpause_feed() -> None:
"""Test the /unpause_feed page."""
# Ensure webhook exists for this test regardless of test order.
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove the feed if it already exists before we run the test.
feeds: Response = client.get("/")
if feed_url in feeds.text:
@@ -170,6 +221,7 @@ def test_unpause_feed() -> None:
# Add the feed.
response: Response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Pause the feed if it is unpaused.
feeds: Response = client.get(url="/")
@@ -184,11 +236,19 @@ def test_unpause_feed() -> None:
# Check that the feed was unpaused.
response = client.get(url="/")
assert response.status_code == 200, f"Failed to get /: {response.text}"
assert encoded_feed_url(feed_url) in response.text, f"Feed not found in /: {response.text}"
def test_remove_feed() -> None:
"""Test the /remove page."""
# Ensure webhook exists for this test regardless of test order.
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove the feed if it already exists before we run the test.
feeds: Response = client.get(url="/")
if feed_url in feeds.text:
@@ -197,6 +257,7 @@ def test_remove_feed() -> None:
# Add the feed.
response: Response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Remove the feed.
response: Response = client.post(url="/remove", data={"feed_url": feed_url})
@@ -208,6 +269,186 @@ def test_remove_feed() -> None:
assert feed_url not in response.text, f"Feed found in /: {response.text}"
def test_change_feed_url() -> None:
"""Test changing a feed URL from the feed page endpoint."""
new_feed_url = "https://lovinator.space/rss_test_small.xml"
# Ensure test feeds do not already exist.
client.post(url="/remove", data={"feed_url": feed_url})
client.post(url="/remove", data={"feed_url": new_feed_url})
# Ensure webhook exists.
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Add the original feed.
response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Change feed URL.
response = client.post(
url="/change_feed_url",
data={"old_feed_url": feed_url, "new_feed_url": new_feed_url},
)
assert response.status_code == 200, f"Failed to change feed URL: {response.text}"
# New feed should be accessible.
response = client.get(url="/feed", params={"feed_url": new_feed_url})
assert response.status_code == 200, f"New feed URL is not accessible: {response.text}"
# Old feed should no longer be accessible.
response = client.get(url="/feed", params={"feed_url": feed_url})
assert response.status_code == 404, "Old feed URL should no longer exist"
# Cleanup.
client.post(url="/remove", data={"feed_url": new_feed_url})
def test_change_feed_url_marks_entries_as_read() -> None:
"""After changing a feed URL all entries on the new feed should be marked read to prevent resending."""
new_feed_url = "https://lovinator.space/rss_test_small.xml"
# Ensure feeds do not already exist.
client.post(url="/remove", data={"feed_url": feed_url})
client.post(url="/remove", data={"feed_url": new_feed_url})
# Ensure webhook exists.
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
client.post(url="/add_webhook", data={"webhook_name": webhook_name, "webhook_url": webhook_url})
# Add the original feed.
response: Response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Patch reader on the main module so we can observe calls.
mock_entry_a = MagicMock()
mock_entry_a.id = "entry-a"
mock_entry_b = MagicMock()
mock_entry_b.id = "entry-b"
real_reader = main_module.get_reader_dependency()
# Use a no-redirect client so the POST response is inspected directly; the
# redirect target (/feed?feed_url=…) would 404 because change_feed_url is mocked.
no_redirect_client = TestClient(app, follow_redirects=False)
with (
patch.object(real_reader, "get_entries", return_value=[mock_entry_a, mock_entry_b]) as mock_get_entries,
patch.object(real_reader, "set_entry_read") as mock_set_read,
patch.object(real_reader, "update_feed") as mock_update_feed,
patch.object(real_reader, "change_feed_url"),
):
response = no_redirect_client.post(
url="/change_feed_url",
data={"old_feed_url": feed_url, "new_feed_url": new_feed_url},
)
assert response.status_code == 303, f"Expected 303 redirect, got {response.status_code}: {response.text}"
# update_feed should have been called with the new URL.
mock_update_feed.assert_called_once_with(new_feed_url)
# get_entries should have been called to fetch unread entries on the new URL.
mock_get_entries.assert_called_once_with(feed=new_feed_url, read=False)
# Every returned entry should have been marked as read.
assert mock_set_read.call_count == 2, f"Expected 2 set_entry_read calls, got {mock_set_read.call_count}"
mock_set_read.assert_any_call(mock_entry_a, True)
mock_set_read.assert_any_call(mock_entry_b, True)
# Cleanup.
client.post(url="/remove", data={"feed_url": feed_url})
client.post(url="/remove", data={"feed_url": new_feed_url})
def test_change_feed_url_empty_old_url_returns_400() -> None:
"""Submitting an empty old_feed_url should return HTTP 400."""
response: Response = client.post(
url="/change_feed_url",
data={"old_feed_url": " ", "new_feed_url": "https://example.com/feed.xml"},
)
assert response.status_code == 400, f"Expected 400 for empty old URL, got {response.status_code}"
def test_change_feed_url_empty_new_url_returns_400() -> None:
"""Submitting a blank new_feed_url should return HTTP 400."""
response: Response = client.post(
url="/change_feed_url",
data={"old_feed_url": feed_url, "new_feed_url": " "},
)
assert response.status_code == 400, f"Expected 400 for blank new URL, got {response.status_code}"
def test_change_feed_url_nonexistent_old_url_returns_404() -> None:
"""Trying to rename a feed that does not exist should return HTTP 404."""
non_existent = "https://does-not-exist.example.com/rss.xml"
# Make sure it really is absent.
client.post(url="/remove", data={"feed_url": non_existent})
response: Response = client.post(
url="/change_feed_url",
data={"old_feed_url": non_existent, "new_feed_url": "https://example.com/new.xml"},
)
assert response.status_code == 404, f"Expected 404 for non-existent feed, got {response.status_code}"
def test_change_feed_url_new_url_already_exists_returns_409() -> None:
"""Changing to a URL that is already tracked should return HTTP 409."""
second_feed_url = "https://lovinator.space/rss_test_small.xml"
# Ensure both feeds are absent.
client.post(url="/remove", data={"feed_url": feed_url})
client.post(url="/remove", data={"feed_url": second_feed_url})
# Ensure webhook exists.
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
client.post(url="/add_webhook", data={"webhook_name": webhook_name, "webhook_url": webhook_url})
# Add both feeds.
client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
client.post(url="/add", data={"feed_url": second_feed_url, "webhook_dropdown": webhook_name})
# Try to rename one to the other.
response: Response = client.post(
url="/change_feed_url",
data={"old_feed_url": feed_url, "new_feed_url": second_feed_url},
)
assert response.status_code == 409, f"Expected 409 when new URL already exists, got {response.status_code}"
# Cleanup.
client.post(url="/remove", data={"feed_url": feed_url})
client.post(url="/remove", data={"feed_url": second_feed_url})
def test_change_feed_url_same_url_redirects_without_error() -> None:
"""Changing a feed's URL to itself should redirect cleanly without any error."""
# Ensure webhook exists.
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
client.post(url="/add_webhook", data={"webhook_name": webhook_name, "webhook_url": webhook_url})
# Add the feed.
client.post(url="/remove", data={"feed_url": feed_url})
response: Response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Submit the same URL as both old and new.
response = client.post(
url="/change_feed_url",
data={"old_feed_url": feed_url, "new_feed_url": feed_url},
)
assert response.status_code == 200, f"Expected 200 after redirect for same URL, got {response.status_code}"
# Feed should still be accessible.
response = client.get(url="/feed", params={"feed_url": feed_url})
assert response.status_code == 200, f"Feed should still exist after no-op URL change: {response.text}"
# Cleanup.
client.post(url="/remove", data={"feed_url": feed_url})
def test_delete_webhook() -> None:
"""Test the /delete_webhook page."""
# Remove the feed if it already exists before we run the test.
@@ -229,3 +470,1152 @@ def test_delete_webhook() -> None:
response = client.get(url="/webhooks")
assert response.status_code == 200, f"Failed to get /webhooks: {response.text}"
assert webhook_name not in response.text, f"Webhook found in /webhooks: {response.text}"
def test_update_feed_not_found() -> None:
"""Test updating a non-existent feed."""
# Generate a feed URL that does not exist
nonexistent_feed_url = "https://nonexistent-feed.example.com/rss.xml"
# Try to update the non-existent feed
response: Response = client.get(url="/update", params={"feed_url": urllib.parse.quote(nonexistent_feed_url)})
# Check that it returns a 404 status code
assert response.status_code == 404, f"Expected 404 for non-existent feed, got: {response.status_code}"
assert "Feed not found" in response.text
def test_post_entry_send_to_discord() -> None:
"""Test that /post_entry sends an entry to Discord and redirects to the feed page.
Regression test for the bug where the injected reader was not passed to
send_entry_to_discord, meaning the dependency-injected reader was silently ignored.
"""
# Ensure webhook and feed exist.
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
client.post(url="/remove", data={"feed_url": feed_url})
response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Retrieve an entry from the feed to get a valid entry ID.
reader: main_module.Reader = main_module.get_reader_dependency()
entries: list[Entry] = list(reader.get_entries(feed=feed_url, limit=1))
assert entries, "Feed should have at least one entry to send"
entry_to_send: main_module.Entry = entries[0]
encoded_id: str = urllib.parse.quote(entry_to_send.id)
no_redirect_client = TestClient(app, follow_redirects=False)
# Patch execute_webhook so no real HTTP requests are made to Discord.
with patch("discord_rss_bot.feeds.execute_webhook") as mock_execute:
response = no_redirect_client.get(
url="/post_entry",
params={"entry_id": encoded_id, "feed_url": urllib.parse.quote(feed_url)},
)
assert response.status_code == 303, f"Expected redirect after sending, got {response.status_code}: {response.text}"
location: str = response.headers.get("location", "")
assert "feed?feed_url=" in location, f"Should redirect to feed page, got: {location}"
assert mock_execute.called, "execute_webhook should have been called to deliver the entry to Discord"
# Cleanup.
client.post(url="/remove", data={"feed_url": feed_url})
def test_post_entry_unknown_id_returns_404() -> None:
"""Test that /post_entry returns 404 when the entry ID does not exist."""
response: Response = client.get(
url="/post_entry",
params={"entry_id": "https://nonexistent.example.com/entry-that-does-not-exist"},
)
assert response.status_code == 404, f"Expected 404 for unknown entry, got {response.status_code}"
def test_post_entry_uses_feed_url_to_disambiguate_duplicate_ids() -> None:
"""When IDs collide across feeds, /post_entry should pick the entry from provided feed_url."""
@dataclass(slots=True)
class DummyFeed:
url: str
@dataclass(slots=True)
class DummyEntry:
id: str
feed: DummyFeed
feed_url: str
feed_a = "https://example.com/feed-a.xml"
feed_b = "https://example.com/feed-b.xml"
shared_id = "https://example.com/shared-entry-id"
entry_a: Entry = cast("Entry", DummyEntry(id=shared_id, feed=DummyFeed(feed_a), feed_url=feed_a))
entry_b: Entry = cast("Entry", DummyEntry(id=shared_id, feed=DummyFeed(feed_b), feed_url=feed_b))
class StubReader:
def get_entries(self, feed: str | None = None) -> list[Entry]:
if feed == feed_a:
return [entry_a]
if feed == feed_b:
return [entry_b]
return [entry_a, entry_b]
selected_feed_urls: list[str] = []
def fake_send_entry_to_discord(entry: Entry, reader: object) -> None:
selected_feed_urls.append(entry.feed.url)
app.dependency_overrides[get_reader_dependency] = StubReader
no_redirect_client = TestClient(app, follow_redirects=False)
try:
with patch("discord_rss_bot.main.send_entry_to_discord", side_effect=fake_send_entry_to_discord):
response: Response = no_redirect_client.get(
url="/post_entry",
params={"entry_id": urllib.parse.quote(shared_id), "feed_url": urllib.parse.quote(feed_b)},
)
assert response.status_code == 303, f"Expected redirect after sending, got {response.status_code}"
assert selected_feed_urls == [feed_b], f"Expected feed-b entry, got: {selected_feed_urls}"
location = response.headers.get("location", "")
assert urllib.parse.quote(feed_b) in location, f"Expected redirect to feed-b page, got: {location}"
finally:
app.dependency_overrides = {}
def test_navbar_backup_link_hidden_when_not_configured(monkeypatch: pytest.MonkeyPatch) -> None:
"""Test that the backup link is not shown in the navbar when GIT_BACKUP_PATH is not set."""
# Ensure GIT_BACKUP_PATH is not set
monkeypatch.delenv("GIT_BACKUP_PATH", raising=False)
# Get the index page
response: Response = client.get(url="/")
assert response.status_code == 200, f"Failed to get /: {response.text}"
# Check that the backup button is not in the response
assert "Backup" not in response.text or 'action="/backup"' not in response.text, (
"Backup button should not be visible when GIT_BACKUP_PATH is not configured"
)
def test_navbar_backup_link_visible_when_configured(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
"""Test that the backup link is shown in the navbar when GIT_BACKUP_PATH is set."""
# Set GIT_BACKUP_PATH
monkeypatch.setenv("GIT_BACKUP_PATH", str(tmp_path))
# Get the index page
response: Response = client.get(url="/")
assert response.status_code == 200, f"Failed to get /: {response.text}"
# Check that the backup button is in the response
assert "Backup" in response.text, "Backup button text should be visible when GIT_BACKUP_PATH is configured"
assert 'action="/backup"' in response.text, "Backup form should be visible when GIT_BACKUP_PATH is configured"
def test_backup_endpoint_returns_error_when_not_configured(monkeypatch: pytest.MonkeyPatch) -> None:
"""Test that the backup endpoint returns an error when GIT_BACKUP_PATH is not set."""
# Ensure GIT_BACKUP_PATH is not set
monkeypatch.delenv("GIT_BACKUP_PATH", raising=False)
# Try to trigger a backup
response: Response = client.post(url="/backup")
# Should redirect to index with error message
assert response.status_code == 200, f"Failed to post /backup: {response.text}"
assert "Git backup is not configured" in response.text or "GIT_BACKUP_PATH" in response.text, (
"Error message about backup not being configured should be shown"
)
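# Hedged sketch (assumption, not app code): the navbar/backup gating exercised
# by the three tests above presumably reduces to an environment check roughly
# like this; the real logic lives in the application.
def _demo_git_backup_configured() -> bool:
    """Return True when a git backup path is configured via the environment."""
    import os

    return bool(os.environ.get("GIT_BACKUP_PATH"))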
def test_show_more_entries_button_visible_when_many_entries() -> None:
"""Test that the 'Show more entries' button is visible when there are more than 20 entries."""
    # Recreate the webhook so the test starts from a known state
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove the feed if it already exists
feeds: Response = client.get(url="/")
if feed_url in feeds.text:
client.post(url="/remove", data={"feed_url": feed_url})
# Add the feed
response: Response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Get the feed page
response: Response = client.get(url="/feed", params={"feed_url": feed_url})
assert response.status_code == 200, f"Failed to get /feed: {response.text}"
# Check if the feed has more than 20 entries by looking at the response
# The button should be visible if there are more than 20 entries
# We check for both the button text and the link structure
if "Show more entries" in response.text:
# Button is visible - verify it has the correct structure
assert "starting_after=" in response.text, "Show more entries button should contain starting_after parameter"
# The button should be a link to the feed page with pagination
assert (
f'href="/feed?feed_url={urllib.parse.quote(feed_url)}' in response.text
or f'href="/feed?feed_url={encoded_feed_url(feed_url)}' in response.text
), "Show more entries button should link back to the feed page"
def test_show_more_entries_button_not_visible_when_few_entries() -> None:
"""Test that the 'Show more entries' button is not visible when there are 20 or fewer entries."""
# Ensure webhook exists for this test regardless of test order.
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Use a feed with very few entries
small_feed_url = "https://lovinator.space/rss_test_small.xml"
# Clean up if exists
client.post(url="/remove", data={"feed_url": small_feed_url})
# Add a small feed (this may not exist, so this test is conditional)
response: Response = client.post(url="/add", data={"feed_url": small_feed_url, "webhook_dropdown": webhook_name})
if response.status_code == 200:
# Get the feed page
response: Response = client.get(url="/feed", params={"feed_url": small_feed_url})
assert response.status_code == 200, f"Failed to get /feed: {response.text}"
        # If the feed has 20 or fewer entries, the button should not be visible.
        # Extract the entry count from the "(N entries)" marker in the page.
        match: re.Match[str] | None = re.search(r"\((\d+) entries\)", response.text)
        if match:
            entry_count = int(match.group(1))
            if entry_count <= 20:
                assert "Show more entries" not in response.text, (
                    f"Show more entries button should not be visible when there are {entry_count} entries"
                )
# Clean up
client.post(url="/remove", data={"feed_url": small_feed_url})
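# Illustrative helper (not app code): how the "(N entries)" extraction used in
# the surrounding tests behaves; the sample page shape is an assumption.
def _demo_extract_entry_count(page_text: str) -> int | None:
    """Return the entry count parsed from a "(N entries)" marker, or None."""
    found: re.Match[str] | None = re.search(r"\((\d+) entries\)", page_text)
    return int(found.group(1)) if found else None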
def test_show_more_entries_pagination_works() -> None:
"""Test that pagination with starting_after parameter works correctly."""
    # Recreate the webhook so the test starts from a known state
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove the feed if it already exists
feeds: Response = client.get(url="/")
if feed_url in feeds.text:
client.post(url="/remove", data={"feed_url": feed_url})
# Add the feed
response: Response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Get the first page
response: Response = client.get(url="/feed", params={"feed_url": feed_url})
assert response.status_code == 200, f"Failed to get /feed: {response.text}"
# Check if pagination is available
if "Show more entries" in response.text and "starting_after=" in response.text:
# Extract the starting_after parameter from the button link
match: re.Match[str] | None = re.search(r'starting_after=([^"&]+)', response.text)
if match:
starting_after_id: str = match.group(1)
# Request the second page
response: Response = client.get(
url="/feed",
params={"feed_url": feed_url, "starting_after": starting_after_id},
)
assert response.status_code == 200, f"Failed to get paginated feed: {response.text}"
# Verify we got a valid response (the page should contain entries)
assert "entries)" in response.text, "Paginated page should show entry count"
def test_show_more_entries_button_context_variable() -> None:
"""Test that the button visibility variable is correctly passed to the template context."""
    # Recreate the webhook so the test starts from a known state
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove the feed if it already exists
feeds: Response = client.get(url="/")
if feed_url in feeds.text:
client.post(url="/remove", data={"feed_url": feed_url})
# Add the feed
response: Response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Get the feed page
response: Response = client.get(url="/feed", params={"feed_url": feed_url})
assert response.status_code == 200, f"Failed to get /feed: {response.text}"
# Extract the total entries count from the page
match: re.Match[str] | None = re.search(r"\((\d+) entries\)", response.text)
if match:
entry_count = int(match.group(1))
# If more than 20 entries, button should be visible
if entry_count > 20:
assert "Show more entries" in response.text, (
f"Button should be visible when there are {entry_count} entries (more than 20)"
)
# If 20 or fewer entries, button should not be visible
else:
assert "Show more entries" not in response.text, (
f"Button should not be visible when there are {entry_count} entries (20 or fewer)"
)
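# Hedged sketch of the visibility rule the pagination tests above assert,
# assuming an entries_per_page of 20; the real template logic lives in the app.
def _demo_show_more_visible(total_entries: int, entries_per_page: int = 20) -> bool:
    """Return True when a "Show more entries" control should be rendered."""
    return total_entries > entries_per_page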
def test_create_html_marks_entries_from_another_feed(monkeypatch: pytest.MonkeyPatch) -> None:
"""Entries from another feed should be marked in /feed html output."""
@dataclass(slots=True)
class DummyContent:
value: str
@dataclass(slots=True)
class DummyFeed:
url: str
@dataclass(slots=True)
class DummyEntry:
feed: DummyFeed
id: str
original_feed_url: str | None = None
link: str = "https://example.com/post"
title: str = "Example title"
author: str = "Author"
summary: str = "Summary"
content: list[DummyContent] = field(default_factory=lambda: [DummyContent("Content")])
published: None = None
def __post_init__(self) -> None:
if self.original_feed_url is None:
self.original_feed_url = self.feed.url
selected_feed_url = "https://example.com/feed-a.xml"
same_feed_entry = DummyEntry(DummyFeed(selected_feed_url), "same")
# feed.url matches selected feed, but original_feed_url differs; marker should still show.
other_feed_entry = DummyEntry(
DummyFeed(selected_feed_url),
"other",
original_feed_url="https://example.com/feed-b.xml",
)
monkeypatch.setattr(
"discord_rss_bot.main.replace_tags_in_text_message",
lambda _entry, **_kwargs: "Rendered content",
)
monkeypatch.setattr("discord_rss_bot.main.entry_is_blacklisted", lambda _entry, **_kwargs: False)
monkeypatch.setattr("discord_rss_bot.main.entry_is_whitelisted", lambda _entry, **_kwargs: False)
same_feed_entry_typed: Entry = cast("Entry", same_feed_entry)
other_feed_entry_typed: Entry = cast("Entry", other_feed_entry)
html: str = create_html_for_feed(
reader=MagicMock(),
current_feed_url=selected_feed_url,
entries=[
same_feed_entry_typed,
other_feed_entry_typed,
],
)
assert "From another feed: https://example.com/feed-b.xml" in html
assert "From another feed: https://example.com/feed-a.xml" not in html
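# Hedged sketch (assumption, not the actual implementation): the "From another
# feed" marker asserted above presumably hinges on a comparison like this.
def _demo_is_from_another_feed(original_feed_url: str | None, current_feed_url: str) -> bool:
    """Return True when an entry originates from a feed other than the one shown."""
    return bool(original_feed_url) and original_feed_url != current_feed_url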
def test_webhook_entries_webhook_not_found() -> None:
"""Test webhook_entries endpoint returns 404 when webhook doesn't exist."""
nonexistent_webhook_url = "https://discord.com/api/webhooks/999999/nonexistent"
response: Response = client.get(
url="/webhook_entries",
params={"webhook_url": nonexistent_webhook_url},
)
assert response.status_code == 404, f"Expected 404 for non-existent webhook, got: {response.status_code}"
assert "Webhook not found" in response.text
def test_webhook_entries_no_feeds() -> None:
"""Test webhook_entries endpoint displays message when webhook has no feeds."""
# Clean up any existing feeds first
client.post(url="/remove", data={"feed_url": feed_url})
# Clean up and create a webhook
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Get webhook_entries without adding any feeds
response = client.get(
url="/webhook_entries",
params={"webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to get /webhook_entries: {response.text}"
assert webhook_name in response.text, "Webhook name not found in response"
assert "No feeds found" in response.text or "Add feeds" in response.text, "Expected message about no feeds"
def test_webhook_entries_no_feeds_still_shows_webhook_settings() -> None:
"""The webhook detail view should show settings/actions even with no attached feeds."""
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
response = client.get(
url="/webhook_entries",
params={"webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to get /webhook_entries: {response.text}"
assert "Settings" in response.text, "Expected settings card on webhook detail view"
assert "Modify Webhook" in response.text, "Expected modify form on webhook detail view"
assert "Delete Webhook" in response.text, "Expected delete action on webhook detail view"
assert "Back to dashboard" in response.text, "Expected dashboard navigation link"
assert "All webhooks" in response.text, "Expected all webhooks navigation link"
assert f'name="old_hook" value="{webhook_url}"' in response.text, "Expected old_hook hidden input"
assert f'value="/webhook_entries?webhook_url={urllib.parse.quote(webhook_url)}"' in response.text, (
"Expected modify form to redirect back to the current webhook detail view"
)
def test_webhook_entries_with_feeds_no_entries() -> None:
"""Test webhook_entries endpoint when webhook has feeds but no entries yet."""
# Clean up and create fresh webhook
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Use a feed URL that exists but has no entries (or clean feed)
empty_feed_url = "https://lovinator.space/empty_feed.xml"
client.post(url="/remove", data={"feed_url": empty_feed_url})
# Add the feed
response = client.post(
url="/add",
data={"feed_url": empty_feed_url, "webhook_dropdown": webhook_name},
)
# Get webhook_entries
response = client.get(
url="/webhook_entries",
params={"webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to get /webhook_entries: {response.text}"
assert webhook_name in response.text, "Webhook name not found in response"
# Clean up
client.post(url="/remove", data={"feed_url": empty_feed_url})
def test_webhook_entries_with_entries() -> None:
"""Test webhook_entries endpoint displays entries correctly."""
# Clean up and create webhook
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove and add the feed
client.post(url="/remove", data={"feed_url": feed_url})
response = client.post(
url="/add",
data={"feed_url": feed_url, "webhook_dropdown": webhook_name},
)
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Get webhook_entries
response = client.get(
url="/webhook_entries",
params={"webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to get /webhook_entries: {response.text}"
assert webhook_name in response.text, "Webhook name not found in response"
# Should show entries (the feed has entries)
assert "total from" in response.text, "Expected to see entry count"
assert "Modify Webhook" in response.text, "Expected webhook settings to be visible"
assert "Attached feeds" in response.text, "Expected attached feeds section to be visible"
def test_webhook_entries_shows_attached_feed_link() -> None:
"""The webhook detail view should list attached feeds linking to their feed pages."""
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
client.post(url="/remove", data={"feed_url": feed_url})
response = client.post(
url="/add",
data={"feed_url": feed_url, "webhook_dropdown": webhook_name},
)
assert response.status_code == 200, f"Failed to add feed: {response.text}"
response = client.get(
url="/webhook_entries",
params={"webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to get /webhook_entries: {response.text}"
assert f"/feed?feed_url={urllib.parse.quote(feed_url)}" in response.text, (
"Expected attached feed to link to its feed detail page"
)
assert "Latest entries" in response.text, "Expected latest entries heading on webhook detail view"
client.post(url="/remove", data={"feed_url": feed_url})
def test_webhook_entries_multiple_feeds() -> None:
"""Test webhook_entries endpoint shows feed count correctly."""
# Clean up and create webhook
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove and add feed
client.post(url="/remove", data={"feed_url": feed_url})
response = client.post(
url="/add",
data={"feed_url": feed_url, "webhook_dropdown": webhook_name},
)
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Get webhook_entries
response = client.get(
url="/webhook_entries",
params={"webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to get /webhook_entries: {response.text}"
assert webhook_name in response.text, "Webhook name not found in response"
# Should show entries and feed count
assert "feed" in response.text.lower(), "Expected to see feed information"
# Clean up
client.post(url="/remove", data={"feed_url": feed_url})
def test_webhook_entries_sort_newest_and_non_null_published_first() -> None:
"""Webhook entries should be sorted newest-first with published=None entries placed last."""
@dataclass(slots=True)
class DummyFeed:
url: str
title: str | None = None
updates_enabled: bool = True
last_exception: None = None
@dataclass(slots=True)
class DummyEntry:
id: str
feed: DummyFeed
published: datetime | None
dummy_feed = DummyFeed(url="https://example.com/feed.xml", title="Example Feed")
# Intentionally unsorted input with two dated entries and two undated entries.
unsorted_entries: list[Entry] = [
cast("Entry", DummyEntry(id="old", feed=dummy_feed, published=datetime(2024, 1, 1, tzinfo=UTC))),
cast("Entry", DummyEntry(id="none-1", feed=dummy_feed, published=None)),
cast("Entry", DummyEntry(id="new", feed=dummy_feed, published=datetime(2024, 2, 1, tzinfo=UTC))),
cast("Entry", DummyEntry(id="none-2", feed=dummy_feed, published=None)),
]
class StubReader:
def get_tag(self, resource: object, key: str, default: object = None) -> object:
if resource == () and key == "webhooks":
return [{"name": webhook_name, "url": webhook_url}]
if key == "webhook" and isinstance(resource, str):
return webhook_url
return default
def get_feeds(self) -> list[DummyFeed]:
return [dummy_feed]
def get_entries(self, **_kwargs: object) -> list[Entry]:
return unsorted_entries
observed_order: list[str] = []
def capture_entries(*, reader: object, entries: list[Entry], current_feed_url: str = "") -> str:
del reader, current_feed_url
observed_order.extend(entry.id for entry in entries)
return ""
app.dependency_overrides[get_reader_dependency] = StubReader
try:
with (
patch(
"discord_rss_bot.main.get_data_from_hook_url",
return_value=main_module.WebhookInfo(custom_name=webhook_name, url=webhook_url),
),
patch("discord_rss_bot.main.create_html_for_feed", side_effect=capture_entries),
):
response: Response = client.get(
url="/webhook_entries",
params={"webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to get /webhook_entries: {response.text}"
assert observed_order == ["new", "old", "none-1", "none-2"], (
"Expected newest published entries first and published=None entries last"
)
finally:
app.dependency_overrides = {}
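# Hedged sketch (assumption): one sort key satisfying the ordering asserted
# above, newest published first and published=None last. The real ordering in
# discord_rss_bot.main may be implemented differently.
def _demo_sort_entries_newest_first(entries: list) -> list:
    """Sort entries newest-first; entries without a published date go last (stable)."""
    return sorted(
        entries,
        key=lambda entry: (
            entry.published is None,
            -(entry.published.timestamp() if entry.published is not None else 0.0),
        ),
    )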
def test_webhook_entries_pagination() -> None:
"""Test webhook_entries endpoint pagination functionality."""
# Clean up and create webhook
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove and add the feed
client.post(url="/remove", data={"feed_url": feed_url})
response = client.post(
url="/add",
data={"feed_url": feed_url, "webhook_dropdown": webhook_name},
)
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Get first page of webhook_entries
response = client.get(
url="/webhook_entries",
params={"webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to get /webhook_entries: {response.text}"
# Check if pagination button is shown when there are many entries
# The button should be visible if total_entries > 20 (entries_per_page)
if "Load More Entries" in response.text:
# Extract the starting_after parameter from the pagination form
# This is a simple check that pagination elements exist
assert 'name="starting_after"' in response.text, "Expected pagination form with starting_after parameter"
# Clean up
client.post(url="/remove", data={"feed_url": feed_url})
def test_webhook_entries_url_encoding() -> None:
"""Test webhook_entries endpoint handles URL encoding correctly."""
# Clean up and create webhook
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
# Remove and add the feed
client.post(url="/remove", data={"feed_url": feed_url})
response = client.post(
url="/add",
data={"feed_url": feed_url, "webhook_dropdown": webhook_name},
)
assert response.status_code == 200, f"Failed to add feed: {response.text}"
# Get webhook_entries with URL-encoded webhook URL
encoded_webhook_url = urllib.parse.quote(webhook_url)
response = client.get(
url="/webhook_entries",
params={"webhook_url": encoded_webhook_url},
)
assert response.status_code == 200, f"Failed to get /webhook_entries with encoded URL: {response.text}"
assert webhook_name in response.text, "Webhook name not found in response"
# Clean up
client.post(url="/remove", data={"feed_url": feed_url})
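# Illustrative note on the encoding above: urllib.parse.quote escapes ":" but
# leaves "/" unescaped by default (safe="/"), so a quoted webhook URL keeps its
# path structure.
def _demo_quote_webhook_url(url: str) -> str:
    """Percent-encode a webhook URL the way the test above does."""
    return urllib.parse.quote(url)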
def test_dashboard_webhook_name_links_to_webhook_detail() -> None:
"""Webhook names on the dashboard should open the webhook detail view."""
client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
response: Response = client.post(
url="/add_webhook",
data={"webhook_name": webhook_name, "webhook_url": webhook_url},
)
assert response.status_code == 200, f"Failed to add webhook: {response.text}"
client.post(url="/remove", data={"feed_url": feed_url})
response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
assert response.status_code == 200, f"Failed to add feed: {response.text}"
response = client.get(url="/")
assert response.status_code == 200, f"Failed to get /: {response.text}"
expected_link = f"/webhook_entries?webhook_url={urllib.parse.quote(webhook_url)}"
assert expected_link in response.text, "Expected dashboard webhook link to point to the webhook detail view"
client.post(url="/remove", data={"feed_url": feed_url})
def test_modify_webhook_redirects_back_to_webhook_detail() -> None:
    """Webhook updates from the detail view should redirect back to that view with the new URL."""
    original_webhook_url = "https://discord.com/api/webhooks/1234567890/abcdefghijklmnopqrstuvwxyz"
    new_webhook_url = "https://discord.com/api/webhooks/1234567890/updated-token"
    client.post(url="/delete_webhook", data={"webhook_url": original_webhook_url})
    client.post(url="/delete_webhook", data={"webhook_url": new_webhook_url})
    response: Response = client.post(
        url="/add_webhook",
        data={"webhook_name": webhook_name, "webhook_url": original_webhook_url},
    )
    assert response.status_code == 200, f"Failed to add webhook: {response.text}"
    no_redirect_client = TestClient(app, follow_redirects=False)
    response = no_redirect_client.post(
        url="/modify_webhook",
        data={
            "old_hook": original_webhook_url,
            "new_hook": new_webhook_url,
            "redirect_to": f"/webhook_entries?webhook_url={urllib.parse.quote(original_webhook_url)}",
        },
    )
    assert response.status_code == 303, f"Expected 303 redirect, got {response.status_code}: {response.text}"
    assert response.headers["location"] == (f"/webhook_entries?webhook_url={urllib.parse.quote(new_webhook_url)}"), (
        f"Unexpected redirect location: {response.headers['location']}"
    )
    client.post(url="/delete_webhook", data={"webhook_url": new_webhook_url})


def test_modify_webhook_triggers_git_backup_commit() -> None:
    """Modifying a webhook URL should record a state change for git backup."""
    original_webhook_url = "https://discord.com/api/webhooks/1234567890/abcdefghijklmnopqrstuvwxyz"
    new_webhook_url = "https://discord.com/api/webhooks/1234567890/updated-token"
    client.post(url="/delete_webhook", data={"webhook_url": original_webhook_url})
    client.post(url="/delete_webhook", data={"webhook_url": new_webhook_url})
    response: Response = client.post(
        url="/add_webhook",
        data={"webhook_name": webhook_name, "webhook_url": original_webhook_url},
    )
    assert response.status_code == 200, f"Failed to add webhook: {response.text}"
    no_redirect_client = TestClient(app, follow_redirects=False)
    with patch("discord_rss_bot.main.commit_state_change") as mock_commit_state_change:
        response = no_redirect_client.post(
            url="/modify_webhook",
            data={
                "old_hook": original_webhook_url,
                "new_hook": new_webhook_url,
                "redirect_to": f"/webhook_entries?webhook_url={urllib.parse.quote(original_webhook_url)}",
            },
        )
    assert response.status_code == 303, f"Expected 303 redirect, got {response.status_code}: {response.text}"
    assert mock_commit_state_change.call_count == 1, "Expected webhook modification to trigger git backup commit"
    client.post(url="/delete_webhook", data={"webhook_url": new_webhook_url})
def test_webhook_entries_mass_update_preview_shows_old_and_new_urls() -> None:
"""Preview should list old->new feed URLs for webhook bulk replacement."""
@dataclass(slots=True)
class DummyFeed:
url: str
title: str | None = None
updates_enabled: bool = True
last_exception: None = None
class StubReader:
def __init__(self) -> None:
self._feeds: list[DummyFeed] = [
DummyFeed(url="https://old.example.com/rss/a.xml", title="A"),
DummyFeed(url="https://old.example.com/rss/b.xml", title="B"),
DummyFeed(url="https://unchanged.example.com/rss/c.xml", title="C"),
]
def get_tag(self, resource: object, key: str, default: object = None) -> object:
if resource == () and key == "webhooks":
return [{"name": webhook_name, "url": webhook_url}]
if key == "webhook" and isinstance(resource, str):
if resource.startswith("https://old.example.com"):
return webhook_url
if resource.startswith("https://unchanged.example.com"):
return webhook_url
return default
def get_feeds(self) -> list[DummyFeed]:
return self._feeds
def get_entries(self, **_kwargs: object) -> list[Entry]:
return []
app.dependency_overrides[get_reader_dependency] = StubReader
try:
with (
patch(
"discord_rss_bot.main.get_data_from_hook_url",
return_value=main_module.WebhookInfo(custom_name=webhook_name, url=webhook_url),
),
patch(
"discord_rss_bot.main.resolve_final_feed_url",
side_effect=lambda url: (url.replace("old.example.com", "new.example.com"), None),
),
):
response: Response = client.get(
url="/webhook_entries",
params={
"webhook_url": webhook_url,
"replace_from": "old.example.com",
"replace_to": "new.example.com",
"resolve_urls": "true",
},
)
assert response.status_code == 200, f"Failed to get preview: {response.text}"
assert "Mass update feed URLs" in response.text
assert "old.example.com/rss/a.xml" in response.text
assert "new.example.com/rss/a.xml" in response.text
assert "Will update" in response.text
assert "Matched: 2" in response.text
assert "Will update: 2" in response.text
finally:
app.dependency_overrides = {}
def test_bulk_change_feed_urls_updates_matching_feeds() -> None:
"""Mass updater should change all matching feed URLs for a webhook."""
@dataclass(slots=True)
class DummyFeed:
url: str
class StubReader:
def __init__(self) -> None:
self._feeds = [
DummyFeed(url="https://old.example.com/rss/a.xml"),
DummyFeed(url="https://old.example.com/rss/b.xml"),
DummyFeed(url="https://unchanged.example.com/rss/c.xml"),
]
self.change_calls: list[tuple[str, str]] = []
self.updated_feeds: list[str] = []
def get_tag(self, resource: object, key: str, default: object = None) -> object:
if resource == () and key == "webhooks":
return [{"name": webhook_name, "url": webhook_url}]
if key == "webhook" and isinstance(resource, str):
return webhook_url
return default
def get_feeds(self) -> list[DummyFeed]:
return self._feeds
def change_feed_url(self, old_url: str, new_url: str) -> None:
self.change_calls.append((old_url, new_url))
def update_feed(self, feed_url: str) -> None:
self.updated_feeds.append(feed_url)
def get_entries(self, **_kwargs: object) -> list[Entry]:
return []
def set_entry_read(self, _entry: Entry, _value: bool) -> None: # noqa: FBT001
return
stub_reader = StubReader()
app.dependency_overrides[get_reader_dependency] = lambda: stub_reader
no_redirect_client = TestClient(app, follow_redirects=False)
try:
with patch(
"discord_rss_bot.main.resolve_final_feed_url",
side_effect=lambda url: (url.replace("old.example.com", "new.example.com"), None),
):
response: Response = no_redirect_client.post(
url="/bulk_change_feed_urls",
data={
"webhook_url": webhook_url,
"replace_from": "old.example.com",
"replace_to": "new.example.com",
"resolve_urls": "true",
},
)
assert response.status_code == 303, f"Expected redirect, got {response.status_code}: {response.text}"
assert "Updated%202%20feed%20URL%28s%29" in response.headers.get("location", "")
assert sorted(stub_reader.change_calls) == sorted([
("https://old.example.com/rss/a.xml", "https://new.example.com/rss/a.xml"),
("https://old.example.com/rss/b.xml", "https://new.example.com/rss/b.xml"),
])
assert sorted(stub_reader.updated_feeds) == sorted([
"https://new.example.com/rss/a.xml",
"https://new.example.com/rss/b.xml",
])
finally:
app.dependency_overrides = {}
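# Illustrative sketch of the percent-encoding asserted in the redirect location
# above: urllib.parse.quote escapes spaces and parentheses.
def _demo_encode_flash_message(message: str) -> str:
    """Encode a status message for use in a redirect query string."""
    return urllib.parse.quote(message)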
def test_webhook_entries_mass_update_preview_fragment_endpoint() -> None:
"""HTMX preview endpoint should render only the mass-update preview fragment."""
@dataclass(slots=True)
class DummyFeed:
url: str
title: str | None = None
updates_enabled: bool = True
last_exception: None = None
class StubReader:
def __init__(self) -> None:
self._feeds: list[DummyFeed] = [
DummyFeed(url="https://old.example.com/rss/a.xml", title="A"),
DummyFeed(url="https://old.example.com/rss/b.xml", title="B"),
]
def get_tag(self, resource: object, key: str, default: object = None) -> object:
if key == "webhook" and isinstance(resource, str):
return webhook_url
return default
def get_feeds(self) -> list[DummyFeed]:
return self._feeds
app.dependency_overrides[get_reader_dependency] = StubReader
try:
with patch(
"discord_rss_bot.main.resolve_final_feed_url",
side_effect=lambda url: (url.replace("old.example.com", "new.example.com"), None),
):
response: Response = client.get(
url="/webhook_entries_mass_update_preview",
params={
"webhook_url": webhook_url,
"replace_from": "old.example.com",
"replace_to": "new.example.com",
"resolve_urls": "true",
},
)
assert response.status_code == 200, f"Failed to get HTMX preview fragment: {response.text}"
assert "Will update: 2" in response.text
assert "<table" in response.text
assert "Mass update feed URLs" not in response.text, "Fragment should not include full page wrapper text"
finally:
app.dependency_overrides = {}
def test_bulk_change_feed_urls_force_update_overwrites_conflict() -> None: # noqa: C901
"""Force update should overwrite conflicting target URLs instead of skipping them."""
@dataclass(slots=True)
class DummyFeed:
url: str
class StubReader:
def __init__(self) -> None:
self._feeds = [
DummyFeed(url="https://old.example.com/rss/a.xml"),
DummyFeed(url="https://new.example.com/rss/a.xml"),
]
self.delete_calls: list[str] = []
self.change_calls: list[tuple[str, str]] = []
def get_tag(self, resource: object, key: str, default: object = None) -> object:
if resource == () and key == "webhooks":
return [{"name": webhook_name, "url": webhook_url}]
if key == "webhook" and isinstance(resource, str):
return webhook_url
return default
def get_feeds(self) -> list[DummyFeed]:
return self._feeds
def delete_feed(self, feed_url: str) -> None:
self.delete_calls.append(feed_url)
def change_feed_url(self, old_url: str, new_url: str) -> None:
self.change_calls.append((old_url, new_url))
def update_feed(self, _feed_url: str) -> None:
return
def get_entries(self, **_kwargs: object) -> list[Entry]:
return []
def set_entry_read(self, _entry: Entry, _value: bool) -> None: # noqa: FBT001
return
stub_reader = StubReader()
app.dependency_overrides[get_reader_dependency] = lambda: stub_reader
no_redirect_client = TestClient(app, follow_redirects=False)
try:
with patch(
"discord_rss_bot.main.resolve_final_feed_url",
side_effect=lambda url: (url.replace("old.example.com", "new.example.com"), None),
):
response: Response = no_redirect_client.post(
url="/bulk_change_feed_urls",
data={
"webhook_url": webhook_url,
"replace_from": "old.example.com",
"replace_to": "new.example.com",
"resolve_urls": "true",
"force_update": "true",
},
)
assert response.status_code == 303, f"Expected redirect, got {response.status_code}: {response.text}"
assert stub_reader.delete_calls == ["https://new.example.com/rss/a.xml"]
assert stub_reader.change_calls == [
(
"https://old.example.com/rss/a.xml",
"https://new.example.com/rss/a.xml",
),
]
assert "Force%20overwrote%201" in response.headers.get("location", "")
finally:
app.dependency_overrides = {}
def test_bulk_change_feed_urls_force_update_ignores_resolution_error() -> None:
    """Force update should proceed even when URL resolution returns an error (e.g. HTTP 404)."""

    @dataclass(slots=True)
    class DummyFeed:
        url: str

    class StubReader:
        def __init__(self) -> None:
            self._feeds = [
                DummyFeed(url="https://old.example.com/rss/a.xml"),
            ]
            self.change_calls: list[tuple[str, str]] = []

        def get_tag(self, resource: object, key: str, default: object = None) -> object:
            if resource == () and key == "webhooks":
                return [{"name": webhook_name, "url": webhook_url}]
            if key == "webhook" and isinstance(resource, str):
                return webhook_url
            return default

        def get_feeds(self) -> list[DummyFeed]:
            return self._feeds

        def change_feed_url(self, old_url: str, new_url: str) -> None:
            self.change_calls.append((old_url, new_url))

        def update_feed(self, _feed_url: str) -> None:
            return

        def get_entries(self, **_kwargs: object) -> list[Entry]:
            return []

        def set_entry_read(self, _entry: Entry, _value: bool) -> None:  # noqa: FBT001
            return

    stub_reader = StubReader()
    app.dependency_overrides[get_reader_dependency] = lambda: stub_reader
    no_redirect_client = TestClient(app, follow_redirects=False)
    try:
        with patch(
            "discord_rss_bot.main.resolve_final_feed_url",
            return_value=("https://new.example.com/rss/a.xml", "HTTP 404"),
        ):
            response: Response = no_redirect_client.post(
                url="/bulk_change_feed_urls",
                data={
                    "webhook_url": webhook_url,
                    "replace_from": "old.example.com",
                    "replace_to": "new.example.com",
                    "resolve_urls": "true",
                    "force_update": "true",
                },
            )
        assert response.status_code == 303, f"Expected redirect, got {response.status_code}: {response.text}"
        assert stub_reader.change_calls == [
            (
                "https://old.example.com/rss/a.xml",
                "https://new.example.com/rss/a.xml",
            ),
        ]
        location = response.headers.get("location", "")
        assert "Updated%201%20feed%20URL%28s%29" in location
        assert "Failed%200" in location
    finally:
        app.dependency_overrides = {}
def test_reader_dependency_override_is_used() -> None:
    """Reader should be injectable and overridable via FastAPI dependency overrides."""

    class StubReader:
        def get_tag(self, _resource: str, _key: str, default: str | None = None) -> str | None:
            """Stub get_tag that always returns the default value.

            Args:
                _resource: Ignored.
                _key: Ignored.
                default: The value to return.

            Returns:
                The default value, simulating a missing tag.
            """
            return default

    app.dependency_overrides[get_reader_dependency] = StubReader
    try:
        response: Response = client.get(url="/add")
        assert response.status_code == 200, f"Expected /add to render with overridden reader: {response.text}"
    finally:
        app.dependency_overrides = {}


@@ -4,16 +4,18 @@ import tempfile
 from pathlib import Path
 from typing import TYPE_CHECKING

-from reader import Feed, Reader, make_reader
+from reader import Feed
+from reader import Reader
+from reader import make_reader

-from discord_rss_bot.search import create_html_for_search_results
+from discord_rss_bot.search import create_search_context

 if TYPE_CHECKING:
     from collections.abc import Iterable


-def test_create_html_for_search_results() -> None:
-    """Test create_html_for_search_results."""
+def test_create_search_context() -> None:
+    """Test create_search_context."""
     # Create a reader.
     with tempfile.TemporaryDirectory() as temp_dir:
         # Create the temp directory.
@@ -43,10 +45,9 @@ def test_create_html_for_search_results() -> None:
         reader.enable_search()
         reader.update_search()

-        # Create the HTML and check if it is not empty.
-        search_html: str = create_html_for_search_results("a", reader)
-        assert search_html is not None, f"The search HTML should not be None. Got: {search_html}"
-        assert len(search_html) > 10, f"The search HTML should be longer than 10 characters. Got: {len(search_html)}"
+        # Create the search context.
+        context: dict = create_search_context("test", reader=reader)
+        assert context is not None, f"The context should not be None. Got: {context}"

         # Close the reader, so we can delete the directory.
         reader.close()


@@ -6,7 +6,9 @@ from pathlib import Path
 from reader import Reader

-from discord_rss_bot.settings import data_dir, default_custom_message, get_reader
+from discord_rss_bot.settings import data_dir
+from discord_rss_bot.settings import default_custom_message
+from discord_rss_bot.settings import get_reader


 def test_reader() -> None:
@@ -20,12 +22,12 @@ def test_reader() -> None:
         Path.mkdir(Path(temp_dir), exist_ok=True)
         custom_loc: pathlib.Path = pathlib.Path(temp_dir, "custom_loc_db.sqlite")
-        custom_reader: Reader = get_reader(custom_location=str(custom_loc))
+        reader: Reader = get_reader(custom_location=str(custom_loc))

-        assert_msg = f"The custom reader should be an instance of Reader. But it was '{type(custom_reader)}'."
-        assert isinstance(custom_reader, Reader), assert_msg
+        assert_msg = f"The custom reader should be an instance of Reader. But it was '{type(reader)}'."
+        assert isinstance(reader, Reader), assert_msg

         # Close the reader, so we can delete the directory.
-        custom_reader.close()
+        reader.close()


 def test_data_dir() -> None:
@@ -47,16 +49,16 @@ def test_get_webhook_for_entry() -> None:
         Path.mkdir(Path(temp_dir), exist_ok=True)
         custom_loc: pathlib.Path = pathlib.Path(temp_dir, "custom_loc_db.sqlite")
-        custom_reader: Reader = get_reader(custom_location=str(custom_loc))
+        reader: Reader = get_reader(custom_location=str(custom_loc))

         # Add a feed to the database.
-        custom_reader.add_feed("https://www.reddit.com/r/movies.rss")
-        custom_reader.update_feed("https://www.reddit.com/r/movies.rss")
+        reader.add_feed("https://www.reddit.com/r/movies.rss")
+        reader.update_feed("https://www.reddit.com/r/movies.rss")

         # Add a webhook to the database.
-        custom_reader.set_tag("https://www.reddit.com/r/movies.rss", "webhook", "https://example.com")  # pyright: ignore[reportArgumentType]
-        our_tag = custom_reader.get_tag("https://www.reddit.com/r/movies.rss", "webhook")  # pyright: ignore[reportArgumentType]
+        reader.set_tag("https://www.reddit.com/r/movies.rss", "webhook", "https://example.com")  # pyright: ignore[reportArgumentType]
+        our_tag = reader.get_tag("https://www.reddit.com/r/movies.rss", "webhook")  # pyright: ignore[reportArgumentType]
         assert our_tag == "https://example.com", f"The tag should be 'https://example.com'. But it was '{our_tag}'."

         # Close the reader, so we can delete the directory.
-        custom_reader.close()
+        reader.close()


@@ -0,0 +1,101 @@
from __future__ import annotations

import urllib.parse
from typing import TYPE_CHECKING

from fastapi.testclient import TestClient

from discord_rss_bot.main import app

if TYPE_CHECKING:
    from httpx import Response

client: TestClient = TestClient(app)
webhook_name: str = "Test Webhook for Update Interval"
webhook_url: str = "https://discord.com/api/webhooks/1234567890/test_update_interval"
feed_url: str = "https://lovinator.space/rss_test.xml"


def test_global_update_interval() -> None:
    """Test setting the global update interval."""
    # Set global update interval to 30 minutes
    response: Response = client.post("/set_global_update_interval", data={"interval_minutes": "30"})
    assert response.status_code == 200, f"Failed to set global interval: {response.text}"

    # Check that the settings page shows the new interval
    response = client.get("/settings")
    assert response.status_code == 200, f"Failed to get settings page: {response.text}"
    assert "30" in response.text, "Global interval not updated on settings page"


def test_per_feed_update_interval() -> None:
    """Test setting per-feed update interval."""
    # Clean up any existing feed/webhook
    client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
    client.post(url="/remove", data={"feed_url": feed_url})

    # Add webhook and feed
    response: Response = client.post(
        url="/add_webhook",
        data={"webhook_name": webhook_name, "webhook_url": webhook_url},
    )
    assert response.status_code == 200, f"Failed to add webhook: {response.text}"
    response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
    assert response.status_code == 200, f"Failed to add feed: {response.text}"

    # Set feed-specific update interval to 15 minutes
    response = client.post("/set_update_interval", data={"feed_url": feed_url, "interval_minutes": "15"})
    assert response.status_code == 200, f"Failed to set feed interval: {response.text}"

    # Check that the feed page shows the custom interval
    encoded_url = urllib.parse.quote(feed_url)
    response = client.get(f"/feed?feed_url={encoded_url}")
    assert response.status_code == 200, f"Failed to get feed page: {response.text}"
    assert "15" in response.text, "Feed interval not displayed on feed page"
    assert "Custom" in response.text, "Custom badge not shown for feed-specific interval"


def test_reset_feed_update_interval() -> None:
    """Test resetting feed update interval to global default."""
    # Ensure feed/webhook setup exists regardless of test order
    client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
    client.post(url="/remove", data={"feed_url": feed_url})
    response: Response = client.post(
        url="/add_webhook",
        data={"webhook_name": webhook_name, "webhook_url": webhook_url},
    )
    assert response.status_code == 200, f"Failed to add webhook: {response.text}"
    response = client.post(url="/add", data={"feed_url": feed_url, "webhook_dropdown": webhook_name})
    assert response.status_code == 200, f"Failed to add feed: {response.text}"

    # First set a custom interval
    response = client.post("/set_update_interval", data={"feed_url": feed_url, "interval_minutes": "15"})
    assert response.status_code == 200, f"Failed to set feed interval: {response.text}"

    # Reset to global default
    response = client.post("/reset_update_interval", data={"feed_url": feed_url})
    assert response.status_code == 200, f"Failed to reset feed interval: {response.text}"

    # Check that the feed page shows global default
    encoded_url = urllib.parse.quote(feed_url)
    response = client.get(f"/feed?feed_url={encoded_url}")
    assert response.status_code == 200, f"Failed to get feed page: {response.text}"
    assert "Using global default" in response.text, "Global default badge not shown after reset"


def test_update_interval_validation() -> None:
    """Test that update interval validation works."""
    # Try to set an interval below minimum (should be clamped to 1)
    response: Response = client.post("/set_global_update_interval", data={"interval_minutes": "0"})
    assert response.status_code == 200, f"Failed to handle minimum interval: {response.text}"

    # Try to set an interval above maximum (should be clamped to 10080)
    response = client.post("/set_global_update_interval", data={"interval_minutes": "20000"})
    assert response.status_code == 200, f"Failed to handle maximum interval: {response.text}"

    # Clean up
    client.post(url="/remove", data={"feed_url": feed_url})
    client.post(url="/delete_webhook", data={"webhook_url": webhook_url})
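The validation test only asserts that the endpoint responds; the clamping behavior it describes (values forced into 1..10080 minutes, i.e. one minute to one week) can be sketched as a one-liner. `clamp_interval` is a hypothetical name, not necessarily the project's actual helper:

```python
def clamp_interval(minutes: int, minimum: int = 1, maximum: int = 10080) -> int:
    """Clamp an update interval to the allowed range of 1 minute to 7 days."""
    return max(minimum, min(minutes, maximum))


# Mirrors the test inputs: 0 is raised to 1, 20000 is lowered to 10080.
```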


@@ -1,5 +1,6 @@
 from __future__ import annotations

+from discord_rss_bot.filter.utils import is_regex_match
 from discord_rss_bot.filter.utils import is_word_in_text
@@ -14,3 +15,51 @@ def test_is_word_in_text() -> None:
     assert is_word_in_text("Alert,Forma", "Outbreak - Mutagen Mass - Rhea (Saturn)") is False, msg_false
     assert is_word_in_text("Alert,Forma", "Outbreak - Mutagen Mass - Rhea (Saturn)") is False, msg_false
     assert is_word_in_text("word1,word2", "This is a sample text containing none of the words.") is False, msg_false
+
+
+def test_is_regex_match() -> None:
+    msg_true = "Should return True"
+    msg_false = "Should return False"
+
+    # Test basic regex patterns
+    assert is_regex_match(r"word\d+", "This text contains word123") is True, msg_true
+    assert is_regex_match(r"^Hello", "Hello world") is True, msg_true
+    assert is_regex_match(r"world$", "Hello world") is True, msg_true
+
+    # Test case insensitivity
+    assert is_regex_match(r"hello", "This text contains HELLO") is True, msg_true
+
+    # Test comma-separated patterns
+    assert is_regex_match(r"pattern1,pattern2", "This contains pattern2") is True, msg_true
+    assert is_regex_match(r"pattern1, pattern2", "This contains pattern1") is True, msg_true
+
+    # Test regex that shouldn't match
+    assert is_regex_match(r"^start", "This doesn't start with the pattern") is False, msg_false
+    assert is_regex_match(r"end$", "This doesn't end with the pattern") is False, msg_false
+
+    # Test with empty input
+    assert is_regex_match("", "Some text") is False, msg_false
+    assert is_regex_match("pattern", "") is False, msg_false
+
+    # Test with invalid regex (should not raise an exception and return False)
+    assert is_regex_match(r"[incomplete", "Some text") is False, msg_false
+
+    # Test with multiple patterns where one is invalid
+    assert is_regex_match(r"valid, [invalid, \w+", "Contains word") is True, msg_true
+
+    # Test newline-separated patterns
+    newline_patterns = "pattern1\n^start\ncontains\\d+"
+    assert is_regex_match(newline_patterns, "This contains123 text") is True, msg_true
+    assert is_regex_match(newline_patterns, "start of line") is True, msg_true
+    assert is_regex_match(newline_patterns, "pattern1 is here") is True, msg_true
+    assert is_regex_match(newline_patterns, "None of these match") is False, msg_false
+
+    # Test mixed newline and comma patterns (for backward compatibility)
+    mixed_patterns = "pattern1\npattern2,pattern3\npattern4"
+    assert is_regex_match(mixed_patterns, "Contains pattern3") is True, msg_true
+    assert is_regex_match(mixed_patterns, "Contains pattern4") is True, msg_true
+
+    # Test with empty lines and spaces
+    whitespace_patterns = "\\s+\n \n\npattern\n\n"
+    assert is_regex_match(whitespace_patterns, "text with spaces") is True, msg_true
+    assert is_regex_match(whitespace_patterns, "text with pattern") is True, msg_true
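Taken together, these assertions pin down the contract of `is_regex_match`: patterns are separated by newlines and/or commas, matching is case-insensitive substring search, and invalid patterns are skipped rather than raised. A minimal sketch that satisfies that contract (the repository's actual implementation may differ) could look like:

```python
import re


def is_regex_match(patterns: str, text: str) -> bool:
    """Return True if any of the given regex patterns matches the text.

    Patterns may be separated by newlines or commas; matching is
    case-insensitive, and invalid patterns are silently skipped.
    """
    if not patterns or not text:
        return False

    # Split on newlines first, then on commas, dropping blank entries.
    parts: list[str] = []
    for line in patterns.splitlines():
        parts.extend(part.strip() for part in line.split(","))

    for pattern in parts:
        if not pattern:
            continue
        try:
            if re.search(pattern, text, re.IGNORECASE):
                return True
        except re.error:
            continue  # Invalid regex: ignore this pattern, try the rest.
    return False
```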


@@ -4,9 +4,13 @@ import tempfile
 from pathlib import Path
 from typing import TYPE_CHECKING

-from reader import Entry, Feed, Reader, make_reader
+from reader import Entry
+from reader import Feed
+from reader import Reader
+from reader import make_reader

-from discord_rss_bot.filter.whitelist import has_white_tags, should_be_sent
+from discord_rss_bot.filter.whitelist import has_white_tags
+from discord_rss_bot.filter.whitelist import should_be_sent

 if TYPE_CHECKING:
     from collections.abc import Iterable
@@ -33,11 +37,18 @@ def test_has_white_tags() -> None:
     reader.update_feeds()

     # Test feed without any whitelist tags
-    assert has_white_tags(custom_reader=get_reader(), feed=feed) is False, "Feed should not have any whitelist tags"
+    assert has_white_tags(reader=get_reader(), feed=feed) is False, "Feed should not have any whitelist tags"
     check_if_has_tag(reader, feed, "whitelist_title")
     check_if_has_tag(reader, feed, "whitelist_summary")
     check_if_has_tag(reader, feed, "whitelist_content")
+    check_if_has_tag(reader, feed, "whitelist_author")
+
+    # Test regex whitelist tags
+    check_if_has_tag(reader, feed, "regex_whitelist_title")
+    check_if_has_tag(reader, feed, "regex_whitelist_summary")
+    check_if_has_tag(reader, feed, "regex_whitelist_content")
+    check_if_has_tag(reader, feed, "regex_whitelist_author")

     # Clean up
     reader.delete_feed(feed_url)
@@ -45,9 +56,9 @@ def test_has_white_tags() -> None:

 def check_if_has_tag(reader: Reader, feed: Feed, whitelist_name: str) -> None:
     reader.set_tag(feed, whitelist_name, "a")  # pyright: ignore[reportArgumentType]
-    assert has_white_tags(custom_reader=reader, feed=feed) is True, "Feed should have whitelist tags"
+    assert has_white_tags(reader=reader, feed=feed) is True, "Feed should have whitelist tags"
     reader.delete_tag(feed, whitelist_name)
-    assert has_white_tags(custom_reader=reader, feed=feed) is False, "Feed should not have any whitelist tags"
+    assert has_white_tags(reader=reader, feed=feed) is False, "Feed should not have any whitelist tags"


 def test_should_be_sent() -> None:
@@ -109,3 +120,67 @@ def test_should_be_sent() -> None:
     assert should_be_sent(reader, first_entry[0]) is False, "Entry should not be sent"
     reader.delete_tag(feed, "whitelist_author")
     assert should_be_sent(reader, first_entry[0]) is False, "Entry should not be sent"
+
+
+def test_regex_should_be_sent() -> None:
+    """Test the regex filtering functionality for whitelist."""
+    reader: Reader = get_reader()
+
+    # Add feed and update entries
+    reader.add_feed(feed_url)
+    feed: Feed = reader.get_feed(feed_url)
+    reader.update_feeds()
+
+    # Get first entry
+    first_entry: list[Entry] = []
+    entries: Iterable[Entry] = reader.get_entries(feed=feed)
+    assert entries is not None, "Entries should not be None"
+    for entry in entries:
+        first_entry.append(entry)
+        break
+    assert len(first_entry) == 1, "First entry should be added"
+
+    # Test entry without any regex whitelists
+    assert should_be_sent(reader, first_entry[0]) is False, "Entry should not be sent"
+
+    # Test regex whitelist for title
+    reader.set_tag(feed, "regex_whitelist_title", r"fvnnn\w+")  # pyright: ignore[reportArgumentType]
+    assert should_be_sent(reader, first_entry[0]) is True, "Entry should be sent with regex title match"
+    reader.delete_tag(feed, "regex_whitelist_title")
+    assert should_be_sent(reader, first_entry[0]) is False, "Entry should not be sent"
+
+    # Test regex whitelist for summary
+    reader.set_tag(feed, "regex_whitelist_summary", r"ffdnfdn\w+")  # pyright: ignore[reportArgumentType]
+    assert should_be_sent(reader, first_entry[0]) is True, "Entry should be sent with regex summary match"
+    reader.delete_tag(feed, "regex_whitelist_summary")
+    assert should_be_sent(reader, first_entry[0]) is False, "Entry should not be sent"
+
+    # Test regex whitelist for content
+    reader.set_tag(feed, "regex_whitelist_content", r"ffdnfdnfdn\w+")  # pyright: ignore[reportArgumentType]
+    assert should_be_sent(reader, first_entry[0]) is True, "Entry should be sent with regex content match"
+    reader.delete_tag(feed, "regex_whitelist_content")
+    assert should_be_sent(reader, first_entry[0]) is False, "Entry should not be sent"
+
+    # Test regex whitelist for author
+    reader.set_tag(feed, "regex_whitelist_author", r"TheLovinator\d*")  # pyright: ignore[reportArgumentType]
+    assert should_be_sent(reader, first_entry[0]) is True, "Entry should be sent with regex author match"
+    reader.delete_tag(feed, "regex_whitelist_author")
+    assert should_be_sent(reader, first_entry[0]) is False, "Entry should not be sent"
+
+    # Test invalid regex pattern (should not raise an exception)
+    reader.set_tag(feed, "regex_whitelist_title", r"[incomplete")  # pyright: ignore[reportArgumentType]
+    assert should_be_sent(reader, first_entry[0]) is False, "Entry should not be sent with invalid regex"
+    reader.delete_tag(feed, "regex_whitelist_title")
+
+    # Test multiple regex patterns separated by commas
+    reader.set_tag(feed, "regex_whitelist_author", r"pattern1,TheLovinator\d*,pattern3")  # pyright: ignore[reportArgumentType]
+    assert should_be_sent(reader, first_entry[0]) is True, "Entry should be sent with one matching pattern in list"
+    reader.delete_tag(feed, "regex_whitelist_author")
+    assert should_be_sent(reader, first_entry[0]) is False, "Entry should not be sent"
+
+    # Test newline-separated regex patterns
+    newline_patterns = "pattern1\nTheLovinator\\d*\npattern3"
+    reader.set_tag(feed, "regex_whitelist_author", newline_patterns)  # pyright: ignore[reportArgumentType]
+    assert should_be_sent(reader, first_entry[0]) is True, "Entry should be sent with newline-separated patterns"
+    reader.delete_tag(feed, "regex_whitelist_author")
+    assert should_be_sent(reader, first_entry[0]) is False, "Entry should not be sent"