- FeedVault is a free service, and will always be free. However, if you wish to support the project, you can do so by donating to the developer.
-
-
What will the money be used for?
-
- The money will be used to pay for the server costs, domain name, and other expenses related to running the service.
-
-
How much does it cost to run FeedVault?
-
Domain name: 12 € / 13 $ / 10 £ per year
-
How can I donate?
-
- The preferred method of donating is through GitHub Sponsors, as no fees are taken there. However, you can also donate through PayPal.
-
-
Crypto
-
- You can also donate through cryptocurrency. The addresses are listed below. If you wish to donate through a cryptocurrency not listed below, please contact me.
-
- Input the URLs of the feeds you wish to archive below. You can add as many as needed, and access them through the website or API. Alternatively, include links to .opml files, and the feeds within will be archived.
-
-
-
-
You can also upload .opml files containing the feeds you wish to archive:
-
-
-
FAQ
-
- What are web feeds?
-
- Web feeds are a way to distribute content on the web. They allow users to access updates from websites without having to visit them directly. Feeds are typically used for news websites, blogs, and other sites that frequently update content.
-
- You can read more about web feeds on Wikipedia.
-
-
-
-
- What is FeedVault?
-
- FeedVault is a service that archives web feeds. It allows users to access and search for historical content from various websites. The service is designed to preserve the history of the web and provide a reliable source for accessing content that may no longer be available on the original websites.
-
-
-
-
- Why archive feeds?
-
- Web feeds are a valuable source of information, and archiving them ensures that the content is preserved for future reference. By archiving feeds, we can ensure that historical content is available for research, analysis, and other purposes. Additionally, archiving feeds can help prevent the loss of valuable information due to website changes, outages, or other issues.
-
-
-
-
- How does it work?
-
- FeedVault is written in Go and uses the gofeed library to parse feeds. The service periodically checks for new content in the feeds and stores it in a database. Users can access the archived feeds through the website or API.
-
-
-
- How can I access the archived feeds?
-
- You can access the archived feeds through the website or API. The website provides a user interface for searching and browsing the feeds, while the API allows you to access the feeds programmatically. You can also download the feeds in various formats, such as JSON, XML, or RSS.
-
Log Files: These files contain details about your IP address, browser, and operating system.
-
-
This information is collected for debugging purposes and to enhance website performance.
-
Log files are automatically removed after a specific timeframe.
-
They are not linked to any personal information, shared with third parties, or used for marketing purposes.
-
Furthermore, log files are not utilized to track your activity on other websites.
-
-
Cloudflare: We use Cloudflare to secure and optimize our website.
-
-
Cloudflare may collect your IP address, cookies, and other data.
-
For more information, please review Cloudflare's privacy policy.
-
-
-
-
-
User Rights
-
- You have the right to access, correct, or delete your information. Any privacy-related inquiries can be directed to us using the contact information provided at the end of this document.
-
-
-
-
Changes to the Privacy Policy
-
- This privacy policy may be revised. You can review the revision history of this document in our GitHub repository.
-
- Users are prohibited from uploading content that is illegal under Swedish law. Any such content found on our platform will be removed, and repeat offenders will be banned from using the platform.
-
-
- You can report any content that you believe violates our content policy by sending an email to hello@feedvault.se. Please include the URL of the content in question and a brief description of why you believe it violates our content policy.
-
-
Copyright Policy
-
- We will remove URLs that are used to share copyrightable information without the necessary permissions or licenses.
-
-
Web Scraping
-
- Web scraping is permitted on our platform. We currently do not impose a rate limit on requests.
-
-
API Usage
-
- Our API is free to use. We do not impose any rate limits on requests.
-
+ Input the URLs of the feeds you wish to archive below. You can add as many as needed, and access them through the website or API. Alternatively, include links to .opml files, and the feeds within will be archived.
+
+
+
+
You can also upload .opml files containing the feeds you wish to archive:
+
+ `
+
+ FAQ := `
+
+
FAQ
+
+ What are web feeds?
+
+ Web feeds are a way to distribute content on the web. They allow users to access updates from websites without having to visit them directly. Feeds are typically used for news websites, blogs, and other sites that frequently update content.
+
+ You can read more about web feeds on Wikipedia.
+
+
+
+
+ What is FeedVault?
+
+ FeedVault is a service that archives web feeds. It allows users to access and search for historical content from various websites. The service is designed to preserve the history of the web and provide a reliable source for accessing content that may no longer be available on the original websites.
+
+
+
+
+ Why archive feeds?
+
+ Web feeds are a valuable source of information, and archiving them preserves their content for future reference. Archived feeds keep historical content available for research, analysis, and other purposes, and help prevent the loss of valuable information to website changes, outages, or other issues.
+
+
+
+
+ How does it work?
+
+ FeedVault is written in Go and uses the gofeed library to parse feeds. The service periodically checks for new content in the feeds and stores it in a database. Users can access the archived feeds through the website or API.
+
+
+
+ How can I access the archived feeds?
+
+ You can access the archived feeds through the website or API. The website provides a user interface for searching and browsing the feeds, while the API allows you to access the feeds programmatically. You can also download the feeds in various formats, such as JSON, XML, or RSS.
+
"
- body := rr.Body.String()
- if !assert.Contains(t, body, shouldContain) {
- t.Errorf("handler returned unexpected body: got %v want %v",
- body, shouldContain)
- }
-}
-
-func TestTermsHandler(t *testing.T) {
- // Create a request to pass to our handler.
- req, err := http.NewRequest("GET", "/terms", nil)
- if err != nil {
- t.Fatal(err)
- }
-
- // We create a ResponseRecorder (which satisfies http.ResponseWriter) to record the response.
- rr := httptest.NewRecorder()
- handler := http.HandlerFunc(TermsHandler)
-
- // Our handlers satisfy http.Handler, so we can call their ServeHTTP method
- // directly and pass in our Request and ResponseRecorder.
- handler.ServeHTTP(rr, req)
-
- // Check the status code is what we expect.
- if status := rr.Code; status != http.StatusOK {
- t.Errorf("handler returned wrong status code: got %v want %v",
- status, http.StatusOK)
- }
-
- // Check the response contains the expected string.
- shouldContain := "Terms of Service"
- body := rr.Body.String()
- if !assert.Contains(t, body, shouldContain) {
- t.Errorf("handler returned unexpected body: got %v want %v",
- body, shouldContain)
- }
-}
-
-func TestPrivacyHandler(t *testing.T) {
- // Create a request to pass to our handler.
- req, err := http.NewRequest("GET", "/privacy", nil)
- if err != nil {
- t.Fatal(err)
- }
-
- // We create a ResponseRecorder (which satisfies http.ResponseWriter) to record the response.
- rr := httptest.NewRecorder()
- handler := http.HandlerFunc(PrivacyHandler)
-
- // Our handlers satisfy http.Handler, so we can call their ServeHTTP method
- // directly and pass in our Request and ResponseRecorder.
- handler.ServeHTTP(rr, req)
-
- // Check the status code is what we expect.
- if status := rr.Code; status != http.StatusOK {
- t.Errorf("handler returned wrong status code: got %v want %v",
- status, http.StatusOK)
- }
-
- // Check the response contains the expected string.
- shouldContain := "Privacy Policy"
- body := rr.Body.String()
- if !assert.Contains(t, body, shouldContain) {
- t.Errorf("handler returned unexpected body: got %v want %v",
- body, shouldContain)
- }
-}
-
-func TestNotFoundHandler(t *testing.T) {
- // Create a request to pass to our handler.
- req, err := http.NewRequest("GET", "/notfound", nil)
- if err != nil {
- t.Fatal(err)
- }
-
- // We create a ResponseRecorder (which satisfies http.ResponseWriter) to record the response.
- rr := httptest.NewRecorder()
- handler := http.HandlerFunc(NotFoundHandler)
-
- // Our handlers satisfy http.Handler, so we can call their ServeHTTP method
- // directly and pass in our Request and ResponseRecorder.
- handler.ServeHTTP(rr, req)
-
- // Check the status code is what we expect.
- if status := rr.Code; status != http.StatusNotFound {
- t.Errorf("handler returned wrong status code: got %v want %v",
- status, http.StatusNotFound)
- }
-
- // Check the response contains the expected string.
- shouldContain := "404 - Page not found"
- body := rr.Body.String()
- if !assert.Contains(t, body, shouldContain) {
- t.Errorf("handler returned unexpected body: got %v want %v",
- body, shouldContain)
- }
-}
-
-func TestMethodNotAllowedHandler(t *testing.T) {
- // Create a request to pass to our handler.
- req, err := http.NewRequest("GET", "/api", nil)
- if err != nil {
- t.Fatal(err)
- }
-
- // We create a ResponseRecorder (which satisfies http.ResponseWriter) to record the response.
- rr := httptest.NewRecorder()
- handler := http.HandlerFunc(MethodNotAllowedHandler)
-
- // Our handlers satisfy http.Handler, so we can call their ServeHTTP method
- // directly and pass in our Request and ResponseRecorder.
- handler.ServeHTTP(rr, req)
-
- // Check the status code is what we expect.
- if status := rr.Code; status != http.StatusMethodNotAllowed {
- t.Errorf("handler returned wrong status code: got %v want %v",
- status, http.StatusMethodNotAllowed)
- }
-
- // Check the response contains the expected string.
- shouldContain := "405 - Method Not Allowed"
- body := rr.Body.String()
- if !assert.Contains(t, body, shouldContain) {
- t.Errorf("handler returned unexpected body: got %v want %v",
- body, shouldContain)
- }
-}
-
-func TestDonateHandler(t *testing.T) {
- // Create a request to pass to our handler.
- req, err := http.NewRequest("GET", "/donate", nil)
- if err != nil {
- t.Fatal(err)
- }
-
- // We create a ResponseRecorder (which satisfies http.ResponseWriter) to record the response.
- rr := httptest.NewRecorder()
- handler := http.HandlerFunc(DonateHandler)
-
- // Our handlers satisfy http.Handler, so we can call their ServeHTTP method
- // directly and pass in our Request and ResponseRecorder.
- handler.ServeHTTP(rr, req)
-
- // Check the status code is what we expect.
- if status := rr.Code; status != http.StatusOK {
- t.Errorf("handler returned wrong status code: got %v want %v",
- status, http.StatusOK)
- }
-
- // Check the response contains the expected string.
- shouldContain := "tl;dr: GitHub Sponsors"
+ shouldContain := "Here be dragons."
body := rr.Body.String()
if !assert.Contains(t, body, shouldContain) {
t.Errorf("handler returned unexpected body: got %v want %v",