.oO SearXNG Developer Documentation Oo.
searx.engines.annas_archive Namespace Reference

Functions

bool setup (dict[str, t.Any] engine_settings)
None request (str query, "OnlineParams" params)
EngineResults response ("SXNG_Response" resp)
dict[str, t.Any] _get_result (ElementBase item)
 fetch_traits (EngineTraits engine_traits)

Variables

dict about
list categories = ["files", "books"]
bool paging = True
str base_url = "https://annas-archive.org"
str aa_content = ""
str aa_sort = ""
str aa_ext = ""

Detailed Description

`Anna's Archive`_ is a free non-profit online shadow library metasearch
engine providing access to a variety of book resources (also via IPFS), created
by a team of anonymous archivists (AnnaArchivist_).

.. _Anna's Archive: https://annas-archive.org/
.. _AnnaArchivist: https://annas-software.org/AnnaArchivist/annas-archive

Configuration
=============

The engine has the following additional settings:

- :py:obj:`aa_content`
- :py:obj:`aa_ext`
- :py:obj:`aa_sort`

With these options a SearXNG maintainer is able to configure **additional**
engines for specific searches in Anna's Archive.  For example, an engine to
search for the *newest* articles and journals (PDF), reachable via the shortcut
``!aaa <search-term>``.

.. code:: yaml

  - name: annas articles
    engine: annas_archive
    categories: ["general", "articles"]
    shortcut: aaa
    aa_content: "magazine"
    aa_ext: "pdf"
    aa_sort: "newest"


Implementations
===============

Function Documentation

◆ _get_result()

dict[str, t.Any] searx.engines.annas_archive._get_result ( ElementBase item)
protected

Definition at line 146 of file annas_archive.py.

146 def _get_result(item: ElementBase) -> dict[str, t.Any]:
147     return {
148         "url": base_url + eval_xpath_getindex(item, "./a/@href", 0),
149         "title": extract_text(eval_xpath(item, "./div//a[starts-with(@href, '/md5')]")),
150         "authors": [extract_text(eval_xpath_getindex(item, ".//a[starts-with(@href, '/search')]", 0))],
151         "publisher": extract_text(
152             eval_xpath_getindex(item, ".//a[starts-with(@href, '/search')]", 1, default=None), allow_none=True
153         ),
154         "content": extract_text(eval_xpath(item, ".//div[contains(@class, 'relative')]")),
155         "thumbnail": extract_text(eval_xpath_getindex(item, ".//img/@src", 0, default=None), allow_none=True),
156     }
157
158

Referenced by response().
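The extraction above can be illustrated standalone. The sketch below runs the same XPath ideas against an invented stand-in for one item of Anna's result list (the markup is made up), using plain `lxml` XPath in place of the `searx.utils` helpers `eval_xpath`, `eval_xpath_getindex`, and `extract_text`:

```python
from lxml import html

base_url = "https://annas-archive.org"

# Invented stand-in for a single result item from Anna's result list
item = html.fromstring("""
<div>
  <a href="/md5/abc123"></a>
  <div><a href="/md5/abc123">Example Title</a></div>
  <a href="/search?q=Jane+Doe">Jane Doe</a>
  <a href="/search?q=Example+Press">Example Press</a>
  <div class="relative">2020, English, pdf</div>
  <img src="https://example.org/cover.jpg"/>
</div>
""")

result = {
    # the first direct <a> child carries the md5 detail link
    "url": base_url + item.xpath("./a/@href")[0],
    "title": item.xpath("string(./div//a[starts-with(@href, '/md5')])"),
    # the first and second '/search' links are author and publisher
    "authors": [item.xpath("string((.//a[starts-with(@href, '/search')])[1])")],
    "publisher": item.xpath("string((.//a[starts-with(@href, '/search')])[2])"),
    "thumbnail": (item.xpath(".//img/@src") or [None])[0],
}
print(result["url"])    # https://annas-archive.org/md5/abc123
print(result["title"])  # Example Title
```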


◆ fetch_traits()

searx.engines.annas_archive.fetch_traits ( EngineTraits engine_traits)
Fetch languages and other search arguments from Anna's search form.

Definition at line 159 of file annas_archive.py.

159 def fetch_traits(engine_traits: EngineTraits):
160     """Fetch languages and other search arguments from Anna's search form."""
161     # pylint: disable=import-outside-toplevel
162
163     import babel
164     from searx.network import get  # see https://github.com/searxng/searxng/issues/762
165     from searx.locales import language_tag
166
167     engine_traits.all_locale = ""
168     engine_traits.custom["content"] = []
169     engine_traits.custom["ext"] = []
170     engine_traits.custom["sort"] = []
171
172     resp = get(base_url + "/search")
173     if not resp.ok:
174         raise RuntimeError("Response from Anna's search page is not OK.")
175     dom = html.fromstring(resp.text)
176
177     # supported language codes
178
179     lang_map: dict[str, str] = {}
180     for x in eval_xpath_list(dom, "//form//input[@name='lang']"):
181         eng_lang = x.get("value")
182         if eng_lang in ("", "_empty", "nl-BE", "und") or eng_lang.startswith("anti__"):
183             continue
184         try:
185             locale = babel.Locale.parse(lang_map.get(eng_lang, eng_lang), sep="-")
186         except babel.UnknownLocaleError:
187             # silently ignore unknown languages
188             # print("ERROR: %s -> %s is unknown by babel" % (x.get("data-name"), eng_lang))
189             continue
190         sxng_lang = language_tag(locale)
191         conflict = engine_traits.languages.get(sxng_lang)
192         if conflict:
193             if conflict != eng_lang:
194                 print("CONFLICT: babel %s --> %s, %s" % (sxng_lang, conflict, eng_lang))
195             continue
196         engine_traits.languages[sxng_lang] = eng_lang
197
198     for x in eval_xpath_list(dom, "//form//input[@name='content']"):
199         if not x.get("value").startswith("anti__"):
200             engine_traits.custom["content"].append(x.get("value"))
201
202     for x in eval_xpath_list(dom, "//form//input[@name='ext']"):
203         if not x.get("value").startswith("anti__"):
204             engine_traits.custom["ext"].append(x.get("value"))
205
206     for x in eval_xpath_list(dom, "//form//select[@name='sort']//option"):
207         engine_traits.custom["sort"].append(x.get("value"))
208
209     # for better diff; sort the persistence of these traits
210     engine_traits.custom["content"].sort()
211     engine_traits.custom["ext"].sort()
212     engine_traits.custom["sort"].sort()
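The form-scraping step can be sketched without the searx helpers. This stand-alone illustration uses the stdlib `html.parser` instead of lxml, and the form fragment it parses is invented (the option values are made up, not Anna's real ones):

```python
from html.parser import HTMLParser

class FormTraitCollector(HTMLParser):
    """Collect <input name="content|ext" value="..."> values from a search
    form, skipping the 'anti__' negation variants, as fetch_traits() does."""

    def __init__(self) -> None:
        super().__init__()
        self.custom: dict[str, list[str]] = {"content": [], "ext": []}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        value = a.get("value") or ""
        if tag == "input" and a.get("name") in self.custom:
            if value and not value.startswith("anti__"):
                self.custom[a["name"]].append(value)

parser = FormTraitCollector()
parser.feed("""
<form>
  <input name="content" value="book_nonfiction">
  <input name="content" value="anti__book_comic">
  <input name="ext" value="pdf">
  <input name="ext" value="epub">
</form>
""")

# for a better diff, fetch_traits() sorts the collected traits
for values in parser.custom.values():
    values.sort()
print(parser.custom)
# {'content': ['book_nonfiction'], 'ext': ['epub', 'pdf']}
```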

◆ request()

None searx.engines.annas_archive.request ( str query,
"OnlineParams" params )

Definition at line 113 of file annas_archive.py.

113 def request(query: str, params: "OnlineParams") -> None:
114     lang = traits.get_language(params["searxng_locale"], traits.all_locale)
115     args = {
116         "lang": lang,
117         "content": aa_content,
118         "ext": aa_ext,
119         "sort": aa_sort,
120         "q": query,
121         "page": params["pageno"],
122     }
123     # filter out None and empty values
124     filtered_args = dict((k, v) for k, v in args.items() if v)
125     params["url"] = f"{base_url}/search?{urlencode(filtered_args)}"
126
127
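The URL construction in request() needs only the stdlib. The helper below, `build_search_url`, is a hypothetical name introduced for illustration; it mirrors the args dict and the empty-value filter shown above:

```python
from urllib.parse import urlencode

base_url = "https://annas-archive.org"

def build_search_url(query: str, pageno: int, lang: str = "",
                     content: str = "", ext: str = "", sort: str = "") -> str:
    """Hypothetical helper mirroring request(): assemble the args dict,
    drop empty values, then urlencode the rest into the search URL."""
    args = {
        "lang": lang,
        "content": content,
        "ext": ext,
        "sort": sort,
        "q": query,
        "page": pageno,
    }
    # filter out None and empty values, as request() does
    filtered_args = {k: v for k, v in args.items() if v}
    return f"{base_url}/search?{urlencode(filtered_args)}"

print(build_search_url("linear algebra", 2, ext="pdf", sort="newest"))
# https://annas-archive.org/search?ext=pdf&sort=newest&q=linear+algebra&page=2
```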

◆ response()

EngineResults searx.engines.annas_archive.response ( "SXNG_Response" resp)

Definition at line 128 of file annas_archive.py.

128 def response(resp: "SXNG_Response") -> EngineResults:
129     res = EngineResults()
130     dom = html.fromstring(resp.text)
131
132     # The rendering of the WEB page is strange; positions of Anna's result page
133     # are enclosed in SGML comments. These comments are *uncommented* by some
134     # JS code, see query of class ".js-scroll-hidden" in Anna's HTML template:
135     # https://annas-software.org/AnnaArchivist/annas-archive/-/blob/main/allthethings/templates/macros/md5_list.html
136
137     for item in eval_xpath_list(dom, "//main//div[contains(@class, 'js-aarecord-list-outer')]/div"):
138         try:
139             kwargs: dict[str, t.Any] = _get_result(item)
140         except SearxEngineXPathException:
141             continue
142         res.add(res.types.Paper(**kwargs))
143     return res
144
145

References _get_result().


◆ setup()

bool searx.engines.annas_archive.setup ( dict[str, t.Any] engine_settings)
Check the engine's settings.

Definition at line 97 of file annas_archive.py.

97 def setup(engine_settings: dict[str, t.Any]) -> bool:  # pylint: disable=unused-argument
98     """Check the engine's settings."""
99     traits = EngineTraits(**ENGINE_TRAITS["annas archive"])
100
101     if aa_content and aa_content not in traits.custom["content"]:
102         raise ValueError(f"invalid setting content: {aa_content}")
103
104     if aa_sort and aa_sort not in traits.custom["sort"]:
105         raise ValueError(f"invalid setting sort: {aa_sort}")
106
107     if aa_ext and aa_ext not in traits.custom["ext"]:
108         raise ValueError(f"invalid setting ext: {aa_ext}")
109
110     return True
111
112
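The validation pattern used by setup() can be sketched without the searx imports. `validate_choice` and the trait lists below are invented for illustration:

```python
def validate_choice(name: str, value: str, allowed: list[str]) -> None:
    """Raise, as setup() does, when a configured value is not a known trait.
    An empty value means 'not configured' and is always accepted."""
    if value and value not in allowed:
        raise ValueError(f"invalid setting {name}: {value}")

# invented trait lists, standing in for traits.custom
custom = {"content": ["book_fiction", "magazine"], "ext": ["pdf", "epub"]}

validate_choice("content", "magazine", custom["content"])  # accepted
validate_choice("ext", "", custom["ext"])                  # unset: accepted
try:
    validate_choice("ext", "docx", custom["ext"])
except ValueError as err:
    print(err)  # invalid setting ext: docx
```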

Variable Documentation

◆ aa_content

str searx.engines.annas_archive.aa_content = ""

Definition at line 70 of file annas_archive.py.

◆ aa_ext

str searx.engines.annas_archive.aa_ext = ""

Definition at line 85 of file annas_archive.py.

◆ aa_sort

str searx.engines.annas_archive.aa_sort = ""

Definition at line 78 of file annas_archive.py.

◆ about

dict searx.engines.annas_archive.about
Initial value:
1 = {
2     "website": "https://annas-archive.org/",
3     "wikidata_id": "Q115288326",
4     "official_api_documentation": None,
5     "use_official_api": False,
6     "require_api_key": False,
7     "results": "HTML",
8 }

Definition at line 55 of file annas_archive.py.

◆ base_url

str searx.engines.annas_archive.base_url = "https://annas-archive.org"

Definition at line 69 of file annas_archive.py.

◆ categories

list searx.engines.annas_archive.categories = ["files", "books"]

Definition at line 65 of file annas_archive.py.

◆ paging

bool searx.engines.annas_archive.paging = True

Definition at line 66 of file annas_archive.py.