.oO SearXNG Developer Documentation Oo.
searx.engines.hackernews Namespace Reference

Functions

 request (query, params)
 
 response (resp)
 

Variables

dict about
 
bool paging = True
 
bool time_range_support = True
 
list categories = ["it"]
 
int results_per_page = 30
 
str base_url = "https://hn.algolia.com/api/v1"
 

Detailed Description

Hackernews (news.ycombinator.com), queried through the Algolia HN Search API.

Function Documentation

◆ request()

searx.engines.hackernews.request(query, params)

Definition at line 31 of file hackernews.py.

31 def request(query, params):
32     search_type = 'search'
33     if not query:
34         # if search query is empty show results from HN's front page
35         search_type = 'search_by_date'
36         query_params = {
37             "tags": "front_page",
38             "page": (params["pageno"] - 1),
39         }
40     else:
41         query_params = {
42             "query": query,
43             "page": (params["pageno"] - 1),
44             "hitsPerPage": results_per_page,
45             "minWordSizefor1Typo": 4,
46             "minWordSizefor2Typos": 8,
47             "advancedSyntax": "true",
48             "ignorePlurals": "false",
49             "minProximity": 7,
50             "numericFilters": '[]',
51             "tagFilters": '["story",[]]',
52             "typoTolerance": "true",
53             "queryType": "prefixLast",
54             "restrictSearchableAttributes": '["title","comment_text","url","story_text","author"]',
55             "getRankingInfo": "true",
56         }
57
58     if params['time_range']:
59         search_type = 'search_by_date'
60         timestamp = (datetime.now() - relativedelta(**{f"{params['time_range']}s": 1})).timestamp()
61         query_params["numericFilters"] = f"created_at_i>{timestamp}"
62
63     params["url"] = f"{base_url}/{search_type}?{urlencode(query_params)}"
64     return params
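For illustration, the branching above reduces to a minimal sketch of the URL construction, assuming only the two Algolia endpoints shown in the listing. `build_url` is a hypothetical helper, not part of the engine, and the extra ranking parameters are omitted for brevity:

```python
# Hypothetical helper mirroring request()'s URL construction; the extra
# Algolia ranking parameters from the listing are omitted for brevity.
from urllib.parse import urlencode

base_url = "https://hn.algolia.com/api/v1"

def build_url(query: str, pageno: int = 1) -> str:
    if not query:
        # an empty query falls back to HN's front page, newest first
        params = {"tags": "front_page", "page": pageno - 1}
        return f"{base_url}/search_by_date?{urlencode(params)}"
    params = {"query": query, "page": pageno - 1}
    return f"{base_url}/search?{urlencode(params)}"

print(build_url("python", pageno=2))
# https://hn.algolia.com/api/v1/search?query=python&page=1
```

Note that `page` is zero-based on the Algolia side, which is why `request()` sends `params["pageno"] - 1`.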

◆ response()

searx.engines.hackernews.response(resp)

Definition at line 67 of file hackernews.py.

67 def response(resp):
68     results = []
69     data = resp.json()
70
71     for hit in data["hits"]:
72         object_id = hit["objectID"]
73         points = hit.get("points") or 0
74         num_comments = hit.get("num_comments") or 0
75
76         metadata = ""
77         if points != 0 or num_comments != 0:
78             metadata = f"{gettext('points')}: {points}" f" | {gettext('comments')}: {num_comments}"
79         results.append(
80             {
81                 "title": hit.get("title") or f"{gettext('author')}: {hit['author']}",
82                 "url": f"https://news.ycombinator.com/item?id={object_id}",
83                 "content": hit.get("url") or hit.get("comment_text") or hit.get("story_text") or "",
84                 "metadata": metadata,
85                 "author": hit["author"],
86                 "publishedDate": datetime.utcfromtimestamp(hit["created_at_i"]),
87             }
88         )
89
90     return results
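As a worked example of the mapping above, a single Algolia hit translates to a result entry as sketched below. The hit values are invented, and plain strings stand in for the `gettext()` translations; real hits come from `resp.json()["hits"]`:

```python
from datetime import datetime, timezone

# An invented Algolia hit; field names follow the API shown in the listing.
hit = {
    "objectID": "1",
    "title": "Example story",
    "url": "https://example.org/",
    "points": 42,
    "num_comments": 7,
    "author": "pg",
    "created_at_i": 1160418111,
}

result = {
    "title": hit.get("title") or f"author: {hit['author']}",
    "url": f"https://news.ycombinator.com/item?id={hit['objectID']}",
    # for stories, "content" carries the external link; for comments,
    # the comment text is used instead
    "content": hit.get("url") or hit.get("comment_text") or hit.get("story_text") or "",
    "metadata": f"points: {hit['points']} | comments: {hit['num_comments']}",
    "author": hit["author"],
    "publishedDate": datetime.fromtimestamp(hit["created_at_i"], tz=timezone.utc),
}
print(result["url"])  # https://news.ycombinator.com/item?id=1
```

The result URL always points at the HN discussion page, not the linked article; the article link, when present, ends up in `content`.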

Variable Documentation

◆ about

dict searx.engines.hackernews.about
Initial value:
= {
    "website": "https://news.ycombinator.com/",
    "wikidata_id": "Q686797",
    "official_api_documentation": "https://hn.algolia.com/api",
    "use_official_api": True,
    "require_api_key": False,
    "results": "JSON",
}

Definition at line 12 of file hackernews.py.

◆ base_url

str searx.engines.hackernews.base_url = "https://hn.algolia.com/api/v1"

Definition at line 28 of file hackernews.py.

◆ categories

list searx.engines.hackernews.categories = ["it"]

Definition at line 24 of file hackernews.py.

◆ paging

bool searx.engines.hackernews.paging = True

Definition at line 22 of file hackernews.py.

◆ results_per_page

int searx.engines.hackernews.results_per_page = 30

Definition at line 25 of file hackernews.py.

◆ time_range_support

bool searx.engines.hackernews.time_range_support = True

Definition at line 23 of file hackernews.py.