SearXNG Developer Documentation
searx.engines.wikidata Namespace Reference

Classes

class  WDAmountAttribute
class  WDArticle
class  WDAttribute
class  WDDateAttribute
class  WDGeoAttribute
class  WDImageAttribute
class  WDLabelAttribute
class  WDURLAttribute

Functions

 get_headers ()
 get_label_for_entity (entity_id, language)
 send_wikidata_query (query, method='GET', **kwargs)
 request (query, params)
 response (resp)
 get_thumbnail (img_src)
 get_results (attribute_result, attributes, language)
 get_query (query, language)
 get_attributes (language)
 debug_explain_wikidata_query (query, method='GET')
 init (engine_settings=None)
 fetch_traits (EngineTraits engine_traits)

Variables

dict about
list display_type = ["infobox"]
str SPARQL_ENDPOINT_URL = 'https://query.wikidata.org/sparql'
str SPARQL_EXPLAIN_URL = 'https://query.wikidata.org/bigdata/namespace/wdq/sparql?explain'
dict WIKIDATA_PROPERTIES
str QUERY_TEMPLATE
str QUERY_PROPERTY_NAMES
 DUMMY_ENTITY_URLS
 sparql_string_escape
 replace_http_by_https = get_string_replaces_function({'http:': 'https:'})
str _IMG_SRC_DEFAULT_URL_PREFIX = "https://commons.wikimedia.org/wiki/Special:FilePath/"
str _IMG_SRC_NEW_URL_PREFIX = "https://upload.wikimedia.org/wikipedia/commons/thumb/"

Detailed Description

This module implements the Wikidata engine.  Parts of the implementation are
shared with the :ref:`wikipedia engine`.

Function Documentation

◆ debug_explain_wikidata_query()

searx.engines.wikidata.debug_explain_wikidata_query(query, method='GET')

Definition at line 782 of file wikidata.py.

def debug_explain_wikidata_query(query, method='GET'):
    if method == 'GET':
        http_response = get(SPARQL_EXPLAIN_URL + '&' + urlencode({'query': query}), headers=get_headers())
    else:
        http_response = post(SPARQL_EXPLAIN_URL, data={'query': query}, headers=get_headers())
    http_response.raise_for_status()
    return http_response.content
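A minimal usage sketch (assuming the module's functions are importable and the endpoint is reachable); the query text is illustrative:

# Fetch the Blazegraph "explain" report for a generated SPARQL query.
sparql, _attributes = get_query('Tokyo', 'en')
plan = debug_explain_wikidata_query(sparql)  # raw bytes of the explain page
print(plan[:200])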

References get_headers().


◆ fetch_traits()

searx.engines.wikidata.fetch_traits ( EngineTraits engine_traits)
Uses the languages evaluated by :py:obj:`wikipedia.fetch_wikimedia_traits
<searx.engines.wikipedia.fetch_wikimedia_traits>` and removes:

- ``traits.custom['wiki_netloc']``: Wikidata does not have per-language
  net-locations.

- ``traits.custom['WIKIPEDIA_LANGUAGES']``: not used by the wikidata engine.

Definition at line 811 of file wikidata.py.

def fetch_traits(engine_traits: EngineTraits):
    """Uses languages evaluated from :py:obj:`wikipedia.fetch_wikimedia_traits
    <searx.engines.wikipedia.fetch_wikimedia_traits>` and removes

    - ``traits.custom['wiki_netloc']``: wikidata does not have net-locations for
      the languages and the list of all

    - ``traits.custom['WIKIPEDIA_LANGUAGES']``: not used in the wikipedia engine

    """

    fetch_wikimedia_traits(engine_traits)
    engine_traits.custom['wiki_netloc'] = {}
    engine_traits.custom['WIKIPEDIA_LANGUAGES'] = []
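A short sketch of the observable effect; the import path for EngineTraits is an assumption here, and the call needs network access since it delegates to fetch_wikimedia_traits():

from searx.enginelib.traits import EngineTraits  # assumed import path

engine_traits = EngineTraits()
fetch_traits(engine_traits)
assert engine_traits.custom['wiki_netloc'] == {}
assert engine_traits.custom['WIKIPEDIA_LANGUAGES'] == []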

◆ get_attributes()

searx.engines.wikidata.get_attributes ( language)

Definition at line 348 of file wikidata.py.

def get_attributes(language):
    # pylint: disable=too-many-statements
    attributes = []

    def add_value(name):
        attributes.append(WDAttribute(name))

    def add_amount(name):
        attributes.append(WDAmountAttribute(name))

    def add_label(name):
        attributes.append(WDLabelAttribute(name))

    def add_url(name, url_id=None, url_path_prefix=None, **kwargs):
        attributes.append(WDURLAttribute(name, url_id, url_path_prefix, kwargs))

    def add_image(name, url_id=None, priority=1):
        attributes.append(WDImageAttribute(name, url_id, priority))

    def add_date(name):
        attributes.append(WDDateAttribute(name))

    # Dates
    for p in [
        'P571',  # inception date
        'P576',  # dissolution date
        'P580',  # start date
        'P582',  # end date
        'P569',  # date of birth
        'P570',  # date of death
        'P619',  # date of spacecraft launch
        'P620',  # date of spacecraft landing
    ]:
        add_date(p)

    for p in [
        'P27',  # country of citizenship
        'P495',  # country of origin
        'P17',  # country
        'P159',  # headquarters location
    ]:
        add_label(p)

    # Places
    for p in [
        'P36',  # capital
        'P35',  # head of state
        'P6',  # head of government
        'P122',  # basic form of government
        'P37',  # official language
    ]:
        add_label(p)

    add_value('P1082')  # population
    add_amount('P2046')  # area
    add_amount('P281')  # postal code
    add_label('P38')  # currency
    add_amount('P2048')  # height (building)

    # Media
    for p in [
        'P400',  # platform (videogames, computing)
        'P50',  # author
        'P170',  # creator
        'P57',  # director
        'P175',  # performer
        'P178',  # developer
        'P162',  # producer
        'P176',  # manufacturer
        'P58',  # screenwriter
        'P272',  # production company
        'P264',  # record label
        'P123',  # publisher
        'P449',  # original network
        'P750',  # distributed by
        'P86',  # composer
    ]:
        add_label(p)

    add_date('P577')  # publication date
    add_label('P136')  # genre (music, film, artistic...)
    add_label('P364')  # original language
    add_value('P212')  # ISBN-13
    add_value('P957')  # ISBN-10
    add_label('P275')  # copyright license
    add_label('P277')  # programming language
    add_value('P348')  # version
    add_label('P840')  # narrative location

    # Languages
    add_value('P1098')  # number of speakers
    add_label('P282')  # writing system
    add_label('P1018')  # language regulatory body
    add_value('P218')  # language code (ISO 639-1)

    # Other
    add_label('P169')  # ceo
    add_label('P112')  # founded by
    add_label('P1454')  # legal form (company, organization)
    add_label('P137')  # operator (service, facility, ...)
    add_label('P1029')  # crew members (tripulation)
    add_label('P225')  # taxon name
    add_value('P274')  # chemical formula
    add_label('P1346')  # winner (sports, contests, ...)
    add_value('P1120')  # number of deaths
    add_value('P498')  # currency code (ISO 4217)

    # URL
    add_url('P856', official=True)  # official website
    attributes.append(WDArticle(language))  # wikipedia (user language)
    if not language.startswith('en'):
        attributes.append(WDArticle('en'))  # wikipedia (english)

    add_url('P1324')  # source code repository
    add_url('P1581')  # blog
    add_url('P434', url_id='musicbrainz_artist')
    add_url('P435', url_id='musicbrainz_work')
    add_url('P436', url_id='musicbrainz_release_group')
    add_url('P966', url_id='musicbrainz_label')
    add_url('P345', url_id='imdb_id')
    add_url('P2397', url_id='youtube_channel')
    add_url('P1651', url_id='youtube_video')
    add_url('P2002', url_id='twitter_profile')
    add_url('P2013', url_id='facebook_profile')
    add_url('P2003', url_id='instagram_profile')

    # Fediverse
    add_url('P4033', url_path_prefix='/@')  # Mastodon user
    add_url('P11947', url_path_prefix='/c/')  # Lemmy community
    add_url('P12622', url_path_prefix='/c/')  # PeerTube channel

    # Map
    attributes.append(WDGeoAttribute('P625'))

    # Image
    add_image('P15', priority=1, url_id='wikimedia_image')  # route map
    add_image('P242', priority=2, url_id='wikimedia_image')  # locator map
    add_image('P154', priority=3, url_id='wikimedia_image')  # logo
    add_image('P18', priority=4, url_id='wikimedia_image')  # image
    add_image('P41', priority=5, url_id='wikimedia_image')  # flag
    add_image('P2716', priority=6, url_id='wikimedia_image')  # collage
    add_image('P2910', priority=7, url_id='wikimedia_image')  # icon

    return attributes
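A quick sketch for inspecting what this function builds; all attribute classes named here appear in the listing above:

attrs = get_attributes('en')
print(len(attrs))  # several dozen attribute objects
print(sorted({type(a).__name__ for a in attrs}))
# ['WDAmountAttribute', 'WDArticle', 'WDAttribute', 'WDDateAttribute',
#  'WDGeoAttribute', 'WDImageAttribute', 'WDLabelAttribute', 'WDURLAttribute']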

Referenced by get_query(), and init().


◆ get_headers()

searx.engines.wikidata.get_headers ( )

Definition at line 135 of file wikidata.py.

def get_headers():
    # user agent: https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#Query_limits
    return {'Accept': 'application/sparql-results+json', 'User-Agent': searxng_useragent()}

Referenced by debug_explain_wikidata_query(), request(), and send_wikidata_query().


◆ get_label_for_entity()

searx.engines.wikidata.get_label_for_entity(entity_id, language)

Definition at line 140 of file wikidata.py.

def get_label_for_entity(entity_id, language):
    name = WIKIDATA_PROPERTIES.get(entity_id)
    if name is None:
        name = WIKIDATA_PROPERTIES.get((entity_id, language))
    if name is None:
        name = WIKIDATA_PROPERTIES.get((entity_id, language.split('-')[0]))
    if name is None:
        name = WIKIDATA_PROPERTIES.get((entity_id, 'en'))
    if name is None:
        name = entity_id
    return name
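The lookup falls back through progressively less specific keys. A sketch of the order, using the static WIKIDATA_PROPERTIES entries listed under Variable Documentation below:

# lookup order for get_label_for_entity('P345', 'de-AT'):
#   1. WIKIDATA_PROPERTIES['P345']             -> 'IMDb' (hit: language-independent entry)
#   2. WIKIDATA_PROPERTIES[('P345', 'de-AT')]  (full locale)
#   3. WIKIDATA_PROPERTIES[('P345', 'de')]     (base language)
#   4. WIKIDATA_PROPERTIES[('P345', 'en')]     (English fallback)
#   5. 'P345' itself                           (raw property id)
print(get_label_for_entity('P345', 'de-AT'))  # -> IMDb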

Referenced by searx.engines.wikidata.WDAttribute.get_label(), and searx.engines.wikidata.WDAmountAttribute.get_str().


◆ get_query()

searx.engines.wikidata.get_query(query, language)

Definition at line 331 of file wikidata.py.

def get_query(query, language):
    attributes = get_attributes(language)
    select = [a.get_select() for a in attributes]
    where = list(filter(lambda s: len(s) > 0, [a.get_where() for a in attributes]))
    wikibase_label = list(filter(lambda s: len(s) > 0, [a.get_wikibase_label() for a in attributes]))
    group_by = list(filter(lambda s: len(s) > 0, [a.get_group_by() for a in attributes]))
    query = (
        QUERY_TEMPLATE.replace('%QUERY%', sparql_string_escape(query))
        .replace('%SELECT%', ' '.join(select))
        .replace('%WHERE%', '\n  '.join(where))
        .replace('%WIKIBASE_LABELS%', '\n      '.join(wikibase_label))
        .replace('%GROUP_BY%', ' '.join(group_by))
        .replace('%LANGUAGE%', language)
    )
    return query, attributes
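A hedged usage sketch; the query text and language are illustrative:

sparql, attributes = get_query('Tokyo', 'en')
# 'sparql' is QUERY_TEMPLATE with %QUERY%, %SELECT%, %WHERE%,
# %WIKIBASE_LABELS%, %GROUP_BY% and %LANGUAGE% filled in;
# 'attributes' is the matching list from get_attributes('en'), which
# response()/get_results() later use to decode the SPARQL bindings.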

References get_attributes(), and sparql_string_escape.

Referenced by request().


◆ get_results()

searx.engines.wikidata.get_results(attribute_result, attributes, language)

Definition at line 248 of file wikidata.py.

def get_results(attribute_result, attributes, language):
    # pylint: disable=too-many-branches
    results = []
    infobox_title = attribute_result.get('itemLabel')
    infobox_id = attribute_result['item']
    infobox_id_lang = None
    infobox_urls = []
    infobox_attributes = []
    infobox_content = attribute_result.get('itemDescription', [])
    img_src = None
    img_src_priority = 0

    for attribute in attributes:
        value = attribute.get_str(attribute_result, language)
        if value is not None and value != '':
            attribute_type = type(attribute)

            if attribute_type in (WDURLAttribute, WDArticle):
                # get_select() uses group_concat(distinct ...;separator=", "),
                # split the value here
                for url in value.split(', '):
                    infobox_urls.append({'title': attribute.get_label(language), 'url': url, **attribute.kwargs})
                    # "normal" results (not infobox) include official website and Wikipedia links.
                    if "list" in display_type and (attribute.kwargs.get('official') or attribute_type == WDArticle):
                        results.append({'title': infobox_title, 'url': url, "content": infobox_content})
                    # update the infobox_id with the wikipedia URL
                    # first the local wikipedia URL, and as fallback the english wikipedia URL
                    if attribute_type == WDArticle and (
                        (attribute.language == 'en' and infobox_id_lang is None) or attribute.language != 'en'
                    ):
                        infobox_id_lang = attribute.language
                        infobox_id = url
            elif attribute_type == WDImageAttribute:
                # this attribute is an image.
                # replace the current image only if this attribute has a higher
                # priority (the infobox contains only one image).
                if attribute.priority > img_src_priority:
                    img_src = get_thumbnail(value)
                    img_src_priority = attribute.priority
            elif attribute_type == WDGeoAttribute:
                # geocoordinate link
                # use the area to get the OSM zoom
                # Note: ignore the unit (must be km² otherwise the calculation is wrong)
                # Should use normalized value p:P2046/psn:P2046/wikibase:quantityAmount
                area = attribute_result.get('P2046')
                osm_zoom = area_to_osm_zoom(area) if area else 19
                url = attribute.get_geo_url(attribute_result, osm_zoom=osm_zoom)
                if url:
                    infobox_urls.append({'title': attribute.get_label(language), 'url': url, 'entity': attribute.name})
            else:
                infobox_attributes.append(
                    {'label': attribute.get_label(language), 'value': value, 'entity': attribute.name}
                )

    if infobox_id:
        infobox_id = replace_http_by_https(infobox_id)

    # add the wikidata URL at the end
    infobox_urls.append({'title': 'Wikidata', 'url': attribute_result['item']})

    if (
        "list" in display_type
        and img_src is None
        and len(infobox_attributes) == 0
        and len(infobox_urls) == 1
        and len(infobox_content) == 0
    ):
        results.append({'url': infobox_urls[0]['url'], 'title': infobox_title, 'content': infobox_content})
    elif "infobox" in display_type:
        results.append(
            {
                'infobox': infobox_title,
                'id': infobox_id,
                'content': infobox_content,
                'img_src': img_src,
                'urls': infobox_urls,
                'attributes': infobox_attributes,
            }
        )
    return results
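For orientation, an illustrative (not verbatim) shape of an infobox result, assuming display_type contains "infobox"; all values are hypothetical:

# {
#     'infobox': 'Douglas Adams',                            # itemLabel
#     'id': 'https://en.wikipedia.org/wiki/Douglas_Adams',   # wikipedia URL, https-rewritten
#     'content': 'English author and humorist',              # itemDescription
#     'img_src': 'https://upload.wikimedia.org/wikipedia/commons/thumb/...',
#     'urls': [{'title': 'Official website', 'url': '...', 'official': True},
#              {'title': 'Wikidata', 'url': 'http://www.wikidata.org/entity/Q42'}],
#     'attributes': [{'label': 'Date of birth', 'value': '...', 'entity': 'P569'}],
# }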

References get_thumbnail(), and replace_http_by_https.

Referenced by response().


◆ get_thumbnail()

searx.engines.wikidata.get_thumbnail ( img_src)
Get a thumbnail image from Wikimedia Commons.

Images from commons.wikimedia.org are (HTTP) redirected to
upload.wikimedia.org.  The redirected URL can be calculated by this
function.

- https://stackoverflow.com/a/33691240

Definition at line 209 of file wikidata.py.

def get_thumbnail(img_src):
    """Get Thumbnail image from wikimedia commons

    Images from commons.wikimedia.org are (HTTP) redirected to
    upload.wikimedia.org. The redirected URL can be calculated by this
    function.

    - https://stackoverflow.com/a/33691240

    """
    logger.debug('get_thumbnail(): %s', img_src)
    if img_src is not None and _IMG_SRC_DEFAULT_URL_PREFIX in img_src.split()[0]:
        img_src_name = unquote(img_src.replace(_IMG_SRC_DEFAULT_URL_PREFIX, "").split("?", 1)[0].replace("%20", "_"))
        img_src_name_first = img_src_name
        img_src_name_second = img_src_name

        if ".svg" in img_src_name.split()[0]:
            img_src_name_second = img_src_name + ".png"

        img_src_size = img_src.replace(_IMG_SRC_DEFAULT_URL_PREFIX, "").split("?", 1)[1]
        img_src_size = img_src_size[img_src_size.index("=") + 1 : img_src_size.index("&")]
        img_src_name_md5 = md5(img_src_name.encode("utf-8")).hexdigest()
        img_src = (
            _IMG_SRC_NEW_URL_PREFIX
            + img_src_name_md5[0]
            + "/"
            + img_src_name_md5[0:2]
            + "/"
            + img_src_name_first
            + "/"
            + img_src_size
            + "px-"
            + img_src_name_second
        )
        logger.debug('get_thumbnail() redirected: %s', img_src)

    return img_src
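A self-contained sketch of the same URL rewrite; the file name and width are hypothetical:

from hashlib import md5
from urllib.parse import unquote

src = 'https://commons.wikimedia.org/wiki/Special:FilePath/Example%20Photo.jpg?width=300&height=400'
raw = src.rsplit('/', 1)[-1].split('?', 1)[0]   # 'Example%20Photo.jpg'
name = unquote(raw.replace('%20', '_'))         # 'Example_Photo.jpg'
digest = md5(name.encode('utf-8')).hexdigest()
# first hex digit and first two hex digits select the hashed directory pair,
# exactly as in get_thumbnail() above
print('https://upload.wikimedia.org/wikipedia/commons/thumb/'
      + digest[0] + '/' + digest[:2] + '/' + name + '/300px-' + name)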

Referenced by get_results().


◆ init()

searx.engines.wikidata.init ( engine_settings = None)

Definition at line 791 of file wikidata.py.

def init(engine_settings=None):  # pylint: disable=unused-argument
    # WIKIDATA_PROPERTIES : add unit symbols
    for k, v in WIKIDATA_UNITS.items():
        WIKIDATA_PROPERTIES[k] = v['symbol']

    # WIKIDATA_PROPERTIES : add property labels
    wikidata_property_names = []
    for attribute in get_attributes('en'):
        if type(attribute) in (WDAttribute, WDAmountAttribute, WDURLAttribute, WDDateAttribute, WDLabelAttribute):
            if attribute.name not in WIKIDATA_PROPERTIES:
                wikidata_property_names.append("wd:" + attribute.name)
    query = QUERY_PROPERTY_NAMES.replace('%ATTRIBUTES%', " ".join(wikidata_property_names))
    jsonresponse = send_wikidata_query(query, timeout=20)
    for result in jsonresponse.get('results', {}).get('bindings', {}):
        name = result['name']['value']
        lang = result['name']['xml:lang']
        entity_id = result['item']['value'].replace('http://www.wikidata.org/entity/', '')
        WIKIDATA_PROPERTIES[(entity_id, lang)] = name.capitalize()
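A sketch of the effect; the concrete keys and labels below are illustrative assumptions about what WIKIDATA_UNITS and the property-name query return:

# after init(), WIKIDATA_PROPERTIES additionally contains e.g.:
# WIKIDATA_PROPERTIES['Q11573']       == 'm'          # unit symbol (illustrative)
# WIKIDATA_PROPERTIES[('P571', 'en')] == 'Inception'  # capitalized rdfs:label (illustrative)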

References get_attributes(), and send_wikidata_query().


◆ request()

searx.engines.wikidata.request(query, params)

Definition at line 167 of file wikidata.py.

def request(query, params):

    eng_tag, _wiki_netloc = get_wiki_params(params['searxng_locale'], traits)
    query, attributes = get_query(query, eng_tag)
    logger.debug("request --> language %s // len(attributes): %s", eng_tag, len(attributes))

    params['method'] = 'POST'
    params['url'] = SPARQL_ENDPOINT_URL
    params['data'] = {'query': query}
    params['headers'] = get_headers()
    params['language'] = eng_tag
    params['attributes'] = attributes

    return params
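A hypothetical invocation (the module-level traits must be initialized first); the query and locale are illustrative:

params = request('Tokyo', {'searxng_locale': 'en-US'})
# params['method'] -> 'POST'
# params['url']    -> SPARQL_ENDPOINT_URL
# params['data']   -> {'query': '<generated SPARQL>'}
# params['attributes'] holds the WD*Attribute list that response() consumes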

References get_headers(), and get_query().


◆ response()

searx.engines.wikidata.response ( resp)

Definition at line 183 of file wikidata.py.

def response(resp):

    results = []
    jsonresponse = loads(resp.content.decode())

    language = resp.search_params['language']
    attributes = resp.search_params['attributes']
    logger.debug("request --> language %s // len(attributes): %s", language, len(attributes))

    seen_entities = set()
    for result in jsonresponse.get('results', {}).get('bindings', []):
        attribute_result = {key: value['value'] for key, value in result.items()}
        entity_url = attribute_result['item']
        if entity_url not in seen_entities and entity_url not in DUMMY_ENTITY_URLS:
            seen_entities.add(entity_url)
            results += get_results(attribute_result, attributes, language)
        else:
            logger.debug('The SPARQL request returns duplicate entities: %s', str(attribute_result))

    return results

References get_results().


◆ send_wikidata_query()

searx.engines.wikidata.send_wikidata_query(query, method='GET', **kwargs)

Definition at line 153 of file wikidata.py.

def send_wikidata_query(query, method='GET', **kwargs):
    if method == 'GET':
        # query will be cached by wikidata
        http_response = get(SPARQL_ENDPOINT_URL + '?' + urlencode({'query': query}), headers=get_headers(), **kwargs)
    else:
        # query won't be cached by wikidata
        http_response = post(SPARQL_ENDPOINT_URL, data={'query': query}, headers=get_headers(), **kwargs)
    if http_response.status_code != 200:
        logger.debug('SPARQL endpoint error %s', http_response.content.decode())
    logger.debug('request time %s', str(http_response.elapsed))
    http_response.raise_for_status()
    return loads(http_response.content.decode())
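A minimal usage sketch (network access required; the SPARQL query is illustrative):

result = send_wikidata_query('SELECT ?item WHERE { ?item wdt:P31 wd:Q5 } LIMIT 1')
for binding in result['results']['bindings']:
    print(binding['item']['value'])  # e.g. http://www.wikidata.org/entity/Q...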

References get_headers().

Referenced by init().


Variable Documentation

◆ _IMG_SRC_DEFAULT_URL_PREFIX

str searx.engines.wikidata._IMG_SRC_DEFAULT_URL_PREFIX = "https://commons.wikimedia.org/wiki/Special:FilePath/"
protected

Definition at line 205 of file wikidata.py.

◆ _IMG_SRC_NEW_URL_PREFIX

str searx.engines.wikidata._IMG_SRC_NEW_URL_PREFIX = "https://upload.wikimedia.org/wikipedia/commons/thumb/"
protected

Definition at line 206 of file wikidata.py.

◆ about

dict searx.engines.wikidata.about
Initial value:
= {
    "website": 'https://wikidata.org/',
    "wikidata_id": 'Q2013',
    "official_api_documentation": 'https://query.wikidata.org/',
    "use_official_api": True,
    "require_api_key": False,
    "results": 'JSON',
}

Definition at line 26 of file wikidata.py.

◆ display_type

list searx.engines.wikidata.display_type = ["infobox"]

Definition at line 35 of file wikidata.py.

◆ DUMMY_ENTITY_URLS

searx.engines.wikidata.DUMMY_ENTITY_URLS
Initial value:
= set(
    "http://www.wikidata.org/entity/" + wid for wid in ("Q4115189", "Q13406268", "Q15397819", "Q17339402")
)

Definition at line 110 of file wikidata.py.

◆ QUERY_PROPERTY_NAMES

str searx.engines.wikidata.QUERY_PROPERTY_NAMES
Initial value:
1= """
2SELECT ?item ?name
3WHERE {
4 {
5 SELECT ?item
6 WHERE { ?item wdt:P279* wd:Q12132 }
7 } UNION {
8 VALUES ?item { %ATTRIBUTES% }
9 }
10 OPTIONAL { ?item rdfs:label ?name. }
11}
12"""

Definition at line 95 of file wikidata.py.

◆ QUERY_TEMPLATE

str searx.engines.wikidata.QUERY_TEMPLATE
Initial value:
1= """
2SELECT ?item ?itemLabel ?itemDescription ?lat ?long %SELECT%
3WHERE
4{
5 SERVICE wikibase:mwapi {
6 bd:serviceParam wikibase:endpoint "www.wikidata.org";
7 wikibase:api "EntitySearch";
8 wikibase:limit 1;
9 mwapi:search "%QUERY%";
10 mwapi:language "%LANGUAGE%".
11 ?item wikibase:apiOutputItem mwapi:item.
12 }
13 hint:Prior hint:runFirst "true".
14
15 %WHERE%
16
17 SERVICE wikibase:label {
18 bd:serviceParam wikibase:language "%LANGUAGE%,en".
19 ?item rdfs:label ?itemLabel .
20 ?item schema:description ?itemDescription .
21 %WIKIBASE_LABELS%
22 }
23
24}
25GROUP BY ?item ?itemLabel ?itemDescription ?lat ?long %GROUP_BY%
26"""

Definition at line 67 of file wikidata.py.

◆ replace_http_by_https

searx.engines.wikidata.replace_http_by_https = get_string_replaces_function({'http:': 'https:'})

Definition at line 132 of file wikidata.py.

Referenced by get_results().

◆ SPARQL_ENDPOINT_URL

str searx.engines.wikidata.SPARQL_ENDPOINT_URL = 'https://query.wikidata.org/sparql'

Definition at line 42 of file wikidata.py.

◆ SPARQL_EXPLAIN_URL

str searx.engines.wikidata.SPARQL_EXPLAIN_URL = 'https://query.wikidata.org/bigdata/namespace/wdq/sparql?explain'

Definition at line 43 of file wikidata.py.

◆ sparql_string_escape

searx.engines.wikidata.sparql_string_escape
Initial value:
= get_string_replaces_function(
    # fmt: off
    {
        '\t': '\\\t',
        '\n': '\\\n',
        '\r': '\\\r',
        '\b': '\\\b',
        '\f': '\\\f',
        '\"': '\\\"',
        '\'': '\\\'',
        '\\': '\\\\'
    }
    # fmt: on
)
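A quick sketch of its behavior; the input string is illustrative:

print(sparql_string_escape('say "hi"'))  # -> say \"hi\"
# backslash-prefixes quotes, backslashes and control characters so the user
# query can be embedded safely in the mwapi:search literal of QUERY_TEMPLATE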

Definition at line 117 of file wikidata.py.

Referenced by get_query().

◆ WIKIDATA_PROPERTIES

dict searx.engines.wikidata.WIKIDATA_PROPERTIES
Initial value:
= {
    'P434': 'MusicBrainz',
    'P435': 'MusicBrainz',
    'P436': 'MusicBrainz',
    'P966': 'MusicBrainz',
    'P345': 'IMDb',
    'P2397': 'YouTube',
    'P1651': 'YouTube',
    'P2002': 'Twitter',
    'P2013': 'Facebook',
    'P2003': 'Instagram',
    'P4033': 'Mastodon',
    'P11947': 'Lemmy',
    'P12622': 'PeerTube',
}

Definition at line 44 of file wikidata.py.