SearXNG Developer Documentation
searx.engines.wikidata Namespace Reference

Classes

class  WDAmountAttribute
 
class  WDArticle
 
class  WDAttribute
 
class  WDDateAttribute
 
class  WDGeoAttribute
 
class  WDImageAttribute
 
class  WDLabelAttribute
 
class  WDURLAttribute
 

Functions

 get_headers ()
 
 get_label_for_entity (entity_id, language)
 
 send_wikidata_query (query, method='GET')
 
 request (query, params)
 
 response (resp)
 
 get_thumbnail (img_src)
 
 get_results (attribute_result, attributes, language)
 
 get_query (query, language)
 
 get_attributes (language)
 
 debug_explain_wikidata_query (query, method='GET')
 
 init (engine_settings=None)
 
 fetch_traits (engine_traits: EngineTraits)
 

Variables

logging.Logger logger
 
dict about
 
list display_type = ["infobox"]
 
str SPARQL_ENDPOINT_URL = 'https://query.wikidata.org/sparql'
 
str SPARQL_EXPLAIN_URL = 'https://query.wikidata.org/bigdata/namespace/wdq/sparql?explain'
 
dict WIKIDATA_PROPERTIES
 
str QUERY_TEMPLATE
 
str QUERY_PROPERTY_NAMES
 
 DUMMY_ENTITY_URLS
 
 sparql_string_escape
 
 replace_http_by_https = get_string_replaces_function({'http:': 'https:'})
 
str _IMG_SRC_DEFAULT_URL_PREFIX = "https://commons.wikimedia.org/wiki/Special:FilePath/"
 
str _IMG_SRC_NEW_URL_PREFIX = "https://upload.wikimedia.org/wikipedia/commons/thumb/"
 

Detailed Description

This module implements the Wikidata engine.  Some implementations are shared
from :ref:`wikipedia engine`.
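
For orientation, a minimal sketch of the query pipeline follows: ``get_query()`` renders the SPARQL query from ``QUERY_TEMPLATE`` and ``send_wikidata_query()`` executes it. The sketch assumes SearXNG is importable and network access to query.wikidata.org; the module-level ``logger`` is normally injected by the engine loader, so it is set by hand here.

import logging
from searx.engines import wikidata

# normally injected by the SearXNG engine loader
wikidata.logger = logging.getLogger('searx.engines.wikidata')

sparql, attributes = wikidata.get_query('Douglas Adams', 'en')
data = wikidata.send_wikidata_query(sparql, method='POST')

for binding in data.get('results', {}).get('bindings', []):
    # each binding maps SELECT variables to {'type': ..., 'value': ...} dicts
    print(binding['item']['value'], binding.get('itemLabel', {}).get('value'))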

Function Documentation

◆ debug_explain_wikidata_query()

searx.engines.wikidata.debug_explain_wikidata_query(query, method='GET')

Definition at line 790 of file wikidata.py.

def debug_explain_wikidata_query(query, method='GET'):
    if method == 'GET':
        http_response = get(SPARQL_EXPLAIN_URL + '&' + urlencode({'query': query}), headers=get_headers())
    else:
        http_response = post(SPARQL_EXPLAIN_URL, data={'query': query}, headers=get_headers())
    http_response.raise_for_status()
    return http_response.content

References get_headers().
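
A hypothetical debugging session, assuming network access to query.wikidata.org; the endpoint answers with an explain report of the query plan instead of executing the query:

from searx.engines import wikidata

sparql, _ = wikidata.get_query('Berlin', 'en')
explain_report = wikidata.debug_explain_wikidata_query(sparql, method='POST')
print(explain_report[:400])  # raw bytes of the explain report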


◆ fetch_traits()

searx.engines.wikidata.fetch_traits(engine_traits: EngineTraits)
Uses languages evaluated from :py:obj:`wikipedia.fetch_wikimedia_traits
<searx.engines.wikipedia.fetch_wikimedia_traits>` and removes

- ``traits.custom['wiki_netloc']``: wikidata does not have net-locations for
  the languages

- ``traits.custom['WIKIPEDIA_LANGUAGES']``: not used in the wikidata engine

Definition at line 819 of file wikidata.py.

def fetch_traits(engine_traits: EngineTraits):
    """Uses languages evaluated from :py:obj:`wikipedia.fetch_wikimedia_traits
    <searx.engines.wikipedia.fetch_wikimedia_traits>` and removes

    - ``traits.custom['wiki_netloc']``: wikidata does not have net-locations for
      the languages

    - ``traits.custom['WIKIPEDIA_LANGUAGES']``: not used in the wikidata engine

    """

    fetch_wikimedia_traits(engine_traits)
    engine_traits.custom['wiki_netloc'] = {}
    engine_traits.custom['WIKIPEDIA_LANGUAGES'] = []

◆ get_attributes()

searx.engines.wikidata.get_attributes(language)

Definition at line 356 of file wikidata.py.

def get_attributes(language):
    # pylint: disable=too-many-statements
    attributes = []

    def add_value(name):
        attributes.append(WDAttribute(name))

    def add_amount(name):
        attributes.append(WDAmountAttribute(name))

    def add_label(name):
        attributes.append(WDLabelAttribute(name))

    def add_url(name, url_id=None, url_path_prefix=None, **kwargs):
        attributes.append(WDURLAttribute(name, url_id, url_path_prefix, kwargs))

    def add_image(name, url_id=None, priority=1):
        attributes.append(WDImageAttribute(name, url_id, priority))

    def add_date(name):
        attributes.append(WDDateAttribute(name))

    # Dates
    for p in [
        'P571',  # inception date
        'P576',  # dissolution date
        'P580',  # start date
        'P582',  # end date
        'P569',  # date of birth
        'P570',  # date of death
        'P619',  # date of spacecraft launch
        'P620',  # date of spacecraft landing
    ]:
        add_date(p)

    for p in [
        'P27',   # country of citizenship
        'P495',  # country of origin
        'P17',   # country
        'P159',  # headquarters location
    ]:
        add_label(p)

    # Places
    for p in [
        'P36',   # capital
        'P35',   # head of state
        'P6',    # head of government
        'P122',  # basic form of government
        'P37',   # official language
    ]:
        add_label(p)

    add_value('P1082')   # population
    add_amount('P2046')  # area
    add_amount('P281')   # postal code
    add_label('P38')     # currency
    add_amount('P2048')  # height (building)

    # Media
    for p in [
        'P400',  # platform (videogames, computing)
        'P50',   # author
        'P170',  # creator
        'P57',   # director
        'P175',  # performer
        'P178',  # developer
        'P162',  # producer
        'P176',  # manufacturer
        'P58',   # screenwriter
        'P272',  # production company
        'P264',  # record label
        'P123',  # publisher
        'P449',  # original network
        'P750',  # distributed by
        'P86',   # composer
    ]:
        add_label(p)

    add_date('P577')   # publication date
    add_label('P136')  # genre (music, film, artistic...)
    add_label('P364')  # original language
    add_value('P212')  # ISBN-13
    add_value('P957')  # ISBN-10
    add_label('P275')  # copyright license
    add_label('P277')  # programming language
    add_value('P348')  # version
    add_label('P840')  # narrative location

    # Languages
    add_value('P1098')  # number of speakers
    add_label('P282')   # writing system
    add_label('P1018')  # language regulatory body
    add_value('P218')   # language code (ISO 639-1)

    # Other
    add_label('P169')   # CEO
    add_label('P112')   # founded by
    add_label('P1454')  # legal form (company, organization)
    add_label('P137')   # operator (service, facility, ...)
    add_label('P1029')  # crew members
    add_label('P225')   # taxon name
    add_value('P274')   # chemical formula
    add_label('P1346')  # winner (sports, contests, ...)
    add_value('P1120')  # number of deaths
    add_value('P498')   # currency code (ISO 4217)

    # URL
    add_url('P856', official=True)          # official website
    attributes.append(WDArticle(language))  # wikipedia (user language)
    if not language.startswith('en'):
        attributes.append(WDArticle('en'))  # wikipedia (english)

    add_url('P1324')  # source code repository
    add_url('P1581')  # blog
    add_url('P434', url_id='musicbrainz_artist')
    add_url('P435', url_id='musicbrainz_work')
    add_url('P436', url_id='musicbrainz_release_group')
    add_url('P966', url_id='musicbrainz_label')
    add_url('P345', url_id='imdb_id')
    add_url('P2397', url_id='youtube_channel')
    add_url('P1651', url_id='youtube_video')
    add_url('P2002', url_id='twitter_profile')
    add_url('P2013', url_id='facebook_profile')
    add_url('P2003', url_id='instagram_profile')

    # Fediverse
    add_url('P4033', url_path_prefix='/@')    # Mastodon user
    add_url('P11947', url_path_prefix='/c/')  # Lemmy community
    add_url('P12622', url_path_prefix='/c/')  # PeerTube channel

    # Map
    attributes.append(WDGeoAttribute('P625'))

    # Image
    add_image('P15', priority=1, url_id='wikimedia_image')    # route map
    add_image('P242', priority=2, url_id='wikimedia_image')   # locator map
    add_image('P154', priority=3, url_id='wikimedia_image')   # logo
    add_image('P18', priority=4, url_id='wikimedia_image')    # image
    add_image('P41', priority=5, url_id='wikimedia_image')    # flag
    add_image('P2716', priority=6, url_id='wikimedia_image')  # collage
    add_image('P2910', priority=7, url_id='wikimedia_image')  # icon

    return attributes

Referenced by get_query(), and init().
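
A small sketch for inspecting the attribute set built for a language; ``WDArticle`` entries carry no property ``name``, hence the ``getattr`` default (assumes SearXNG is importable):

from searx.engines import wikidata

for attribute in wikidata.get_attributes('en'):
    print(type(attribute).__name__, getattr(attribute, 'name', '-'))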


◆ get_headers()

searx.engines.wikidata.get_headers()

Definition at line 143 of file wikidata.py.

def get_headers():
    # user agent: https://www.mediawiki.org/wiki/Wikidata_Query_Service/User_Manual#Query_limits
    return {'Accept': 'application/sparql-results+json', 'User-Agent': searx_useragent()}

Referenced by debug_explain_wikidata_query(), request(), and send_wikidata_query().
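
The returned headers announce SPARQL JSON results and a SearXNG user agent, as required by the WDQS usage policy linked in the source comment. A minimal check:

from searx.engines.wikidata import get_headers

headers = get_headers()
assert headers['Accept'] == 'application/sparql-results+json'
print(headers['User-Agent'])  # a SearXNG user agent string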


◆ get_label_for_entity()

searx.engines.wikidata.get_label_for_entity(entity_id, language)

Definition at line 148 of file wikidata.py.

def get_label_for_entity(entity_id, language):
    name = WIKIDATA_PROPERTIES.get(entity_id)
    if name is None:
        name = WIKIDATA_PROPERTIES.get((entity_id, language))
    if name is None:
        name = WIKIDATA_PROPERTIES.get((entity_id, language.split('-')[0]))
    if name is None:
        name = WIKIDATA_PROPERTIES.get((entity_id, 'en'))
    if name is None:
        name = entity_id
    return name

Referenced by searx.engines.wikidata.WDAttribute.get_label(), and searx.engines.wikidata.WDAmountAttribute.get_str().
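
A sketch of the fallback chain: plain id, then (id, locale), (id, base language), (id, 'en'), and finally the id itself. ``P9999`` is a hypothetical key used only to demonstrate the base-language hit:

from searx.engines.wikidata import WIKIDATA_PROPERTIES, get_label_for_entity

print(get_label_for_entity('P345', 'de-DE'))   # 'IMDb' (plain-id entry)
WIKIDATA_PROPERTIES[('P9999', 'de')] = 'Beispiel'  # hypothetical entry
print(get_label_for_entity('P9999', 'de-DE'))  # 'Beispiel' (base language 'de')
print(get_label_for_entity('P0', 'fr'))        # 'P0' (no entry, id returned)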


◆ get_query()

searx.engines.wikidata.get_query(query, language)

Definition at line 339 of file wikidata.py.

def get_query(query, language):
    attributes = get_attributes(language)
    select = [a.get_select() for a in attributes]
    where = list(filter(lambda s: len(s) > 0, [a.get_where() for a in attributes]))
    wikibase_label = list(filter(lambda s: len(s) > 0, [a.get_wikibase_label() for a in attributes]))
    group_by = list(filter(lambda s: len(s) > 0, [a.get_group_by() for a in attributes]))
    query = (
        QUERY_TEMPLATE.replace('%QUERY%', sparql_string_escape(query))
        .replace('%SELECT%', ' '.join(select))
        .replace('%WHERE%', '\n '.join(where))
        .replace('%WIKIBASE_LABELS%', '\n '.join(wikibase_label))
        .replace('%GROUP_BY%', ' '.join(group_by))
        .replace('%LANGUAGE%', language)
    )
    return query, attributes

References get_attributes(), and sparql_string_escape.

Referenced by request().
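
A sketch showing that the returned SPARQL has every %...% placeholder substituted and that ``attributes`` is the matching list from get_attributes() (assumes SearXNG is importable):

from searx.engines import wikidata

sparql, attributes = wikidata.get_query('Mount Everest', 'en')
assert '%QUERY%' not in sparql and '%LANGUAGE%' not in sparql
print(len(attributes), 'attributes selected')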


◆ get_results()

searx.engines.wikidata.get_results(attribute_result, attributes, language)

Definition at line 256 of file wikidata.py.

def get_results(attribute_result, attributes, language):
    # pylint: disable=too-many-branches
    results = []
    infobox_title = attribute_result.get('itemLabel')
    infobox_id = attribute_result['item']
    infobox_id_lang = None
    infobox_urls = []
    infobox_attributes = []
    infobox_content = attribute_result.get('itemDescription', [])
    img_src = None
    img_src_priority = 0

    for attribute in attributes:
        value = attribute.get_str(attribute_result, language)
        if value is not None and value != '':
            attribute_type = type(attribute)

            if attribute_type in (WDURLAttribute, WDArticle):
                # get_select() method: there is group_concat(distinct ...;separator=", ")
                # split the value here
                for url in value.split(', '):
                    infobox_urls.append({'title': attribute.get_label(language), 'url': url, **attribute.kwargs})
                    # "normal" results (not infobox) include official website and Wikipedia links.
                    if "list" in display_type and (attribute.kwargs.get('official') or attribute_type == WDArticle):
                        results.append({'title': infobox_title, 'url': url, "content": infobox_content})

                    # update the infobox_id with the wikipedia URL
                    # first the local wikipedia URL, and as fallback the english wikipedia URL
                    if attribute_type == WDArticle and (
                        (attribute.language == 'en' and infobox_id_lang is None) or attribute.language != 'en'
                    ):
                        infobox_id_lang = attribute.language
                        infobox_id = url
            elif attribute_type == WDImageAttribute:
                # this attribute is an image.
                # replace the current image only if the priority is lower
                # (the infobox contains only one image).
                if attribute.priority > img_src_priority:
                    img_src = get_thumbnail(value)
                    img_src_priority = attribute.priority
            elif attribute_type == WDGeoAttribute:
                # geocoordinate link
                # use the area to get the OSM zoom
                # Note: ignore the unit (must be km² otherwise the calculation is wrong)
                # Should use normalized value p:P2046/psn:P2046/wikibase:quantityAmount
                area = attribute_result.get('P2046')
                osm_zoom = area_to_osm_zoom(area) if area else 19
                url = attribute.get_geo_url(attribute_result, osm_zoom=osm_zoom)
                if url:
                    infobox_urls.append({'title': attribute.get_label(language), 'url': url, 'entity': attribute.name})
            else:
                infobox_attributes.append(
                    {'label': attribute.get_label(language), 'value': value, 'entity': attribute.name}
                )

    if infobox_id:
        infobox_id = replace_http_by_https(infobox_id)

    # add the wikidata URL at the end
    infobox_urls.append({'title': 'Wikidata', 'url': attribute_result['item']})

    if (
        "list" in display_type
        and img_src is None
        and len(infobox_attributes) == 0
        and len(infobox_urls) == 1
        and len(infobox_content) == 0
    ):
        results.append({'url': infobox_urls[0]['url'], 'title': infobox_title, 'content': infobox_content})
    elif "infobox" in display_type:
        results.append(
            {
                'infobox': infobox_title,
                'id': infobox_id,
                'content': infobox_content,
                'img_src': img_src,
                'urls': infobox_urls,
                'attributes': infobox_attributes,
            }
        )
    return results

References get_thumbnail(), and replace_http_by_https.

Referenced by response().
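
For reference, a sketch of the expected input shape: ``attribute_result`` maps SELECT variable names to plain strings, already unwrapped by response(). The values below are illustrative, not real query output:

attribute_result = {
    'item': 'http://www.wikidata.org/entity/Q42',
    'itemLabel': 'Douglas Adams',
    'itemDescription': 'English author and humourist',
    'P345': 'tt0371724',  # collected by the WDURLAttribute for IMDb
}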


◆ get_thumbnail()

searx.engines.wikidata.get_thumbnail(img_src)
Get Thumbnail image from wikimedia commons

Images from commons.wikimedia.org are (HTTP) redirected to
upload.wikimedia.org.  The redirected URL can be calculated by this
function.

- https://stackoverflow.com/a/33691240

Definition at line 217 of file wikidata.py.

def get_thumbnail(img_src):
    """Get Thumbnail image from wikimedia commons

    Images from commons.wikimedia.org are (HTTP) redirected to
    upload.wikimedia.org. The redirected URL can be calculated by this
    function.

    - https://stackoverflow.com/a/33691240

    """
    logger.debug('get_thumbnail(): %s', img_src)
    if img_src is not None and _IMG_SRC_DEFAULT_URL_PREFIX in img_src.split()[0]:
        img_src_name = unquote(img_src.replace(_IMG_SRC_DEFAULT_URL_PREFIX, "").split("?", 1)[0].replace("%20", "_"))
        img_src_name_first = img_src_name
        img_src_name_second = img_src_name

        if ".svg" in img_src_name.split()[0]:
            img_src_name_second = img_src_name + ".png"

        img_src_size = img_src.replace(_IMG_SRC_DEFAULT_URL_PREFIX, "").split("?", 1)[1]
        img_src_size = img_src_size[img_src_size.index("=") + 1 : img_src_size.index("&")]
        img_src_name_md5 = md5(img_src_name.encode("utf-8")).hexdigest()
        img_src = (
            _IMG_SRC_NEW_URL_PREFIX
            + img_src_name_md5[0]
            + "/"
            + img_src_name_md5[0:2]
            + "/"
            + img_src_name_first
            + "/"
            + img_src_size
            + "px-"
            + img_src_name_second
        )
        logger.debug('get_thumbnail() redirected: %s', img_src)

    return img_src

Referenced by get_results().
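
A worked example of the redirect calculation, recomputed outside the module so it runs without the engine loader's ``logger`` injection; the file name and width are made up:

from hashlib import md5

name = "Douglas_adams_portrait_cropped.jpg"  # decoded, spaces replaced by underscores
size = "500"  # taken from the ?width=...& query parameter of the FilePath URL
digest = md5(name.encode("utf-8")).hexdigest()
thumb = (
    "https://upload.wikimedia.org/wikipedia/commons/thumb/"
    + digest[0] + "/" + digest[:2] + "/" + name + "/" + size + "px-" + name
)
print(thumb)

For SVG sources the trailing file name additionally gets a ``.png`` suffix, matching the ``img_src_name_second`` branch above.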


◆ init()

searx.engines.wikidata.init(engine_settings=None)

Definition at line 799 of file wikidata.py.

def init(engine_settings=None):  # pylint: disable=unused-argument
    # WIKIDATA_PROPERTIES : add unit symbols
    for k, v in WIKIDATA_UNITS.items():
        WIKIDATA_PROPERTIES[k] = v['symbol']

    # WIKIDATA_PROPERTIES : add property labels
    wikidata_property_names = []
    for attribute in get_attributes('en'):
        if type(attribute) in (WDAttribute, WDAmountAttribute, WDURLAttribute, WDDateAttribute, WDLabelAttribute):
            if attribute.name not in WIKIDATA_PROPERTIES:
                wikidata_property_names.append("wd:" + attribute.name)
    query = QUERY_PROPERTY_NAMES.replace('%ATTRIBUTES%', " ".join(wikidata_property_names))
    jsonresponse = send_wikidata_query(query)
    for result in jsonresponse.get('results', {}).get('bindings', {}):
        name = result['name']['value']
        lang = result['name']['xml:lang']
        entity_id = result['item']['value'].replace('http://www.wikidata.org/entity/', '')
        WIKIDATA_PROPERTIES[(entity_id, lang)] = name.capitalize()

References get_attributes(), and send_wikidata_query().
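
After init() has run, WIKIDATA_PROPERTIES also contains unit symbols from WIKIDATA_UNITS and capitalized per-language property labels keyed as ``(entity_id, lang)`` tuples. A sketch, assuming network access; the printed label is an expectation, not a guaranteed value:

import logging
from searx.engines import wikidata

wikidata.logger = logging.getLogger('searx.engines.wikidata')  # normally injected by the engine loader
wikidata.init()  # sends QUERY_PROPERTY_NAMES to the SPARQL endpoint
print(wikidata.WIKIDATA_PROPERTIES.get(('P569', 'en')))  # e.g. 'Date of birth'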


◆ request()

searx.engines.wikidata.request(query, params)

Definition at line 175 of file wikidata.py.

def request(query, params):

    eng_tag, _wiki_netloc = get_wiki_params(params['searxng_locale'], traits)
    query, attributes = get_query(query, eng_tag)
    logger.debug("request --> language %s // len(attributes): %s", eng_tag, len(attributes))

    params['method'] = 'POST'
    params['url'] = SPARQL_ENDPOINT_URL
    params['data'] = {'query': query}
    params['headers'] = get_headers()
    params['language'] = eng_tag
    params['attributes'] = attributes

    return params

References get_headers(), and get_query().
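
A sketch of the engine protocol: request() mutates and returns ``params``. The module-level ``traits`` and ``logger`` are normally injected by the SearXNG engine loader, so the sketch sets them up by hand (both steps are assumptions about standalone use):

import logging
from searx.enginelib.traits import EngineTraits
from searx.engines import wikidata

wikidata.logger = logging.getLogger('searx.engines.wikidata')  # normally injected
wikidata.traits = EngineTraits()                               # normally injected
wikidata.fetch_traits(wikidata.traits)                         # network call

params = {'searxng_locale': 'en-US'}
params = wikidata.request('Douglas Adams', params)
print(params['method'], params['url'])  # POST https://query.wikidata.org/sparql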


◆ response()

searx.engines.wikidata.response(resp)

Definition at line 191 of file wikidata.py.

def response(resp):

    results = []
    jsonresponse = loads(resp.content.decode())

    language = resp.search_params['language']
    attributes = resp.search_params['attributes']
    logger.debug("request --> language %s // len(attributes): %s", language, len(attributes))

    seen_entities = set()
    for result in jsonresponse.get('results', {}).get('bindings', []):
        attribute_result = {key: value['value'] for key, value in result.items()}
        entity_url = attribute_result['item']
        if entity_url not in seen_entities and entity_url not in DUMMY_ENTITY_URLS:
            seen_entities.add(entity_url)
            results += get_results(attribute_result, attributes, language)
        else:
            logger.debug('The SPARQL request returns duplicate entities: %s', str(attribute_result))

    return results

References get_results().
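
Since response() only reads ``resp.content`` and ``resp.search_params``, a hypothetical stand-in object is enough to exercise it (again assuming ``logger`` is set, as the engine loader would do):

import logging
from types import SimpleNamespace
from searx.engines import wikidata

wikidata.logger = logging.getLogger('searx.engines.wikidata')  # normally injected
resp = SimpleNamespace(
    content=b'{"results": {"bindings": []}}',
    search_params={'language': 'en', 'attributes': []},
)
print(wikidata.response(resp))  # [] -- no bindings, no results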


◆ send_wikidata_query()

searx.engines.wikidata.send_wikidata_query(query, method='GET')

Definition at line 161 of file wikidata.py.

def send_wikidata_query(query, method='GET'):
    if method == 'GET':
        # query will be cached by wikidata
        http_response = get(SPARQL_ENDPOINT_URL + '?' + urlencode({'query': query}), headers=get_headers())
    else:
        # query won't be cached by wikidata
        http_response = post(SPARQL_ENDPOINT_URL, data={'query': query}, headers=get_headers())
    if http_response.status_code != 200:
        logger.debug('SPARQL endpoint error %s', http_response.content.decode())
    logger.debug('request time %s', str(http_response.elapsed))
    http_response.raise_for_status()
    return loads(http_response.content.decode())

References get_headers().

Referenced by init().
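
A small ad-hoc query as a sketch (network access assumed; with GET the endpoint caches the response):

import logging
from searx.engines import wikidata

wikidata.logger = logging.getLogger('searx.engines.wikidata')  # normally injected
data = wikidata.send_wikidata_query('SELECT ?class WHERE { wd:Q42 wdt:P31 ?class } LIMIT 1')
print(data['results']['bindings'][0]['class']['value'])
# http://www.wikidata.org/entity/Q5 -- Douglas Adams is an instance of human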


Variable Documentation

◆ _IMG_SRC_DEFAULT_URL_PREFIX

str searx.engines.wikidata._IMG_SRC_DEFAULT_URL_PREFIX = "https://commons.wikimedia.org/wiki/Special:FilePath/"

Definition at line 213 of file wikidata.py.

◆ _IMG_SRC_NEW_URL_PREFIX

str searx.engines.wikidata._IMG_SRC_NEW_URL_PREFIX = "https://upload.wikimedia.org/wikipedia/commons/thumb/"

Definition at line 214 of file wikidata.py.

◆ about

dict searx.engines.wikidata.about
Initial value:
= {
    "website": 'https://wikidata.org/',
    "wikidata_id": 'Q2013',
    "official_api_documentation": 'https://query.wikidata.org/',
    "use_official_api": True,
    "require_api_key": False,
    "results": 'JSON',
}

Definition at line 34 of file wikidata.py.

◆ display_type

list searx.engines.wikidata.display_type = ["infobox"]

Definition at line 43 of file wikidata.py.
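
The list may contain "infobox" and/or "list": with "list", get_results() additionally emits plain link results (official website, Wikipedia article) besides or instead of the infobox.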

◆ DUMMY_ENTITY_URLS

searx.engines.wikidata.DUMMY_ENTITY_URLS
Initial value:
= set(
    "http://www.wikidata.org/entity/" + wid for wid in ("Q4115189", "Q13406268", "Q15397819", "Q17339402")
)

Definition at line 118 of file wikidata.py.
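
These are the Wikidata sandbox entities (e.g. Q4115189, the 'Wikidata Sandbox' used for test edits); response() filters them out so sandbox items never appear as search results.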

◆ logger

logging.Logger searx.engines.wikidata.logger

Definition at line 29 of file wikidata.py.

◆ QUERY_PROPERTY_NAMES

str searx.engines.wikidata.QUERY_PROPERTY_NAMES
Initial value:
1= """
2SELECT ?item ?name
3WHERE {
4 {
5 SELECT ?item
6 WHERE { ?item wdt:P279* wd:Q12132 }
7 } UNION {
8 VALUES ?item { %ATTRIBUTES% }
9 }
10 OPTIONAL { ?item rdfs:label ?name. }
11}
12"""

Definition at line 103 of file wikidata.py.

◆ QUERY_TEMPLATE

str searx.engines.wikidata.QUERY_TEMPLATE
Initial value:
1= """
2SELECT ?item ?itemLabel ?itemDescription ?lat ?long %SELECT%
3WHERE
4{
5 SERVICE wikibase:mwapi {
6 bd:serviceParam wikibase:endpoint "www.wikidata.org";
7 wikibase:api "EntitySearch";
8 wikibase:limit 1;
9 mwapi:search "%QUERY%";
10 mwapi:language "%LANGUAGE%".
11 ?item wikibase:apiOutputItem mwapi:item.
12 }
13 hint:Prior hint:runFirst "true".
14
15 %WHERE%
16
17 SERVICE wikibase:label {
18 bd:serviceParam wikibase:language "%LANGUAGE%,en".
19 ?item rdfs:label ?itemLabel .
20 ?item schema:description ?itemDescription .
21 %WIKIBASE_LABELS%
22 }
23
24}
25GROUP BY ?item ?itemLabel ?itemDescription ?lat ?long %GROUP_BY%
26"""

Definition at line 75 of file wikidata.py.

◆ replace_http_by_https

searx.engines.wikidata.replace_http_by_https = get_string_replaces_function({'http:': 'https:'})

Definition at line 140 of file wikidata.py.

Referenced by get_results().

◆ SPARQL_ENDPOINT_URL

str searx.engines.wikidata.SPARQL_ENDPOINT_URL = 'https://query.wikidata.org/sparql'

Definition at line 50 of file wikidata.py.

◆ SPARQL_EXPLAIN_URL

str searx.engines.wikidata.SPARQL_EXPLAIN_URL = 'https://query.wikidata.org/bigdata/namespace/wdq/sparql?explain'

Definition at line 51 of file wikidata.py.

◆ sparql_string_escape

searx.engines.wikidata.sparql_string_escape
Initial value:
= get_string_replaces_function(
    # fmt: off
    {
        '\t': '\\\t',
        '\n': '\\\n',
        '\r': '\\\r',
        '\b': '\\\b',
        '\f': '\\\f',
        '\"': '\\\"',
        '\'': '\\\'',
        '\\': '\\\\'
    }
    # fmt: on
)

Definition at line 125 of file wikidata.py.

Referenced by get_query().
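
A sketch of the escaping: quotes and control characters are backslash-protected so the user query can be embedded in the ``mwapi:search`` string literal of QUERY_TEMPLATE:

from searx.engines.wikidata import sparql_string_escape

print(sparql_string_escape('say "hello"'))  # say \"hello\"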

◆ WIKIDATA_PROPERTIES

dict searx.engines.wikidata.WIKIDATA_PROPERTIES
Initial value:
= {
    'P434': 'MusicBrainz',
    'P435': 'MusicBrainz',
    'P436': 'MusicBrainz',
    'P966': 'MusicBrainz',
    'P345': 'IMDb',
    'P2397': 'YouTube',
    'P1651': 'YouTube',
    'P2002': 'Twitter',
    'P2013': 'Facebook',
    'P2003': 'Instagram',
    'P4033': 'Mastodon',
    'P11947': 'Lemmy',
    'P12622': 'PeerTube',
}

Definition at line 52 of file wikidata.py.