List of commits:
Subject Hash Author Date (UTC)
bpo-30285: Optimize case-insensitive matching and searching (#1482) 6d336a027913327fc042b0d758a16724fea27b9c Serhiy Storchaka 2017-05-09 20:37:14
bpo-30024: Circular imports involving absolute imports with binding (#1264) f93234bb8a87855f295d441524e519481ce6ab13 Serhiy Storchaka 2017-05-09 19:31:05
bpo-30273: update distutils.sysconfig for venv's created from Python (#1515) dbdea629e2e0e4bd8845aa55041e0a0ca4172cf3 Jeremy Kloth 2017-05-09 15:24:13
bpo-30258: regrtest: Fix run_tests_multiprocess() (#1479) 74683fc6247c522ae955a6e7308b8ff51def35d8 Victor Stinner 2017-05-09 09:34:01
bpo-29990: Fix range checking in GB18030 decoder (#1495) 9da408d15bdef624a5632182cb4edf98001fa82f Xiang Zhang 2017-05-09 03:38:32
bpo-30289: remove Misc/python-config.sh when make distclean (#1498) fa5abac1e6cd74979557d5a6f960a55f40a10b0e Xiang Zhang 2017-05-09 02:32:13
bpo-29979: Rewrite cgi.parse_multipart to make it consistent with FieldStorage (#991) cc3fa204d357be5fafc10eb8c2a80fe0bca998f1 Pierre Quentel 2017-05-08 12:08:34
Fix a trivial typo in global section (#1497) f34c6850203a2406c4950af7a9c8a134145df4ea Jim Fasarakis-Hilliard 2017-05-08 11:36:29
Closes bpo-30168: indent methods in Logger Class (#1295) 55ace65eba587fe3cf3759a43cccf85214651971 Jim Fasarakis-Hilliard 2017-05-07 18:40:18
Revert bpo-26293 for zipfile breakage. See also bpo-29094. (#1484) 3763ea865cee5bbabcce11cd577811135e0fc747 Serhiy Storchaka 2017-05-06 11:46:01
bpo-30218: support path-like objects in shutil.unpack_archive() (GH-1367) a12df7b7d40dbf47825917c8fa03d2c09b5a382c Jelle Zijlstra 2017-05-05 21:27:12
bpo-29243: Fix Makefile with respect to --enable-optimizations (#1478) a1054c3b0037d4c2a5492e79fc193f36245366c7 torsava 2017-05-05 15:35:50
bpo-29920: Document cgitb.text() and cgitb.html() functions (GH-849) c07b3a15be5e0a68a73b4c532861ed8de6932bd2 masklinn 2017-05-05 08:15:12
bpo-30279: Remove unused Python/thread_foobar.h (#1473) fdaeea620f8c78da89cddba4ab010c64535800e0 Masayuki Yamamoto 2017-05-05 08:04:13
bpo-30264: ExpatParser closes the source on error (#1451) ef9c0e732fc50aefbdd7c5a80e04e14b31684e66 Victor Stinner 2017-05-05 07:46:47
bpo-30277: Replace _sre.getlower() with _sre.ascii_tolower() and _sre.unicode_tolower(). (#1468) 7186cc29be352bed6f1110873283d073fd0643e4 Serhiy Storchaka 2017-05-05 07:42:46
bpo-30243: Fixed the possibility of a crash in _json. (#1420) 76a3e51a403bc84ed536921866c86dd7d07aaa7e Serhiy Storchaka 2017-05-05 07:08:49
bpo-30215: Make re.compile() locale agnostic. (#1361) 898ff03e1e7925ecde3da66327d3cdc7e07625ba Serhiy Storchaka 2017-05-05 05:53:40
Make code coverage less strict (GH-1438) 647c3d381e67490e82cdbbe6c96e46d5e1628ce2 Brett Cannon 2017-05-04 21:58:54
bpo-30273: Update sysconfig (#1464) b109a1d3360fc4bb87b9887264e3634632d392ca Victor Stinner 2017-05-04 21:29:09
Commit 6d336a027913327fc042b0d758a16724fea27b9c - bpo-30285: Optimize case-insensitive matching and searching (#1482) of regular expressions.
Author: Serhiy Storchaka
Author date (UTC): 2017-05-09 20:37
Committer name: GitHub
Committer date (UTC): 2017-05-09 20:37
Parent(s): f93234bb8a87855f295d441524e519481ce6ab13
Signing key:
Tree: ca511a6c75e340ef3493674b791f05a692e0c9e2
File Lines added Lines deleted
Doc/whatsnew/3.7.rst 4 0
Lib/sre_compile.py 102 69
Lib/test/test_re.py 9 0
Misc/NEWS 3 0
Modules/_sre.c 34 0
Modules/clinic/_sre.c.h 63 1
File Doc/whatsnew/3.7.rst changed (mode: 100644) (index 3de8bc5c93..57fd4e42a5)
... ... Optimizations
208 208 using the :func:`os.scandir` function. using the :func:`os.scandir` function.
209 209 (Contributed by Serhiy Storchaka in :issue:`25996`.) (Contributed by Serhiy Storchaka in :issue:`25996`.)
210 210
211 * Optimized case-insensitive matching and searching of :mod:`regular
212 expressions <re>`. Searching some patterns can now be up to 20 times faster.
213 (Contributed by Serhiy Storchaka in :issue:`30285`.)
214
211 215
212 216 Build and C API Changes Build and C API Changes
213 217 ======================= =======================
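The whatsnew entry above covers patterns compiled with :mod:`re` case-insensitive flags. As a minimal illustration of the affected feature (the "up to 20 times faster" figure is the changelog's claim, not measured here): with this change, subpattern pieces that contain no cased characters at all, such as `[0-9]+` below, compile to the plain opcodes instead of the `IGNORE` variants.

```python
import re

# Case-insensitive matching; "spam" is cased, but "[0-9]+" is not, so the
# optimized compiler can emit non-IGNORE opcodes for the digit part.
pattern = re.compile(r"spam[0-9]+", re.IGNORECASE)

print(pattern.search("Order: SPAM42 received").group())   # SPAM42
print(pattern.fullmatch("SpAm7") is not None)             # True
```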
File Lib/sre_compile.py changed (mode: 100644) (index db8b8a2778..cebecb93c0)
... ... def _compile(code, pattern, flags):
69 69 REPEATING_CODES = _REPEATING_CODES REPEATING_CODES = _REPEATING_CODES
70 70 SUCCESS_CODES = _SUCCESS_CODES SUCCESS_CODES = _SUCCESS_CODES
71 71 ASSERT_CODES = _ASSERT_CODES ASSERT_CODES = _ASSERT_CODES
72 iscased = None
72 73 tolower = None tolower = None
73 74 fixes = None fixes = None
74 75 if flags & SRE_FLAG_IGNORECASE and not flags & SRE_FLAG_LOCALE: if flags & SRE_FLAG_IGNORECASE and not flags & SRE_FLAG_LOCALE:
75 76 if flags & SRE_FLAG_UNICODE and not flags & SRE_FLAG_ASCII: if flags & SRE_FLAG_UNICODE and not flags & SRE_FLAG_ASCII:
77 iscased = _sre.unicode_iscased
76 78 tolower = _sre.unicode_tolower tolower = _sre.unicode_tolower
77 79 fixes = _ignorecase_fixes fixes = _ignorecase_fixes
78 80 else: else:
81 iscased = _sre.ascii_iscased
79 82 tolower = _sre.ascii_tolower tolower = _sre.ascii_tolower
80 83 for op, av in pattern: for op, av in pattern:
81 84 if op in LITERAL_CODES: if op in LITERAL_CODES:
 
... ... def _compile(code, pattern, flags):
85 88 elif flags & SRE_FLAG_LOCALE: elif flags & SRE_FLAG_LOCALE:
86 89 emit(OP_LOC_IGNORE[op]) emit(OP_LOC_IGNORE[op])
87 90 emit(av) emit(av)
91 elif not iscased(av):
92 emit(op)
93 emit(av)
88 94 else: else:
89 95 lo = tolower(av) lo = tolower(av)
90 96 if fixes and lo in fixes: if fixes and lo in fixes:
 
... ... def _compile(code, pattern, flags):
101 107 emit(OP_IGNORE[op]) emit(OP_IGNORE[op])
102 108 emit(lo) emit(lo)
103 109 elif op is IN: elif op is IN:
104 if not flags & SRE_FLAG_IGNORECASE:
105 emit(op)
106 elif flags & SRE_FLAG_LOCALE:
110 charset, hascased = _optimize_charset(av, iscased, tolower, fixes)
111 if flags & SRE_FLAG_IGNORECASE and flags & SRE_FLAG_LOCALE:
107 112 emit(IN_LOC_IGNORE) emit(IN_LOC_IGNORE)
108 else:
113 elif hascased:
109 114 emit(IN_IGNORE) emit(IN_IGNORE)
115 else:
116 emit(IN)
110 117 skip = _len(code); emit(0) skip = _len(code); emit(0)
111 _compile_charset(av, flags, code, tolower, fixes)
118 _compile_charset(charset, flags, code)
112 119 code[skip] = _len(code) - skip code[skip] = _len(code) - skip
113 120 elif op is ANY: elif op is ANY:
114 121 if flags & SRE_FLAG_DOTALL: if flags & SRE_FLAG_DOTALL:
 
... ... def _compile(code, pattern, flags):
223 230 else: else:
224 231 raise error("internal: unsupported operand type %r" % (op,)) raise error("internal: unsupported operand type %r" % (op,))
225 232
226 def _compile_charset(charset, flags, code, fixup=None, fixes=None):
233 def _compile_charset(charset, flags, code):
227 234 # compile charset subprogram # compile charset subprogram
228 235 emit = code.append emit = code.append
229 for op, av in _optimize_charset(charset, fixup, fixes):
236 for op, av in charset:
230 237 emit(op) emit(op)
231 238 if op is NEGATE: if op is NEGATE:
232 239 pass pass
 
... ... def _compile_charset(charset, flags, code, fixup=None, fixes=None):
250 257 raise error("internal: unsupported set operator %r" % (op,)) raise error("internal: unsupported set operator %r" % (op,))
251 258 emit(FAILURE) emit(FAILURE)
252 259
253 def _optimize_charset(charset, fixup, fixes):
260 def _optimize_charset(charset, iscased=None, fixup=None, fixes=None):
254 261 # internal: optimize character set # internal: optimize character set
255 262 out = [] out = []
256 263 tail = [] tail = []
257 264 charmap = bytearray(256) charmap = bytearray(256)
265 hascased = False
258 266 for op, av in charset: for op, av in charset:
259 267 while True: while True:
260 268 try: try:
 
... ... def _optimize_charset(charset, fixup, fixes):
265 273 if fixes and lo in fixes: if fixes and lo in fixes:
266 274 for k in fixes[lo]: for k in fixes[lo]:
267 275 charmap[k] = 1 charmap[k] = 1
276 if not hascased and iscased(av):
277 hascased = True
268 278 else: else:
269 279 charmap[av] = 1 charmap[av] = 1
270 280 elif op is RANGE: elif op is RANGE:
271 281 r = range(av[0], av[1]+1) r = range(av[0], av[1]+1)
272 282 if fixup: if fixup:
273 r = map(fixup, r)
274 if fixup and fixes:
275 for i in r:
276 charmap[i] = 1
277 if i in fixes:
278 for k in fixes[i]:
279 charmap[k] = 1
283 if fixes:
284 for i in map(fixup, r):
285 charmap[i] = 1
286 if i in fixes:
287 for k in fixes[i]:
288 charmap[k] = 1
289 else:
290 for i in map(fixup, r):
291 charmap[i] = 1
292 if not hascased:
293 hascased = any(map(iscased, r))
280 294 else: else:
281 295 for i in r: for i in r:
282 296 charmap[i] = 1 charmap[i] = 1
 
... ... def _optimize_charset(charset, fixup, fixes):
290 304 charmap += b'\0' * 0xff00 charmap += b'\0' * 0xff00
291 305 continue continue
292 306 # Character set contains non-BMP character codes. # Character set contains non-BMP character codes.
293 # There are only two ranges of cased non-BMP characters:
294 # 10400-1044F (Deseret) and 118A0-118DF (Warang Citi),
295 # and for both ranges RANGE_IGNORE works.
296 if fixup and op is RANGE:
297 op = RANGE_IGNORE
307 if fixup:
308 hascased = True
309 # There are only two ranges of cased non-BMP characters:
310 # 10400-1044F (Deseret) and 118A0-118DF (Warang Citi),
311 # and for both ranges RANGE_IGNORE works.
312 if op is RANGE:
313 op = RANGE_IGNORE
298 314 tail.append((op, av)) tail.append((op, av))
299 315 break break
300 316
 
... ... def _optimize_charset(charset, fixup, fixes):
322 338 out.append((RANGE, (p, q - 1))) out.append((RANGE, (p, q - 1)))
323 339 out += tail out += tail
324 340 # if the case was changed or new representation is more compact # if the case was changed or new representation is more compact
325 if fixup or len(out) < len(charset):
326 return out
341 if hascased or len(out) < len(charset):
342 return out, hascased
327 343 # else original character set is good enough # else original character set is good enough
328 return charset
344 return charset, hascased
329 345
330 346 # use bitmap # use bitmap
331 347 if len(charmap) == 256: if len(charmap) == 256:
332 348 data = _mk_bitmap(charmap) data = _mk_bitmap(charmap)
333 349 out.append((CHARSET, data)) out.append((CHARSET, data))
334 350 out += tail out += tail
335 return out
351 return out, hascased
336 352
337 353 # To represent a big charset, first a bitmap of all characters in the # To represent a big charset, first a bitmap of all characters in the
338 354 # set is constructed. Then, this bitmap is sliced into chunks of 256 # set is constructed. Then, this bitmap is sliced into chunks of 256
 
... ... def _optimize_charset(charset, fixup, fixes):
371 387 data[0:0] = [block] + _bytes_to_codes(mapping) data[0:0] = [block] + _bytes_to_codes(mapping)
372 388 out.append((BIGCHARSET, data)) out.append((BIGCHARSET, data))
373 389 out += tail out += tail
374 return out
390 return out, hascased
375 391
376 392 _CODEBITS = _sre.CODESIZE * 8 _CODEBITS = _sre.CODESIZE * 8
377 393 MAXCODE = (1 << _CODEBITS) - 1 MAXCODE = (1 << _CODEBITS) - 1
 
... ... def _generate_overlap_table(prefix):
414 430 table[i] = idx + 1 table[i] = idx + 1
415 431 return table return table
416 432
417 def _get_literal_prefix(pattern):
433 def _get_iscased(flags):
434 if not flags & SRE_FLAG_IGNORECASE:
435 return None
436 elif flags & SRE_FLAG_UNICODE and not flags & SRE_FLAG_ASCII:
437 return _sre.unicode_iscased
438 else:
439 return _sre.ascii_iscased
440
441 def _get_literal_prefix(pattern, flags):
418 442 # look for literal prefix # look for literal prefix
419 443 prefix = [] prefix = []
420 444 prefixappend = prefix.append prefixappend = prefix.append
421 445 prefix_skip = None prefix_skip = None
446 iscased = _get_iscased(flags)
422 447 for op, av in pattern.data: for op, av in pattern.data:
423 448 if op is LITERAL: if op is LITERAL:
449 if iscased and iscased(av):
450 break
424 451 prefixappend(av) prefixappend(av)
425 452 elif op is SUBPATTERN: elif op is SUBPATTERN:
426 453 group, add_flags, del_flags, p = av group, add_flags, del_flags, p = av
427 if add_flags & SRE_FLAG_IGNORECASE:
454 flags1 = (flags | add_flags) & ~del_flags
455 if flags1 & SRE_FLAG_IGNORECASE and flags1 & SRE_FLAG_LOCALE:
428 456 break break
429 prefix1, prefix_skip1, got_all = _get_literal_prefix(p)
457 prefix1, prefix_skip1, got_all = _get_literal_prefix(p, flags1)
430 458 if prefix_skip is None: if prefix_skip is None:
431 459 if group is not None: if group is not None:
432 460 prefix_skip = len(prefix) prefix_skip = len(prefix)
 
... ... def _get_literal_prefix(pattern):
441 469 return prefix, prefix_skip, True return prefix, prefix_skip, True
442 470 return prefix, prefix_skip, False return prefix, prefix_skip, False
443 471
444 def _get_charset_prefix(pattern):
445 charset = [] # not used
446 charsetappend = charset.append
447 if pattern.data:
472 def _get_charset_prefix(pattern, flags):
473 while True:
474 if not pattern.data:
475 return None
448 476 op, av = pattern.data[0] op, av = pattern.data[0]
449 if op is SUBPATTERN:
450 group, add_flags, del_flags, p = av
451 if p and not (add_flags & SRE_FLAG_IGNORECASE):
452 op, av = p[0]
453 if op is LITERAL:
454 charsetappend((op, av))
455 elif op is BRANCH:
456 c = []
457 cappend = c.append
458 for p in av[1]:
459 if not p:
460 break
461 op, av = p[0]
462 if op is LITERAL:
463 cappend((op, av))
464 else:
465 break
466 else:
467 charset = c
468 elif op is BRANCH:
469 c = []
470 cappend = c.append
471 for p in av[1]:
472 if not p:
473 break
474 op, av = p[0]
475 if op is LITERAL:
476 cappend((op, av))
477 else:
478 break
477 if op is not SUBPATTERN:
478 break
479 group, add_flags, del_flags, pattern = av
480 flags = (flags | add_flags) & ~del_flags
481 if flags & SRE_FLAG_IGNORECASE and flags & SRE_FLAG_LOCALE:
482 return None
483
484 iscased = _get_iscased(flags)
485 if op is LITERAL:
486 if iscased and iscased(av):
487 return None
488 return [(op, av)]
489 elif op is BRANCH:
490 charset = []
491 charsetappend = charset.append
492 for p in av[1]:
493 if not p:
494 return None
495 op, av = p[0]
496 if op is LITERAL and not (iscased and iscased(av)):
497 charsetappend((op, av))
479 498 else: else:
480 charset = c
481 elif op is IN:
482 charset = av
483 return charset
499 return None
500 return charset
501 elif op is IN:
502 charset = av
503 if iscased:
504 for op, av in charset:
505 if op is LITERAL:
506 if iscased(av):
507 return None
508 elif op is RANGE:
509 if av[1] > 0xffff:
510 return None
511 if any(map(iscased, range(av[0], av[1]+1))):
512 return None
513 return charset
514 return None
484 515
485 516 def _compile_info(code, pattern, flags): def _compile_info(code, pattern, flags):
486 517 # internal: compile an info block. in the current version, # internal: compile an info block. in the current version,
 
... ... def _compile_info(code, pattern, flags):
496 527 prefix = [] prefix = []
497 528 prefix_skip = 0 prefix_skip = 0
498 529 charset = [] # not used charset = [] # not used
499 if not (flags & SRE_FLAG_IGNORECASE):
530 if not (flags & SRE_FLAG_IGNORECASE and flags & SRE_FLAG_LOCALE):
500 531 # look for literal prefix # look for literal prefix
501 prefix, prefix_skip, got_all = _get_literal_prefix(pattern)
532 prefix, prefix_skip, got_all = _get_literal_prefix(pattern, flags)
502 533 # if no prefix, look for charset prefix # if no prefix, look for charset prefix
503 534 if not prefix: if not prefix:
504 charset = _get_charset_prefix(pattern)
535 charset = _get_charset_prefix(pattern, flags)
505 536 ## if prefix: ## if prefix:
506 537 ## print("*** PREFIX", prefix, prefix_skip) ## print("*** PREFIX", prefix, prefix_skip)
507 538 ## if charset: ## if charset:
 
... ... def _compile_info(code, pattern, flags):
536 567 # generate overlap table # generate overlap table
537 568 code.extend(_generate_overlap_table(prefix)) code.extend(_generate_overlap_table(prefix))
538 569 elif charset: elif charset:
570 charset, hascased = _optimize_charset(charset)
571 assert not hascased
539 572 _compile_charset(charset, flags, code) _compile_charset(charset, flags, code)
540 573 code[skip] = len(code) - skip code[skip] = len(code) - skip
541 574
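The compiler changes above hinge on the two new C predicates, `_sre.ascii_iscased` and `_sre.unicode_iscased`. The following is an illustrative pure-Python sketch of their behavior (mirroring `_sre_ascii_iscased_impl`/`_sre_unicode_iscased_impl` in the Modules/_sre.c hunk below), not the module's actual code: a character is "cased" if lowercasing or uppercasing it changes the result.

```python
# Illustrative pure-Python equivalents of the new C helpers in this commit.

def ascii_iscased(ch: int) -> bool:
    # Under ASCII rules only A-Z and a-z are cased.
    return ord('A') <= ch <= ord('Z') or ord('a') <= ch <= ord('z')

def unicode_iscased(ch: int) -> bool:
    c = chr(ch)
    return c != c.lower() or c != c.upper()

# sre_compile uses these to skip the *_IGNORE opcodes entirely when a
# literal or character range contains no cased characters.
print(unicode_iscased(ord('a')))   # True
print(unicode_iscased(ord('7')))   # False
print(ascii_iscased(0x0130))       # False: U+0130 is outside ASCII
```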
File Lib/test/test_re.py changed (mode: 100644) (index b5b7cff9a2..3129f7e988)
... ... class ReTests(unittest.TestCase):
891 891 lo = ord(c.lower()) lo = ord(c.lower())
892 892 self.assertEqual(_sre.ascii_tolower(i), lo) self.assertEqual(_sre.ascii_tolower(i), lo)
893 893 self.assertEqual(_sre.unicode_tolower(i), lo) self.assertEqual(_sre.unicode_tolower(i), lo)
894 iscased = c in string.ascii_letters
895 self.assertEqual(_sre.ascii_iscased(i), iscased)
896 self.assertEqual(_sre.unicode_iscased(i), iscased)
894 897
895 898 for i in list(range(128, 0x1000)) + [0x10400, 0x10428]: for i in list(range(128, 0x1000)) + [0x10400, 0x10428]:
896 899 c = chr(i) c = chr(i)
897 900 self.assertEqual(_sre.ascii_tolower(i), i) self.assertEqual(_sre.ascii_tolower(i), i)
898 901 if i != 0x0130: if i != 0x0130:
899 902 self.assertEqual(_sre.unicode_tolower(i), ord(c.lower())) self.assertEqual(_sre.unicode_tolower(i), ord(c.lower()))
903 iscased = c != c.lower() or c != c.upper()
904 self.assertFalse(_sre.ascii_iscased(i))
905 self.assertEqual(_sre.unicode_iscased(i),
906 c != c.lower() or c != c.upper())
900 907
901 908 self.assertEqual(_sre.ascii_tolower(0x0130), 0x0130) self.assertEqual(_sre.ascii_tolower(0x0130), 0x0130)
902 909 self.assertEqual(_sre.unicode_tolower(0x0130), ord('i')) self.assertEqual(_sre.unicode_tolower(0x0130), ord('i'))
910 self.assertFalse(_sre.ascii_iscased(0x0130))
911 self.assertTrue(_sre.unicode_iscased(0x0130))
903 912
904 913 def test_not_literal(self): def test_not_literal(self):
905 914 self.assertEqual(re.search(r"\s([^a])", " b").group(1), "b") self.assertEqual(re.search(r"\s([^a])", " b").group(1), "b")
File Misc/NEWS changed (mode: 100644) (index 1828b01065..7a79521efd)
... ... Extension Modules
320 320 Library Library
321 321 ------- -------
322 322
323 - bpo-30285: Optimized case-insensitive matching and searching of regular
324 expressions.
325
323 326 - bpo-29990: Fix range checking in GB18030 decoder. Original patch by Ma Lin. - bpo-29990: Fix range checking in GB18030 decoder. Original patch by Ma Lin.
324 327
325 328 - bpo-29979: rewrite cgi.parse_multipart, reusing the FieldStorage class and - bpo-29979: rewrite cgi.parse_multipart, reusing the FieldStorage class and
File Modules/_sre.c changed (mode: 100644) (index a86c5f252b..6873f1db43)
... ... _sre_getcodesize_impl(PyObject *module)
273 273 return sizeof(SRE_CODE); return sizeof(SRE_CODE);
274 274 } }
275 275
276 /*[clinic input]
277 _sre.ascii_iscased -> bool
278
279 character: int
280 /
281
282 [clinic start generated code]*/
283
284 static int
285 _sre_ascii_iscased_impl(PyObject *module, int character)
286 /*[clinic end generated code: output=4f454b630fbd19a2 input=9f0bd952812c7ed3]*/
287 {
288 unsigned int ch = (unsigned int)character;
289 return ch != sre_lower(ch) || ch != sre_upper(ch);
290 }
291
292 /*[clinic input]
293 _sre.unicode_iscased -> bool
294
295 character: int
296 /
297
298 [clinic start generated code]*/
299
300 static int
301 _sre_unicode_iscased_impl(PyObject *module, int character)
302 /*[clinic end generated code: output=9c5ddee0dc2bc258 input=51e42c3b8dddb78e]*/
303 {
304 unsigned int ch = (unsigned int)character;
305 return ch != sre_lower_unicode(ch) || ch != sre_upper_unicode(ch);
306 }
307
276 308 /*[clinic input] /*[clinic input]
277 309 _sre.ascii_tolower -> int _sre.ascii_tolower -> int
278 310
 
... ... static PyTypeObject Scanner_Type = {
2750 2782 static PyMethodDef _functions[] = { static PyMethodDef _functions[] = {
2751 2783 _SRE_COMPILE_METHODDEF _SRE_COMPILE_METHODDEF
2752 2784 _SRE_GETCODESIZE_METHODDEF _SRE_GETCODESIZE_METHODDEF
2785 _SRE_ASCII_ISCASED_METHODDEF
2786 _SRE_UNICODE_ISCASED_METHODDEF
2753 2787 _SRE_ASCII_TOLOWER_METHODDEF _SRE_ASCII_TOLOWER_METHODDEF
2754 2788 _SRE_UNICODE_TOLOWER_METHODDEF _SRE_UNICODE_TOLOWER_METHODDEF
2755 2789 {NULL, NULL} {NULL, NULL}
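The new functions are registered in `_functions[]` above and exposed on the private `_sre` module (the clinic header below carries the generated argument-parsing wrappers). A quick interactive check, with the caveat that `_sre` is an internal CPython module rather than a public API; the expected values match the test_re.py assertions in this commit:

```python
import _sre  # internal CPython module; not a stable public API

print(_sre.ascii_iscased(ord('a')))   # True
print(_sre.ascii_iscased(0x0130))     # False: LATIN CAPITAL LETTER I WITH DOT ABOVE
print(_sre.unicode_iscased(0x0130))   # True
print(_sre.unicode_tolower(0x0130))   # 105, i.e. ord('i')
```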
File Modules/clinic/_sre.c.h changed (mode: 100644) (index 8056eda3b7..1e60686038)
... ... exit:
29 29 return return_value; return return_value;
30 30 } }
31 31
32 PyDoc_STRVAR(_sre_ascii_iscased__doc__,
33 "ascii_iscased($module, character, /)\n"
34 "--\n"
35 "\n");
36
37 #define _SRE_ASCII_ISCASED_METHODDEF \
38 {"ascii_iscased", (PyCFunction)_sre_ascii_iscased, METH_O, _sre_ascii_iscased__doc__},
39
40 static int
41 _sre_ascii_iscased_impl(PyObject *module, int character);
42
43 static PyObject *
44 _sre_ascii_iscased(PyObject *module, PyObject *arg)
45 {
46 PyObject *return_value = NULL;
47 int character;
48 int _return_value;
49
50 if (!PyArg_Parse(arg, "i:ascii_iscased", &character)) {
51 goto exit;
52 }
53 _return_value = _sre_ascii_iscased_impl(module, character);
54 if ((_return_value == -1) && PyErr_Occurred()) {
55 goto exit;
56 }
57 return_value = PyBool_FromLong((long)_return_value);
58
59 exit:
60 return return_value;
61 }
62
63 PyDoc_STRVAR(_sre_unicode_iscased__doc__,
64 "unicode_iscased($module, character, /)\n"
65 "--\n"
66 "\n");
67
68 #define _SRE_UNICODE_ISCASED_METHODDEF \
69 {"unicode_iscased", (PyCFunction)_sre_unicode_iscased, METH_O, _sre_unicode_iscased__doc__},
70
71 static int
72 _sre_unicode_iscased_impl(PyObject *module, int character);
73
74 static PyObject *
75 _sre_unicode_iscased(PyObject *module, PyObject *arg)
76 {
77 PyObject *return_value = NULL;
78 int character;
79 int _return_value;
80
81 if (!PyArg_Parse(arg, "i:unicode_iscased", &character)) {
82 goto exit;
83 }
84 _return_value = _sre_unicode_iscased_impl(module, character);
85 if ((_return_value == -1) && PyErr_Occurred()) {
86 goto exit;
87 }
88 return_value = PyBool_FromLong((long)_return_value);
89
90 exit:
91 return return_value;
92 }
93
32 94 PyDoc_STRVAR(_sre_ascii_tolower__doc__, PyDoc_STRVAR(_sre_ascii_tolower__doc__,
33 95 "ascii_tolower($module, character, /)\n" "ascii_tolower($module, character, /)\n"
34 96 "--\n" "--\n"
 
... ... _sre_SRE_Scanner_search(ScannerObject *self, PyObject *Py_UNUSED(ignored))
715 777 { {
716 778 return _sre_SRE_Scanner_search_impl(self); return _sre_SRE_Scanner_search_impl(self);
717 779 } }
718 /*[clinic end generated code: output=811e67d7f8f5052e input=a9049054013a1b77]*/
780 /*[clinic end generated code: output=5fe47c49e475cccb input=a9049054013a1b77]*/