"""
Tools for testing Python code and recording things like results, printed
output, or even traces of calls to certain functions.

harness.py

These tools start with a few functions for creating "payloads" which are
functions that take a context dictionary as a single argument and which
return dictionaries of new context slots to establish (it's no
coincidence that this same formula is what is expected of a context
builder function; payloads are context builders).

Once a payload is established, this module offers a variety of
augmentation functions which can create modified payloads with additional
functionality. Note that some of these augmentations interfere with each
other in minor ways, and should therefore be applied before others.

Also note that not every augmentation makes sense to apply to every kind
of payload (in particular, module import payloads don't make use of the
"module" context slot, so augmentations like `with_module_decorations`
can't be usefully applied to them).
"""

import copy
import sys
import imp
import io
import re
import traceback
import shelve
import os
import ast

import turtle

from . import load
from . import mast
from . import context_utils
from . import html_tools
from . import timeout
from . import logging
from . import time_utils
from . import phrasing


#---------#
# Globals #
#---------#

AUGMENTATION_ORDER = [
    "with_module_decorations",
    "tracing_function_calls",
    "with_cleanup",
    "with_setup",  # must be below with_cleanup!
    "capturing_printed_output",
    "with_fake_input",
    "with_timeout",
    "capturing_turtle_drawings",
    "capturing_wavesynth_audio",
    "capturing_file_contents",
    "sampling_distribution_of_results",
    "run_in_sandbox",
    "run_for_base_and_ref_values"
]
"""
Ideal order in which to apply augmentations in this module when multiple
augmentations are being applied to the same payload. Because certain
augmentations interfere with others if not applied in the correct order,
applying them in order is important, although in certain cases special
applications might want to deviate from this order.

Note that even following this order, not all augmentations are really
compatible with each other. For example, if one were to use
`with_module_decorations` to perform intensive decoration (which is
somewhat time-consuming per-run) and also attempt to use
`sampling_distribution_of_results` with a large sample count, the
resulting payload might be prohibitively slow.
"""


#----------------------------#
# Payload creation functions #
#----------------------------#

def create_module_import_payload(
    name_prefix="loaded_",
    use_fix_parse=True,
    prep=None,
    wrap=None
):
    """
    This function returns a payload function which imports the file
    identified by the "file_path" slot of the given context, using
    the "filename" slot of the given context as the name of the file for
    the purpose of deciding a module name, and establishing the resulting
    module in the "module" slot along with "original_source" and "source"
    slots holding the original and (possibly modified) source code.

    It reads the "task_info" context slot to access the specification and
    load the helper files list to make available during module execution.

    A custom `name_prefix` may be given which will alter the name of the
    imported module in sys.modules and in the __name__ automatic
    variable as the module is being created; use this to avoid conflicts
    when importing submitted and solution modules that have the same
    filename.
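For instance (a self-contained sketch that does not use this module's
loader; `load_under_prefix` is a hypothetical stand-in), prefixed names
let two files that are both named `utils.py` coexist in `sys.modules`:

```python
import sys
import types

def load_under_prefix(name_prefix, filename, source):
    # Hypothetical helper: register compiled source under a prefixed
    # module name so same-named files don't collide in sys.modules.
    full_name = name_prefix + filename.rsplit(".", 1)[0]
    mod = types.ModuleType(full_name)
    mod.__file__ = filename
    exec(compile(source, filename, "exec"), mod.__dict__)
    sys.modules[full_name] = mod
    return mod

submitted = load_under_prefix("submitted_", "utils.py", "WHO = 'student'")
solution = load_under_prefix("solution_", "utils.py", "WHO = 'solution'")
```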

    If `use_fix_parse` is provided, `potluck.load.fix_parse` will be used
    instead of just `mast.parse`, and in addition to generating
    "original_source", "source", "scope", and "module" slots, a
    "parse_errors" slot will be generated, holding a (hopefully empty)
    list of Exception objects that were 'successfully' ignored during
    parsing.

    `prep` and/or `wrap` functions may be supplied; the `prep` function
    will be given the module source as a string and must return it (or a
    modified version); the `wrap` function will be given the compiled
    module object and whatever it returns will be substituted for the
    original module.
    """
    def payload(context):
        """
        Imports a specific file as a module, using a prefix in addition
        to the filename itself to determine the module name. Returns a
        'module' context slot.
        """
        filename = context_utils.extract(context, "filename")
        file_path = context_utils.extract(context, "file_path")
        file_path = os.path.abspath(file_path)
        full_name = name_prefix + filename

        # Read the file
        with open(file_path, 'r', encoding="utf-8") as fin:
            original_source = fin.read()

        # Call our prep function
        if prep:
            source = prep(original_source)
        else:
            source = original_source

        # Decide if we're using fix_parse or not
        if use_fix_parse:
            # Parse using fix_parse
            fixed, node, errors = load.fix_parse(source, full_name)
        else:
            # Just parse normally without attempting to steamroll errors
            fixed = source
            node = mast.parse(source, filename=full_name)
            errors = None

        # Since this payload is already running inside a sandbox
        # directory, we don't need to provide a sandbox argument here.
        module = load.create_module_from_code(
            node,
            full_name,
            on_disk=file_path,
            sandbox=None
        )

        # Create result as a copy of the base context
        result = copy.copy(context)
        result.update({
            "original_source": original_source,
            "source": fixed,
            "scope": node,
            "module": module
        })
        if errors:
            result["parse_errors"] = errors

        # Wrap the resulting module if a wrap function was provided
        if wrap:
            result["module"] = wrap(result["module"])

        # Return our result
        return result

    return payload


def create_read_variable_payload(varname):
    """
    Creates a payload function which retrieves the given variable from
    the "module" slot of the given context when run, placing the
    retrieved value into a "value" slot. If the variable name is a
    `potluck.context_utils.ContextualValue`, it will be replaced with a
    real value first. The "variable" slot of the result context will be
    set to the actual variable name used.
    """
    def payload(context):
        """
        Retrieves a specific variable from a certain module. Returns a
        "value" context slot.
        """
        nonlocal varname
        module = context_utils.extract(context, "module")
        if isinstance(varname, context_utils.ContextualValue):
            try:
                varname = varname.replace(context)
            except Exception:
                logging.log(
                    "Encountered error while attempting to substitute"
                    " contextual value:"
                )
                logging.log(traceback.format_exc())
                raise

        # Create result as a copy of the base context
        result = copy.copy(context)
        result.update({
            "variable": varname,
            "value": getattr(module, varname)
        })
        return result

    return payload


def create_run_function_payload(
    fname,
    posargs=None,
    kwargs=None,
    copy_args=True
):
    """
    Creates a payload function which retrieves a function from the
    "module" slot of the given context and runs it with certain
    positional and/or keyword arguments, returning a "value" context slot
    containing the function's result. The arguments used are also placed
    into "args" and "kwargs" context slots in case those are useful for
    later checks, and the function name is placed into a "function"
    context slot.

    If `copy_args` is set to True (the default), deep copies of argument
    values will be made before they are passed to the target function
    (note that keyword argument keys are not copied, although they should
    be strings in any case). The "args" and "kwargs" slots will also get
    copies of the arguments, not the original values, and these will be
    separate copies from those given to the function, so they'll retain
    the values used as input even after the function is finished.
    However, if `copy_args` is set to True, "used_args" and "used_kwargs"
    slots will also be added; these hold the actual arguments sent to the
    function so that any changes made by the function can be measured if
    necessary.
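For example (a standalone sketch of why the copies matter, not using
this module's machinery): if the target function mutates an argument,
only separate deep copies preserve the original input for later checks:

```python
import copy

def scale_in_place(nums):
    # A function under test that mutates its argument
    for i in range(len(nums)):
        nums[i] *= 2
    return nums

original = [1, 2, 3]
used = copy.deepcopy(original)      # the copy actually passed in
recorded = copy.deepcopy(original)  # separate copy for the "args" slot
value = scale_in_place(used)
# "recorded" still shows the input; "used" shows what the function did
```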

    If the function name or any of the argument values (or keyword
    argument keys) are `potluck.context_utils.ContextualValue` instances,
    these will be replaced with actual values using the given context
    before the function is run. This step happens before argument
    copying, and before the "args" and "kwargs" result slots are set up.
    """
    posargs = posargs or ()
    kwargs = kwargs or {}

    def payload(context):
        """
        Runs a specific function in a certain module with specific
        arguments. Returns a "value" context slot.
        """
        nonlocal fname
        module = context_utils.extract(context, "module")
        if isinstance(fname, context_utils.ContextualValue):
            try:
                fname = fname.replace(context)
            except Exception:
                logging.log(
                    "Encountered error while attempting to substitute"
                    " contextual value:"
                )
                logging.log(traceback.format_exc())
                raise
        fn = getattr(module, fname)

        real_posargs = []
        initial_posargs = []
        real_kwargs = {}
        initial_kwargs = {}
        for arg in posargs:
            if isinstance(arg, context_utils.ContextualValue):
                try:
                    arg = arg.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            if copy_args:
                real_posargs.append(copy.deepcopy(arg))
                initial_posargs.append(copy.deepcopy(arg))
            else:
                real_posargs.append(arg)
                initial_posargs.append(arg)

        for key in kwargs:
            if isinstance(key, context_utils.ContextualValue):
                try:
                    key = key.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            value = kwargs[key]
            if isinstance(value, context_utils.ContextualValue):
                try:
                    value = value.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            if copy_args:
                real_kwargs[key] = copy.deepcopy(value)
                initial_kwargs[key] = copy.deepcopy(value)
            else:
                real_kwargs[key] = value
                initial_kwargs[key] = value

        # Create result as a copy of the base context
        result = copy.copy(context)
        result.update({
            "value": fn(*real_posargs, **real_kwargs),
            "function": fname,
            "args": initial_posargs,
            "kwargs": initial_kwargs,
        })

        if copy_args:
            result["used_args"] = real_posargs
            result["used_kwargs"] = real_kwargs

        return result

    return payload


def create_run_harness_payload(
    harness,
    fname,
    posargs=None,
    kwargs=None,
    copy_args=False
):
    """
    Creates a payload function which retrieves a function from the
    "module" slot of the given context and passes it to a custom harness
    function for testing. The harness function is given the function
    object to test as its first parameter, followed by the positional and
    keyword arguments specified here. Its result is placed in the "value"
    context slot. Like `create_run_function_payload`, "args", "kwargs",
    and "function" slots are established, and a "harness" slot is
    established which holds the harness function used.

    If `copy_args` is set to True, deep copies of argument values will be
    made before they are passed to the harness function (note that keyword
    argument keys are not copied, although they should be strings in any
    case).

    If the function name or any of the argument values (or keyword
    argument keys) are `potluck.context_utils.ContextualValue` instances,
    these will be replaced with actual values using the given context
    before the function is run. This step happens before argument
    copying and before these items are placed into their result slots.
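For example, a harness might check behavior that a single direct call
can't, such as composing the function with itself (a hypothetical
sketch; `call_twice_harness` and `increment` are invented here):

```python
def call_twice_harness(fn, start):
    # The harness gets the function object under test as its first
    # parameter, then the configured arguments; its return value would
    # land in the "value" slot.
    return fn(fn(start))

def increment(x):  # stands in for a function fetched from the module
    return x + 1

value = call_twice_harness(increment, 3)
```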
    """
    posargs = posargs or ()
    kwargs = kwargs or {}

    def payload(context):
        """
        Tests a specific function in a certain module using a test
        harness, with specific arguments. Returns a "value" context slot.
        """
        nonlocal fname
        module = context_utils.extract(context, "module")
        if isinstance(fname, context_utils.ContextualValue):
            try:
                fname = fname.replace(context)
            except Exception:
                logging.log(
                    "Encountered error while attempting to substitute"
                    " contextual value:"
                )
                logging.log(traceback.format_exc())
                raise
        fn = getattr(module, fname)

        real_posargs = []
        real_kwargs = {}
        for arg in posargs:
            if isinstance(arg, context_utils.ContextualValue):
                try:
                    arg = arg.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            if copy_args:
                arg = copy.deepcopy(arg)

            real_posargs.append(arg)

        for key in kwargs:
            if isinstance(key, context_utils.ContextualValue):
                try:
                    key = key.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            value = kwargs[key]
            if isinstance(value, context_utils.ContextualValue):
                try:
                    value = value.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            if copy_args:
                value = copy.deepcopy(value)

            real_kwargs[key] = value

        # Create result as a copy of the base context
        result = copy.copy(context)
        result.update({
            "value": harness(fn, *real_posargs, **real_kwargs),
            "harness": harness,
            "function": fname,
            "args": real_posargs,
            "kwargs": real_kwargs
        })

        # Return our result
        return result

    return payload


def make_module(statements):
    """
    Creates an ast.Module object from a list of statements. Sets empty
    type_ignores if we're in a version that requires them.
    """
    vi = sys.version_info
    if vi[0] > 3 or (vi[0] == 3 and vi[1] >= 8):
        return ast.Module(statements, [])
    else:
        return ast.Module(statements)


def create_execute_code_block_payload(block_name, src, nodes=None):
    """
    Creates a payload function which executes a series of statements
    (provided as a multi-line code string OR list of AST nodes) in the
    current context's "module" slot. A block name (a string) must be
    provided and will appear as the filename if tracebacks are generated.

    The 'src' argument must be a string, and dictates how the code will
    be displayed; the 'nodes' argument must be a collection of AST nodes,
    and dictates what code will actually be executed. If 'nodes' is not
    provided, the given source code will be parsed to create a list of
    AST nodes.

    The payload runs the final expression or statement last, and if it
    was an expression, its return value will be put in the "value"
    context slot of the result; otherwise None will be put there (of
    course, a final expression that evaluates to None would give the same
    result).

    The source code given is placed in the "block" context slot, while
    the nodes used are placed in the "block_nodes" context slot, and the
    block name is placed in the "block_name" context slot.

    Note that although direct variable reassignments and new variables
    created by the block of code won't affect the module it's run in,
    more indirect changes WILL, so be extremely careful about side
    effects!
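The exec-then-eval split the payload performs can be sketched in
isolation (assuming Python 3.8+ for the `ast.Module` signature):

```python
import ast

src = "x = 2\ny = 3\nx * y"
nodes = ast.parse(src).body

env = {}
# Run everything except the last node as ordinary statements...
exec(compile(ast.Module(nodes[:-1], []), "<block>", "exec"), env)

# ...then, since the last node is an expression, evaluate it for
# its value; a final statement would instead yield None.
last = nodes[-1]
value = eval(
    compile(ast.Expression(last.value), "<block>(final)", "eval"),
    env
)
```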
    """
    # Parse src if nodes weren't specified explicitly
    if nodes is None:
        nodes = ast.parse(src).body

    def payload(context):
        """
        Runs a sequence of statements or expressions (provided as AST
        nodes) in a certain module. Creates a "value" context slot with
        the result of the last expression, or None if the last node was a
        statement.
        """
        module = context_utils.extract(context, "module")

        # Separate nodes into start and last
        start = nodes[:-1]
        last = nodes[-1]

        # Create a cloned execution environment
        env = {}
        env.update(module.__dict__)

        if len(start) > 0:
            code = compile(make_module(start), block_name, 'exec')
            exec(code, env)

        if isinstance(last, ast.Expr):
            # Treat last line as an expression and grab its value
            last_code = compile(
                ast.Expression(last.value),
                block_name + "(final)",
                'eval'
            )
            value = eval(last_code, env)
        else:
            # Guess it wasn't an expression; just execute it
            last_code = compile(
                make_module([last]),
                block_name + "(final)",
                'exec'
            )
            exec(last_code, env)
            value = None

        # Create result as a copy of the base context
        result = copy.copy(context)
        result.update({
            "value": value,
            "block_name": block_name,
            "block": src,
            "block_nodes": nodes
        })

        # Return our result
        return result

    return payload


#--------------------------------#
# Harness augmentation functions #
#--------------------------------#

def run_for_base_and_ref_values(
    payload,
    used_by_both=None,
    cache_ref=True,
    ref_only=False
):
    """
    Accepts a payload function and returns a modified payload function
    which runs the provided function twice, the second time using ref_*
    context values and setting ref_* versions of the original payload's
    result slots.
    If a certain non-ref_* value needs to be available to the reference
    payload other than the standard
    `potluck.context_utils.BASE_CONTEXT_SLOTS`, it must be provided in
    the `used_by_both` list.

    Note that when applying multiple payload augmentations, this one
    should be applied last.

    The default behavior caches the reference values it produces, under
    the assumption that the reference run only needs to happen again
    when the cached reference values are older than the solution file or
    the specification module. If this assumption is incorrect, you
    should set `cache_ref` to False to actually run the reference
    payload every time.

    If you only care about the reference results (e.g., when compiling a
    snippet) you can set `ref_only` to True, and the initial run will be
    skipped.

    TODO: Shelf doesn't support multiple-concurrent access!!!
    TODO: THIS
    """
    used_by_both = used_by_both or []

    def double_payload(context):
        """
        Runs a payload twice, once normally and again against a context
        where all ref_* slots have been merged into their non-ref_*
        equivalents. Results from the second run are stored in ref_*
        versions of the slots they would normally occupy, alongside the
        original results. When possible, fetches cached results for the
        ref_ values instead of actually running the payload a second
        time.
        """
        # Get initial results
        if ref_only:
            full_result = {}
        else:
            full_result = payload(context)

        # Figure out our cache key
        taskid = context["task_info"]["id"]
        goal_id = context["goal_id"]
        nth = context["which_context"]
        # TODO: This cache key doesn't include enough info about the
        # context object, apparently...
        cache_key = taskid + ":" + goal_id + ":" + str(nth)
        ts_key = cache_key + "::ts"

        # Check the cache
        cache_file = context["task_info"]["reference_cache_file"]
        use_cached = True
        cached = None

        ignore_cache = context["task_info"]["ignore_cache"]
        # TODO: Fix caching!!!
        ignore_cache = True

        # Respect ignore_cache setting
        if not ignore_cache:
            with shelve.open(cache_file) as shelf:
                if ts_key not in shelf:
                    use_cached = False
                else:  # need to check timestamp
                    ts = shelf[ts_key]

                    # Get modification times for spec + solution
                    spec = context["task_info"]["specification"]
                    mtimes = []
                    for fn in [spec.__file__] + [
                        os.path.join(spec.soln_path, f)
                        for f in spec.soln_files
                    ]:
                        mtimes.append(os.stat(fn).st_mtime)

                    # Units are seconds
                    changed_at = time_utils.time_from_timestamp(
                        max(mtimes)
                    )

                    # Convert cache timestamp to seconds and compare
                    cache_time = time_utils.time_from_timestring(ts)

                    # Use cache if it was produced *after* last change
                    if cache_time <= changed_at:
                        use_cached = False
                    # else leave it at default True

                # grab cached values
                if use_cached:
                    cached = shelf[cache_key]

        # Skip re-running the payload if we have a cached result
        if cached is not None:
            ref_result = cached
        else:
            # Create a context where each ref_* slot value is assigned to
            # the equivalent non-ref_* slot
            ref_context = {
                key: context[key]
                for key in context_utils.BASE_CONTEXT_SLOTS
            }
            for key in context:
                if key in used_by_both:
                    ref_context[key] = context[key]
                elif key.startswith("ref_"):
                    ref_context[key[4:]] = context[key]
                    # Retain original ref_ slots alongside collapsed slots
                    ref_context[key] = context[key]

            # Get results from collapsed context
            try:
                ref_result = payload(ref_context)
            except context_utils.MissingContextError as e:
                e.args = (
                    e.args[0] + " (in reference payload)",
                ) + e.args[1:]
                raise e

            # Make an entry in our cache
            if not ignore_cache:
                with shelve.open(cache_file) as shelf:
                    # Just cache new things added by ref payload
                    hollowed = {}
                    for key in ref_result:
                        if (
                            key not in context
                            or ref_result[key] != context[key]
                        ):
                            hollowed[key] = ref_result[key]
                    # If ref payload produces uncacheable results, we
                    # can't cache anything
                    try:
                        shelf[cache_key] = ref_result
                        shelf[ts_key] = time_utils.timestring()
                    except Exception:
                        logging.log(
                            "Payload produced uncacheable reference"
                            " value(s):"
                        )
                        logging.log(html_tools.string_traceback())

        # Assign collapsed context results into final result under ref_*
        # versions of their slots
        for slot in ref_result:
            full_result["ref_" + slot] = ref_result[slot]

        return full_result

    return double_payload


def run_in_sandbox(payload):
    """
    Returns a modified payload function which runs the provided base
    payload, but first sets the current directory to the sandbox
    directory specified by the provided context's "sandbox" slot.
    Afterwards, it changes back to the original directory.

    TODO: More stringent sandboxing?
    """
    def sandboxed_payload(context):
        """
        A payload function which runs a base payload within a specific
        sandbox directory.
        """
        orig_cwd = os.getcwd()
        try:
            os.chdir(context_utils.extract(context, "sandbox"))
            result = payload(context)
        finally:
            os.chdir(orig_cwd)

        return result

    return sandboxed_payload


def with_setup(payload, setup):
    """
    Creates a modified payload which runs the given setup function
    (with the incoming context dictionary as an argument) right before
    running the base payload. The setup function's return value is used
    as the context for the base payload.
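The composition itself is simple; in sketch form (standalone, with toy
payload and setup functions, and omitting the None check the real
wrapper performs):

```python
def with_setup_sketch(payload, setup):
    # Run setup first, then feed its return value to the payload as
    # the context it operates on.
    def setup_payload(context):
        return payload(setup(context))
    return setup_payload

def toy_payload(context):
    return {**context, "value": context["x"] * 10}

def toy_setup(context):
    return {**context, "x": context["x"] + 1}

augmented = with_setup_sketch(toy_payload, toy_setup)
result = augmented({"x": 4})  # setup bumps x to 5 before the payload runs
```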

    Note that based on the augmentation order, function calls made during
    the setup WILL NOT be captured as part of a trace if
    `tracing_function_calls` is also used, but printed output during the
    setup WILL be available via `capturing_printed_output` if that is
    used.
    """
    def setup_payload(context):
        """
        Runs a base payload after running a setup function.
        """
        context = setup(context)
        if context is None:
            raise ValueError("Context setup function returned None!")
        return payload(context)

    return setup_payload


def with_cleanup(payload, cleanup):
    """
    Creates a modified payload which runs the given cleanup function
    (with the original payload's result, which is a context dictionary,
    as an argument) right after running the base payload. The return
    value is the cleanup function's return value.

    Note that based on the augmentation order, function calls made during
    the cleanup WILL NOT be captured as part of a trace if
    `tracing_function_calls` is also used, but printed output during the
    cleanup WILL be available via `capturing_printed_output` if that is
    used.
    """
    def cleanup_payload(context):
        """
        Runs a base payload and then runs a cleanup function.
        """
        result = payload(context)
        result = cleanup(result)
        return result

    return cleanup_payload


def capturing_printed_output(
    payload,
    capture_errors=False,
    capture_stderr=False
):
    """
    Creates a modified version of the given payload which establishes an
    "output" slot in addition to the base slots, holding a string
    consisting of all output that was printed during the execution of
    the original payload (specifically, anything that would have been
    written to stdout). During payload execution, the captured text is
    not actually printed as it would normally have been. If the payload
    itself already established an "output" slot, that value will be
    discarded in favor of the value established by this mix-in.

    If `capture_errors` is set to True, then any `Exception` generated
    by running the original payload will be captured as part of the
    string output instead of bubbling out to the rest of the system.
    However, context slots established by inner payload wrappers cannot
    be retained if there is an `Exception` seen by this wrapper, since
    any inner wrappers would not have gotten a chance to return in that
    case. If an error is captured, an "error" context slot will be set to
    the message for the exception that was caught.

    If `capture_stderr` is set to True, then things printed to stderr
    will be captured as well as those printed to stdout, and will be put
    in a separate "error_log" slot. In this case, if `capture_errors` is
    also True, the printed part of any traceback will be captured as part
    of the error_log, not the output.
    """
    def capturing_payload(context):
        """
        Runs a base payload while also capturing printed output into an
        "output" slot.
        """
        # Set up output capturing
        original_stdout = sys.stdout
        string_stdout = io.StringIO()
        sys.stdout = string_stdout

        if capture_stderr:
            original_stderr = sys.stderr
            string_stderr = io.StringIO()
            sys.stderr = string_stderr

        # Run the base payload
        try:
            result = payload(context)
        except Exception as e:
            if capture_errors:
                if capture_stderr:
                    string_stderr.write('\n' + html_tools.string_traceback())
                else:
                    string_stdout.write('\n' + html_tools.string_traceback())
                result = {"error": str(e)}
            else:
                raise
        finally:
            # Restore original stdout/stderr
            sys.stdout = original_stdout
            if capture_stderr:
                sys.stderr = original_stderr

        # Add our captured output to the "output" slot of the result
        result["output"] = string_stdout.getvalue()

        if capture_stderr:
            result["error_log"] = string_stderr.getvalue()

        return result

    return capturing_payload


def with_fake_input(payload, inputs, extra_policy="error"):
    """
    Creates a modified payload function which runs the given payload but
    supplies a pre-determined sequence of strings whenever `input` is
    called instead of actually prompting for values from stdin. The
    prompts and input values that would have shown up are still printed,
    although a pair of zero-width word-joiner characters is added before
    and after the fake input value at each prompt in the printed output.

    The `inputs` and `extra_policy` arguments are passed to
    `create_mock_input` to create the fake input setup.

    The result will have "inputs" and "input_policy" context slots added
    that store the specific inputs used, and the extra input policy.
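The word-joiner markers make the injected values machine-recoverable;
for example, using the same pattern as `FAKE_INPUT_PATTERN` defined
later in this module:

```python
import re

# Fake input values are wrapped in paired word-joiner (U+2060)
# characters when they are echoed to the captured output.
pattern = "\u2060\u2060((?:[^\u2060]|(?:\u2060[^\u2060]))*)\u2060\u2060"

captured = "Name: \u2060\u2060Alice\u2060\u2060\nHi Alice!\n"
values = re.findall(pattern, captured)    # recover the injected inputs
stripped = re.sub(pattern, "", captured)  # output with inputs removed
```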
    """
    # Create mock input function and input reset function
    mock_input, reset_input = create_mock_input(inputs, extra_policy)

    def fake_input_payload(context):
        """
        Runs a base payload with a mocked input function that returns
        strings from a pre-determined sequence.
        """
        # Replace `input` with our mock version
        import builtins
        original_input = builtins.input
        reset_input()
        builtins.input = mock_input

        # TODO: Is this compatible with optimism's input-manipulation?
        # TODO: Make this work with optimism's stdin-replacement

        # Run the payload
        try:
            result = payload(context)
        finally:
            # Re-enable `input`
            builtins.input = original_input
            reset_input()

        # Add "inputs" and "input_policy" context slots to the result
        result["inputs"] = inputs
        result["input_policy"] = extra_policy

        return result

    return fake_input_payload


FAKE_INPUT_PATTERN = (
    "\u2060\u2060((?:[^\u2060]|(?:\u2060[^\u2060]))*)\u2060\u2060"
)
"""
A regular expression which can be used to find fake input values in
printed output from code that uses a mock input. The first group of each
match will be a fake input value.
"""


def strip_mock_input_values(output):
    """
    Given a printed output string produced by code using mocked inputs,
    returns the same string, with the specific input values stripped out.
    Actually strips any values found between paired word-joiner (U+2060)
    characters, as that's what mock input values are wrapped in.
    """
    return re.sub(FAKE_INPUT_PATTERN, "", output)


def create_mock_input(inputs, extra_policy="error"):
    """
    Creates two functions: a stand-in for `input` that returns strings
    from the given `inputs` sequence, and a reset function that resets
    the first function to the beginning of its inputs list.
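In miniature (a standalone sketch omitting prompt echoing and the
extra_policy options; `make_mock_input` is a simplified stand-in):

```python
def make_mock_input(inputs):
    index = 0

    def mock_input(prompt=""):
        # Return the next canned string instead of reading stdin;
        # raise EOFError when the list runs out (the "error" policy).
        nonlocal index
        if index >= len(inputs):
            raise EOFError
        result = inputs[index]
        index += 1
        return result

    def reset_input():
        # Rewind so the next call starts from the first canned value.
        nonlocal index
        index = 0

    return mock_input, reset_input

mock, reset = make_mock_input(["alice", "42"])
first, second = mock("Name? "), mock("Age? ")
reset()
again = mock()  # back to the first canned value
```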
948 949 The extra_policy specifies what happens if the inputs list runs out: 950 951 - "loop" means that it will be repeated again, ad infinitum. 952 - "hold" means that the last value will be returned for all 953 subsequent input calls. 954 - "error" means an `EOFError` will be raised as if stdin had been 955 closed. 956 957 "hold" is the default policy. 958 """ 959 960 input_index = 0 961 962 def mock_input(prompt=""): 963 """ 964 Function that retrieves the next input from the inputs list and 965 behaves according to the extra_inputs_policy when inputs run out: 966 967 - If extra_inputs_policy is "hold," the last input is returned 968 repeatedly. 969 970 - If extra_inputs_policy is "loop," the cycle of inputs repeats 971 indefinitely. 972 973 - If extra_inputs_policy is "error," (or any other value) an 974 EOFError is raised when the inputs run out. This also happens 975 if the inputs list is empty to begin with. 976 977 This function prints the prompt and the input that it is about to 978 return, so that they appear in printed output just as they would 979 have if normal input() had been called. 980 981 To enable identification of the input values, a pair of 982 zero-width "word joiner" character (U+2060) is printed directly 983 before and directly after each input value. These should not 984 normally be visible when the output is inspected by a human, but 985 can be searched for (and may also influence word wrapping in some 986 contexts). 
        """
        nonlocal input_index
        print(prompt, end="")
        if input_index >= len(inputs):
            if extra_policy == "hold":
                if len(inputs) > 0:
                    result = inputs[-1]
                else:
                    raise EOFError
            elif extra_policy == "loop":
                if len(inputs) > 0:
                    # Start over from the beginning of the inputs list
                    input_index = 0
                    result = inputs[input_index]
                    input_index += 1
                else:
                    raise EOFError
            else:
                raise EOFError
        else:
            result = inputs[input_index]
            input_index += 1

        print('\u2060\u2060' + result + '\u2060\u2060')
        return result

    def reset_input():
        """
        Resets the input list state, so that the next call to input()
        behaves as if it were the first call with respect to the mock
        input function defined above (see create_mock_input).
        """
        nonlocal input_index
        input_index = 0

    # Return our newly-minted mock and reset functions
    return mock_input, reset_input


def with_timeout(payload, time_limit=5):
    """
    Creates a modified payload which terminates itself with a
    `TimeoutError` if it takes longer than the specified time limit (in
    possibly-fractional seconds).

    Note that on systems where `signal.SIGALRM` is not available, we
    have no way of interrupting the original payload, and so only after
    it terminates will a `TimeoutError` be raised, making this function
    MUCH less useful.

    Note that the resulting payload function is NOT re-entrant: only one
    timer can be running at once, and calling the function again while
    it's already running re-starts the timer.
    """
    def timed_payload(context):
        """
        Runs a base payload with a timeout, raising a
        `potluck.timeout.TimeoutError` if the function takes too long.

        See `potluck.timeout` for (horrific) details.
        """
        return timeout.with_sigalrm_timeout(time_limit, payload, (context,))

    return timed_payload


def tracing_function_calls(payload, trace_targets, state_function):
    """
    Augments a payload function such that calls to certain functions of
    interest during the payload's run are traced. This ends up creating
    a "trace" slot in the result context, which holds a trace object
    that consists of a list of trace entries.

    The `trace_targets` argument should be a sequence of strings
    identifying the names of functions to trace calls to. It may contain
    tuples, in which case calls to any function named in the tuple will
    be treated as calls to the first function in the tuple, which is
    useful for collapsing aliases like turtle.fd and turtle.forward.

    The `state_function` argument should be a one-argument function
    which, given a function name, captures some kind of state and
    returns a state object (typically a dictionary).

    Each trace entry in the resulting trace represents one function call
    in the outermost scope and is a dictionary with the following keys:

    - fname: The name of the function that was called.
    - args: A dictionary of arguments passed to the function, mapping
      argument names to their values. For calls to C functions (such as
      most built-in functions), arguments are not available, and this
      key will not be present.
    - result: The return value of the function. May be None if the
      function was terminated due to an exception, but there's no way
      to distinguish that from an intentional None return. For calls to
      C functions, this key will not be present.
    - pre_state: A state object resulting from calling the given
      state_function just before the traced function call starts, with
      the function name as its only argument. Calls made during the
      execution of the state function will not be traced.
    - post_state: The same kind of state object, but captured right
      before the return of the traced function.
    - during: A list of trace entries in the same format representing
      traced function calls which were initiated and returned before
      the end of the function call that this trace entry represents.

    Note that to inspect all function calls, the hierarchy must be
    traversed recursively to look at calls in "during" slots.

    Note that for *reasons*, functions named "setprofile" cannot be
    traced. Also note that since functions are identified by name,
    multiple functions with the same name occurring in different modules
    will be treated as the same function for tracing purposes, although
    this shouldn't normally matter.

    Note that in order to avoid tracing function calls made by payload
    augmentation, this augmentation should be applied before others.
    """

    # Per-function-name stacks of open function calls
    trace_stacks = {}

    # The trace result is a list of trace entries
    trace_result = []

    # The stack of trace destinations
    trace_destinations = [trace_result]

    # Create our tracing targets map
    targets_map = {}
    for entry in trace_targets:
        if isinstance(entry, tuple):
            first = entry[0]
            for name in entry:
                targets_map[name] = first
        else:
            targets_map[entry] = entry

    def tracer(frame, event, arg):
        """
        A profiling function which will be called for profiling events
        (see `sys.setprofile`). It logs calls to a select list of named
        functions.
        """
        nonlocal trace_stacks, trace_result
        if event in ("call", "return"):  # normal function call or return
            fname = frame.f_code.co_name
        elif event in ("c_call", "c_return"):  # call/return to/from C code
            fname = arg.__name__
        else:
            # Don't record any other events
            return

        # Don't ever try to trace setprofile calls, since we'll see an
        # unreturned call when setprofile is used to turn off profiling.
        if fname == "setprofile":
            return

        if fname in targets_map:  # we're supposed to trace this one
            fname = targets_map[fname]  # normalize function name
            if "return" not in event:  # a call event
                # Create new info object for this call
                info = {
                    "fname": fname,
                    "pre_state": state_function(fname),
                    "during": []
                    # args, result, and post_state added elsewhere
                }

                # Grab arguments if we can:
                if not event.startswith("c_"):
                    info["args"] = copy.copy(frame.f_locals)

                # Push this info object onto the appropriate stack
                if fname not in trace_stacks:
                    trace_stacks[fname] = []
                trace_stacks[fname].append(info)

                # Push onto the trace destinations stack
                trace_destinations.append(info["during"])

            else:  # a return event
                try:
                    prev_info = trace_stacks.get(fname, []).pop()
                    trace_destinations.pop()
                except IndexError:  # no matching call?
                    prev_info = {
                        "fname": fname,
                        "pre_state": None,
                        "during": []
                    }

                # Capture result if we can
                if not event.startswith("c_"):
                    prev_info["result"] = arg

                # Capture post-call state
                prev_info["post_state"] = state_function(fname)

                # Record trace event into current destination
                trace_destinations[-1].append(prev_info)

    def traced_payload(context):
        """
        Runs a payload while tracing calls to certain functions,
        returning the context slots created by the original payload plus
        a "trace" slot holding a hierarchical trace of function calls.
        """
        nonlocal trace_stacks, trace_result, trace_destinations

        # Reset tracing state
        trace_stacks = {}
        trace_result = []
        trace_destinations = [trace_result]

        # Turn on profiling
        sys.setprofile(tracer)

        # Run our original payload, making sure profiling gets turned
        # back off even if the payload raises an exception
        try:
            result = payload(context)
        finally:
            sys.setprofile(None)

        # add a "trace" slot to the result
        result["trace"] = trace_result

        # we're done
        return result

    return traced_payload


def walk_trace(trace):
    """
    A generator which yields each entry from the given trace in
    depth-first order, which is also the order in which each traced
    function call frame was created. Each item yielded is a trace entry
    dictionary, as described in `tracing_function_calls`.
    """
    for entry in trace:
        yield entry
        yield from walk_trace(entry["during"])


def sampling_distribution_of_results(
    payload,
    slot_map={
        "value": "distribution",
        "ref_value": "ref_distribution"
    },
    trials=50000
):
    """
    Creates a modified payload function that calls the given base payload
    many times, and creates a distribution table of the results: for each
    of the keys in the slot_map, a distribution table will be built and
    stored in a context slot labeled with the corresponding value from
    the slot_map. By default, the "value" and "ref_value" keys are
    observed and their distributions are stored in the "distribution"
    and "ref_distribution" slots.

    Note: this augmentation has horrible interactions with most other
    augmentations, since either the other augmentations need to be
    applied each time a new sample is generated (horribly slow) or they
    will be applied to a payload which runs the base test many many times
    (often not what they're expecting). Accordingly, this augmentation is
    best used sparingly and with as few other augmentations as possible.

    Note that the distribution table built by this function maps unique
    results to the number of times those results were observed across
    all trials, so the results of the payload being augmented must be
    hashable for it to work.

    Note that the payload created by this augmentation does not generate
    any of the slots generated by the original payload.
    """
    def distribution_observer_payload(context):
        """
        Runs many trials of a base payload to determine the distribution
        of results. Stores that distribution under the 'distribution'
        context key as a dictionary with "trials" and "results" keys.
        The "trials" value is an integer number of trials performed, and
        the "results" value is a dictionary that maps distinct results
        observed to an integer number of times that result was observed.
        """
        result = {}

        distributions = {
            slot: {
                "trials": trials,
                "results": {}
            }
            for slot in slot_map
        }

        for _ in range(trials):
            rctx = payload(context)
            for slot in slot_map:
                outcome = rctx[slot]
                target_dist = distributions[slot]
                target_dist["results"][outcome] = (
                    target_dist["results"].get(outcome, 0) + 1
                )

        for slot in slot_map:
            result[slot_map[slot]] = distributions[slot]

        return result

    return distribution_observer_payload


def with_module_decorations(payload, decorations, ignore_missing=False):
    """
    Augments a payload such that before it gets run, certain values in
    the module that's in the "module" slot of the current context are
    replaced with decorated values: the results of running a decoration
    function on them. Then, after the payload is complete, the
    decorations are reversed and the original values are put back in
    place.

    The `decorations` argument should be a map from possibly-dotted
    attribute names within the target module to decoration functions,
    whose results (when given original attribute values as arguments)
    will be used to replace those values temporarily.

    If `ignore_missing` is set to True, then even if a specified
    decoration entry names an attribute which does not exist in the
    target module, an attribute with that name will be created; the
    associated decorator function will receive the special class
    `NoAttr` as its argument in that case.
    """
    def decorated_payload(context):
        """
        Runs a base payload but first pins various decorations in place,
        undoing the pins afterwards.
        """
        # Remember original values and pin new ones:
        orig = {}
        prefixes = {}

        target_module = context_utils.extract(context, "module")

        # Pin everything, remembering prefixes so we can delete exactly
        # the grafted-on structure if ignore_missing is true:
        for key in decorations:
            if ignore_missing:
                orig[key] = get_dot_attr(
                    target_module,
                    key,
                    NoAttr
                )
                prefixes[key] = dot_attr_prefix(target_module, key)
            else:
                orig[key] = get_dot_attr(target_module, key)

            decorated = decorations[key](orig[key])
            set_dot_attr(target_module, key, decorated)

        # Run the payload with pins in place:
        try:
            result = payload(context)
        finally:
            # Definitely clean up afterwards by unpinning stuff:
            for key in decorations:
                orig_val = orig[key]
                prefix = prefixes.get(key)
                if ignore_missing:
                    if orig_val is NoAttr:
                        if prefix == '':
                            delattr(target_module, key.split('.')[0])
                        else:
                            last_val = get_dot_attr(target_module, prefix)
                            rest_key = key[len(prefix) + 1:]
                            delattr(last_val, rest_key.split('.')[0])
                    else:
                        set_dot_attr(target_module, key, orig_val)
                else:
                    set_dot_attr(target_module, key, orig_val)

        # Now return our result
        return result

    return decorated_payload


#--------------------------------#
# Pinning & decorating functions #
#--------------------------------#

class Missing:
    """
    Class to indicate missing-ness when None is a valid value.
    """
    pass


class Generic:
    """
    Class for creating missing parent objects in `set_dot_attr`.
    """
    pass


class NoAttr:
    """
    Class to indicate that an attribute was not present when pinning
    something.
    """
    pass


def get_dot_attr(obj, dot_attr, default=Missing):
    """
    Gets an attribute from an object, which may be a dotted attribute, in
    which case bits will be fetched in sequence. Returns the default if
    nothing is found at any step, or throws an AttributeError if no
    default is given (or if the default is explicitly set to Missing).
    """
    if '.' in dot_attr:
        bits = dot_attr.split('.')
        first = getattr(obj, bits[0], Missing)
        if first is Missing:
            if default is Missing:
                raise AttributeError(
                    "'{}' object has no attribute '{}'".format(
                        type(obj),
                        bits[0]
                    )
                )
            else:
                return default
        else:
            return get_dot_attr(first, '.'.join(bits[1:]), default)
    else:
        result = getattr(obj, dot_attr, Missing)
        if result is Missing:
            if default is Missing:
                raise AttributeError(
                    "'{}' object has no attribute '{}'".format(
                        type(obj),
                        dot_attr
                    )
                )
            else:
                return default
        else:
            return result


def dot_attr_prefix(obj, dot_attr):
    """
    Returns the longest prefix of attribute values that are part of the
    given dotted attribute string which actually exists on the given
    object. Returns an empty string if even the first attribute in the
    chain does not exist. If the full attribute value exists, it is
    returned as-is.
    """
    if '.' in dot_attr:
        bits = dot_attr.split('.')
        first, rest = bits[0], bits[1:]
        if hasattr(obj, first):
            suffix = dot_attr_prefix(getattr(obj, first), '.'.join(rest))
            if suffix:
                return first + '.' + suffix
            else:
                return first
        else:
            return ""
    else:
        if hasattr(obj, dot_attr):
            return dot_attr
        else:
            return ""


def set_dot_attr(obj, dot_attr, value):
    """
    Works like get_dot_attr, but sets an attribute instead of getting one.
    Creates instances of Generic if the target attribute lacks parents.
    """
    if '.' in dot_attr:
        bits = dot_attr.split('.')
        g = Generic()
        parent = getattr(obj, bits[0], g)
        if parent is g:
            setattr(obj, bits[0], parent)
        set_dot_attr(parent, '.'.join(bits[1:]), value)
    else:
        setattr(obj, dot_attr, value)


#-------------------#
# Turtle management #
#-------------------#

def warp_turtle(context):
    """
    Disables turtle tracing, and resets turtle state. Use as a setup
    function with `with_setup` and/or via
    `specifications.HasPayload.do_setup`. Note that you MUST also use
    `finalize_turtle` as a cleanup function, or else some elements may
    not actually get drawn.
    """
    turtle.reset()
    turtle.tracer(0, 0)
    return context


def finalize_turtle(result):
    """
    Paired with `warp_turtle`, makes sure that everything gets drawn. Use
    as a cleanup function (see `with_cleanup` and
    `specifications.HasPayload.do_cleanup`).
    """
    turtle.update()
    return result


def capture_turtle_state(_):
    """
    This state-capture function logs the following pieces of global
    turtle state:

    - position: A 2-tuple of x/y coordinates.
    - heading: A floating-point number in degrees.
    - pen_is_down: Boolean indicating pen state.
    - is_filling: Boolean indicating whether we're filling or not.
    - pen_size: Floating-point pen size.
    - pen_color: String indicating current pen color.
    - fill_color: String indicating current fill color.

    This state-capture function ignores its argument (which is the name
    of the function being called).
    """
    return {
        "position": turtle.position(),
        "heading": turtle.heading(),
        "pen_is_down": turtle.isdown(),
        "is_filling": turtle.filling(),
        "pen_size": turtle.pensize(),
        "pen_color": turtle.pencolor(),
        "fill_color": turtle.fillcolor()
    }


def capturing_turtle_drawings(payload, skip_reset=False, alt_text=None):
    """
    Creates a modified version of the given payload which establishes an
    "image" slot in addition to the base slots, holding a Pillow image
    object which captures everything drawn on the turtle canvas by the
    time the payload function has ended. It creates an "image_alt" slot
    with the provided alt_text, or if none is provided, it copies the
    "output" slot value as the image alt, assuming that `turtleBeads`
    has been used to create a description of what was drawn.

    The function will reset the turtle state and turn off tracing
    before calling the payload function (see `warp_turtle`). It will
    also update the turtle canvas before capturing an image (see
    `finalize_turtle`). So you don't need to apply those as
    setup/cleanup functions yourself. If you want to disable the
    automatic setup/cleanup, set the skip_reset argument to True,
    although in that case tracing will still be disabled and one update
    will be performed at the end.

    In default application order, the turtle reset/setup from this
    function is applied before any setup functions set using
    `with_setup`, and the output image is captured after any cleanup
    functions set using `with_cleanup` have been run, so you could for
    example apply a setup function that moves the turtle to a
    non-default starting point to test the flexibility of student code.

    Note: you must have Pillow >=6.0.0 to use this augmentation, and you
    must also have Ghostscript installed (which is not available via
    PyPI, although most OS's should have a package manager via which
    Ghostscript can be installed)!
    """
    # Before we even build our payload, verify that PIL will be
    # available (we let any exception bubble out naturally).
    import PIL
    import PIL.Image
    # Check for full Ghostscript support necessary to read EPS
    import PIL.EpsImagePlugin as p
    if not p.has_ghostscript():
        raise NotImplementedError(
            "In order to capture turtle drawings, you must install"
            " Ghostscript (which is not a Python package) manually."
        )

    def capturing_payload(context):
        """
        Resets turtle state, disables tracing, runs a base payload, and
        then captures what was drawn on the turtle canvas as a Pillow
        image.
        """
        # Reset turtle & disable tracing
        if skip_reset:
            turtle.tracer(0, 0)
        else:
            context = warp_turtle(context)

        # Run the base payload
        result = payload(context)

        # Ensure all drawing is up-to-date
        # Note: this if/else is future-proofing in case finalize_turtle
        # needs to do more in the future.
        if skip_reset:
            turtle.update()
        else:
            result = finalize_turtle(result)

        # Capture what's on the turtle canvas as a Pillow image
        canvas = turtle.getscreen().getcanvas()

        # Capture postscript commands to recreate the canvas
        ps = canvas.postscript()

        # Wrap as if it were a file and use Ghostscript to turn the EPS
        # into a PIL image
        bio = io.BytesIO(ps.encode(encoding="utf-8"))
        captured = PIL.Image.open(bio, formats=["EPS"])

        # Convert to RGB mode if it's not in that mode already
        if captured.mode != "RGB":
            captured = captured.convert("RGB")

        # Add our captured image to the "image" slot of the result
        result["image"] = captured

        # Add alt text
        if alt_text is not None:
            result["image_alt"] = alt_text
        else:
            result["image_alt"] = result.get(
                "output",
                "no alt text available"
            )

        return result

    return capturing_payload


#----------------------#
# Wavesynth management #
#----------------------#

_PLAY_WAVESYNTH_TRACK = None
"""
The original wavesynth playTrack function, stored here temporarily while
it's disabled via `disable_track_actions`.
"""

_SAVE_WAVESYNTH_TRACK = None
"""
The original wavesynth saveTrack function, stored here temporarily when
saveTrack is disabled via `disable_track_actions`.
"""


def disable_track_actions():
    """
    Disables the `playTrack` and `saveTrack` `wavesynth` functions,
    turning them into functions which accept the same arguments and
    simply instantly return None. This helps ensure that students'
    testing calls to `saveTrack` or `playTrack` don't eat up evaluation
    time. Saves the original functions in the `_PLAY_WAVESYNTH_TRACK` and
    `_SAVE_WAVESYNTH_TRACK` global variables.

    Only saves original functions the first time it's called, so that
    `reenable_track_actions` will work even if `disable_track_actions` is
    called multiple times.

    Note that you may want to use this function with
    `specifications.add_module_prep` to ensure that submitted code
    doesn't try to call `playTrack` or `saveTrack` during import and
    waste evaluation time.
    """
    global _PLAY_WAVESYNTH_TRACK, _SAVE_WAVESYNTH_TRACK
    import wavesynth
    if _PLAY_WAVESYNTH_TRACK is None:
        _PLAY_WAVESYNTH_TRACK = wavesynth.playTrack
        _SAVE_WAVESYNTH_TRACK = wavesynth.saveTrack
    wavesynth.playTrack = lambda wait=None: None
    wavesynth.saveTrack = lambda filename: None


def reenable_track_actions():
    """
    Restores the `saveTrack` and `playTrack` functions after
    `disable_track_actions` has disabled them.
    """
    global _PLAY_WAVESYNTH_TRACK, _SAVE_WAVESYNTH_TRACK
    import wavesynth
    if _PLAY_WAVESYNTH_TRACK is not None:
        wavesynth.playTrack = _PLAY_WAVESYNTH_TRACK
        wavesynth.saveTrack = _SAVE_WAVESYNTH_TRACK
        _PLAY_WAVESYNTH_TRACK = None
        _SAVE_WAVESYNTH_TRACK = None


def ensure_or_stub_simpleaudio():
    """
    Tries to import the `simpleaudio` module, and if that's not possible,
    creates a stub module named "simpleaudio" which raises an attribute
    error on any access attempt. The stub module will be inserted in
    `sys.modules` as if it were `simpleaudio`.

    Note that you may want to set this up as a prep function using
    `specifications.add_module_prep` to avoid crashing if submitted code
    tries to import `simpleaudio` (although it will still crash if
    student code tries to use anything from `simpleaudio`).
    """
    # We also try to import simpleaudio, but set up a dummy module in its
    # place if it's not available, since we don't need or want to play
    # the sounds for grading purposes.
    try:
        import simpleaudio  # noqa F401
    except Exception:
        def missing(name):
            """
            Fake getattr to raise a reasonable-seeming error if someone
            tries to use our fake simpleaudio.
            """
            raise AttributeError(
                "During grading, simpleaudio is not accessible. We have"
                " disabled playTrack and saveTrack for testing purposes"
                " anyway, and your code should not need to use"
                " simpleaudio directly either."
            )
        fake_simpleaudio = imp.new_module("simpleaudio")
        fake_simpleaudio.__getattr__ = missing
        sys.modules["simpleaudio"] = fake_simpleaudio


def capturing_wavesynth_audio(payload, just_capture=None, label=None):
    """
    Creates a modified version of the given payload which establishes
    "notes" and "audio" slots in addition to the base slots. "notes"
    holds the result of `wavesynth.trackDescription` (a list of strings)
    while "audio" holds a dictionary with the following keys:

    - "mimetype": The MIME type for the captured data.
    - "data": The captured binary data, as a bytes object.
    - "label": A text label for the audio, if a 'label' value is
      provided; not present otherwise.

    The data captured is the WAV format audio that would be saved by the
    wavesynth module's `saveTrack` function, which in particular means
    it only captures whatever is in the "current track." The
    `resetTracks` function is called before the payload is executed, and
    again afterwards to clean things up.

    If the `wavesynth` module is not installed, a `ModuleNotFoundError`
    will be raised.
    """
    # Before we even build our payload, verify that wavesynth will be
    # available (we let any exception bubble out naturally).
    import wavesynth

    # We do this here just in case student code attempts to use
    # simpleaudio directly, since installing simpleaudio for evaluation
    # purposes shouldn't be necessary.
    ensure_or_stub_simpleaudio()

    def capturing_payload(context):
        """
        Resets all tracks state, runs a base payload, and then captures
        what was put into the current track as both a list of note
        descriptions and as a dictionary indicating a MIME type, raw
        binary data, and maybe a label.
        """
        # Reset all tracks
        wavesynth.resetTracks()

        # Disable playTrack and saveTrack
        disable_track_actions()

        # Run the base payload
        try:
            result = payload(context)
        finally:
            reenable_track_actions()

        # Capture the descriptions of the notes in the current track
        if just_capture in (None, "notes"):
            result["notes"] = wavesynth.trackDescription()

        # Capture what's in the current track as raw WAV bytes
        if just_capture in (None, "audio"):
            bio = io.BytesIO()
            wavesynth.saveTrack(bio)
            data = bio.getvalue()

            # Add our captured audio to the "audio" slot of the result
            result["audio"] = {
                "mimetype": "audio/wav",
                "data": data,
            }

            # Add a label
            if label is not None:
                result["audio"]["label"] = label

        # Reset all tracks (again)
        wavesynth.resetTracks()

        return result

    return capturing_payload


#---------------------------------#
# Miscellaneous harness functions #
#---------------------------------#

def report_argument_modifications(target, *args, **kwargs):
    """
    This function works as a test harness but doesn't capture the value
    or output of the function being tested. Instead, it generates a text
    report on whether each mutable argument to the function was modified
    or not after the function is finished. It only checks arguments which
    are lists or dictionaries at the top level, so its definition of
    modifiable is rather narrow.

    The report uses argument positions when the test case is given
    positional arguments and argument names when it's given keyword
    arguments.

    (Note: the last two paragraphs of this docstring are picked up
    automatically as rubric values for tests using this harness. fname
    will be substituted in, which is why it appears in curly braces
    below.)

    Description:

    <code>{fname}</code> must only modify arguments it is supposed to
    modify.

    We will call <code>{fname}</code> and check to make sure that the
    values provided as arguments are not changed by the function, except
    where such changes are explicitly required. Note that only mutable
    values, like dictionaries or lists, may be modified by a function, so
    this check is not applied to any string or number arguments.
    """
    # Identify mutable arguments
    mposargs = [
        i
        for i in range(len(args))
        if isinstance(args[i], (list, dict))
    ]
    mkwargs = [k for k in kwargs if isinstance(kwargs[k], (list, dict))]
    if target.__kwdefaults__ is not None:
        mkwdefaults = [k for k in target.__kwdefaults__ if k not in kwargs]
    else:
        mkwdefaults = []
    # This code could be used to get argument names for positional
    # arguments, but we actually don't want them.
    #nargs = target.__code__.co_argcount + target.__code__.co_kwonlyargcount
    #margnames = [target.__code__.co_varnames[:nargs][i] for i in mposargs]
    #mposnames = margnames[:len(mposargs)]
    mposvals = [copy.deepcopy(args[i]) for i in mposargs]
    mkwvals = [copy.deepcopy(kwargs[k]) for k in mkwargs]
    mkwdefvals = {
        k: copy.deepcopy(target.__kwdefaults__[k])
        for k in mkwdefaults
    }

    # Call the target function
    _ = target(*args, **kwargs)

    # Report on which arguments were modified
    result = ""

    # Changes in positional argument values
    for argindex, orig in zip(mposargs, mposvals):
        final = args[argindex]
        result += "Your code {} the value of the {} argument.\n".format(
            "modified" if orig != final else "did not modify",
            phrasing.ordinal(argindex)
        )

    # Changes in keyword argument values
    for name, orig in zip(mkwargs, mkwvals):
        final = kwargs[name]
        result += "Your code {} the value of the '{}' argument.\n".format(
            "modified" if orig != final else "did not modify",
            name
        )

    # Changes in values of unsupplied keyword arguments (i.e., changes to
    # defaults, which if unintentional is usually bad!)
    for name, orig in mkwdefvals.items():
        final = target.__kwdefaults__[name]
        result += "Your code {} the value of the '{}' argument.\n".format(
            "modified" if orig != final else "did not modify",
            name
        )

    # The report by default will be compared against an equivalent report
    # from the solution function, so that's how we figure out which
    # arguments *should* be modified or not.
    return result


def returns_a_new_value(target, *args, **kwargs):
    """
    Checks whether or not the target function returns a value which is
    new (i.e., not the same object as one of its arguments).
    Uses the 'is' operator to check for same-object identity, so it will
    catch cases in which an object is modified and then returned. Returns
    a string indicating whether or not a newly-constructed value is
    returned.

    Note: won't catch cases where the result is a structure which
    *includes* one of the arguments. And does not check whether the
    result is equivalent to one of the arguments, just whether it's
    actually the same object or not.

    (Note: the last two paragraphs of this docstring are picked up
    automatically as rubric values for tests using this harness. fname
    will be substituted in, which is why it appears in curly braces
    below. This harness can also be used to ensure that a function
    doesn't return a new value, in which case an alternate description
    should be used.)

    Description:

    <code>{fname}</code> must return a new value, rather than returning
    one of its arguments.

    We will call <code>{fname}</code> and check to make sure that the
    value it returns is a new value, rather than one of the arguments it
    was given (modified or not).
    """
    # Call the target function
    fresult = target(*args, **kwargs)

    # Check the result against each of the arguments
    nargs = target.__code__.co_argcount + target.__code__.co_kwonlyargcount
    for argindex, argname in enumerate(target.__code__.co_varnames[:nargs]):
        if argindex < len(args):
            # a positional argument
            argval = args[argindex]
            argref = phrasing.ordinal(argindex)
        else:
            # a keyword argument (possibly defaulted via omission)
            if argname in kwargs:
                argval = kwargs[argname]
            else:
                # Fall back on the keyword default, using Missing when
                # there is no such default so that the identity check
                # below cannot crash or spuriously succeed
                argval = (target.__kwdefaults__ or {}).get(argname, Missing)
            argref = repr(argname)

        if fresult is argval:
            return (
                "Returned the {} argument (possibly with modifications)."
            ).format(argref)

    # Since we didn't return in the loop above, there's no match
    return "Returned a new value."
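As a standalone illustration of the identity check above (the `sorted_copy` and `sort_in_place` functions here are hypothetical examples, not part of potluck, and this sketch omits the keyword-argument handling of the real harness):

```python
# Minimal sketch of the `is`-based identity check used by
# `returns_a_new_value`: `is` compares object identity, so it catches
# modify-and-return even when `==` would report the values as equal.

def sorted_copy(items):
    # Builds and returns a brand-new list
    return sorted(items)


def sort_in_place(items):
    # Modifies its argument and returns it; `==` alone could not
    # distinguish this from returning a sorted copy
    items.sort()
    return items


def returned_new_value(fn, arg):
    # True only when the result is a different object from the argument
    return fn(arg) is not arg


print(returned_new_value(sorted_copy, [3, 1, 2]))    # new list
print(returned_new_value(sort_in_place, [3, 1, 2]))  # same object returned
```

Note that, as the docstring above says, this check is about identity, not equality: a function that returns an exact copy of its argument still counts as returning a new value.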
#------------------#
# File I/O Helpers #
#------------------#

def file_contents_setter(filename, contents):
    """
    Returns a setup function (use with `with_setup`) which replaces the
    contents of the given file with the given contents. Be careful,
    because this will happily overwrite any file. If the desired
    contents is a bytes object, the file will be written in binary mode
    to contain exactly those bytes; otherwise contents should be a
    string.
    """
    def setup_file_contents(context):
        """
        Returns the provided context as-is, but before doing so, writes
        data to a specific file to set it up for the coming test.
        """
        if isinstance(contents, bytes):
            with open(filename, 'wb') as fout:
                fout.write(contents)
        else:
            with open(filename, 'w') as fout:
                fout.write(contents)
        return context

    return setup_file_contents


def capturing_file_contents(payload, filename, binary=False):
    """
    Captures the entire contents of the given filename as a string (or
    a bytes object if binary is set to True), and stores it in the
    "output_file_contents" context slot. Also stores the name of the
    file that was read in the "output_filename" slot.
    """
    def capturing_payload(context):
        """
        Runs a base payload and then reads the contents of a specific
        file, adding that data as an "output_file_contents" context
        slot and also adding an "output_filename" slot holding the
        filename that was read from.
        """
        # Run base payload
        result = payload(context)

        # Record filename in result
        result["output_filename"] = filename

        # Decide on open flags
        if binary:
            flags = 'rb'
        else:
            flags = 'r'

        with open(filename, flags) as fin:
            file_contents = fin.read()

        # Add file contents
        result["output_file_contents"] = file_contents

        return result

    return capturing_payload
Ideal order in which to apply the augmentations in this module when multiple augmentations are applied to the same payload. Because certain augmentations interfere with others if applied out of order, following this order matters, although special applications might occasionally have reason to deviate from it.
Note that even following this order, not all augmentations are really compatible with each other. For example, if one were to use `with_module_decorations` to perform intensive decoration (which is somewhat time-consuming per run) and also attempt to use `sampling_distribution_of_results` with a large sample count, the resulting payload might be prohibitively slow.
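To make the payload/augmentation pattern concrete, here is a minimal self-contained sketch using toy stand-ins (not the real implementations from this module): a payload maps a context dict to an updated context dict, and each augmentation wraps one payload to produce another. Per `AUGMENTATION_ORDER`, `with_cleanup` is applied before `with_setup`:

```python
def base_payload(context):
    """A trivial payload: computes a 'value' slot from the context."""
    result = dict(context)
    result["value"] = context["x"] * 2
    return result

def with_setup(payload, setup):
    """Toy stand-in: run a setup function on the context first."""
    def augmented(context):
        return payload(setup(context))
    return augmented

def with_cleanup(payload, cleanup):
    """Toy stand-in: run a cleanup function on the result afterwards."""
    def augmented(context):
        return cleanup(payload(context))
    return augmented

# with_cleanup is applied before with_setup, per AUGMENTATION_ORDER
augmented = with_setup(
    with_cleanup(base_payload, lambda result: {**result, "cleaned": True}),
    lambda context: {**context, "x": context.get("x", 0) + 1},
)
print(augmented({"x": 1}))  # {'x': 2, 'value': 4, 'cleaned': True}
```

Because each augmentation returns an ordinary payload, they compose freely; the order of wrapping is what determines which augmentation "sees" the effects of the others.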
def create_module_import_payload(
    name_prefix="loaded_",
    use_fix_parse=True,
    prep=None,
    wrap=None
):
    """
    This function returns a payload function which imports the file
    identified by the "file_path" slot of the given context, using the
    "filename" slot of the given context as the name of the file for
    the purpose of deciding a module name, and establishing the
    resulting module in the "module" slot along with "original_source"
    and "source" slots holding the original and (possibly modified)
    source code.

    It reads the "task_info" context slot to access the specification
    and load the helper files list to make available during module
    execution.

    A custom `name_prefix` may be given which will alter the name of
    the imported module in sys.modules and in the __name__ automatic
    variable as the module is being created; use this to avoid
    conflicts when importing submitted and solution modules that have
    the same filename.

    If `use_fix_parse` is provided, `potluck.load.fix_parse` will be
    used instead of just `mast.parse`, and in addition to generating
    "original_source", "source", "scope", and "module" slots, a
    "parse_errors" slot will be generated, holding a (hopefully empty)
    list of Exception objects that were 'successfully' ignored during
    parsing.

    `prep` and/or `wrap` functions may be supplied: the `prep` function
    will be given the module source as a string and must return it (or
    a modified version); the `wrap` function will be given the compiled
    module object, and whatever it returns will be substituted for the
    original module.
    """
    def payload(context):
        """
        Imports a specific file as a module, using a prefix in addition
        to the filename itself to determine the module name. Returns a
        'module' context slot.
        """
        filename = context_utils.extract(context, "filename")
        file_path = context_utils.extract(context, "file_path")
        file_path = os.path.abspath(file_path)
        full_name = name_prefix + filename

        # Read the file
        with open(file_path, 'r', encoding="utf-8") as fin:
            original_source = fin.read()

        # Call our prep function
        if prep:
            source = prep(original_source)
        else:
            source = original_source

        # Decide if we're using fix_parse or not
        if use_fix_parse:
            # Parse using fix_parse
            fixed, node, errors = load.fix_parse(source, full_name)
        else:
            # Just parse normally without attempting to steamroll errors
            fixed = source
            node = mast.parse(source, filename=full_name)
            errors = None

        # Since this payload is already running inside a sandbox
        # directory, we don't need to provide a sandbox argument here.
        module = load.create_module_from_code(
            node,
            full_name,
            on_disk=file_path,
            sandbox=None
        )

        # Create result as a copy of the base context
        result = copy.copy(context)
        result.update({
            "original_source": original_source,
            "source": fixed,
            "scope": node,
            "module": module
        })
        if errors:
            result["parse_errors"] = errors

        # Wrap the resulting module if a wrap function was provided
        if wrap:
            result["module"] = wrap(result["module"])

        # Return our result
        return result

    return payload
This function returns a payload function which imports the file identified by the "file_path" slot of the given context, using the "filename" slot of the given context as the name of the file for the purpose of deciding a module name, and establishing the resulting module in the "module" slot along with "original_source" and "source" slots holding the original and (possibly modified) source code.

It reads the "task_info" context slot to access the specification and load the helper files list to make available during module execution.

A custom `name_prefix` may be given which will alter the name of the imported module in sys.modules and in the __name__ automatic variable as the module is being created; use this to avoid conflicts when importing submitted and solution modules that have the same filename.

If `use_fix_parse` is provided, `potluck.load.fix_parse` will be used instead of just `mast.parse`, and in addition to generating "original_source", "source", "scope", and "module" slots, a "parse_errors" slot will be generated, holding a (hopefully empty) list of Exception objects that were 'successfully' ignored during parsing.

`prep` and/or `wrap` functions may be supplied: the `prep` function will be given the module source as a string and must return it (or a modified version); the `wrap` function will be given the compiled module object, and whatever it returns will be substituted for the original module.
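The core mechanic here, stripped of potluck's parsing and sandboxing machinery, can be sketched as follows. This is an illustrative stand-in, not the library's implementation: it executes a file's source as a freshly created module registered in `sys.modules` under a prefixed name, which is the trick used to keep submitted and solution modules with the same filename from colliding:

```python
import os
import sys
import tempfile
import types

def import_with_prefix(file_path, filename, name_prefix="loaded_"):
    """Execute a file's source as a module whose name carries a prefix
    to avoid clashes in sys.modules. Returns (source, module)."""
    full_name = name_prefix + filename
    with open(file_path, "r", encoding="utf-8") as fin:
        source = fin.read()
    module = types.ModuleType(full_name)
    module.__file__ = file_path
    # Register before executing so the module can see itself if needed
    sys.modules[full_name] = module
    exec(compile(source, file_path, "exec"), module.__dict__)
    return source, module

# Demonstration with a throwaway "submission" file
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "submission.py")
    with open(path, "w") as f:
        f.write("GREETING = 'hi'\n")
    src, mod = import_with_prefix(path, "submission.py")
    print(mod.__name__, mod.GREETING)  # loaded_submission.py hi
```

The real payload additionally parses the source into an AST (optionally steamrolling parse errors via `fix_parse`) and records the tree in a "scope" slot.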
def create_read_variable_payload(varname):
    """
    Creates a payload function which retrieves the given variable from
    the "module" slot of the given context when run, placing the
    retrieved value into a "value" slot. If the variable name is a
    `potluck.context_utils.ContextualValue`, it will be replaced with a
    real value first. The "variable" slot of the result context will be
    set to the actual variable name used.
    """
    def payload(context):
        """
        Retrieves a specific variable from a certain module. Returns a
        "value" context slot.
        """
        nonlocal varname
        module = context_utils.extract(context, "module")
        if isinstance(varname, context_utils.ContextualValue):
            try:
                varname = varname.replace(context)
            except Exception:
                logging.log(
                    "Encountered error while attempting to substitute"
                    " contextual value:"
                )
                logging.log(traceback.format_exc())
                raise

        # Create result as a copy of the base context
        result = copy.copy(context)
        result.update({
            "variable": varname,
            "value": getattr(module, varname)
        })
        return result

    return payload
Creates a payload function which retrieves the given variable from the "module" slot of the given context when run, placing the retrieved value into a "value" slot. If the variable name is a `potluck.context_utils.ContextualValue`, it will be replaced with a real value first. The "variable" slot of the result context will be set to the actual variable name used.
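A simplified stand-in (omitting the `ContextualValue` substitution and error logging of the real function) shows the shape of the resulting context:

```python
import types

def create_read_variable_payload(varname):
    """Simplified stand-in: read one variable from the context's module."""
    def payload(context):
        result = dict(context)
        result["variable"] = varname
        result["value"] = getattr(context["module"], varname)
        return result
    return payload

# A throwaway module standing in for an imported submission
mod = types.ModuleType("demo")
mod.answer = 42

result = create_read_variable_payload("answer")({"module": mod})
print(result["value"])  # 42
```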
def create_run_function_payload(
    fname,
    posargs=None,
    kwargs=None,
    copy_args=True
):
    """
    Creates a payload function which retrieves a function from the
    "module" slot of the given context and runs it with certain
    positional and/or keyword arguments, returning a "value" context
    slot containing the function's result. The arguments used are also
    placed into "args" and "kwargs" context slots in case those are
    useful for later checks, and the function name is placed into a
    "function" context slot.

    If `copy_args` is set to True (the default), deep copies of
    argument values will be made before they are passed to the target
    function (note that keyword argument keys are not copied, although
    they should be strings in any case). The "args" and "kwargs" slots
    will also get copies of the arguments, not the original values, and
    these will be separate copies from those given to the function, so
    they'll retain the values used as input even after the function is
    finished. In addition, if `copy_args` is set to True, "used_args"
    and "used_kwargs" slots will be added, holding the actual arguments
    sent to the function so that any changes made by the function can
    be measured if necessary.

    If the function name or any of the argument values (or keyword
    argument keys) are `potluck.context_utils.ContextualValue`
    instances, these will be replaced with actual values using the
    given context before the function is run. This step happens before
    argument copying, and before the "args" and "kwargs" result slots
    are set up.
    """
    posargs = posargs or ()
    kwargs = kwargs or {}

    def payload(context):
        """
        Runs a specific function in a certain module with specific
        arguments. Returns a "value" context slot.
        """
        nonlocal fname
        module = context_utils.extract(context, "module")
        if isinstance(fname, context_utils.ContextualValue):
            try:
                fname = fname.replace(context)
            except Exception:
                logging.log(
                    "Encountered error while attempting to substitute"
                    " contextual value:"
                )
                logging.log(traceback.format_exc())
                raise
        fn = getattr(module, fname)

        real_posargs = []
        initial_posargs = []
        real_kwargs = {}
        initial_kwargs = {}
        for arg in posargs:
            if isinstance(arg, context_utils.ContextualValue):
                try:
                    arg = arg.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            if copy_args:
                real_posargs.append(copy.deepcopy(arg))
                initial_posargs.append(copy.deepcopy(arg))
            else:
                real_posargs.append(arg)
                initial_posargs.append(arg)

        for key in kwargs:
            if isinstance(key, context_utils.ContextualValue):
                try:
                    key = key.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            value = kwargs[key]
            if isinstance(value, context_utils.ContextualValue):
                try:
                    value = value.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            if copy_args:
                real_kwargs[key] = copy.deepcopy(value)
                initial_kwargs[key] = copy.deepcopy(value)
            else:
                real_kwargs[key] = value
                initial_kwargs[key] = value

        # Create result as a copy of the base context
        result = copy.copy(context)
        result.update({
            "value": fn(*real_posargs, **real_kwargs),
            "function": fname,
            "args": initial_posargs,
            "kwargs": initial_kwargs,
        })

        if copy_args:
            result["used_args"] = real_posargs
            result["used_kwargs"] = real_kwargs

        return result

    return payload
Creates a payload function which retrieves a function from the "module" slot of the given context and runs it with certain positional and/or keyword arguments, returning a "value" context slot containing the function's result. The arguments used are also placed into "args" and "kwargs" context slots in case those are useful for later checks, and the function name is placed into a "function" context slot.

If `copy_args` is set to True (the default), deep copies of argument values will be made before they are passed to the target function (note that keyword argument keys are not copied, although they should be strings in any case). The "args" and "kwargs" slots will also get copies of the arguments, not the original values, and these will be separate copies from those given to the function, so they'll retain the values used as input even after the function is finished. In addition, if `copy_args` is set to True, "used_args" and "used_kwargs" slots will be added, holding the actual arguments sent to the function so that any changes made by the function can be measured if necessary.

If the function name or any of the argument values (or keyword argument keys) are `potluck.context_utils.ContextualValue` instances, these will be replaced with actual values using the given context before the function is run. This step happens before argument copying, and before the "args" and "kwargs" result slots are set up.
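The point of keeping two separate deep copies is easiest to see with a mutating function. Here is a simplified stand-in (positional arguments only, no `ContextualValue` handling): "args" preserves the input as given, while "used_args" reflects any mutation the tested function performed.

```python
import copy
import types

def create_run_function_payload(fname, posargs=(), copy_args=True):
    """Simplified stand-in: run module.<fname> and record the arguments."""
    def payload(context):
        fn = getattr(context["module"], fname)
        # Two independent deep copies: one handed to the function,
        # one preserved as the pristine input record.
        real = [copy.deepcopy(a) if copy_args else a for a in posargs]
        initial = [copy.deepcopy(a) if copy_args else a for a in posargs]
        result = dict(context)
        result.update({"value": fn(*real), "function": fname, "args": initial})
        if copy_args:
            result["used_args"] = real
        return result
    return payload

mod = types.ModuleType("demo")
mod.extend = lambda lst: lst.append(99) or lst  # mutates its argument

r = create_run_function_payload("extend", ([1, 2],))({"module": mod})
print(r["args"])       # [[1, 2]]     -- input preserved
print(r["used_args"])  # [[1, 2, 99]] -- mutation visible
```

Comparing "args" against "used_args" is exactly what lets later checks measure side effects on arguments.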
def create_run_harness_payload(
    harness,
    fname,
    posargs=None,
    kwargs=None,
    copy_args=False
):
    """
    Creates a payload function which retrieves a function from the
    "module" slot of the given context and passes it to a custom
    harness function for testing. The harness function is given the
    function object to test as its first parameter, followed by the
    positional and keyword arguments specified here. Its result is
    placed in the "value" context slot. Like
    `create_run_function_payload`, "args", "kwargs", and "function"
    slots are established, and a "harness" slot is established which
    holds the harness function used.

    If `copy_args` is set to True, deep copies of argument values will
    be made before they are passed to the harness function (note that
    keyword argument keys are not copied, although they should be
    strings in any case).

    If the function name or any of the argument values (or keyword
    argument keys) are `potluck.context_utils.ContextualValue`
    instances, these will be replaced with actual values using the
    given context before the function is run. This step happens before
    argument copying and before these items are placed into their
    result slots.
    """
    posargs = posargs or ()
    kwargs = kwargs or {}

    def payload(context):
        """
        Tests a specific function in a certain module using a test
        harness, with specific arguments. Returns a "value" context
        slot.
        """
        nonlocal fname
        module = context_utils.extract(context, "module")
        if isinstance(fname, context_utils.ContextualValue):
            try:
                fname = fname.replace(context)
            except Exception:
                logging.log(
                    "Encountered error while attempting to substitute"
                    " contextual value:"
                )
                logging.log(traceback.format_exc())
                raise
        fn = getattr(module, fname)

        real_posargs = []
        real_kwargs = {}
        for arg in posargs:
            if isinstance(arg, context_utils.ContextualValue):
                try:
                    arg = arg.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            if copy_args:
                arg = copy.deepcopy(arg)

            real_posargs.append(arg)

        for key in kwargs:
            if isinstance(key, context_utils.ContextualValue):
                try:
                    key = key.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            value = kwargs[key]
            if isinstance(value, context_utils.ContextualValue):
                try:
                    value = value.replace(context)
                except Exception:
                    logging.log(
                        "Encountered error while attempting to substitute"
                        " contextual value:"
                    )
                    logging.log(traceback.format_exc())
                    raise

            if copy_args:
                value = copy.deepcopy(value)

            real_kwargs[key] = value

        # Create result as a copy of the base context
        result = copy.copy(context)
        result.update({
            "value": harness(fn, *real_posargs, **real_kwargs),
            "harness": harness,
            "function": fname,
            "args": real_posargs,
            "kwargs": real_kwargs
        })

        # Return our result
        return result

    return payload
Creates a payload function which retrieves a function from the "module" slot of the given context and passes it to a custom harness function for testing. The harness function is given the function object to test as its first parameter, followed by the positional and keyword arguments specified here. Its result is placed in the "value" context slot. Like `create_run_function_payload`, "args", "kwargs", and "function" slots are established, and a "harness" slot is established which holds the harness function used.

If `copy_args` is set to True, deep copies of argument values will be made before they are passed to the harness function (note that keyword argument keys are not copied, although they should be strings in any case).

If the function name or any of the argument values (or keyword argument keys) are `potluck.context_utils.ContextualValue` instances, these will be replaced with actual values using the given context before the function is run. This step happens before argument copying and before these items are placed into their result slots.
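A harness is just a function that receives the function-under-test plus its arguments and returns a report. The following toy harness (hypothetical, not one of the harnesses defined in this module) follows the same calling convention as `modifies_arguments` and `returns_a_new_value` above, reporting whether the target mutates its first argument:

```python
import copy

def modifies_first_argument(target, *args, **kwargs):
    """Toy harness: call target and report whether it mutated args[0]."""
    before = copy.deepcopy(args[0])
    target(*args, **kwargs)
    return "modified" if args[0] != before else "unmodified"

print(modifies_first_argument(list.sort, [3, 1, 2]))  # modified
print(modifies_first_argument(sorted, [3, 1, 2]))     # unmodified
```

Because the harness returns a string, its report can be compared against the report produced by running the same harness on the solution function, which is how such tests are typically graded.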
def make_module(statements):
    """
    Creates an ast.Module object from a list of statements. Sets empty
    type_ignores if we're in a version that requires them.
    """
    vi = sys.version_info
    if vi[0] > 3 or vi[0] == 3 and vi[1] >= 8:
        return ast.Module(statements, [])
    else:
        return ast.Module(statements)
Creates an `ast.Module` object from a list of statements. Sets empty `type_ignores` if we're in a version that requires them.
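The version shim can be exercised standalone (the function is repeated here so the snippet runs on its own): statements parsed from source already carry location info, so the resulting module compiles directly.

```python
import ast
import sys

def make_module(statements):
    # ast.Module gained a required type_ignores field in Python 3.8
    vi = sys.version_info
    if vi[0] > 3 or vi[0] == 3 and vi[1] >= 8:
        return ast.Module(statements, [])
    else:
        return ast.Module(statements)

mod = make_module(ast.parse("x = 1 + 2").body)
env = {}
exec(compile(mod, "<demo>", "exec"), env)
print(env["x"])  # 3
```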
def create_execute_code_block_payload(block_name, src, nodes=None):
    """
    Creates a payload function which executes a series of statements
    (provided as a multi-line code string OR list of AST nodes) in the
    current context's "module" slot. A block name (a string) must be
    provided and will appear as the filename if tracebacks are
    generated.

    The 'src' argument must be a string, and dictates how the code
    will be displayed; the 'nodes' argument must be a collection of
    AST nodes, and dictates what code will actually be executed. If
    'nodes' is not provided, the given source code will be parsed to
    create a list of AST nodes.

    The payload runs the final expression or statement last, and if it
    was an expression, its return value will be put in the "value"
    context slot of the result; otherwise None will be put there (of
    course, a final expression that evaluates to None would give the
    same result).

    The source code given is placed in the "block" context slot, while
    the nodes used are placed in the "block_nodes" context slot, and
    the block name is placed in the "block_name" context slot.

    Note that although direct variable reassignments and new variables
    created by the block of code won't affect the module it's run in,
    more indirect changes WILL, so be extremely careful about side
    effects!
    """
    # Parse src if nodes weren't specified explicitly
    if nodes is None:
        nodes = ast.parse(src).body

    def payload(context):
        """
        Runs a sequence of statements or expressions (provided as AST
        nodes) in a certain module. Creates a "value" context slot
        with the result of the last expression, or None if the last
        node was a statement.
        """
        module = context_utils.extract(context, "module")

        # Separate nodes into start and last
        start = nodes[:-1]
        last = nodes[-1]

        # Create a cloned execution environment
        env = {}
        env.update(module.__dict__)

        if len(start) > 0:
            code = compile(make_module(start), block_name, 'exec')
            exec(code, env)

        if isinstance(last, ast.Expr):
            # Treat last line as an expression and grab its value
            last_code = compile(
                ast.Expression(last.value),
                block_name + "(final)",
                'eval'
            )
            value = eval(last_code, env)
        else:
            # Guess it wasn't an expression; just execute it
            last_code = compile(
                make_module([last]),
                block_name + "(final)",
                'exec'
            )
            exec(last_code, env)
            value = None

        # Create result as a copy of the base context
        result = copy.copy(context)
        result.update({
            "value": value,
            "block_name": block_name,
            "block": src,
            "block_nodes": nodes
        })

        # Return our result
        return result

    return payload
Creates a payload function which executes a series of statements (provided as a multi-line code string OR list of AST nodes) in the current context's "module" slot. A block name (a string) must be provided and will appear as the filename if tracebacks are generated.
The 'src' argument must be a string, and dictates how the code will be displayed; the 'nodes' argument must be a collection of AST nodes, and dictates what code will actually be executed. If 'nodes' is not provided, the given source code will be parsed to create a list of AST nodes.
The payload runs the final expression or statement last, and if it was an expression, its return value will be put in the "value" context slot of the result; otherwise None will be put there (of course, a final expression that evaluates to None would give the same result).
The source code given is placed in the "block" context slot, while the nodes used are placed in the "block_nodes" context slot, and the block name is placed in the "block_name" context slot.
Note that although direct variable reassignments and new variables created by the block of code won't affect the module it's run in, more indirect changes WILL, so be extremely careful about side effects!
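The final-expression mechanic (exec everything but the last node, then eval the last node if it is an expression) can be sketched compactly. This is a simplified stand-in for the payload above, assuming Python 3.8+ for `ast.Module`'s `type_ignores` argument:

```python
import ast

def run_block(src, env):
    """Exec all but the final statement, then eval the final node if it
    is an expression; return its value (or None for a statement)."""
    nodes = ast.parse(src).body
    *start, last = nodes
    if start:
        exec(compile(ast.Module(start, []), "<block>", "exec"), env)
    if isinstance(last, ast.Expr):
        # Final node is a bare expression: evaluate it for its value
        code = compile(ast.Expression(last.value), "<block>", "eval")
        return eval(code, env)
    # Final node is a statement: execute it, no value to report
    exec(compile(ast.Module([last], []), "<block>", "exec"), env)
    return None

env = {}
print(run_block("x = 2\nx * 10", env))  # 20
```

This is the same trick the interactive REPL uses: statements are executed for effect, while a trailing expression produces a result.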
def run_for_base_and_ref_values(
    payload,
    used_by_both=None,
    cache_ref=True,
    ref_only=False
):
    """
    Accepts a payload function and returns a modified payload function
    which runs the provided function twice, the second time using ref_*
    context values and setting ref_* versions of the original payload's
    result slots. If a certain non-ref_* value needs to be available to
    the reference payload other than the standard
    `potluck.context_utils.BASE_CONTEXT_SLOTS`, it must be provided in
    the "used_by_both" list.

    Note that when applying multiple payload augmentations, this one
    should be applied last.

    The default behavior caches the reference values it produces, under
    the assumption that the reference run only needs to happen when the
    cached reference values are older than the solution file or the
    specification module. If this assumption is incorrect, you should
    set `cache_ref` to False to actually run the reference payload
    every time.

    If you only care about the reference results (e.g., when compiling
    a snippet) you can set `ref_only` to True, and the initial run will
    be skipped.

    TODO: Shelf doesn't support multiple-concurrent access!!!
    TODO: THIS
    """
    used_by_both = used_by_both or []

    def double_payload(context):
        """
        Runs a payload twice, once normally and again against a context
        where all ref_* slots have been merged into their non-ref_*
        equivalents. Results from the second run are stored in ref_*
        versions of the slots they would normally occupy, alongside the
        original results. When possible, fetches cached results for the
        ref_ values instead of actually running the payload a second
        time.
        """
        # Get initial results
        if ref_only:
            full_result = {}
        else:
            full_result = payload(context)

        # Figure out our cache key
        taskid = context["task_info"]["id"]
        goal_id = context["goal_id"]
        nth = context["which_context"]
        # TODO: This cache key doesn't include enough info about the
        # context object, apparently...
        cache_key = taskid + ":" + goal_id + ":" + str(nth)
        ts_key = cache_key + "::ts"

        # Check the cache
        cache_file = context["task_info"]["reference_cache_file"]
        use_cached = True
        cached = None

        ignore_cache = context["task_info"]["ignore_cache"]
        # TODO: Fix caching!!!
        ignore_cache = True

        # Respect ignore_cache setting
        if not ignore_cache:
            with shelve.open(cache_file) as shelf:
                if ts_key not in shelf:
                    use_cached = False
                else:  # need to check timestamp
                    ts = shelf[ts_key]

                    # Get modification times for spec + solution
                    spec = context["task_info"]["specification"]
                    mtimes = []
                    for fn in [ spec.__file__ ] + [
                        os.path.join(spec.soln_path, f)
                        for f in spec.soln_files
                    ]:
                        mtimes.append(os.stat(fn).st_mtime)

                    # Units are seconds
                    changed_at = time_utils.time_from_timestamp(
                        max(mtimes)
                    )

                    # Convert cache timestamp to seconds and compare
                    cache_time = time_utils.time_from_timestring(ts)

                    # Use cache if it was produced *after* last change
                    if cache_time <= changed_at:
                        use_cached = False
                    # else leave it at default True

                # grab cached values
                if use_cached:
                    cached = shelf[cache_key]

        # Skip re-running the payload if we have a cached result
        if cached is not None:
            ref_result = cached
        else:
            # Create a context where each ref_* slot value is assigned
            # to the equivalent non-ref_* slot
            ref_context = {
                key: context[key]
                for key in context_utils.BASE_CONTEXT_SLOTS
            }
            for key in context:
                if key in used_by_both:
                    ref_context[key] = context[key]
                elif key.startswith("ref_"):
                    ref_context[key[4:]] = context[key]
                    # Retain original ref_ slots alongside collapsed slots
                    ref_context[key] = context[key]

            # Get results from collapsed context
            try:
                ref_result = payload(ref_context)
            except context_utils.MissingContextError as e:
                e.args = (
                    e.args[0] + " (in reference payload)",
                ) + e.args[1:]
                raise e

            # Make an entry in our cache
            if not ignore_cache:
                with shelve.open(cache_file) as shelf:
                    # Just cache new things added by ref payload
                    hollowed = {}
                    for key in ref_result:
                        if (
                            key not in context
                            or ref_result[key] != context[key]
                        ):
                            hollowed[key] = ref_result[key]
                    # If ref payload produces uncacheable results, we
                    # can't cache anything
                    try:
                        shelf[cache_key] = ref_result
                        shelf[ts_key] = time_utils.timestring()
                    except Exception:
                        logging.log(
                            "Payload produced uncacheable reference"
                            " value(s):"
                        )
                        logging.log(html_tools.string_traceback())

        # Assign collapsed context results into final result under
        # ref_* versions of their slots
        for slot in ref_result:
            full_result["ref_" + slot] = ref_result[slot]

        return full_result

    return double_payload
Accepts a payload function and returns a modified payload function which runs the provided function twice, the second time using ref_* context values and setting ref_* versions of the original payload's result slots. If a certain non-ref_* value needs to be available to the reference payload other than the standard `potluck.context_utils.BASE_CONTEXT_SLOTS`, it must be provided in the "used_by_both" list.
Note that when applying multiple payload augmentations, this one should be applied last.
The default behavior caches the reference values it produces, under the assumption that the reference run only needs to happen when the cached reference values are older than the solution file or the specification module. If this assumption is incorrect, you should set `cache_ref` to False to actually run the reference payload every time.
If you only care about the reference results (e.g., when compiling a snippet), you can set `ref_only` to True, and the initial run will be skipped.
TODO: Shelf doesn't support multiple-concurrent access!!! TODO: THIS
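The context-collapsing step at the heart of this augmentation can be isolated as a small sketch. This simplified stand-in omits the `BASE_CONTEXT_SLOTS` copy and the caching machinery; it just shows how each ref_* slot shadows its non-ref_* equivalent while whitelisted slots pass through:

```python
def collapse_ref_slots(context, used_by_both=()):
    """Build the reference context: ref_* slots are copied over their
    non-ref_* equivalents (and retained), whitelisted slots pass through,
    and everything else is dropped."""
    ref_context = {}
    for key, value in context.items():
        if key in used_by_both:
            ref_context[key] = value
        elif key.startswith("ref_"):
            ref_context[key[4:]] = value  # collapsed slot
            ref_context[key] = value      # original ref_ slot retained
    return ref_context

ctx = {
    "task_info": {},
    "module": "submitted",
    "ref_module": "solution",
    "shared": 1,
}
ref = collapse_ref_slots(ctx, used_by_both=("shared",))
print(ref)  # {'module': 'solution', 'ref_module': 'solution', 'shared': 1}
```

Running the same payload against this collapsed context is what produces the ref_* result slots that later comparison goals check against.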
def run_in_sandbox(payload):
    """
    Returns a modified payload function which runs the provided base
    payload, but first sets the current directory to the sandbox
    directory specified by the provided context's "sandbox" slot.
    Afterwards, it changes back to the original directory.

    TODO: More stringent sandboxing?
    """
    def sandboxed_payload(context):
        """
        A payload function which runs a base payload within a specific
        sandbox directory.
        """
        orig_cwd = os.getcwd()
        try:
            os.chdir(context_utils.extract(context, "sandbox"))
            result = payload(context)
        finally:
            os.chdir(orig_cwd)

        return result

    return sandboxed_payload
Returns a modified payload function which runs the provided base payload, but first sets the current directory to the sandbox directory specified by the provided context's "sandbox" slot. Afterwards, it changes back to the original directory.
TODO: More stringent sandboxing?
def with_setup(payload, setup):
    """
    Creates a modified payload which runs the given setup function
    (with the incoming context dictionary as an argument) right before
    running the base payload. The setup function's return value is used
    as the context for the base payload.

    Note that based on the augmentation order, function calls made
    during the setup WILL NOT be captured as part of a trace if
    tracing_function_calls is also used, but printed output during the
    setup WILL be available via capturing_printed_output if that is
    used.
    """
    def setup_payload(context):
        """
        Runs a base payload after running a setup function.
        """
        context = setup(context)
        if context is None:
            raise ValueError("Context setup function returned None!")
        return payload(context)

    return setup_payload
Creates a modified payload which runs the given setup function (with the incoming context dictionary as an argument) right before running the base payload. The setup function's return value is used as the context for the base payload.
Note that based on the augmentation order, function calls made during the setup WILL NOT be captured as part of a trace if tracing_function_calls is also used, but printed output during the setup WILL be available via capturing_printed_output if that is used.
def with_cleanup(payload, cleanup):
    """
    Creates a modified payload which runs the given cleanup function
    (with the original payload's result, which is a context dictionary,
    as an argument) right after running the base payload. The return
    value is the cleanup function's return value.

    Note that based on the augmentation order, function calls made
    during the cleanup WILL NOT be captured as part of a trace if
    tracing_function_calls is also used, but printed output during the
    cleanup WILL be available via capturing_printed_output if that is
    used.
    """
    def cleanup_payload(context):
        """
        Runs a base payload and then runs a cleanup function.
        """
        result = payload(context)
        result = cleanup(result)
        return result

    return cleanup_payload
Creates a modified payload which runs the given cleanup function (with the original payload's result, which is a context dictionary, as an argument) right after running the base payload. The return value is the cleanup function's return value.
Note that based on the augmentation order, function calls made during the cleanup WILL NOT be captured as part of a trace if tracing_function_calls is also used, but printed output during the cleanup WILL be available via capturing_printed_output if that is used.
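A minimal sketch of the cleanup side, again with an invented payload and cleanup function:

```python
# with_cleanup as defined in this module
def with_cleanup(payload, cleanup):
    def cleanup_payload(context):
        result = payload(context)
        result = cleanup(result)
        return result
    return cleanup_payload

def compute(context):
    return {"value": context["x"] + 1}

def annotate(result):
    # A cleanup function sees the payload's result dictionary and its
    # return value becomes the final result
    result["note"] = "cleaned up"
    return result

augmented = with_cleanup(compute, annotate)
```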
def capturing_printed_output(
    payload,
    capture_errors=False,
    capture_stderr=False
):
    """
    Creates a modified version of the given payload which establishes an
    "output" slot in addition to the base slots, holding a string
    consisting of all output that was printed during the execution of
    the original payload (specifically, anything that would have been
    written to stdout). During payload execution, the captured text is
    not actually printed as it would normally have been. If the payload
    itself already established an "output" slot, that value will be
    discarded in favor of the value established by this mix-in.

    If `capture_errors` is set to True, then any `Exception` generated
    by running the original payload will be captured as part of the
    string output instead of bubbling out to the rest of the system.
    However, context slots established by inner payload wrappers cannot
    be retained if there is an `Exception` seen by this wrapper, since
    any inner wrappers would not have gotten a chance to return in that
    case. If an error is captured, an "error" context slot will be set to
    the message for the exception that was caught.

    If `capture_stderr` is set to True, then things printed to stderr
    will be captured as well as those printed to stdout, and will be put
    in a separate "error_log" slot. In this case, if `capture_errors` is
    also True, the printed part of any traceback will be captured as part
    of the error_log, not the output.
    """
    def capturing_payload(context):
        """
        Runs a base payload while also capturing printed output into an
        "output" slot.
        """
        # Set up output capturing
        original_stdout = sys.stdout
        string_stdout = io.StringIO()
        sys.stdout = string_stdout

        if capture_stderr:
            original_stderr = sys.stderr
            string_stderr = io.StringIO()
            sys.stderr = string_stderr

        # Run the base payload
        try:
            result = payload(context)
        except Exception as e:
            if capture_errors:
                if capture_stderr:
                    string_stderr.write('\n' + html_tools.string_traceback())
                else:
                    string_stdout.write('\n' + html_tools.string_traceback())
                result = { "error": str(e) }
            else:
                raise
        finally:
            # Restore original stdout/stderr
            sys.stdout = original_stdout
            if capture_stderr:
                sys.stderr = original_stderr

        # Add our captured output to the "output" slot of the result
        result["output"] = string_stdout.getvalue()

        if capture_stderr:
            result["error_log"] = string_stderr.getvalue()

        return result

    return capturing_payload
Creates a modified version of the given payload which establishes an "output" slot in addition to the base slots, holding a string consisting of all output that was printed during the execution of the original payload (specifically, anything that would have been written to stdout). During payload execution, the captured text is not actually printed as it would normally have been. If the payload itself already established an "output" slot, that value will be discarded in favor of the value established by this mix-in.
If capture_errors is set to True, then any Exception generated by running the original payload will be captured as part of the string output instead of bubbling out to the rest of the system. However, context slots established by inner payload wrappers cannot be retained if there is an Exception seen by this wrapper, since any inner wrappers would not have gotten a chance to return in that case. If an error is captured, an "error" context slot will be set to the message for the exception that was caught.
If capture_stderr is set to True, then things printed to stderr will be captured as well as those printed to stdout, and will be put in a separate "error_log" slot. In this case, if capture_errors is also True, the printed part of any traceback will be captured as part of the error_log, not the output.
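The core stdout-swapping pattern can be sketched as follows (a simplified version without the error or stderr handling, using an invented payload):

```python
import io
import sys

def capturing_printed_output(payload):
    # Simplified sketch: capture stdout only, no error/stderr handling
    def capturing_payload(context):
        original_stdout = sys.stdout
        sys.stdout = string_stdout = io.StringIO()
        try:
            result = payload(context)
        finally:
            sys.stdout = original_stdout  # always restore stdout
        result["output"] = string_stdout.getvalue()
        return result
    return capturing_payload

def greet(context):
    print("Hello, " + context["name"])
    return {"value": None}

result = capturing_printed_output(greet)({"name": "Ada"})
```

Nothing appears on the real stdout during the call; the text lands in `result["output"]` instead.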
def with_fake_input(payload, inputs, extra_policy="error"):
    """
    Creates a modified payload function which runs the given payload but
    supplies a pre-determined sequence of strings whenever `input` is
    called instead of actually prompting for values from stdin. The
    prompts and input values that would have shown up are still printed,
    although a pair of zero-width word-joiner characters is added before
    and after the fake input value at each prompt in the printed output.

    The `inputs` and `extra_policy` arguments are passed to
    `create_mock_input` to create the fake input setup.

    The result will have "inputs" and "input_policy" context slots added
    that store the specific inputs used, and the extra input policy.
    """
    # Create mock input function and input reset function
    mock_input, reset_input = create_mock_input(inputs, extra_policy)

    def fake_input_payload(context):
        """
        Runs a base payload with a mocked input function that returns
        strings from a pre-determined sequence.
        """
        # Replace `input` with our mock version
        import builtins
        original_input = builtins.input
        reset_input()
        builtins.input = mock_input

        # TODO: Is this compatible with optimism's input-manipulation?
        # TODO: Make this work with optimism's stdin-replacement

        # Run the payload
        try:
            result = payload(context)
        finally:
            # Re-enable `input`
            builtins.input = original_input
            reset_input()

        # Add "inputs" and "input_policy" context slots to the result
        result["inputs"] = inputs
        result["input_policy"] = extra_policy

        return result

    return fake_input_payload
Creates a modified payload function which runs the given payload but supplies a pre-determined sequence of strings whenever input is called instead of actually prompting for values from stdin. The prompts and input values that would have shown up are still printed, although a pair of zero-width word-joiner characters is added before and after the fake input value at each prompt in the printed output.
The inputs and extra_policy arguments are passed to create_mock_input to create the fake input setup.
The result will have "inputs" and "input_policy" context slots added that store the specific inputs used, and the extra input policy.
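The builtins-swapping technique can be sketched like this (a simplified version with only the "error" extra policy, and without the prompt echoing and word-joiner wrapping the real function does):

```python
import builtins

def with_fake_input(payload, inputs):
    # Simplified sketch: "error" policy only, no prompt echoing
    def fake_input_payload(context):
        remaining = iter(inputs)
        def mock_input(prompt=""):
            try:
                return next(remaining)
            except StopIteration:
                raise EOFError  # inputs exhausted, as if stdin closed
        original_input = builtins.input
        builtins.input = mock_input
        try:
            result = payload(context)
        finally:
            builtins.input = original_input  # always restore input
        result["inputs"] = list(inputs)
        return result
    return fake_input_payload

def ask_name(context):
    return {"value": "Hi " + input("Name? ")}

result = with_fake_input(ask_name, ["Ada"])({})
```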
A regular expression which can be used to find fake input values in printed output from code that uses a mock input. The first group of each match will be a fake input value.
def strip_mock_input_values(output):
    """
    Given a printed output string produced by code using mocked inputs,
    returns the same string, with the specific input values stripped out.
    Actually strips any values found between paired word-joiner (U+2060)
    characters, as that's what mock input values are wrapped in.
    """
    return re.sub(FAKE_INPUT_PATTERN, "", output)
Given a printed output string produced by code using mocked inputs, returns the same string, with the specific input values stripped out. Actually strips any values found between paired word-joiner (U+2060) characters, as that's what mock input values are wrapped in.
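For example (the exact FAKE_INPUT_PATTERN value is defined elsewhere in this module; the pattern below is an assumed equivalent that matches word-joiner-wrapped values):

```python
import re

# Assumed equivalent of FAKE_INPUT_PATTERN: a value wrapped in paired
# word-joiner (U+2060) characters; group 1 is the fake input value
FAKE_INPUT_PATTERN = re.compile("\u2060\u2060([^\u2060]*)\u2060\u2060")

def strip_mock_input_values(output):
    return re.sub(FAKE_INPUT_PATTERN, "", output)

printed = "What's your name? \u2060\u2060Ada\u2060\u2060\nHi Ada\n"
stripped = strip_mock_input_values(printed)
# The echoed fake input disappears; the rest of the output is untouched
```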
def create_mock_input(inputs, extra_policy="error"):
    """
    Creates two functions: a stand-in for `input` that returns strings
    from the given "inputs" sequence, and a reset function that resets
    the first function to the beginning of its inputs list.

    The extra_policy specifies what happens if the inputs list runs out:

    - "loop" means that it will be repeated again, ad infinitum.
    - "hold" means that the last value will be returned for all
      subsequent input calls.
    - "error" means an `EOFError` will be raised as if stdin had been
      closed.

    "error" is the default policy.
    """

    input_index = 0

    def mock_input(prompt=""):
        """
        Function that retrieves the next input from the inputs list and
        behaves according to the extra_policy when inputs run out:

        - If extra_policy is "hold", the last input is returned
          repeatedly.
        - If extra_policy is "loop", the cycle of inputs repeats
          indefinitely.
        - If extra_policy is "error" (or any other value), an
          EOFError is raised when the inputs run out. This also happens
          if the inputs list is empty to begin with.

        This function prints the prompt and the input that it is about to
        return, so that they appear in printed output just as they would
        have if normal input() had been called.

        To enable identification of the input values, a pair of
        zero-width "word joiner" characters (U+2060) is printed directly
        before and directly after each input value. These should not
        normally be visible when the output is inspected by a human, but
        can be searched for (and may also influence word wrapping in some
        contexts).
        """
        nonlocal input_index
        print(prompt, end="")
        if input_index >= len(inputs):
            if extra_policy == "hold":
                if len(inputs) > 0:
                    result = inputs[-1]
                else:
                    raise EOFError
            elif extra_policy == "loop":
                if len(inputs) > 0:
                    input_index = 0
                    result = inputs[input_index]
                    input_index += 1  # advance past the value we looped to
                else:
                    raise EOFError
            else:
                raise EOFError
        else:
            result = inputs[input_index]
            input_index += 1

        print('\u2060\u2060' + result + '\u2060\u2060')
        return result

    def reset_input():
        """
        Resets the input list state, so that the next call to input()
        behaves as if it was the first call with respect to the mock
        input function defined above (see create_mock_input).
        """
        nonlocal input_index
        input_index = 0

    # Return our newly-minted mock and reset functions
    return mock_input, reset_input
Creates two functions: a stand-in for input that returns strings from the given "inputs" sequence, and a reset function that resets the first function to the beginning of its inputs list.
The extra_policy specifies what happens if the inputs list runs out:
- "loop" means that it will be repeated again, ad infinitum.
- "hold" means that the last value will be returned for all subsequent input calls.
- "error" means an EOFError will be raised as if stdin had been closed.
"error" is the default policy.
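The three extra-input policies can be demonstrated with a compact sketch (unlike the real create_mock_input, this version skips the prompt echoing and word-joiner wrapping):

```python
def create_mock_input(inputs, extra_policy="error"):
    # Compact sketch of the three extra-input policies
    input_index = 0

    def mock_input(prompt=""):
        nonlocal input_index
        if input_index >= len(inputs):
            if extra_policy == "hold" and inputs:
                return inputs[-1]    # keep returning the last value
            elif extra_policy == "loop" and inputs:
                input_index = 1      # loop back around
                return inputs[0]
            raise EOFError           # "error" policy, or empty inputs

        result = inputs[input_index]
        input_index += 1
        return result

    def reset_input():
        nonlocal input_index
        input_index = 0

    return mock_input, reset_input

hold_input, _ = create_mock_input(["a", "b"], "hold")
held = [hold_input() for _ in range(4)]    # ['a', 'b', 'b', 'b']

loop_input, _ = create_mock_input(["a", "b"], "loop")
looped = [loop_input() for _ in range(4)]  # ['a', 'b', 'a', 'b']
```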
def with_timeout(payload, time_limit=5):
    """
    Creates a modified payload which terminates itself with a
    `TimeoutError` if it takes longer than the specified time limit (in
    possibly-fractional seconds).

    Note that on systems where `signal.SIGALRM` is not available, we
    have no way of interrupting the original payload, and so only after
    it terminates will a `TimeoutError` be raised, making this function
    MUCH less useful.

    Note that the resulting payload function is NOT re-entrant: only one
    timer can be running at once, and calling the function again while
    it's already running re-starts the timer.
    """
    def timed_payload(context):
        """
        Runs a base payload with a timeout, raising a
        `potluck.timeout.TimeoutError` if the function takes too long.

        See `potluck.timeout` for (horrific) details.
        """
        return timeout.with_sigalrm_timeout(time_limit, payload, (context,))

    return timed_payload
Creates a modified payload which terminates itself with a TimeoutError if it takes longer than the specified time limit (in possibly-fractional seconds).
Note that on systems where signal.SIGALRM is not available, we have no way of interrupting the original payload, and so only after it terminates will a TimeoutError be raised, making this function MUCH less useful.
Note that the resulting payload function is NOT re-entrant: only one timer can be running at once, and calling the function again while it's already running re-starts the timer.
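The SIGALRM approach the real implementation delegates to (via potluck.timeout.with_sigalrm_timeout) can be sketched directly with the signal module; this sketch is Unix-only and uses the built-in TimeoutError rather than potluck's:

```python
import signal

def with_timeout(payload, time_limit=5):
    # Sketch of the SIGALRM technique (Unix only, not re-entrant)
    def timed_payload(context):
        def alarm_handler(signum, frame):
            raise TimeoutError(
                "payload exceeded {}s".format(time_limit)
            )
        old_handler = signal.signal(signal.SIGALRM, alarm_handler)
        signal.setitimer(signal.ITIMER_REAL, time_limit)
        try:
            return payload(context)
        finally:
            signal.setitimer(signal.ITIMER_REAL, 0)  # cancel the timer
            signal.signal(signal.SIGALRM, old_handler)
    return timed_payload

fast = with_timeout(lambda context: {"value": "done"}, time_limit=1)
```

A payload that finishes within the limit returns normally; one that runs over is interrupted mid-execution by the alarm.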
def tracing_function_calls(payload, trace_targets, state_function):
    """
    Augments a payload function such that calls to certain functions of
    interest during the payload's run are traced. This ends up creating
    a "trace" slot in the result context, which holds a trace object
    that consists of a list of trace entries.

    The `trace_targets` argument should be a sequence of strings
    identifying the names of functions to trace calls to. It may contain
    tuples, in which case calls to any function named in the tuple will
    be treated as calls to the first function in the tuple, which is
    useful for collapsing aliases like turtle.fd and turtle.forward.

    The `state_function` argument should be a one-argument function,
    which given a function name, captures some kind of state and returns
    a state object (typically a dictionary).

    Each trace entry in the resulting trace represents one function call
    in the outermost scope and is a dictionary with the following keys:

    - fname: The name of the function that was called.
    - args: A dictionary of arguments passed to the function, mapping
      argument names to their values. For calls to C functions (such as
      most built-in functions), arguments are not available, and this
      key will not be present.
    - result: The return value of the function. May be None if the
      function was terminated due to an exception, but there's no way
      to distinguish that from an intentional None return. For calls to
      C functions, this key will not be present.
    - pre_state: A state object resulting from calling the given
      state_function just before the traced function call starts, with
      the function name as its only argument. Calls made during the
      execution of the state function will not be traced.
    - post_state: The same kind of state object, but captured right
      before the return of the traced function.
    - during: A list of trace entries in the same format representing
      traced function calls which were initiated and returned before
      the end of the function call that this trace entry represents.

    Note that to inspect all function calls, the hierarchy must be
    traversed recursively to look at calls in "during" slots.

    Note that for *reasons*, functions named "setprofile" cannot be
    traced. Also note that since functions are identified by name,
    multiple functions with the same name occurring in different modules
    will be treated as the same function for tracing purposes, although
    this shouldn't normally matter.

    Note that in order to avoid tracing function calls made by payload
    augmentation, this augmentation should be applied before others.
    """

    # Per-function-name stacks of open function calls
    trace_stacks = {}

    # The trace result is a list of trace entries
    trace_result = []

    # The stack of trace destinations
    trace_destinations = [ trace_result ]

    # Create our tracing targets map
    targets_map = {}
    for entry in trace_targets:
        if isinstance(entry, tuple):
            first = entry[0]
            for name in entry:
                targets_map[name] = first
        else:
            targets_map[entry] = entry

    def tracer(frame, event, arg):
        """
        A profiling function which will be called for profiling events
        (see `sys.setprofile`). It logs calls to a select list of named
        functions.
        """
        nonlocal trace_stacks, trace_result
        if event in ("call", "return"):  # normal function-call or return
            fname = frame.f_code.co_name
        elif event in ("c_call", "c_return"):  # call/return to/from C code
            fname = arg.__name__
        else:
            # Don't record any other events
            return

        # Don't ever try to trace setprofile calls, since we'll see an
        # unreturned call when setprofile is used to turn off profiling.
        if fname == "setprofile":
            return

        if fname in targets_map:  # we're supposed to trace this one
            fname = targets_map[fname]  # normalize function name
            if "return" not in event:  # a call event
                # Create new info object for this call
                info = {
                    "fname": fname,
                    "pre_state": state_function(fname),
                    "during": []
                    # args, result, and post_state added elsewhere
                }

                # Grab arguments if we can:
                if not event.startswith("c_"):
                    info["args"] = copy.copy(frame.f_locals)

                # Push this info object onto the appropriate stack
                if fname not in trace_stacks:
                    trace_stacks[fname] = []
                trace_stacks[fname].append(info)

                # Push onto the trace destinations stack
                trace_destinations.append(info["during"])

            else:  # a return event
                try:
                    prev_info = trace_stacks.get(fname, []).pop()
                    trace_destinations.pop()
                except IndexError:  # no matching call?
                    prev_info = {
                        "fname": fname,
                        "pre_state": None,
                        "during": []
                    }

                # Capture result if we can
                if not event.startswith("c_"):
                    prev_info["result"] = arg

                # Capture post-call state
                prev_info["post_state"] = state_function(fname)

                # Record trace event into current destination
                trace_destinations[-1].append(prev_info)

    def traced_payload(context):
        """
        Runs a payload while tracing calls to certain functions,
        returning the context slots created by the original payload plus
        a "trace" slot holding a hierarchical trace of function calls.
        """
        nonlocal trace_stacks, trace_result, trace_destinations

        # Reset tracing state
        trace_stacks = {}
        trace_result = []
        trace_destinations = [ trace_result ]

        # Turn on profiling
        sys.setprofile(tracer)

        # Run our original payload
        result = payload(context)

        # Turn off tracing
        sys.setprofile(None)

        # add a "trace" slot to the result
        result["trace"] = trace_result

        # we're done
        return result

    return traced_payload
Augments a payload function such that calls to certain functions of interest during the payload's run are traced. This ends up creating a "trace" slot in the result context, which holds a trace object that consists of a list of trace entries.
The trace_targets argument should be a sequence of strings identifying the names of functions to trace calls to. It may contain tuples, in which case calls to any function named in the tuple will be treated as calls to the first function in the tuple, which is useful for collapsing aliases like turtle.fd and turtle.forward.
The state_function argument should be a one-argument function which, given a function name, captures some kind of state and returns a state object (typically a dictionary).
Each trace entry in the resulting trace represents one function call in the outermost scope and is a dictionary with the following keys:
- fname: The name of the function that was called.
- args: A dictionary of arguments passed to the function, mapping argument names to their values. For calls to C functions (such as most built-in functions), arguments are not available, and this key will not be present.
- result: The return value of the function. May be None if the function was terminated due to an exception, but there's no way to distinguish that from an intentional None return. For calls to C functions, this key will not be present.
- pre_state: A state object resulting from calling the given state_function just before the traced function call starts, with the function name as its only argument. Calls made during the execution of the state function will not be traced.
- post_state: The same kind of state object, but captured right before the return of the traced function.
- during: A list of trace entries in the same format representing traced function calls which were initiated and returned before the end of the function call that this trace entry represents.
Note that to inspect all function calls, the hierarchy must be traversed recursively to look at calls in "during" slots.
Note that for reasons, functions named "setprofile" cannot be traced. Also note that since functions are identified by name, multiple functions with the same name occurring in different modules will be treated as the same function for tracing purposes, although this shouldn't normally matter.
Note that in order to avoid tracing function calls made by payload augmentation, this augmentation should be applied before others.
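The core sys.setprofile mechanism can be sketched much more simply: the version below records only flat, top-level calls (with arguments) to one named function, ignoring nesting, pre/post states, and return values, but shows how the profiler hook observes payload execution:

```python
import copy
import sys

def tracing_calls_to(payload, target_name):
    # Much-simplified sketch of the profiling approach used above
    trace = []

    def tracer(frame, event, arg):
        # "call" events fire whenever a Python function is entered;
        # frame.f_locals holds its arguments at that moment
        if event == "call" and frame.f_code.co_name == target_name:
            trace.append({
                "fname": target_name,
                "args": copy.copy(frame.f_locals),
            })

    def traced_payload(context):
        sys.setprofile(tracer)
        try:
            result = payload(context)
        finally:
            sys.setprofile(None)  # always turn profiling back off
        result["trace"] = trace
        return result

    return traced_payload

def step(x):
    return x + 1

def run(context):
    return {"value": step(step(context["start"]))}

result = tracing_calls_to(run, "step")({"start": 0})
```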
def walk_trace(trace):
    """
    A generator which yields each entry from the given trace in
    depth-first order, which is also the order in which each traced
    function call frame was created. Each item yielded is a trace entry
    dictionary, as described in `tracing_function_calls`.
    """
    for entry in trace:
        yield entry
        yield from walk_trace(entry["during"])
A generator which yields each entry from the given trace in depth-first order, which is also the order in which each traced function call frame was created. Each item yielded is a trace entry dictionary, as described in tracing_function_calls.
def sampling_distribution_of_results(
    payload,
    slot_map={
        "value": "distribution",
        "ref_value": "ref_distribution"
    },
    trials=50000
):
    """
    Creates a modified payload function that calls the given base payload
    many times, and creates a distribution table of the results: for each
    of the keys in the slot_map, a distribution table will be
    built and stored in a context slot labeled with the corresponding
    value from the slot_map. By default, the "value" and
    "ref_value" keys are observed and their distributions are stored in
    the "distribution" and "ref_distribution" slots.

    Note: this augmentation has horrible interactions with most other
    augmentations, since either the other augmentations need to be
    applied each time a new sample is generated (horribly slow) or they
    will be applied to a payload which runs the base test many many times
    (often not what they're expecting). Accordingly, this augmentation is
    best used sparingly and with as few other augmentations as possible.

    Note that the distribution table built by this function maps unique
    results to the number of times those results were observed across
    all trials, so the results of the payload being augmented must be
    hashable for it to work.

    Note that the payload created by this augmentation does not generate
    any of the slots generated by the original payload.
    """
    def distribution_observer_payload(context):
        """
        Runs many trials of a base payload to determine the distribution
        of results. Stores that distribution under the 'distribution'
        context key as a dictionary with "trials" and "results" keys.
        The "trials" value is an integer number of trials performed, and
        the "results" value is a dictionary that maps distinct results
        observed to an integer number of times that result was observed.
        """
        result = {}

        distributions = {
            slot: {
                "trials": trials,
                "results": {}
            }
            for slot in slot_map
        }

        for _ in range(trials):
            rctx = payload(context)
            for slot in slot_map:
                outcome = rctx[slot]
                target_dist = distributions[slot]
                target_dist["results"][outcome] = (
                    target_dist["results"].get(outcome, 0) + 1
                )

        for slot in slot_map:
            result[slot_map[slot]] = distributions[slot]

        return result

    return distribution_observer_payload
Creates a modified payload function that calls the given base payload many times, and creates a distribution table of the results: for each of the keys in the slot_map, a distribution table will be built and stored in a context slot labeled with the corresponding value from the slot_map. By default, the "value" and "ref_value" keys are observed and their distributions are stored in the "distribution" and "ref_distribution" slots.
Note: this augmentation has horrible interactions with most other augmentations, since either the other augmentations need to be applied each time a new sample is generated (horribly slow) or they will be applied to a payload which runs the base test many many times (often not what they're expecting). Accordingly, this augmentation is best used sparingly and with as few other augmentations as possible.
Note that the distribution table built by this function maps unique results to the number of times those results were observed across all trials, so the results of the payload being augmented must be hashable for it to work.
Note that the payload created by this augmentation does not generate any of the slots generated by the original payload.
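A sketch restricted to a single observed slot, run with a small trial count against an invented random payload, shows the shape of the resulting distribution table:

```python
import random

def sampling_distribution_of_results(payload, slot_map, trials):
    # Same logic as above, without default arguments
    def distribution_observer_payload(context):
        distributions = {
            slot: {"trials": trials, "results": {}}
            for slot in slot_map
        }
        for _ in range(trials):
            rctx = payload(context)
            for slot in slot_map:
                outcome = rctx[slot]
                results = distributions[slot]["results"]
                results[outcome] = results.get(outcome, 0) + 1
        return {
            slot_map[slot]: distributions[slot]
            for slot in slot_map
        }
    return distribution_observer_payload

def flip(context):
    # A hypothetical nondeterministic payload
    return {"value": random.choice(("heads", "tails"))}

sampler = sampling_distribution_of_results(
    flip, {"value": "distribution"}, trials=200
)
dist = sampler({})["distribution"]
# dist looks like {"trials": 200, "results": {"heads": ..., "tails": ...}}
```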
def with_module_decorations(payload, decorations, ignore_missing=False):
    """
    Augments a payload such that before it gets run, certain values in
    the module that's in the "module" slot of the current context are
    replaced with decorated values: the results of running a decoration
    function on them. Then, after the payload is complete, the
    decorations are reversed and the original values are put back in
    place.

    The `decorations` argument should be a map from possibly-dotted
    attribute names within the target module to decoration functions,
    whose results (when given original attribute values as arguments)
    will be used to replace those values temporarily.

    If `ignore_missing` is set to True, then even if a specified
    decoration entry names an attribute which does not exist in the
    target module, an attribute with that name will be created; the
    associated decorator function will receive the special class
    `NoAttr` as its argument in that case.
    """
    def decorated_payload(context):
        """
        Runs a base payload but first pins various decorations in place,
        undoing the pins afterwards.
        """
        # Remember original values and pin new ones:
        orig = {}
        prefixes = {}

        target_module = context_utils.extract(context, "module")

        # Pin everything, remembering prefixes so we can delete exactly
        # the grafted-on structure if ignore_missing is true:
        for key in decorations:
            if ignore_missing:
                orig[key] = get_dot_attr(
                    target_module,
                    key,
                    NoAttr
                )
                prefixes[key] = dot_attr_prefix(target_module, key)
            else:
                orig[key] = get_dot_attr(target_module, key)

            decorated = decorations[key](orig[key])
            set_dot_attr(target_module, key, decorated)

        # Run the payload with pins in place:
        try:
            result = payload(context)
        finally:
            # Definitely clean up afterwards by unpinning stuff:
            for key in decorations:
                orig_val = orig[key]
                prefix = prefixes.get(key)
                if ignore_missing:
                    if orig_val is NoAttr:
                        if prefix == '':
                            delattr(target_module, key.split('.')[0])
                        else:
                            last_val = get_dot_attr(target_module, prefix)
                            rest_key = key[len(prefix) + 1:]
                            delattr(last_val, rest_key.split('.')[0])
                    else:
                        set_dot_attr(target_module, key, orig_val)
                else:
                    set_dot_attr(target_module, key, orig_val)

        # Now return our result
        return result

    return decorated_payload
Augments a payload such that before it gets run, certain values in the module that's in the "module" slot of the current context are replaced with decorated values: the results of running a decoration function on them. Then, after the payload is complete, the decorations are reversed and the original values are put back in place.
The decorations argument should be a map from possibly-dotted attribute names within the target module to decoration functions, whose results (when given original attribute values as arguments) will be used to replace those values temporarily.
If ignore_missing is set to True, then even if a specified decoration entry names an attribute which does not exist in the target module, an attribute with that name will be created; the associated decorator function will receive the special class NoAttr as its argument in that case.
class Missing:
    """
    Class to indicate missing-ness when None is a valid value.
    """
    pass
Class to indicate missing-ness when None is a valid value.
class Generic:
    """
    Class for creating missing parent objects in `set_dot_attr`.
    """
    pass
Class for creating missing parent objects in set_dot_attr.
class NoAttr:
    """
    Class to indicate that an attribute was not present when pinning
    something.
    """
    pass
Class to indicate that an attribute was not present when pinning something.
def get_dot_attr(obj, dot_attr, default=Missing):
    """
    Gets an attribute from an object, which may be a dotted attribute,
    in which case bits will be fetched in sequence. Returns the default
    if nothing is found at any step, or throws an AttributeError if no
    default is given (or if the default is explicitly set to Missing).
    """
    if '.' in dot_attr:
        bits = dot_attr.split('.')
        first = getattr(obj, bits[0], Missing)
        if first is Missing:
            if default is Missing:
                raise AttributeError(
                    "'{}' object has no attribute '{}'".format(
                        type(obj),
                        bits[0]
                    )
                )
            else:
                return default
        else:
            return get_dot_attr(first, '.'.join(bits[1:]), default)
    else:
        result = getattr(obj, dot_attr, Missing)
        if result is Missing:
            if default is Missing:
                raise AttributeError(
                    "'{}' object has no attribute '{}'".format(
                        type(obj),
                        dot_attr
                    )
                )
            else:
                return default
        else:
            return result
Gets an attribute from an object, which may be a dotted attribute, in which case bits will be fetched in sequence. Returns the default if nothing is found at any step, or throws an AttributeError if no default is given (or if the default is explicitly set to Missing).
def dot_attr_prefix(obj, dot_attr):
    """
    Returns the longest prefix of attribute values that are part of the
    given dotted attribute string which actually exists on the given
    object. Returns an empty string if even the first attribute in the
    chain does not exist. If the full attribute value exists, it is
    returned as-is.
    """
    if '.' in dot_attr:
        bits = dot_attr.split('.')
        first, rest = bits[0], bits[1:]
        if hasattr(obj, first):
            suffix = dot_attr_prefix(getattr(obj, first), '.'.join(rest))
            if suffix:
                return first + '.' + suffix
            else:
                return first
        else:
            return ""
    else:
        if hasattr(obj, dot_attr):
            return dot_attr
        else:
            return ""
Returns the longest prefix of attribute values that are part of the given dotted attribute string which actually exists on the given object. Returns an empty string if even the first attribute in the chain does not exist. If the full attribute value exists, it is returned as-is.
def set_dot_attr(obj, dot_attr, value):
    """
    Works like get_dot_attr, but sets an attribute instead of getting
    one. Creates instances of Generic if the target attribute lacks
    parents.
    """
    if '.' in dot_attr:
        bits = dot_attr.split('.')
        g = Generic()
        parent = getattr(obj, bits[0], g)
        if parent is g:
            setattr(obj, bits[0], parent)
        set_dot_attr(parent, '.'.join(bits[1:]), value)
    else:
        setattr(obj, dot_attr, value)
Works like get_dot_attr, but sets an attribute instead of getting one. Creates Generic instances for any missing intermediate attributes along the way.
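A standalone sketch of the same set-with-intermediates idea, using `types.SimpleNamespace` as a stand-in for the module's `Generic` class (the `set_dotted` name and the sample attribute path are hypothetical):

```python
import types

def set_dotted(obj, dotted, value):
    # Create intermediate namespace objects as needed while walking the
    # path, then set the final attribute on the innermost one.
    *parents, last = dotted.split('.')
    for name in parents:
        if not hasattr(obj, name):
            setattr(obj, name, types.SimpleNamespace())
        obj = getattr(obj, name)
    setattr(obj, last, value)

root = types.SimpleNamespace()
set_dotted(root, "config.depth.max", 5)
print(root.config.depth.max)  # 5
```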
```python
def warp_turtle(context):
    """
    Disables turtle tracing, and resets turtle state. Use as a setup
    function with `with_setup` and/or via
    `specifications.HasPayload.do_setup`. Note that you MUST also use
    `finalize_turtle` as a cleanup function, or else some elements may
    not actually get drawn.
    """
    turtle.reset()
    turtle.tracer(0, 0)
    return context
```
Disables turtle tracing, and resets turtle state. Use as a setup function with `with_setup` and/or via `specifications.HasPayload.do_setup`. Note that you MUST also use `finalize_turtle` as a cleanup function, or else some elements may not actually get drawn.
```python
def finalize_turtle(result):
    """
    Paired with `warp_turtle`, makes sure that everything gets drawn.
    Use as a cleanup function (see `with_cleanup` and
    `specifications.HasPayload.do_cleanup`).
    """
    turtle.update()
    return result
```
Paired with `warp_turtle`, makes sure that everything gets drawn. Use as a cleanup function (see `with_cleanup` and `specifications.HasPayload.do_cleanup`).
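The setup/cleanup pairing can be illustrated without a live turtle canvas. This sketch assumes simplified `with_setup`/`with_cleanup` wrappers (the real augmentations in this module may differ in signature) and uses stand-in functions in place of `turtle.reset()` and `turtle.update()`:

```python
def with_setup(payload, setup):
    # Hypothetical sketch: run `setup` on the context before handing
    # it to the base payload.
    def augmented(context):
        return payload(setup(context))
    return augmented

def with_cleanup(payload, cleanup):
    # Hypothetical sketch: run `cleanup` on the result dictionary the
    # base payload produced.
    def augmented(context):
        return cleanup(payload(context))
    return augmented

trace = []

def fake_warp(context):
    trace.append("setup")    # stands in for turtle.reset()/tracer(0, 0)
    return context

def fake_finalize(result):
    trace.append("cleanup")  # stands in for turtle.update()
    return result

def payload(context):
    trace.append("payload")
    return {"value": context.get("input", 0) * 2}

wrapped = with_cleanup(with_setup(payload, fake_warp), fake_finalize)
print(wrapped({"input": 3}))  # {'value': 6}
print(trace)                  # ['setup', 'payload', 'cleanup']
```

The ordering in `trace` is the point: setup runs before the payload, and cleanup runs after it but before the result is returned, which is exactly the contract `warp_turtle` and `finalize_turtle` rely on.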
```python
def capture_turtle_state(_):
    """
    This state-capture function logs the following pieces of global
    turtle state:

    - position: A 2-tuple of x/y coordinates.
    - heading: A floating point number in degrees.
    - pen_is_down: Boolean indicating pen state.
    - is_filling: Boolean indicating whether we're filling or not.
    - pen_size: Floating-point pen size.
    - pen_color: String indicating current pen color.
    - fill_color: String indicating current fill color.

    This state-capture function ignores its argument (which is the name
    of the function being called).
    """
    return {
        "position": turtle.position(),
        "heading": turtle.heading(),
        "pen_is_down": turtle.isdown(),
        "is_filling": turtle.filling(),
        "pen_size": turtle.pensize(),
        "pen_color": turtle.pencolor(),
        "fill_color": turtle.fillcolor()
    }
```
This state-capture function logs the following pieces of global turtle state:
- position: A 2-tuple of x/y coordinates.
- heading: A floating point number in degrees.
- pen_is_down: Boolean indicating pen state.
- is_filling: Boolean indicating whether we're filling or not.
- pen_size: Floating-point pen size.
- pen_color: String indicating current pen color.
- fill_color: String indicating current fill color.
This state-capture function ignores its argument (which is the name of the function being called).
```python
def capturing_turtle_drawings(payload, skip_reset=False, alt_text=None):
    """
    Creates a modified version of the given payload which establishes
    an "image" slot in addition to the base slots, holding a Pillow
    image object which captures everything drawn on the turtle canvas
    by the time the function ended. It creates an "image_alt" slot with
    the provided alt_text, or if none is provided, it copies the
    "output" slot value as the image alt, assuming that `turtleBeads`
    has been used to create a description of what was drawn.

    The function will reset the turtle state and turn off tracing
    before calling the payload function (see `warp_turtle`). It will
    also update the turtle canvas before capturing an image (see
    `finalize_turtle`). So you don't need to apply those as
    setup/cleanup functions yourself. If you want to disable the
    automatic setup/cleanup, set the skip_reset argument to True,
    although in that case tracing will still be disabled and one update
    will be performed at the end.

    In default application order, the turtle reset/setup from this
    function is applied before any setup functions set using
    `with_setup`, and the output image is captured after any cleanup
    functions set using `with_cleanup` have been run, so you could for
    example apply a setup function that moves the turtle to a
    non-default starting point to test the flexibility of student code.

    Note: you must have Pillow >=6.0.0 to use this augmentation, and
    you must also have Ghostscript installed (which is not available
    via PyPI, although most OSes should have a package manager via
    which Ghostscript can be installed)!
    """
    # Before we even build our payload, verify that PIL will be
    # available (we let any exception bubble out naturally).
    import PIL
    # Check for full Ghostscript support necessary to read EPS
    import PIL.EpsImagePlugin as p
    if not p.has_ghostscript():
        raise NotImplementedError(
            "In order to capture turtle drawings, you must install"
            " Ghostscript (which is not a Python package) manually."
        )

    def capturing_payload(context):
        """
        Resets turtle state, disables tracing, runs a base payload, and
        then captures what was drawn on the turtle canvas as a Pillow
        image.
        """
        # Reset turtle & disable tracing
        if skip_reset:
            turtle.tracer(0, 0)
        else:
            context = warp_turtle(context)

        # Run the base payload
        result = payload(context)

        # Ensure all drawing is up-to-date
        # Note: this if/else is future-proofing in case finalize_turtle
        # needs to do more in the future.
        if skip_reset:
            turtle.update()
        else:
            result = finalize_turtle(result)

        # Capture what's on the turtle canvas as a Pillow image
        canvas = turtle.getscreen().getcanvas()

        # Capture postscript commands to recreate the canvas
        ps = canvas.postscript()

        # Wrap as if it were a file and use Ghostscript to turn the EPS
        # into a PIL image
        bio = io.BytesIO(ps.encode(encoding="utf-8"))
        captured = PIL.Image.open(bio, formats=["EPS"])

        # Convert to RGB mode if it's not in that mode already
        if captured.mode != "RGB":
            captured = captured.convert("RGB")

        # Add our captured image to the "image" slot of the result
        result["image"] = captured

        # Add alt text
        if alt_text is not None:
            result["image_alt"] = alt_text
        else:
            result["image_alt"] = result.get(
                "output",
                "no alt text available"
            )

        return result

    return capturing_payload
```
Creates a modified version of the given payload which establishes an "image" slot in addition to the base slots, holding a Pillow image object which captures everything drawn on the turtle canvas by the time the function ended. It creates an "image_alt" slot with the provided alt_text, or if none is provided, it copies the "output" slot value as the image alt, assuming that `turtleBeads` has been used to create a description of what was drawn.

The function will reset the turtle state and turn off tracing before calling the payload function (see `warp_turtle`). It will also update the turtle canvas before capturing an image (see `finalize_turtle`), so you don't need to apply those as setup/cleanup functions yourself. If you want to disable the automatic setup/cleanup, set the skip_reset argument to True, although in that case tracing will still be disabled and one update will be performed at the end.

In default application order, the turtle reset/setup from this function is applied before any setup functions set using `with_setup`, and the output image is captured after any cleanup functions set using `with_cleanup` have been run, so you could for example apply a setup function that moves the turtle to a non-default starting point to test the flexibility of student code.

Note: you must have Pillow >=6.0.0 to use this augmentation, and you must also have Ghostscript installed (which is not available via PyPI, although most OSes should have a package manager via which Ghostscript can be installed)!
```python
def disable_track_actions():
    """
    Disables the `playTrack` and `saveTrack` `wavesynth` functions,
    turning them into functions which accept the same arguments and
    simply instantly return None. This helps ensure that students'
    testing calls to `saveTrack` or `playTrack` don't eat up evaluation
    time. Saves the original functions in the `_PLAY_WAVESYNTH_TRACK`
    and `_SAVE_WAVESYNTH_TRACK` global variables.

    Only saves the original functions the first time it's called, so
    that `reenable_track_actions` will work even if
    `disable_track_actions` is called multiple times.

    Note that you may want to use this function with
    `specifications.add_module_prep` to ensure that submitted code
    doesn't try to call `playTrack` or `saveTrack` during import and
    waste evaluation time.
    """
    global _PLAY_WAVESYNTH_TRACK, _SAVE_WAVESYNTH_TRACK
    import wavesynth
    if _PLAY_WAVESYNTH_TRACK is None:
        _PLAY_WAVESYNTH_TRACK = wavesynth.playTrack
        _SAVE_WAVESYNTH_TRACK = wavesynth.saveTrack
    wavesynth.playTrack = lambda wait=None: None
    wavesynth.saveTrack = lambda filename: None
```
Disables the `playTrack` and `saveTrack` `wavesynth` functions, turning them into functions which accept the same arguments and simply instantly return None. This helps ensure that students' testing calls to `saveTrack` or `playTrack` don't eat up evaluation time. Saves the original functions in the `_PLAY_WAVESYNTH_TRACK` and `_SAVE_WAVESYNTH_TRACK` global variables.

Only saves the original functions the first time it's called, so that `reenable_track_actions` will work even if `disable_track_actions` is called multiple times.

Note that you may want to use this function with `specifications.add_module_prep` to ensure that submitted code doesn't try to call `playTrack` or `saveTrack` during import and waste evaluation time.
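The save-once/restore pattern behind these two functions can be sketched against a stand-in module, so the real `wavesynth` package isn't required (`fake_wavesynth`, `disable`, and `reenable` here are illustrative names):

```python
import types

# Stand-in for the wavesynth module, holding one patchable function.
fake_wavesynth = types.ModuleType("fake_wavesynth")
fake_wavesynth.playTrack = lambda wait=None: "played"

_SAVED_PLAY = None

def disable(mod):
    # Save the original only on the first call, so calling disable()
    # twice in a row doesn't overwrite the saved original with the stub.
    global _SAVED_PLAY
    if _SAVED_PLAY is None:
        _SAVED_PLAY = mod.playTrack
    mod.playTrack = lambda wait=None: None

def reenable(mod):
    # Restore the saved original (if any) and clear the save slot.
    global _SAVED_PLAY
    if _SAVED_PLAY is not None:
        mod.playTrack = _SAVED_PLAY
        _SAVED_PLAY = None

disable(fake_wavesynth)
disable(fake_wavesynth)  # second disable must not clobber the original
print(fake_wavesynth.playTrack())  # None
reenable(fake_wavesynth)
print(fake_wavesynth.playTrack())  # played
```

The `if _SAVED_PLAY is None` guard is what makes repeated disables safe: without it, the second call would "save" the no-op stub and the original would be lost.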
```python
def reenable_track_actions():
    """
    Restores the `saveTrack` and `playTrack` functions after
    `disable_track_actions` has disabled them.
    """
    global _PLAY_WAVESYNTH_TRACK, _SAVE_WAVESYNTH_TRACK
    import wavesynth
    if _PLAY_WAVESYNTH_TRACK is not None:
        wavesynth.playTrack = _PLAY_WAVESYNTH_TRACK
        wavesynth.saveTrack = _SAVE_WAVESYNTH_TRACK
        _PLAY_WAVESYNTH_TRACK = None
        _SAVE_WAVESYNTH_TRACK = None
```
Restores the `saveTrack` and `playTrack` functions after `disable_track_actions` has disabled them.
```python
def ensure_or_stub_simpleaudio():
    """
    Tries to import the `simpleaudio` module, and if that's not
    possible, creates a stub module named "simpleaudio" which raises an
    attribute error on any access attempt. The stub module will be
    inserted in `sys.modules` as if it were `simpleaudio`.

    Note that you may want to set this up as a prep function using
    `specifications.add_module_prep` to avoid crashing if submitted
    code tries to import `simpleaudio` (although it will still crash if
    student code tries to use anything from `simpleaudio`).
    """
    # We also try to import simpleaudio, but set up a dummy module in
    # its place if it's not available, since we don't need or want to
    # play the sounds for grading purposes.
    try:
        import simpleaudio  # noqa F401
    except Exception:
        def missing(name):
            """
            Fake getattr to raise a reasonable-seeming error if someone
            tries to use our fake simpleaudio.
            """
            raise AttributeError(
                "During grading, simpleaudio is not accessible. We have"
                " disabled playTrack and saveTrack for testing purposes"
                " anyway, and your code should not need to use"
                " simpleaudio directly either."
            )
        fake_simpleaudio = imp.new_module("simpleaudio")
        fake_simpleaudio.__getattr__ = missing
        sys.modules["simpleaudio"] = fake_simpleaudio
```
Tries to import the `simpleaudio` module, and if that's not possible, creates a stub module named "simpleaudio" which raises an attribute error on any access attempt. The stub module will be inserted in `sys.modules` as if it were `simpleaudio`.

Note that you may want to set this up as a prep function using `specifications.add_module_prep` to avoid crashing if submitted code tries to import `simpleaudio` (although it will still crash if student code tries to use anything from `simpleaudio`).
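The stub-module trick relies on PEP 562 module-level `__getattr__` (Python 3.7+). Here's a minimal self-contained sketch using `types.ModuleType` and a hypothetical module name rather than `simpleaudio`:

```python
import sys
import types

def stub_module(name, message):
    # Create an empty module whose attribute lookups always raise, then
    # register it so `import <name>` succeeds but any use of it fails.
    mod = types.ModuleType(name)

    def missing(attr):
        raise AttributeError(message)

    mod.__getattr__ = missing  # PEP 562: module-level __getattr__
    sys.modules[name] = mod
    return mod

# "fake_audio_backend" is a hypothetical module name for illustration.
stub_module("fake_audio_backend", "not available during grading")

import fake_audio_backend  # succeeds because of the stub

try:
    fake_audio_backend.play_buffer
except AttributeError as err:
    print(err)  # not available during grading
```

Because the stub sits in `sys.modules`, later `import` statements in submitted code find it immediately and never touch the real (absent) package.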
```python
def capturing_wavesynth_audio(payload, just_capture=None, label=None):
    """
    Creates a modified version of the given payload which establishes
    "notes" and "audio" slots in addition to the base slots. "notes"
    holds the result of `wavesynth.trackDescription` (a list of
    strings) while "audio" holds a dictionary with the following keys:

    - "mimetype": The MIME type for the captured data.
    - "data": The captured binary data, as a bytes object.
    - "label": A text label for the audio, if a 'label' value is
      provided; not present otherwise.

    The data captured is the WAV format audio that would be saved by
    the wavesynth module's `saveTrack` function, which in particular
    means it only captures whatever is in the "current track." The
    `resetTracks` function is called before the payload is executed,
    and again afterwards to clean things up.

    If the `wavesynth` module is not installed, a `ModuleNotFoundError`
    will be raised.
    """
    # Before we even build our payload, verify that wavesynth will be
    # available (we let any exception bubble out naturally).
    import wavesynth

    # We do this here just in case student code attempts to use
    # simpleaudio directly, since installing simpleaudio for evaluation
    # purposes shouldn't be necessary.
    ensure_or_stub_simpleaudio()

    def capturing_payload(context):
        """
        Resets all tracks state, runs a base payload, and then captures
        what was put into the current track as both a list of note
        descriptions and as a dictionary indicating a MIME type, raw
        binary data, and maybe a label.
        """
        # Reset all tracks
        wavesynth.resetTracks()

        # Disable playTrack and saveTrack
        disable_track_actions()

        # Run the base payload
        try:
            result = payload(context)
        finally:
            reenable_track_actions()

        # Capture the descriptions of the notes in the current track
        if just_capture in (None, "notes"):
            result["notes"] = wavesynth.trackDescription()

        # Capture what's in the current track as raw WAV bytes
        if just_capture in (None, "audio"):
            bio = io.BytesIO()
            wavesynth.saveTrack(bio)
            data = bio.getvalue()

            # Add our captured audio to the "audio" slot of the result
            result["audio"] = {
                "mimetype": "audio/wav",
                "data": data,
            }

            # Add a label
            if label is not None:
                result["audio"]["label"] = label

        # Reset all tracks (again)
        wavesynth.resetTracks()

        return result

    return capturing_payload
```
Creates a modified version of the given payload which establishes "notes" and "audio" slots in addition to the base slots. "notes" holds the result of `wavesynth.trackDescription` (a list of strings) while "audio" holds a dictionary with the following keys:

- "mimetype": The MIME type for the captured data.
- "data": The captured binary data, as a bytes object.
- "label": A text label for the audio, if a 'label' value is provided; not present otherwise.

The data captured is the WAV format audio that would be saved by the wavesynth module's `saveTrack` function, which in particular means it only captures whatever is in the "current track." The `resetTracks` function is called before the payload is executed, and again afterwards to clean things up.

If the `wavesynth` module is not installed, a `ModuleNotFoundError` will be raised.
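The in-memory capture works because WAV writers accept any file-like object, not just a filename. A dependency-free sketch of the same idea, using the standard-library `wave` module in place of `saveTrack`:

```python
import io
import struct
import wave

# Write a tiny mono 16-bit WAV into an in-memory buffer, the same way
# the harness hands a BytesIO to saveTrack instead of a real file.
bio = io.BytesIO()
with wave.open(bio, 'wb') as w:
    w.setnchannels(1)     # mono
    w.setsampwidth(2)     # 16-bit samples
    w.setframerate(8000)  # 8 kHz
    w.writeframes(struct.pack('<4h', 0, 1000, 0, -1000))

data = bio.getvalue()
print(data[:4])  # b'RIFF' -- the start of a valid WAV header
```

The resulting `data` bytes could be stored directly in an "audio" slot dictionary alongside a `"mimetype": "audio/wav"` entry.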
```python
def report_argument_modifications(target, *args, **kwargs):
    """
    This function works as a test harness but doesn't capture the value
    or output of the function being tested. Instead, it generates a
    text report on whether each mutable argument to the function was
    modified or not after the function is finished. It only checks
    arguments which are lists or dictionaries at the top level, so its
    definition of modifiable is rather narrow.

    The report uses argument positions when the test case is given
    positional arguments and argument names when it's given keyword
    arguments.

    (Note: the last two paragraphs of this docstring are picked up
    automatically as rubric values for tests using this harness. fname
    will be substituted in, which is why it appears in curly braces
    below.)

    Description:

    <code>{fname}</code> must only modify arguments it is supposed to
    modify.

    We will call <code>{fname}</code> and check to make sure that the
    values provided as arguments are not changed by the function,
    except where such changes are explicitly required. Note that only
    mutable values, like dictionaries or lists, may be modified by a
    function, so this check is not applied to any string or number
    arguments.
    """
    # Identify mutable arguments
    mposargs = [
        i
        for i in range(len(args))
        if isinstance(args[i], (list, dict))
    ]
    mkwargs = [k for k in kwargs if isinstance(kwargs[k], (list, dict))]
    if target.__kwdefaults__ is not None:
        mkwdefaults = [
            k
            for k in target.__kwdefaults__
            if k not in kwargs
            and isinstance(target.__kwdefaults__[k], (list, dict))
        ]
    else:
        mkwdefaults = []
    # This code could be used to get argument names for positional
    # arguments, but we actually don't want them.
    #nargs = target.__code__.co_argcount + target.__code__.co_kwonlyargcount
    #margnames = [target.__code__.co_varnames[:nargs][i] for i in mposargs]
    #mposnames = margnames[:len(mposargs)]
    mposvals = [copy.deepcopy(args[i]) for i in mposargs]
    mkwvals = [copy.deepcopy(kwargs[k]) for k in mkwargs]
    mkwdefvals = {
        k: copy.deepcopy(target.__kwdefaults__[k])
        for k in mkwdefaults
    }

    # Call the target function
    _ = target(*args, **kwargs)

    # Report on which arguments were modified
    result = ""

    # Changes in positional argument values
    for argindex, orig in zip(mposargs, mposvals):
        final = args[argindex]
        result += "Your code {} the value of the {} argument.\n".format(
            "modified" if orig != final else "did not modify",
            phrasing.ordinal(argindex)
        )

    # Changes in keyword argument values
    for name, orig in zip(mkwargs, mkwvals):
        final = kwargs[name]
        result += "Your code {} the value of the '{}' argument.\n".format(
            "modified" if orig != final else "did not modify",
            name
        )

    # Changes in values of unsupplied keyword arguments (i.e., changes
    # to defaults, which if unintentional is usually bad!)
    for name, orig in mkwdefvals.items():
        final = target.__kwdefaults__[name]
        result += "Your code {} the value of the '{}' argument.\n".format(
            "modified" if orig != final else "did not modify",
            name
        )

    # The report by default will be compared against an equivalent
    # report from the solution function, so that's how we figure out
    # which arguments *should* be modified or not.
    return result
```
This function works as a test harness but doesn't capture the value or output of the function being tested. Instead, it generates a text report on whether each mutable argument to the function was modified or not after the function is finished. It only checks arguments which are lists or dictionaries at the top level, so its definition of modifiable is rather narrow.
The report uses argument positions when the test case is given positional arguments and argument names when it's given keyword arguments.
(Note: the last two paragraphs of this docstring are picked up automatically as rubric values for tests using this harness. fname will be substituted in, which is why it appears in curly braces below.)
Description:
`{fname}` must only modify arguments it is supposed to modify.

We will call `{fname}` and check to make sure that the values provided as arguments are not changed by the function, except where such changes are explicitly required. Note that only mutable values, like dictionaries or lists, may be modified by a function, so this check is not applied to any string or number arguments.
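The snapshot-and-compare technique at the heart of this harness can be sketched briefly (the `modifies_arguments` helper and sample functions here are illustrative, not part of this module):

```python
import copy

def modifies_arguments(func, *args):
    # Deep-copy each argument before the call, then compare the
    # snapshots to the (possibly mutated) arguments afterwards.
    snapshots = [copy.deepcopy(a) for a in args]
    func(*args)
    return [before != after for before, after in zip(snapshots, args)]

def sorts_in_place(items):
    items.sort()          # mutates its argument

def sorts_a_copy(items):
    return sorted(items)  # leaves its argument alone

print(modifies_arguments(sorts_in_place, [3, 1, 2]))  # [True]
print(modifies_arguments(sorts_a_copy, [3, 1, 2]))    # [False]
```

The deep copy matters: a shallow copy would share nested lists and dictionaries with the original argument, so mutations inside nested structures would go undetected.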
```python
def returns_a_new_value(target, *args, **kwargs):
    """
    Checks whether or not the target function returns a value which is
    new (i.e., not the same object as one of its arguments). Uses the
    'is' operator to check for same-object identity, so it will catch
    cases in which an object is modified and then returned. Returns a
    string indicating whether or not a newly-constructed value is
    returned.

    Note: won't catch cases where the result is a structure which
    *includes* one of the arguments. And does not check whether the
    result is equivalent to one of the arguments, just whether it's
    actually the same object or not.

    (Note: the last two paragraphs of this docstring are picked up
    automatically as rubric values for tests using this harness. fname
    will be substituted in, which is why it appears in curly braces
    below. This harness can also be used to ensure that a function
    doesn't return a new value, in which case an alternate description
    should be used.)

    Description:

    <code>{fname}</code> must return a new value, rather than returning
    one of its arguments.

    We will call <code>{fname}</code> and check to make sure that the
    value it returns is a new value, rather than one of the arguments
    it was given (modified or not).
    """
    # Call the target function
    fresult = target(*args, **kwargs)

    # Check the result against each of the arguments
    nargs = target.__code__.co_argcount + target.__code__.co_kwonlyargcount
    for argindex, argname in enumerate(target.__code__.co_varnames[:nargs]):
        if argindex < len(args):
            # a positional argument
            argval = args[argindex]
            argref = phrasing.ordinal(argindex)
        else:
            # a keyword argument (possibly defaulted via omission)
            argval = kwargs.get(argname, target.__kwdefaults__[argname])
            argref = repr(argname)

        if fresult is argval:
            return (
                "Returned the {} argument (possibly with modifications)."
            ).format(argref)

    # Since we didn't return in the loop above, there's no match
    return "Returned a new value."
```
Checks whether or not the target function returns a value which is new (i.e., not the same object as one of its arguments). Uses the 'is' operator to check for same-object identity, so it will catch cases in which an object is modified and then returned. Returns a string indicating whether or not a newly-constructed value is returned.
Note: won't catch cases where the result is a structure which includes one of the arguments. And does not check whether the result is equivalent to one of the arguments, just whether it's actually the same object or not.
(Note: the last two paragraphs of this docstring are picked up automatically as rubric values for tests using this harness. fname will be substituted in, which is why it appears in curly braces below. This harness can also be used to ensure that a function doesn't return a new value, in which case an alternate description should be used.)
Description:
`{fname}` must return a new value, rather than returning one of its arguments.

We will call `{fname}` and check to make sure that the value it returns is a new value, rather than one of the arguments it was given (modified or not).
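A stripped-down sketch of the identity check (names are illustrative):

```python
def returns_new_value(func, *args):
    # `is` checks object identity, so a returned argument is flagged
    # even if it was modified before being returned.
    result = func(*args)
    return not any(result is arg for arg in args)

def doubled_copy(items):
    return items + items  # builds and returns a brand-new list

def append_and_return(items):
    items.append(0)
    return items          # returns the same (mutated) object

print(returns_new_value(doubled_copy, [1, 2]))       # True
print(returns_new_value(append_and_return, [1, 2]))  # False
```

Using `==` instead of `is` here would be wrong in both directions: an unmodified returned argument would compare equal to itself (missed), and a new-but-equivalent value would also compare equal (falsely flagged).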
```python
def file_contents_setter(filename, contents):
    """
    Returns a setup function (use with `with_setup`) which replaces the
    contents of the given file with the given contents. Be careful,
    because this will happily overwrite any file. If the desired
    contents is a bytes object, the file will be written in binary mode
    to contain exactly those bytes; otherwise contents should be a
    string.
    """
    def setup_file_contents(context):
        """
        Returns the provided context as-is, but before doing so, writes
        data to a specific file to set it up for the coming test.
        """
        if isinstance(contents, bytes):
            with open(filename, 'wb') as fout:
                fout.write(contents)
        else:
            with open(filename, 'w') as fout:
                fout.write(contents)
        return context

    return setup_file_contents
```
Returns a setup function (use with `with_setup`) which replaces the contents of the given file with the given contents. Be careful, because this will happily overwrite any file. If the desired contents is a bytes object, the file will be written in binary mode to contain exactly those bytes; otherwise contents should be a string.
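The bytes-vs-str mode dispatch can be sketched standalone (the `write_contents` helper and the temp-file path are hypothetical stand-ins):

```python
import os
import tempfile

def write_contents(filename, contents):
    # Choose binary or text mode based on the type of `contents`.
    mode = 'wb' if isinstance(contents, bytes) else 'w'
    with open(filename, mode) as fout:
        fout.write(contents)

path = os.path.join(tempfile.gettempdir(), "harness_demo.txt")

write_contents(path, "hello")
with open(path) as fin:
    print(fin.read())   # hello

write_contents(path, b"\x00\x01")
with open(path, 'rb') as fin:
    print(fin.read())   # b'\x00\x01'
```

Binary mode is essential for the bytes case: text mode would reject bytes outright, and even if decoded, newline translation could alter the data on some platforms.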
```python
def capturing_file_contents(payload, filename, binary=False):
    """
    Captures the entire contents of the given filename as a string (or
    a bytes object if binary is set to True), and stores it in the
    "output_file_contents" context slot. Also stores the name of the
    file that was read in the "output_filename" slot.
    """
    def capturing_payload(context):
        """
        Runs a base payload and then reads the contents of a specific
        file, adding that data as an "output_file_contents" context
        slot and also adding an "output_filename" slot holding the
        filename that was read from.
        """
        # Run base payload
        result = payload(context)

        # Record filename in result
        result["output_filename"] = filename

        # Decide on open flags
        if binary:
            flags = 'rb'
        else:
            flags = 'r'

        with open(filename, flags) as fin:
            file_contents = fin.read()

        # Add file contents
        result["output_file_contents"] = file_contents

        return result

    return capturing_payload
```
Captures the entire contents of the given filename as a string (or a bytes object if binary is set to True), and stores it in the "output_file_contents" context slot. Also stores the name of the file that was read in the "output_filename" slot.