Description
@pauldhein and I were discussing this a while ago - it occurred to us that since the output of the program analysis pipeline is a pickled Python object, there is no reason, in principle, why the lambda functions couldn't be pickled alongside the rest of the output. For example, the following function in `PETPT_lambdas.py`:
```python
def PETPT__lambda__TD_0(TMAX, TMIN):
    TD = ((0.6*TMAX)+(0.4*TMIN))
    return TD
```
could be constructed as follows:

```python
PETPT__lambda__TD_0 = eval("lambda TMAX, TMIN: ((0.6*TMAX)+(0.4*TMIN))")
```
Here, the string argument to `eval` could be built up from the XML AST output of OFP, in the same way that the second line of each existing lambda function in the `lambdas.py` files is currently constructed.
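As a minimal sketch of the idea (the argument list and expression string below are hard-coded stand-ins for whatever the OFP parsing step would actually produce):

```python
# Hypothetical parsed pieces; in the real pipeline these would be
# extracted from the XML AST output of OFP.
args = ["TMAX", "TMIN"]
expr = "((0.6*TMAX)+(0.4*TMIN))"

# Assemble the lambda source string and evaluate it into a function object.
lambda_src = f"lambda {', '.join(args)}: {expr}"
PETPT__lambda__TD_0 = eval(lambda_src)

print(PETPT__lambda__TD_0(30.0, 20.0))  # 0.6*30 + 0.4*20 = 26.0
```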
Alternatively (and this seems to me to be the right way), one could take advantage of type annotations and use the more powerful `def` syntax for declaring functions -

```python
exec("def PETPT__lambda__TD_0(TMAX: float, TMIN: float) -> float: return ((0.6*TMAX)+(0.4*TMIN))")
```

(assuming we can get these types - can we?) - and later the `PETPT__lambda__TD_0` object can be used as a value in the `dict` object produced by `genPGM.py`.
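A sketch of how this could fit together - the surrounding dict layout here is made up for illustration, since the actual structure produced by `genPGM.py` may differ:

```python
src = (
    "def PETPT__lambda__TD_0(TMAX: float, TMIN: float) -> float:\n"
    "    return ((0.6*TMAX)+(0.4*TMIN))"
)

# exec into an explicit namespace so we can retrieve the function object
# without polluting (or depending on) the module's global namespace.
namespace = {}
exec(src, namespace)
PETPT__lambda__TD_0 = namespace["PETPT__lambda__TD_0"]

# The function object can then be stored as a value in a dict,
# e.g. a fragment of the GrFN structure:
grfn_fragment = {"functions": {"PETPT__lambda__TD_0": PETPT__lambda__TD_0}}
print(grfn_fragment["functions"]["PETPT__lambda__TD_0"](30.0, 20.0))  # 26.0
```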
Since functions are first-class objects in Python, you can also set attributes on them - perhaps this would make it easier to keep track of things like the function type (`assign`/`lambda`/`condition`, etc.), the reference, and so on:
```python
PETPT__lambda__TD_0.fn_type = "lambda"
PETPT__lambda__TD_0.reference = 9
PETPT__lambda__TD_0.target = "TD"
```
And then if someone wants to serialize the GrFN object to a JSON file, we could define the following function:
```python
import inspect

def to_json_serialized_dict(function):
    return {
        "name": function.__name__,
        "type": function.fn_type,
        "target": function.target,
        # inspect.signature(...) objects aren't JSON-serializable, so pull
        # out just the parameter names; more massaging may be needed here
        # to produce the final JSON-serializable dict.
        "sources": list(inspect.signature(function).parameters),
    }
```
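For instance, re-using the example function from above (a sketch only - it reduces `sources` to the bare parameter names, and the real GrFN serialization would presumably carry more information):

```python
import inspect
import json

def PETPT__lambda__TD_0(TMAX: float, TMIN: float) -> float:
    return ((0.6*TMAX)+(0.4*TMIN))

# Metadata attached as function attributes, as proposed above.
PETPT__lambda__TD_0.fn_type = "lambda"
PETPT__lambda__TD_0.reference = 9
PETPT__lambda__TD_0.target = "TD"

def to_json_serialized_dict(function):
    return {
        "name": function.__name__,
        "type": function.fn_type,
        "target": function.target,
        "sources": list(inspect.signature(function).parameters),
    }

# The resulting dict is directly JSON-serializable.
print(json.dumps(to_json_serialized_dict(PETPT__lambda__TD_0), indent=2))
```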
Not super urgent, but I do think it might be an investment worth making to simplify things in the long run...