Describe the bug
I wanted to check whether there's a reason that `trip_scheduling_choice` was never wired up with the explicit chunking mechanism when it was added (happy for this not to be called a bug if it was deliberate!). In the DTP model I've been looking at, this step is now the memory peak (after explicit chunking has been configured for the other steps).
Happy to split out a PR, since I think this is trivial to wire up, but thought I'd check first whether there's a good reason not to. I can see that an explicit chunk size is also not fed through in `trip_purpose`, `initialize_tvpb`, `run_tour_scheduling_probabilistic`, and `simple_simulate`; maybe there was just a pragmatic choice about which steps were likely to be memory bottlenecks? Though I think these all would have been handled under the old training/production mode options?
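To make the ask concrete, here is a minimal sketch of the kind of settings key I have in mind. This is hypothetical, not ActivitySim's actual code: the real `TripSchedulingChoiceSettings` is a pydantic model, but a plain dataclass is used here to keep the illustration self-contained, and the `explicit_chunk` field name and semantics are assumptions borrowed from how other components expose it.

```python
from dataclasses import dataclass


@dataclass
class TripSchedulingChoiceSettings:
    """Illustrative subset of a settings class for trip scheduling choice.

    Hypothetical sketch only; the real class is pydantic-based and has
    other fields. The proposal is just to add an ``explicit_chunk`` key
    mirroring the components that already support explicit chunking.
    """

    # 0 means "fall back to adaptive chunking"; a positive value is an
    # explicit chunk size to feed through to the chunking helper.
    explicit_chunk: float = 0


# Example: a model config could then set an explicit chunk size.
settings = TripSchedulingChoiceSettings(explicit_chunk=5000)
```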
```python
# run_trip_scheduling_choice snippet
if len(indirect_tours) > 0:
    # Iterate through the chunks
    result_list = []
    for (
        i,
        choosers,
        chunk_trace_label,
        chunk_sizer,
    ) in chunk.adaptive_chunked_choosers(state, indirect_tours, trace_label):
        # Sort the choosers and get the schedule alternatives
        choosers = choosers.sort_index()
        schedules = generate_schedule_alternatives(choosers).sort_index()
```

The call to the adaptive chunking helper doesn't feed through a chunk size, and the corresponding pydantic class `TripSchedulingChoiceSettings` also doesn't have a key to set the chunk size parameter.
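For clarity, what explicit chunking would buy here is just fixed-size slicing of the chooser table before the (memory-hungry) alternative generation. The sketch below is an illustrative stand-in, not ActivitySim's internals: `chunk.adaptive_chunked_choosers` additionally does adaptive sizing, memory tracing, and trace-label bookkeeping, and the function name here is invented for illustration.

```python
import pandas as pd


def explicit_chunked_choosers(choosers: pd.DataFrame, chunk_size: int):
    """Yield (chunk_index, chunk) pairs of fixed-size chooser slices.

    Hypothetical helper: with an explicit chunk size, peak memory in the
    loop is bounded by the cost of one chunk's schedule alternatives
    instead of the whole chooser table's.
    """
    if chunk_size <= 0:
        # No explicit chunking configured: process everything in one go.
        yield 0, choosers
        return
    for i, start in enumerate(range(0, len(choosers), chunk_size)):
        yield i, choosers.iloc[start:start + chunk_size]
```

Usage in a loop like the snippet above would then look like `for i, choosers in explicit_chunked_choosers(indirect_tours, settings.explicit_chunk): ...`, with each chunk sorted and expanded independently.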