
Conversation

@Pablo1990 (Owner) commented Feb 2, 2026

I have prepared some tests that should pass, but they are not passing:

  • test_vertexModel.test_vertices_shouldnt_be_going_wild_3
  • test_vertexModel.test_vertices_shouldnt_be_going_wild
  • test_vertexModel.test_weird_bug_should_not_happen

Summary by CodeRabbit

  • New Features

    • Added a configurable convergence tolerance to improve simulation stability control.
  • Tests

    • Added edge-case tests for vertex stability and non-convergence; removed some legacy test cases.
    • Extended support for additional test data formats to make test runs more flexible.
  • Chores

    • Updated test data paths and improved runtime behavior to avoid unnecessary file output.
    • Introduced logging and warning routing for clearer test/runtime diagnostics.

Copilot AI review requested due to automatic review settings February 2, 2026 10:44
@Pablo1990 Pablo1990 linked an issue Feb 2, 2026 that may be closed by this pull request
@coderabbitai (Contributor) commented Feb 2, 2026

Warning

Rate limit exceeded

@Pablo1990 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 19 minutes and 32 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📝 Walkthrough

Tests and test-loading were refactored: several vertex-model test functions removed and three behavioral tests added; test data paths and loading now support both .mat and .pkl; output file I/O is guarded by OutputFolder checks; a configurable dt_tolerance replaces a fixed convergence threshold.
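For orientation, a minimal sketch of the output-gating pattern this describes; the helper name and the exact outputs are assumptions for illustration, not the actual vertexModel.py code:

    import os

    def save_outputs_if_configured(v_set):
        """Write VTK/image/state outputs only when an output folder is configured (e.g. skipped in tests)."""
        if v_set.OutputFolder is None:
            return
        # Create the temporary images directory only when output is wanted.
        os.makedirs(os.path.join(v_set.OutputFolder, 'images'), exist_ok=True)
        # ... VTK export, screenshots and the pickled state would be written under v_set.OutputFolder ...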

Changes

  • Vertex-model tests (Tests/test_vertexModel.py): Removed imports and tests for calculate_neighbours, build_2d_voronoi_from_image, populate_vertices_info and calculate_vertices; added behavioral tests test_weird_bug_should_not_happen, test_vertices_shouldnt_be_going_wild and test_vertices_shouldnt_be_going_wild_3; adjusted test data paths to Tests/Tests_data.
  • Test loader / data handling (Tests/tests.py): Made .mat loading conditional; added .pkl loading via VertexModelVoronoiFromTimeImage and load_state; returns the model (with output disabled) for pickle cases; improved handling of missing Geo and Set entries; raises FileNotFoundError for unknown file types.
  • Runtime output & convergence (src/pyVertexModel/algorithm/vertexModel.py): Guarded creation of the temporary images directory and all VTK/visualization/state-saving behind OutputFolder is not None; replaced the fixed convergence check with a configurable dt_tolerance.
  • Parameters / defaults (src/pyVertexModel/parameters/set.py): Added public attribute dt_tolerance (default 1e-6); simplified the implicit_method boolean check; updated the embedded data path reference to Tests_data/Geo_var_cyst.mat.
  • Test package init / logging (Tests/__init__.py): Added PROJECT_DIRECTORY derivation, configured the module logger pyVertexModel with a handler/formatter, and installed warnings.showwarning = warning_handler to route warnings to the logger (a sketch follows below).
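For context, a rough sketch of that logging and warning routing in Tests/__init__.py; the formatter mirrors the log lines quoted later in this thread, but the handler details are assumptions rather than the committed code:

    import logging
    import os
    import warnings

    # Repository root, assuming this file lives at Tests/__init__.py.
    PROJECT_DIRECTORY = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

    logger = logging.getLogger('pyVertexModel')
    _handler = logging.StreamHandler()
    _handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    logger.addHandler(_handler)
    logger.setLevel(logging.INFO)

    def warning_handler(message, category, filename, lineno, file=None, line=None):
        # Route Python warnings through the module logger instead of stderr.
        logger.warning('%s:%s: %s: %s', filename, lineno, category.__name__, message)

    warnings.showwarning = warning_handler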

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐇 I hopped through tests and nudged some files aside,
Pickles now ride where MATs used to hide,
Tolerance tuned, outputs gated tight,
New tests watch the vertices dance at night,
A tiny rabbit cheers: the changes look spry!

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 75.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)
  • Description Check ✅ Passed: Check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check ✅ Passed: The title refers to fixing an issue with geometry breaking, which aligns with the PR's focus on addressing vertex instability and convergence issues after scutoid remodeling.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@Pablo1990 (Owner, Author) commented:

@copilot this is a complex task, but these three tests are not passing and I don't know why. The model breaks in situations where a cell has been remodelled and is now a scutoid. I want them to be scutoids, but something is wrong in the vertices (cell.Y) or the forces that makes the model go wild, as seen in the tests.
Here is one of the logs where the vertices start moving super fast; dt/dt0 decreases to try to tackle that, but fails:

2026-01-18 11:50:01,968 - pyVertexModel - INFO - Step: 754, Iter: 0 ||gr||= 0.040238222023553386 ||dyr||= 0.002472117606828581 dt/dt0=1
2026-01-18 11:50:01,968 - pyVertexModel - INFO - New gradient norm: 4.024e-02
2026-01-18 11:50:01,969 - pyVertexModel - INFO - STEP 754 has converged ...
2026-01-18 11:50:06,135 - pyVertexModel - INFO - Time: 7.471835937499885
2026-01-18 11:50:13,809 - pyVertexModel - INFO - Surface: 0.0034243497041001976
2026-01-18 11:50:13,810 - pyVertexModel - INFO - Volume: 0.0003897954758668866
2026-01-18 11:50:13,810 - pyVertexModel - INFO - TriEnergyBarrierAR: 1.789457122452421e-05
2026-01-18 11:50:13,810 - pyVertexModel - INFO - Substrate: 0.0004258242728553473
2026-01-18 11:50:16,503 - pyVertexModel - INFO - Step: 755, Iter: 0 ||gr||= 0.04279767761282011 ||dyr||= 0.003230633341861223 dt/dt0=1
2026-01-18 11:50:16,503 - pyVertexModel - INFO - New gradient norm: 4.280e-02
2026-01-18 11:50:16,503 - pyVertexModel - INFO - STEP 755 has converged ...
2026-01-18 11:50:20,637 - pyVertexModel - INFO - Time: 7.481835937499885
2026-01-18 11:50:28,393 - pyVertexModel - INFO - Surface: 0.0034466541899497807
2026-01-18 11:50:28,393 - pyVertexModel - INFO - Volume: 0.00039771072709857873
2026-01-18 11:50:28,393 - pyVertexModel - INFO - TriEnergyBarrierAR: 1.7895097560818708e-05
2026-01-18 11:50:28,394 - pyVertexModel - INFO - Substrate: 0.00042829719274150473
2026-01-18 11:50:31,044 - pyVertexModel - INFO - Step: 756, Iter: 0 ||gr||= 0.0887132208080734 ||dyr||= 0.011560956867195105 dt/dt0=1
2026-01-18 11:50:31,044 - pyVertexModel - INFO - New gradient norm: 8.871e-02
2026-01-18 11:50:33,112 - pyVertexModel - INFO - Time: 7.476835937499885
2026-01-18 11:50:41,052 - pyVertexModel - INFO - Surface: 0.0034466541899497807
2026-01-18 11:50:41,052 - pyVertexModel - INFO - Volume: 0.00039771072709857873
2026-01-18 11:50:41,052 - pyVertexModel - INFO - TriEnergyBarrierAR: 1.7895097560818708e-05
2026-01-18 11:50:41,052 - pyVertexModel - INFO - Substrate: 0.00042829719274150473
2026-01-18 11:50:43,696 - pyVertexModel - INFO - Step: 756, Iter: 0 ||gr||= 0.0887132208080734 ||dyr||= 0.005780478433597552 dt/dt0=0.5
2026-01-18 11:50:43,697 - pyVertexModel - INFO - New gradient norm: 8.871e-02
2026-01-18 11:50:43,697 - pyVertexModel - INFO - STEP 756 has converged ...
2026-01-18 11:50:47,574 - pyVertexModel - INFO - Time: 7.481835937499885
2026-01-18 11:50:55,885 - pyVertexModel - INFO - Surface: 0.0035526018935134692
2026-01-18 11:50:55,885 - pyVertexModel - INFO - Volume: 0.0005730979460180129
2026-01-18 11:50:55,885 - pyVertexModel - INFO - TriEnergyBarrierAR: 1.7899492353300192e-05
2026-01-18 11:50:55,885 - pyVertexModel - INFO - Substrate: 0.0004407250332552648
2026-01-18 11:50:58,373 - pyVertexModel - INFO - Step: 757, Iter: 0 ||gr||= 0.7935147590059549 ||dyr||= 0.08493018901161697 dt/dt0=0.75
2026-01-18 11:50:58,373 - pyVertexModel - INFO - New gradient norm: 7.935e-01
2026-01-18 11:51:00,538 - pyVertexModel - INFO - Time: 7.480585937499885
2026-01-18 11:51:08,208 - pyVertexModel - INFO - Surface: 0.0035526018935134692
2026-01-18 11:51:08,208 - pyVertexModel - INFO - Volume: 0.0005730979460180129
2026-01-18 11:51:08,208 - pyVertexModel - INFO - TriEnergyBarrierAR: 1.7899492353300192e-05
2026-01-18 11:51:08,208 - pyVertexModel - INFO - Substrate: 0.0004407250332552648
2026-01-18 11:51:10,761 - pyVertexModel - INFO - Step: 757, Iter: 0 ||gr||= 0.7935147590059549 ||dyr||= 0.04246509450580849 dt/dt0=0.375
2026-01-18 11:51:10,762 - pyVertexModel - INFO - New gradient norm: 7.935e-01
2026-01-18 11:51:12,770 - pyVertexModel - INFO - Time: 7.478710937499885
2026-01-18 11:51:20,919 - pyVertexModel - INFO - Surface: 0.0035526018935134692
2026-01-18 11:51:20,919 - pyVertexModel - INFO - Volume: 0.0005730979460180129
2026-01-18 11:51:20,919 - pyVertexModel - INFO - TriEnergyBarrierAR: 1.7899492353300192e-05
2026-01-18 11:51:20,919 - pyVertexModel - INFO - Substrate: 0.0004407250332552648
2026-01-18 11:51:23,427 - pyVertexModel - INFO - Step: 757, Iter: 0 ||gr||= 0.7935147590059549 ||dyr||= 0.021232547252904244 dt/dt0=0.188
2026-01-18 11:51:23,428 - pyVertexModel - INFO - New gradient norm: 7.935e-01
2026-01-18 11:51:25,494 - pyVertexModel - INFO - Time: 7.477773437499885
2026-01-18 11:51:33,322 - pyVertexModel - INFO - Surface: 0.0035526018935134692
2026-01-18 11:51:33,322 - pyVertexModel - INFO - Volume: 0.0005730979460180129
2026-01-18 11:51:33,322 - pyVertexModel - INFO - TriEnergyBarrierAR: 1.7899492353300192e-05
2026-01-18 11:51:33,322 - pyVertexModel - INFO - Substrate: 0.0004407250332552648
2026-01-18 11:51:35,841 - pyVertexModel - INFO - Step: 757, Iter: 0 ||gr||= 0.7935147590059549 ||dyr||= 0.010616273626452122 dt/dt0=0.0938
2026-01-18 11:51:35,842 - pyVertexModel - INFO - New gradient norm: 7.935e-01
2026-01-18 11:51:37,856 - pyVertexModel - INFO - Time: 7.477304687499885
2026-01-18 11:51:45,893 - pyVertexModel - INFO - Surface: 0.0035526018935134692
2026-01-18 11:51:45,894 - pyVertexModel - INFO - Volume: 0.0005730979460180129
2026-01-18 11:51:45,894 - pyVertexModel - INFO - TriEnergyBarrierAR: 1.7899492353300192e-05
2026-01-18 11:51:45,894 - pyVertexModel - INFO - Substrate: 0.0004407250332552648
2026-01-18 11:51:48,406 - pyVertexModel - INFO - Step: 757, Iter: 0 ||gr||= 0.7935147590059549 ||dyr||= 0.005308136813226061 dt/dt0=0.0469
2026-01-18 11:51:48,406 - pyVertexModel - INFO - New gradient norm: 7.935e-01
2026-01-18 11:51:48,406 - pyVertexModel - INFO - STEP 757 has converged ...
2026-01-18 11:51:52,183 - pyVertexModel - INFO - Time: 7.477773437499884
2026-01-18 11:52:00,226 - pyVertexModel - INFO - Surface: 0.003719104453064215
2026-01-18 11:52:00,226 - pyVertexModel - INFO - Volume: 0.0014207875634069187
2026-01-18 11:52:00,226 - pyVertexModel - INFO - TriEnergyBarrierAR: 1.790534736083651e-05
2026-01-18 11:52:00,226 - pyVertexModel - INFO - Substrate: 0.0004578263089038988
2026-01-18 11:52:02,736 - pyVertexModel - INFO - Step: 758, Iter: 0 ||gr||= 2.9074044331038547 ||dyr||= 0.029201556604961827 dt/dt0=0.0703
2026-01-18 11:52:02,736 - pyVertexModel - INFO - New gradient norm: 2.907e+00
2026-01-18 11:52:04,806 - pyVertexModel - INFO - Time: 7.477656249999884
2026-01-18 11:52:12,653 - pyVertexModel - INFO - Surface: 0.003719104453064215
2026-01-18 11:52:12,654 - pyVertexModel - INFO - Volume: 0.0014207875634069187
2026-01-18 11:52:12,654 - pyVertexModel - INFO - TriEnergyBarrierAR: 1.790534736083651e-05
2026-01-18 11:52:12,654 - pyVertexModel - INFO - Substrate: 0.0004578263089038988
2026-01-18 11:52:15,249 - pyVertexModel - INFO - Step: 758, Iter: 0 ||gr||= 2.9074044331038547 ||dyr||= 0.014600778302480914 dt/dt0=0.0352
2026-01-18 11:52:15,249 - pyVertexModel - INFO - New gradient norm: 2.907e+00
2026-01-18 11:52:17,293 - pyVertexModel - INFO - Time: 7.477480468749885
2026-01-18 11:52:25,360 - pyVertexModel - INFO - Surface: 0.003719104453064215
2026-01-18 11:52:25,360 - pyVertexModel - INFO - Volume: 0.0014207875634069187
2026-01-18 11:52:25,360 - pyVertexModel - INFO - TriEnergyBarrierAR: 1.790534736083651e-05
2026-01-18 11:52:25,360 - pyVertexModel - INFO - Substrate: 0.0004578263089038988
2026-01-18 11:52:27,883 - pyVertexModel - INFO - Step: 758, Iter: 0 ||gr||= 2.9074044331038547 ||dyr||= 0.007300389151240457 dt/dt0=0.0176

Copilot AI (Contributor) commented Feb 2, 2026

@Pablo1990 I've opened a new pull request, #7, to work on those changes. Once the pull request is ready, I'll request review from you.

@coderabbitai (Contributor) left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@src/pyVertexModel/algorithm/vertexModel.py`:
- Around line 536-538: The code dereferences self.set.dt_tolerance directly
which breaks for legacy pickled Set objects and allows non‑positive values to
cause endless halving; fix by reading a validated tolerance first (e.g. tol =
getattr(self.set, "dt_tolerance", DEFAULT_DT_TOLERANCE)), clamp/replace any
non‑positive or otherwise invalid values (e.g. if tol is None or tol <= 0: tol =
DEFAULT_DT_TOLERANCE), then use that validated tol in the conditional that
references dt_tolerance (the if block that checks (self.set.dt / self.set.dt0) >
... and resets self.set.MaxIter). Define a clear DEFAULT_DT_TOLERANCE constant
or use an existing config value and update the condition to use tol instead of
self.set.dt_tolerance.

In `@Tests/tests.py`:
- Around line 26-31: The current check on set_test.OutputFolder only compares to
b'' and None, but scipy.io.loadmat may return '' (str) or b'' (bytes) so empty
values can slip through; update the logic that sets set_test.OutputFolder (for
the Set(...) result) to treat None, b'' and '' as empty by normalizing bytes to
str (or checking both types) and then assigning '../Result/Test' when the
normalized value is empty; reference Set and set_test.OutputFolder to locate the
code and ensure the check uses isinstance or explicit comparisons for both b''
and '' before assigning the default OutputFolder.

Comment on lines 536 to 538

             if (self.set.iter >= self.set.MaxIter and
-                    (self.set.dt / self.set.dt0) > 1e-6):
+                    (self.set.dt / self.set.dt0) > self.set.dt_tolerance):
                 self.set.MaxIter = self.set.MaxIter0

⚠️ Potential issue | 🟠 Major

Guard dt_tolerance for legacy pickles / invalid values.

Line 536 now dereferences self.set.dt_tolerance; pickled Set objects created before this field existed will raise AttributeError, and non‑positive values can lead to endless halving. Consider a safe default + validation.

🛠️ Suggested fix
-            if (self.set.iter >= self.set.MaxIter and
-                    (self.set.dt / self.set.dt0) > self.set.dt_tolerance):
+            dt_tol = getattr(self.set, "dt_tolerance", 1e-6)
+            if dt_tol <= 0:
+                logger.warning("Invalid dt_tolerance=%s; defaulting to 1e-6", dt_tol)
+                dt_tol = 1e-6
+            if (self.set.iter >= self.set.MaxIter and
+                    (self.set.dt / self.set.dt0) > dt_tol):
🤖 Prompt for AI Agents
In `@src/pyVertexModel/algorithm/vertexModel.py` around lines 536 - 538, The code
dereferences self.set.dt_tolerance directly which breaks for legacy pickled Set
objects and allows non‑positive values to cause endless halving; fix by reading
a validated tolerance first (e.g. tol = getattr(self.set, "dt_tolerance",
DEFAULT_DT_TOLERANCE)), clamp/replace any non‑positive or otherwise invalid
values (e.g. if tol is None or tol <= 0: tol = DEFAULT_DT_TOLERANCE), then use
that validated tol in the conditional that references dt_tolerance (the if block
that checks (self.set.dt / self.set.dt0) > ... and resets self.set.MaxIter).
Define a clear DEFAULT_DT_TOLERANCE constant or use an existing config value and
update the condition to use tol instead of self.set.dt_tolerance.

Comment on lines +26 to +31

     if 'Set' in mat_info.keys():
         set_test = Set(mat_info['Set'])
         if set_test.OutputFolder.__eq__(b'') or set_test.OutputFolder is None:
             set_test.OutputFolder = '../Result/Test'
     else:
         set_test = None

⚠️ Potential issue | 🟡 Minor

Normalize empty OutputFolder values ('' vs b'').

scipy.io.loadmat can return '' (str) as well as b'' (bytes). If '' slips through, later OutputFolder is not None checks can still trigger filesystem writes to CWD. Consider a broader empty check.

🛠️ Suggested fix
-                if set_test.OutputFolder.__eq__(b'') or set_test.OutputFolder is None:
-                    set_test.OutputFolder = '../Result/Test'
+                if (set_test.OutputFolder is None or
+                        (isinstance(set_test.OutputFolder, (bytes, str)) and len(set_test.OutputFolder) == 0)):
+                    set_test.OutputFolder = '../Result/Test'
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     if 'Set' in mat_info.keys():
         set_test = Set(mat_info['Set'])
-        if set_test.OutputFolder.__eq__(b'') or set_test.OutputFolder is None:
+        if (set_test.OutputFolder is None or
+                (isinstance(set_test.OutputFolder, (bytes, str)) and len(set_test.OutputFolder) == 0)):
             set_test.OutputFolder = '../Result/Test'
     else:
         set_test = None
🤖 Prompt for AI Agents
In `@Tests/tests.py` around lines 26 - 31, The current check on
set_test.OutputFolder only compares to b'' and None, but scipy.io.loadmat may
return '' (str) or b'' (bytes) so empty values can slip through; update the
logic that sets set_test.OutputFolder (for the Set(...) result) to treat None,
b'' and '' as empty by normalizing bytes to str (or checking both types) and
then assigning '../Result/Test' when the normalized value is empty; reference
Set and set_test.OutputFolder to locate the code and ensure the check uses
isinstance or explicit comparisons for both b'' and '' before assigning the
default OutputFolder.

Copilot AI left a comment

Pull request overview

This PR aims to fix intermittent “geometry breaking / vertices going wild” failures by making time-step backoff configurable and by preventing file I/O during tests that load saved model states.

Changes:

  • Introduces a configurable dt_tolerance (used to decide whether dt can keep shrinking) and applies it in the non-convergence logic.
  • Makes VertexModel skip VTK/image/state outputs when set.OutputFolder is None (enabling non-I/O tests).
  • Adds new regression tests + many new .mat/.pkl fixtures under Tests_data/, and updates test data loading paths.

Reviewed changes

Copilot reviewed 4 out of 79 changed files in this pull request and generated 6 comments.

Show a summary per file

  • src/pyVertexModel/parameters/set.py: Adds the dt_tolerance default and updates the cyst fixture path.
  • src/pyVertexModel/algorithm/vertexModel.py: Gates output by OutputFolder and uses the configurable dt_tolerance in time-step reduction.
  • Tests/tests.py: Extends load_data to support .pkl model states and updates test data paths.
  • Tests/test_vertexModel.py: Updates fixture paths and adds 3 new regression tests for “vertices going wild”.
  • Tests_data/*: Adds new fixtures required by updated and new tests.


Comment on lines 268 to +257

     file_name = 'LblImg_imageSequence.mat'
-    test_dir = 'Tests/data/%s' % file_name
+    test_dir = 'Tests/Tests_data/%s' % file_name
     if exists(test_dir):
         Twg_test, X_test = vModel_test.obtain_initial_x_and_tetrahedra(test_dir)
     else:
-        Twg_test, X_test = vModel_test.obtain_initial_x_and_tetrahedra('data/%s' % file_name)
+        Twg_test, X_test = vModel_test.obtain_initial_x_and_tetrahedra('Tests_data/%s' % file_name)
Copilot AI commented Feb 2, 2026

These tests reference Tests/Tests_data/..., but fixtures are under the repository root Tests_data/. Because the fallback uses a relative path, this can fail when tests aren't executed from the repo root. Prefer using an absolute path such as os.path.join(TEST_DIRECTORY, 'Tests_data', file_name) for both branches (and remove the exists() check entirely).
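A hedged sketch of that suggestion (fixture_path is a hypothetical helper; TEST_DIRECTORY is assumed to resolve to the repository root, as Tests/__init__.py derives it):

    import os

    # Repository root, assuming this module sits inside Tests/.
    TEST_DIRECTORY = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

    def fixture_path(file_name):
        """Absolute, CWD-independent path to a fixture in the top-level Tests_data/ directory."""
        return os.path.join(TEST_DIRECTORY, 'Tests_data', file_name)

    # e.g. obtain_initial_x_and_tetrahedra(fixture_path('LblImg_imageSequence.mat'))
    # works regardless of where pytest is invoked, so the exists() fallback becomes unnecessary.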

Comment on lines 478 to +390

     file_name = 'voronoi_40cells.pkl'
-    test_dir = TEST_DIRECTORY + '/Tests/data/%s' % file_name
+    test_dir = TEST_DIRECTORY + '/Tests/Tests_data/%s' % file_name
     if exists(test_dir):
         vModel_test.set.initial_filename_state = test_dir
     else:
-        vModel_test.set.initial_filename_state = 'data/%s' % file_name
+        vModel_test.set.initial_filename_state = 'Tests_data/%s' % file_name
Copilot AI commented Feb 2, 2026

test_dir is built as TEST_DIRECTORY + '/Tests/Tests_data/...', but the repository data directory is TEST_DIRECTORY/Tests_data. As written, the exists() check will always fail in a normal checkout, and the code relies on a relative 'Tests_data/...' path (CWD-dependent). Use os.path.join(TEST_DIRECTORY, 'Tests_data', file_name) consistently.

Comment on lines +434 to +436

     # Run for 10 iterations. dt should not decrease to 1e-1
     vModel_test.set.tend = vModel_test.t + 20 * vModel_test.set.dt0

Copilot AI commented Feb 2, 2026

The comment says “Run for 10 iterations”, but tend is set to t + 20 * dt0 (20 time steps at dt0). Either adjust the comment or the tend calculation so the test intent is unambiguous.

Comment on lines 535 to +537

             if (self.set.iter >= self.set.MaxIter and
-                    (self.set.dt / self.set.dt0) > 1e-6):
+                    (self.set.dt / self.set.dt0) > self.set.dt_tolerance):
Copilot AI commented Feb 2, 2026

Backward-compatibility risk: self.set.dt_tolerance is a new attribute. If a VertexModel is restored from an older .pkl where the Set instance predates this field, iteration_did_not_converged will raise AttributeError. Consider using getattr(self.set, 'dt_tolerance', 1e-6) here (or ensure Set defines it in a post-load hook) to keep old saved states runnable.
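A sketch of the post-load-hook alternative mentioned above, assuming Set is pickled directly with its instance dict; this is illustrative, not the code in parameters/set.py:

    class Set:
        DEFAULT_DT_TOLERANCE = 1e-6  # matches the default this PR adds

        def __setstate__(self, state):
            # Restore pickled attributes, then backfill fields added after the pickle was written.
            self.__dict__.update(state)
            tol = self.__dict__.get('dt_tolerance')
            if tol is None or tol <= 0:
                self.dt_tolerance = self.DEFAULT_DT_TOLERANCE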

Comment on lines +13 to +18

     test_dir = abspath('Tests/Tests_data/%s' % file_name)
     if file_name.endswith('.mat'):
         if exists(test_dir):
             mat_info = scipy.io.loadmat(test_dir)
         else:
             mat_info = scipy.io.loadmat('Tests_data/%s' % file_name)
Copilot AI commented Feb 2, 2026

load_data builds test_dir using abspath('Tests/Tests_data/...'), but the repository test fixtures live under the top-level Tests_data/ directory. This makes the helper depend on the current working directory and the fallback relative path, which can break when tests are run from a different CWD. Prefer building an absolute path from Tests.TEST_DIRECTORY (or Path(__file__).resolve()) and always loading from .../Tests_data/<file>.

v_model.set.export_images = False
v_model.set.VTK = False

return v_model
Copilot AI commented Feb 2, 2026

load_data returns a (geo, set, mat_info) tuple for .mat files but returns a VertexModel... instance for .pkl files. This polymorphic return type is easy to misuse (callers can accidentally unpack a .pkl result and fail at runtime). Consider splitting into load_mat_data(...) and load_model_state(...), or always returning a consistent tuple (e.g., (None, None, v_model) or (geo, set, extra) with a documented contract).

Suggested change
-    return v_model
+    return None, None, v_model

@coderabbitai (Contributor) left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@Tests/tests.py`:
- Around line 53-54: The raised FileNotFoundError for non-.mat/.pkl inputs is
misleading; change the exception to a more appropriate type (e.g., ValueError or
a custom UnsupportedFileTypeError) and update the message to indicate an
unsupported file extension including the offending file_name or its extension;
locate the raise statement that references file_name in Tests/tests.py and
replace it with a descriptive "Unsupported file type" exception and message.
🧹 Nitpick comments (3)
Tests/__init__.py (1)

5-7: TEST_DIRECTORY and PROJECT_DIRECTORY are computed identically.

Both variables use the same expression os.path.dirname(os.path.dirname(os.path.abspath(__file__))). If they serve different purposes, consider clarifying with comments or distinct logic. If PROJECT_DIRECTORY is meant to allow override via environment variable while TEST_DIRECTORY is the fixed path, the naming and intent could be documented.
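If an override is indeed the intent, one hedged way to make the two names meaningfully different (the environment variable name is hypothetical):

    import os

    # Fixed location of the checkout (two levels above Tests/__init__.py).
    TEST_DIRECTORY = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

    # Same default, but overridable when fixtures or outputs live outside the checkout.
    PROJECT_DIRECTORY = os.environ.get('PYVERTEXMODEL_PROJECT_DIR', TEST_DIRECTORY)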

Tests/tests.py (2)

15-21: Path fallback logic may produce confusing errors for missing .mat files.

If test_dir doesn't exist and the fallback path 'Tests_data/%s' also doesn't exist, scipy.io.loadmat will raise its own FileNotFoundError with the fallback path. This could be confusing since the error won't mention the primary path that was tried first.

Consider adding an existence check for the fallback path or logging which path is being used:

♻️ Suggested improvement
 def load_data(file_name, return_geo=True):
     test_dir = abspath('Tests/Tests_data/%s' % file_name)
     if file_name.endswith('.mat'):
         if exists(test_dir):
             mat_info = scipy.io.loadmat(test_dir)
-        else:
+        elif exists('Tests_data/%s' % file_name):
             mat_info = scipy.io.loadmat('Tests_data/%s' % file_name)
+        else:
+            raise FileNotFoundError('MAT file %s not found in Tests/Tests_data or Tests_data' % file_name)

40-45: Same fallback pattern without existence check for .pkl files.

Similar to the .mat handling, if neither path exists for a .pkl file, load_state will fail with an error that may not clearly indicate both paths were tried.

Comment on lines 53 to +54

     else:
-        geo_test = None
-        set_test = None
+        raise FileNotFoundError('File %s not found' % file_name)

⚠️ Potential issue | 🟡 Minor

Error message is misleading for unsupported file extensions.

FileNotFoundError with "not found" message is raised for files that don't end with .mat or .pkl, but this is actually an "unsupported file type" error, not a "file not found" error. This could mislead debugging efforts.

♻️ Suggested fix
     else:
-        raise FileNotFoundError('File %s not found' % file_name)
+        raise ValueError('Unsupported file type: %s (expected .mat or .pkl)' % file_name)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     else:
-        geo_test = None
-        set_test = None
-        raise FileNotFoundError('File %s not found' % file_name)
+        raise ValueError('Unsupported file type: %s (expected .mat or .pkl)' % file_name)
🤖 Prompt for AI Agents
In `@Tests/tests.py` around lines 53 - 54, The raised FileNotFoundError for
non-.mat/.pkl inputs is misleading; change the exception to a more appropriate
type (e.g., ValueError or a custom UnsupportedFileTypeError) and update the
message to indicate an unsupported file extension including the offending
file_name or its extension; locate the raise statement that references file_name
in Tests/tests.py and replace it with a descriptive "Unsupported file type"
exception and message.

