
fix: Python SDK all tests passing (346/346 unit, 138/138 cross-SDK)#11

Open
joalves wants to merge 21 commits into main from fix/python-sdk-all-tests-passing

Conversation


@joalves joalves commented Feb 21, 2026

Summary

  • Fix all 5 remaining unit test failures to achieve 346/346 pass rate with exit code 0
  • Add custom_assignments field to ContextConfig alongside legacy cassigmnents typo field
  • Fix double error logging in refresh_async (emitted both string and exception events)
  • Fix DefaultAudienceDeserializer to return None on invalid JSON instead of raising ValueError
  • Fix error message casing, publish race conditions, lock ordering, and null-safety issues

Test Results

  • Unit Tests: 346/346 passed, exit code 0
  • Cross-SDK Tests: 138/138 scenarios passed, exit code 0

Key Changes

  • sdk/context_config.py: Added custom_assignments field
  • sdk/context.py: Read both custom_assignments and cassigmnents; removed redundant error logging; fixed lock ordering and race conditions
  • sdk/default_audience_deserializer.py: Catch all exceptions and return None
  • Various other fixes for casing, null-safety, and thread-safety

Test plan

  • Run python3 -m pytest test/ -v --tb=short — 346/346 passed
  • Run cross-SDK test suite — 138/138 scenarios passed
  • Verify exit code 0

Summary by CodeRabbit

  • New Features

    • Named-parameter SDK creation (timeout/retries), factory init flow, ready-made examples and a new finalize lifecycle event; new context getters (units/attributes) and finalize/close wrappers.
  • Deprecations

    • Several old names aliased with deprecation warnings to preserve backwards compatibility.
  • Documentation

    • README and examples rewritten with unified init, context patterns, event-logging examples and platform guides.
  • Tests

    • Large additions and updates covering concurrency, publishing, context flows, parsing and utilities.
  • Chores

    • Expanded project ignore list.

- Add custom_field_value method for accessing custom field values
- Fix context_event_logger event types
- Fix default_variable_parser for better type handling
- Fix variant_assigner for consistent behavior
- Fix type coercion for custom field values (json, number, boolean types)
The attrs_seq assignment was happening during a read operation but mutating
shared state. This could cause race conditions with concurrent access.
The attrs_seq is already properly set in the write path when assignments
are created.
- Add SDK initialization tests
- Add provider and publisher tests
- Add publish timeout tests
- Add attribute management tests
- Add unit validation tests
- Add concurrent operations tests
- Add error recovery tests

Total: 36 new tests added, all 128 tests pass

Add 203 tests aligned with canonical test specs: MD5 (14),
Murmur3 (36), variant assignment (66), audience matcher (3),
and context canonical tests (87). Fix test_variable_value bug
using wrong fixture.
- Add custom_assignments field to ContextConfig alongside legacy cassigmnents
- Fix Context constructor to read both custom_assignments and cassigmnents fields
- Fix double error logging in refresh_async (was emitting string + exception)
- Fix DefaultAudienceDeserializer to return None on invalid JSON instead of raising
- Fix error message casing (ABSmartly -> ABsmartly) for consistency
- Fix publish race condition with read/write lock ordering
- Fix closing_future.exception -> set_exception call
- Fix data_lock release_write -> release_read in read-only methods
- Add null-safety for variable parsing and custom field parsing
- Add refresh timer guard for closed/closing state
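As a rough illustration of the DefaultAudienceDeserializer fix above, a deserializer that returns None on invalid input might look like this. The deserialize signature with offset/length is inferred from the review comments further down; the SDK's jsons.loadb is replaced here by stdlib json, so this is a sketch, not the shipped implementation.

```python
import json
import logging

logger = logging.getLogger(__name__)


class DefaultAudienceDeserializer:
    """Sketch: parse an audience JSON payload, returning None on invalid input."""

    def deserialize(self, bytes_, offset=0, length=None):
        if length is None:
            length = len(bytes_) - offset
        try:
            # Parse the same slice used for any bounds checks,
            # so offset and length are consistently respected.
            return json.loads(bytes_[offset:offset + length])
        except Exception:
            # Per the fix above: swallow parse errors and return None
            # instead of raising ValueError.
            logger.warning("could not deserialize audience data")
            return None
```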

coderabbitai bot commented Feb 21, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

This PR renames ABSmartly/ABSmartlyConfig to ABsmartly/ABsmartlyConfig with deprecated aliases and adds ABsmartly.create(...) factory wiring client and HTTP defaults. Introduces ContextPublisher abstraction and DefaultContextPublisher, deprecating ContextEventHandler/DefaultContextEventHandler. Client gains unified response handling and POST support. Context and ContextConfig rename cassignments → custom_assignments, add attribute sequencing, audience-matching, safer parsing and lifecycle guards. Parsers/deserializers and regex operator add logging, stricter error handling and timeouts. Adds many tests, examples, README updates, new SDK exports (ContextPublisher) and .gitignore entries.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

I’m a rabbit with a tiny wrench,
I hop through code and stitch each quench,
Rename a class and guard the thread,
Timeouts, logs and tests well-fed,
Hooray for tidy SDKs — munch, munch! 🐇

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.44%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly describes the main accomplishment: all tests passing with specific counts (346 unit, 138 cross-SDK), the primary change highlighted in the PR summary.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

Fix BinaryOperator for proper numeric type coercion and InOperator
for string containment and collection membership checks.
…lias

The misspelled field cassigmnents is now a deprecated property alias
that forwards to the correctly named custom_assignments field.
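The deprecated-alias pattern described in this commit can be sketched as follows. This is a minimal illustration: the real ContextConfig has more fields, and the exact warning text is assumed.

```python
import warnings


class ContextConfig:
    def __init__(self):
        # Correctly named field; the misspelled name below forwards here.
        self.custom_assignments = {}

    @property
    def cassigmnents(self):
        """Deprecated alias for custom_assignments, kept for backwards compatibility."""
        warnings.warn(
            "cassigmnents is deprecated; use custom_assignments",
            DeprecationWarning,
            stacklevel=2,
        )
        return self.custom_assignments

    @cassigmnents.setter
    def cassigmnents(self, value):
        warnings.warn(
            "cassigmnents is deprecated; use custom_assignments",
            DeprecationWarning,
            stacklevel=2,
        )
        self.custom_assignments = value
```

Reads and writes through the old name keep working but surface a DeprecationWarning, so existing callers migrate gradually.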

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 11

🧹 Nitpick comments (11)
sdk/internal/murmur32.py (1)

66-67: Misleading function name: performs left rotation, not right.

The function is named rotate_right but the implementation (n << d) | (n >> (32 - d)) performs a left rotation (ROTL). This matches the MurmurHash3 algorithm correctly, but the naming could confuse future maintainers.

♻️ Suggested rename for clarity
-def rotate_right(n, d):
+def rotate_left(n, d):
     return (n << d) | (n >> (32 - d)) & 0xFFFFFFFF

Note: This would require updating the call sites at lines 26, 30, and 46.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/internal/murmur32.py` around lines 66 - 67, The function rotate_right is
misnamed because it implements a left rotation; rename the function to
rotate_left (or rotate_rotl) and update all call sites that reference
rotate_right to the new name (check uses in this module). While renaming, also
tighten the masking by returning ((n << d) | (n >> (32 - d))) & 0xFFFFFFFF so
the mask applies to the combined result and preserves 32-bit behavior; keep the
existing semantics otherwise.
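The renamed helper with the tightened mask suggested above would look roughly like this (a sketch; the call-site updates at lines 26, 30, and 46 are omitted):

```python
def rotate_left(n, d):
    # Rotate the 32-bit value n left by d bits; apply the mask to the
    # combined result so behavior stays within 32 bits.
    return ((n << d) | (n >> (32 - d))) & 0xFFFFFFFF
```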
sdk/internal/lock/atomic_bool.py (1)

19-21: Align AtomicInt encapsulation with AtomicBool for consistency.

AtomicBool uses a private _value attribute with a @property decorator for thread-safe access, ensuring all modifications go through the lock. AtomicInt, however, uses a public value attribute without property protection. This means direct assignments like int_obj.value = 5 bypass the lock, creating an inconsistency. Apply the same property-based encapsulation pattern to AtomicInt.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/internal/lock/atomic_bool.py` around lines 19 - 21, Make AtomicInt match
AtomicBool's encapsulation by renaming its public value attribute to a private
_value and exposing a thread-safe `@property` getter and setter that acquire
self._lock; update all AtomicInt methods (e.g.,
increment/decrement/add/get/whatever methods referencing value) to use
self._value internally and remove direct public assignments so external code
must go through the property, ensuring all reads/writes are performed under the
lock and preserving existing method semantics and return values.
sdk/context.py (2)

262-264: Unused loop variable value.

The loop variable value is not used within the loop body. Use _ to indicate it's intentionally unused.

♻️ Proposed fix
-                        for key, value in variables.items():
+                        for key, _ in variables.items():
                             index_variables[key] = experiment_variables
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context.py` around lines 262 - 264, The loop currently unpacks (key,
value) from variables.items() but never uses value; change the unpack to (key,
_) in that loop so the intent is clear and to avoid an unused-variable warning —
update the loop where index_variables is populated (the for key, value in
variables.items() block that assigns index_variables[key] =
experiment_variables) to use an underscore for the unused second element.

176-181: Remove unused helper method _handle_future_callback.

This helper method is defined but never called anywhere in the class. The existing future callback handling still uses inline implementations.

♻️ Proposed fix: Remove dead code
-    def _handle_future_callback(self, future: Future, on_success, on_error):
-        """Helper method to reduce duplication in future callback handling."""
-        if future.done() and not future.cancelled() and future.exception() is None:
-            on_success(future.result())
-        elif not future.cancelled() and future.exception() is not None:
-            on_error(future.exception())
-

Alternatively, refactor the existing callbacks to use this helper if deduplication is desired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context.py` around lines 176 - 181, The method _handle_future_callback in
the class is dead code (defined but never used); remove its entire definition
from sdk/context.py to eliminate unused helper, or alternatively refactor
existing inline future callbacks to call _handle_future_callback in places where
Future callbacks are created (so update those callback registrations to pass the
Future and on_success/on_error handlers). Ensure no remaining references to
_handle_future_callback remain after the change.
sdk/jsonexpr/operators/match_operator.py (1)

12-26: Signal-based timeout only works on Unix systems.

The signal.SIGALRM approach for regex timeouts is Unix-only. The fallback on lines 25-26 silently yields without any timeout protection on Windows. This is acceptable for cross-platform compatibility, but consider logging a debug message when the fallback is triggered so that users are aware timeouts are not enforced.

💡 Optional: Log when timeout protection is unavailable
     except (AttributeError, ValueError):
+        logger.debug("Signal-based timeout unavailable on this platform")
         yield
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/jsonexpr/operators/match_operator.py` around lines 12 - 26, The
signal-based timeout in the timeout contextmanager (function timeout and inner
timeout_handler) silently falls back on non-Unix platforms; modify the except
(AttributeError, ValueError) block to emit a debug-level log message indicating
that SIGALRM is unavailable and regex timeouts will not be enforced on this
platform (use the module logger via logging.getLogger(__name__) or an existing
logger), then yield as before so behavior is unchanged except for the debug
notification.
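A sketch of the timeout context manager with the suggested debug log. It is restructured slightly so the fallback path yields only once; the logger and function names are assumed from the review.

```python
import logging
import signal
from contextlib import contextmanager

logger = logging.getLogger(__name__)


@contextmanager
def timeout(seconds):
    """Interrupt the body with TimeoutError after `seconds` (Unix only)."""
    def timeout_handler(signum, frame):
        raise TimeoutError("regex evaluation timed out")

    use_alarm = False
    try:
        old_handler = signal.signal(signal.SIGALRM, timeout_handler)
        signal.alarm(seconds)
        use_alarm = True
    except (AttributeError, ValueError):
        # SIGALRM is unavailable (Windows, or a non-main thread):
        # run without timeout protection, but say so.
        logger.debug("Signal-based timeout unavailable on this platform")

    try:
        yield
    finally:
        if use_alarm:
            signal.alarm(0)
            signal.signal(signal.SIGALRM, old_handler)
```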
test/test_named_param_init.py (1)

1-8: Remove unused import.

The patch import from unittest.mock is not used in this test file.

♻️ Proposed fix
 import unittest
-from unittest.mock import Mock, patch
+from unittest.mock import Mock

 from sdk.absmartly import ABsmartly
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_named_param_init.py` around lines 1 - 8, The import 'patch' from
unittest.mock in test_named_param_init.py is unused; remove it by deleting
'patch' from the import statement (leaving "from unittest.mock import Mock") or
simplify the import to only what is used so there are no unused imports
(reference symbol: patch).
test/test_publisher.py (2)

5-6: Remove unused import.

ConnectionError is imported but not used in any test.

♻️ Proposed fix
 from requests import Response
-from requests.exceptions import Timeout, HTTPError, ConnectionError
+from requests.exceptions import Timeout, HTTPError
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_publisher.py` around lines 5 - 6, Remove the unused import
ConnectionError from the import line in test/test_publisher.py; update the
import statement that currently reads "from requests.exceptions import Timeout,
HTTPError, ConnectionError" to only import the exceptions actually used (e.g.,
"Timeout, HTTPError") so there are no unused symbols.

78-84: Remove or utilise the unused result variables.

The result variables on lines 79 and 155 are assigned but never used. Either remove the assignment or add assertions on the result.

♻️ Proposed fix
         future = self.publisher.publish(mock_context, event)
-        result = future.result(timeout=5)
+        future.result(timeout=5)  # Ensures completion without timeout

         self.http_client.put.assert_called_once()

And similarly for line 155.

Also applies to: 154-159

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_publisher.py` around lines 78 - 84, The test assigns result =
future.result(timeout=5) but never uses it; either remove the unused assignment
or assert on the returned value. Update the tests around
self.publisher.publish(...) (the variables future and result) to drop the result
assignment if it's not needed, or add assertions that validate the published
response (e.g., the content, status, or type returned by future.result()) so the
result variable is used; apply the same change for the second occurrence around
line 155 that also assigns result without using it.
test/test_sdk.py (1)

1-4: Remove unused import.

The patch import from unittest.mock is not used in this test file.

♻️ Proposed fix
 import unittest
 from concurrent.futures import Future
-from unittest.mock import MagicMock, Mock, patch
+from unittest.mock import MagicMock, Mock
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_sdk.py` around lines 1 - 4, Remove the unused import "patch" from
the unittest.mock import list in test/test_sdk.py; edit the import line that
currently reads "from unittest.mock import MagicMock, Mock, patch" to only
import the used symbols (MagicMock, Mock) so there are no unused imports in the
test module.
sdk/__init__.py (1)

11-23: Consider sorting __all__ alphabetically for consistency.

The static analysis tool flags that __all__ is not sorted. While the current grouping (primary class followed by its alias) is logical, alphabetical sorting is a common convention that aids discoverability.

♻️ Optional: Sort __all__ alphabetically
 __all__ = [
     "ABsmartly",
     "ABSmartly",
     "ABsmartlyConfig",
     "ABSmartlyConfig",
     "Client",
     "ClientConfig",
     "Context",
     "ContextConfig",
     "ContextEventLogger",
     "DefaultHTTPClient",
     "DefaultHTTPClientConfig",
 ]

Note: The current list is already nearly sorted; only minor reordering would be needed if strict alphabetical order is desired.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/__init__.py` around lines 11 - 23, The __all__ export list is not
strictly alphabetized; reorder the entries in the __all__ list so they are in
alphabetical order (e.g., "ABSmartly" before "ABsmartly", etc.) to satisfy the
linter and improve consistency—update the list that currently contains
"ABsmartly", "ABSmartly", "ABsmartlyConfig", "ABSmartlyConfig", "Client",
"ClientConfig", "Context", "ContextConfig", "ContextEventLogger",
"DefaultHTTPClient", "DefaultHTTPClientConfig" to be sorted alphabetically while
keeping the same symbols and aliases.
test/test_context_canonical.py (1)

978-1001: Avoid fixed sleep() in delay tests to reduce flakiness.

The hard-coded time.sleep(0.3) makes these tests timing-sensitive under CI load. Prefer polling with a bounded timeout and early exit once the queue drains.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_context_canonical.py` around lines 978 - 1001, Replace the fixed
time.sleep(0.3) in test_publish_delay_triggers_after_exposure and
test_publish_delay_triggers_after_goal with a bounded polling loop that checks
context.get_pending_count() until it reaches 0 or a max wait (e.g. 1s) elapses,
sleeping a short interval between polls (e.g. 10-50ms) to avoid busy-waiting;
keep assertions that timeout is set after scheduling and assert pending is 0
after the loop, and ensure context.close() is still called in the exposure test
(and add it to the goal test if needed).
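Such a bounded polling loop might be sketched as follows. The get_pending_count name is taken from the review comment; the timings are illustrative.

```python
import time


def wait_until_drained(context, max_wait=1.0, interval=0.02):
    """Poll until the context's pending queue drains or max_wait elapses.

    Returns True if the queue reached 0 within the deadline, exiting
    early instead of always sleeping a fixed interval.
    """
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        if context.get_pending_count() == 0:
            return True
        time.sleep(interval)
    return context.get_pending_count() == 0
```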
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@README.md`:
- Around line 20-25: The dependency code fence in README.md is missing a
language specifier; update the fenced block that contains the four dependency
lines (setuptools, requests, urllib3, jsons) to include a language token (e.g.,
change ``` to ```text or ```ini) so the markdown linter and syntax highlighting
treat it as plain text.
- Around line 145-147: The code wrongly awaits a concurrent.futures.Future
returned by ctx.wait_until_ready_async(); update the call site that currently
does await ctx.wait_until_ready_async() to instead call
ctx.wait_until_ready_async() (and if you need to block, call .result() on the
returned Future), i.e., remove the await and keep the async Future-handling
pattern used where ctx = sdk.create_context(context_config) is created and later
wait_until_ready_async() is invoked without awaiting.

In `@sdk/context_config.py`:
- Line 11: Replace the incorrect dict literal type annotation for the attribute
custom_assignments with a proper typing annotation: import Optional, Dict, and
Any from typing and change custom_assignments: {} = None to use
Optional[Dict[str, Any]] (or Optional[dict] if you prefer) so type checkers can
understand the attribute; update any other similar attributes in the same file
to follow the same pattern and ensure the typing imports are added at the top of
sdk/context_config.py.

In `@sdk/context.py`:
- Around line 183-188: There are two identical definitions of the method
_build_audience_attributes which causes the latter to shadow the former; remove
the duplicate definition (the second occurrence) and keep the original
implementation so only one _build_audience_attributes exists, ensuring any
references to self.attributes and audience_attributes remain unchanged and tests
still pass.

In `@sdk/default_audience_deserializer.py`:
- Around line 18-20: The null-check uses bytes_[offset:offset+length] but
jsons.loadb is still called with the full bytes_ buffer; change the
deserialization to operate on the same slice used for the guard by passing
bytes_[offset:offset+length] (or a memoryview(bytes_)[offset:offset+length])
into jsons.loadb so parsing consistently respects offset and length; update the
return line that currently calls jsons.loadb(bytes_, dict) to use the sliced
segment instead.

In `@sdk/default_variable_parser.py`:
- Around line 20-25: The parse method's return annotation is incorrect: update
the typing to import Any and change the return type from Optional[dict] to
Optional[Any] on the parse method and on the parent interface declaration (the
VariableParser.parse signature), and simplify the implementation to return
json.loads(config) directly (no conditional dict check); ensure both the
implementation (parse) and the interface signature are updated to Optional[Any].

In `@sdk/jsonexpr/operators/match_operator.py`:
- Around line 49-54: In the except handlers inside the regex matching code (in
match_operator.py, within the function performing regex execution), remove the
unnecessary f-string prefix from the TimeoutError warning (change
logger.warning(f"...") to logger.warning("...")) and replace logger.error(...)
in the generic except Exception as e block with logger.exception(...) so the
stack trace and exception context are logged; keep returning None as before.

In `@test/test_concurrency.py`:
- Around line 151-193: Add assertions to verify publish actually ran and the
queue is drained: after joining threads assert publish_count[0] == 5 to confirm
counting_publish was called for each tracked goal, keep the existing
self.assertEqual(0, len(errors)), and then assert any publish queue is empty —
e.g. if the context exposes a flush()/join_publisher() use that and assert it
completed, or if it has an internal queue like context._publish_queue assert
context._publish_queue.empty() before calling context.close(); reference
symbols: test_concurrent_publish, counting_publish, publish_count,
context.track, context.publish(), and context.close().

In `@test/test_context_canonical.py`:
- Around line 157-164: The test helper create_ready_context currently maps the
legacy 'cassignments' kwarg to the misspelled legacy attribute 'cassigmnents' on
ContextConfig, so the new canonical path ContextConfig.custom_assignments isn't
exercised; update create_ready_context to accept and set
kwargs['custom_assignments'] onto config.custom_assignments (and still preserve
backward compatibility by mapping 'cassignments' to both
config.custom_assignments and the legacy config.cassigmnents if needed), and fix
the typo reference to 'cassigmnents' where appropriate so tests exercise the
canonical custom_assignments path.

In `@test/test_sdk.py`:
- Around line 55-64: The test test_sdk_create_missing_api_key constructs a
Client with no api_key but never asserts behavior and leaves the unused client
variable; decide on desired behavior and fix both implementation and test: if
api_key must be required, add validation in Client.__init__ (in the Client
class) to raise a ValueError (or a specific exception) when
client_config.api_key is missing, then change test_sdk_create_missing_api_key to
assertRaises that exception when instantiating Client; otherwise, if missing
api_key is acceptable, remove the unused client variable from the test (or
replace it with an explicit assertion documenting acceptance, e.g.,
assertIsNone(client_config.api_key) and a comment) and keep Client.__init__
unchanged.

In `@test/test_variant_assigner.py`:
- Around line 8-10: The test helper uses hashlib.md5 in function hash_unit to
create deterministic fixtures, so add a targeted Ruff suppression to silence
S324: append an inline comment like "# noqa: S324" to the hashlib.md5 call (the
line assigning dig) and include a short explanatory comment that this use is
non-cryptographic / for deterministic testing only; leave the rest of hash_unit
unchanged.


ℹ️ Review info

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7910b9b and 08d140c.

📒 Files selected for processing (35)
  • .gitignore
  • README.md
  • example/example.py
  • example/simple_example.py
  • sdk/__init__.py
  • sdk/absmartly.py
  • sdk/absmartly_config.py
  • sdk/client.py
  • sdk/context.py
  • sdk/context_config.py
  • sdk/context_event_logger.py
  • sdk/default_audience_deserializer.py
  • sdk/default_context_data_deserializer.py
  • sdk/default_variable_parser.py
  • sdk/internal/lock/atomic_bool.py
  • sdk/internal/lock/concurrency.py
  • sdk/internal/murmur32.py
  • sdk/internal/variant_assigner.py
  • sdk/jsonexpr/expr_evaluator.py
  • sdk/jsonexpr/operators/binary_operator.py
  • sdk/jsonexpr/operators/in_operator.py
  • sdk/jsonexpr/operators/match_operator.py
  • sdk/jsonexpr/operators/not_operator.py
  • test/test_audience_matcher.py
  • test/test_client.py
  • test/test_concurrency.py
  • test/test_context.py
  • test/test_context_canonical.py
  • test/test_md5.py
  • test/test_murmur32.py
  • test/test_named_param_init.py
  • test/test_provider.py
  • test/test_publisher.py
  • test/test_sdk.py
  • test/test_variant_assigner.py
💤 Files with no reviewable changes (1)
  • sdk/jsonexpr/operators/not_operator.py

Comment on lines 20 to 25
```
setuptools~=60.2.0
requests~=2.28.1
urllib3~=1.26.12
jsons~=1.6.3
```


⚠️ Potential issue | 🟡 Minor

Specify a language for the dependencies code fence.

This block is currently untyped, which triggers markdown linting and weakens formatting consistency.

Proposed README fix
-```
+```text
 setuptools~=60.2.0
 requests~=2.28.1
 urllib3~=1.26.12
 jsons~=1.6.3
</details>

<details>
<summary>🧰 Tools</summary>

<details>
<summary>🪛 markdownlint-cli2 (0.21.0)</summary>

[warning] 20-20: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

</details>

</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @README.md around lines 20 - 25, The dependency code fence in README.md is
missing a language specifier; update the fenced block that contains the four
dependency lines (setuptools, requests, urllib3, jsons) to include a language
token (e.g., change ``` to ```text or ```ini) so the markdown linter and syntax
highlighting treat it as plain text.


</details>


<!-- This is an auto-generated comment by CodeRabbit -->

Comment on lines +145 to +147
```python
ctx = sdk.create_context(context_config)
ctx.wait_until_ready_async()
```

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```shell
# First, check if sdk/context.py exists and find the method definition
find . -name "context.py" -type f | head -20
```

Repository: absmartly/python3-sdk

Length of output: 81


🏁 Script executed:

```shell
# Search for wait_until_ready_async definition and usage
rg -n "def wait_until_ready_async|wait_until_ready_async" --type py -A 3 | head -50
```

Repository: absmartly/python3-sdk

Length of output: 1298


🏁 Script executed:

```shell
# Check the README.md at the specified lines (145-147 and 482-484)
cat -n README.md | sed -n '140,150p'
cat -n README.md | sed -n '477,490p'
```

Repository: absmartly/python3-sdk

Length of output: 848


🏁 Script executed:

```shell
cat -n sdk/context.py | sed -n '574,600p'
```

Repository: absmartly/python3-sdk

Length of output: 1078


Fix incorrect await usage in line 483.

The wait_until_ready_async() method returns a concurrent.futures.Future object, which is not awaitable. Line 483 incorrectly uses await ctx.wait_until_ready_async(), which will raise a TypeError at runtime. Line 146 shows the correct pattern: call the method directly without await and optionally call .result() on the returned Future to block until ready. Remove the await from line 483 to match the correct usage demonstrated in line 146.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 145 - 147, The code wrongly awaits a
concurrent.futures.Future returned by ctx.wait_until_ready_async(); update the
call site that currently does await ctx.wait_until_ready_async() to instead call
ctx.wait_until_ready_async() (and if you need to block, call .result() on the
returned Future), i.e., remove the await and keep the async Future-handling
pattern used where ctx = sdk.create_context(context_config) is created and later
wait_until_ready_async() is invoked without awaiting.
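The distinction can be shown with a small, self-contained stand-in. The `wait_until_ready_async` below is hypothetical (it only mimics the described behaviour of returning a `concurrent.futures.Future`); the point is that such a Future is blocked on with `.result()`, not `await`:

```python
import time
from concurrent.futures import Future, ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)

def wait_until_ready_async() -> Future:
    # Hypothetical stand-in for Context.wait_until_ready_async():
    # it returns a concurrent.futures.Future, not an awaitable coroutine.
    return executor.submit(lambda: time.sleep(0.05) or "ready")

future = wait_until_ready_async()  # fire-and-forget is fine here
result = future.result()           # optionally block until the context is ready
print(result)  # → ready
```

Calling `await` on `future` directly would raise `TypeError: object Future can't be used in 'await' expression` in an async context.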

Comment on lines +49 to +54
```python
                except TimeoutError:
                    logger.warning(f"Regex execution timeout (potential ReDoS)")
                    return None
                except Exception as e:
                    logger.error(f"Unexpected error in regex matching: {e}")
                    return None
```

⚠️ Potential issue | 🟡 Minor

Remove the extraneous f prefix and use logging.exception for unexpected errors.

Line 50 contains an f-string without any placeholders, and line 53 uses logger.error which loses the stack trace for unexpected exceptions.

🐛 Proposed fix
```diff
                 except TimeoutError:
-                    logger.warning(f"Regex execution timeout (potential ReDoS)")
+                    logger.warning("Regex execution timeout (potential ReDoS)")
                     return None
                 except Exception as e:
-                    logger.error(f"Unexpected error in regex matching: {e}")
+                    logger.exception("Unexpected error in regex matching: %s", e)
                     return None
```
🧰 Tools
🪛 Ruff (0.15.2)

[error] 50-50: f-string without any placeholders

Remove extraneous f prefix

(F541)


[warning] 52-52: Do not catch blind exception: Exception

(BLE001)


[warning] 53-53: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/jsonexpr/operators/match_operator.py` around lines 49 - 54, In the except
handlers inside the regex matching code (in match_operator.py, within the
function performing regex execution), remove the unnecessary f-string prefix
from the TimeoutError warning (change logger.warning(f"...") to
logger.warning("...")) and replace logger.error(...) in the generic except
Exception as e block with logger.exception(...) so the stack trace and exception
context are logged; keep returning None as before.
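The practical difference between the two logging calls can be demonstrated in isolation (the logger name and error below are illustrative, not from the SDK):

```python
import io
import logging

stream = io.StringIO()
demo_logger = logging.getLogger("match_demo")
demo_logger.addHandler(logging.StreamHandler(stream))
demo_logger.propagate = False

try:
    raise ValueError("boom")
except Exception:
    # Inside an except block, logger.exception logs at ERROR level
    # and appends the full traceback to the record; logger.error does not.
    demo_logger.exception("Unexpected error in regex matching")

output = stream.getvalue()
print("Traceback (most recent call last):" in output)  # → True
```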

Comment on lines +55 to +64
```python
def test_sdk_create_missing_api_key(self):
    client_config = ClientConfig()
    client_config.endpoint = "https://sandbox.test.io/v1"
    client_config.application = "website"
    client_config.environment = "dev"

    http_client = DefaultHTTPClient(DefaultHTTPClientConfig())
    client = Client(client_config, http_client)

    self.assertIsNone(client_config.api_key)
```

⚠️ Potential issue | 🟡 Minor

Test does not verify expected behaviour for missing api_key.

This test creates a Client with a missing api_key but doesn't assert any specific behaviour or failure. The client variable is assigned but never used. Consider either:

  1. Removing this test if missing api_key is acceptable (no validation in Client)
  2. Adding assertions about the expected behaviour (e.g., requests fail with auth errors)
  3. Adding validation to Client.__init__ if missing api_key should raise an error
💡 Suggested improvement

If missing api_key should be allowed, consider documenting this explicitly:

```diff
     def test_sdk_create_missing_api_key(self):
+        """Verify that Client can be created without api_key (validation happens on request)."""
         client_config = ClientConfig()
         client_config.endpoint = "https://sandbox.test.io/v1"
         client_config.application = "website"
         client_config.environment = "dev"

         http_client = DefaultHTTPClient(DefaultHTTPClientConfig())
-        client = Client(client_config, http_client)
+        client = Client(client_config, http_client)  # No validation at construction time

         self.assertIsNone(client_config.api_key)
+        # Note: Requests will fail at runtime due to missing X-API-Key header
```

Or remove the unused variable if no further assertions are needed:

```diff
-        client = Client(client_config, http_client)
+        _ = Client(client_config, http_client)  # Should not raise
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change:

```python
def test_sdk_create_missing_api_key(self):
    """Verify that Client can be created without api_key (validation happens on request)."""
    client_config = ClientConfig()
    client_config.endpoint = "https://sandbox.test.io/v1"
    client_config.application = "website"
    client_config.environment = "dev"

    http_client = DefaultHTTPClient(DefaultHTTPClientConfig())
    client = Client(client_config, http_client)  # No validation at construction time

    self.assertIsNone(client_config.api_key)
    # Note: Requests will fail at runtime due to missing X-API-Key header
```
🧰 Tools
🪛 Ruff (0.15.2)

[error] 62-62: Local variable client is assigned to but never used

Remove assignment to unused variable client

(F841)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_sdk.py` around lines 55 - 64, The test
test_sdk_create_missing_api_key constructs a Client with no api_key but never
asserts behavior and leaves the unused client variable; decide on desired
behavior and fix both implementation and test: if api_key must be required, add
validation in Client.__init__ (in the Client class) to raise a ValueError (or a
specific exception) when client_config.api_key is missing, then change
test_sdk_create_missing_api_key to assertRaises that exception when
instantiating Client; otherwise, if missing api_key is acceptable, remove the
unused client variable from the test (or replace it with an explicit assertion
documenting acceptance, e.g., assertIsNone(client_config.api_key) and a comment)
and keep Client.__init__ unchanged.
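If the validation route (option 3) is chosen, the test could take this shape. `ConfigStub` and `ClientStub` below are hypothetical stand-ins, not the SDK's real `ClientConfig`/`Client`; they only sketch the assertRaises pattern:

```python
import io
import unittest

class ConfigStub:
    """Hypothetical stand-in for ClientConfig with no api_key set."""
    api_key = None

class ClientStub:
    """Hypothetical Client that validates api_key at construction time."""
    def __init__(self, config):
        if config.api_key is None:
            raise ValueError("api_key is required")
        self.config = config

class TestMissingApiKey(unittest.TestCase):
    def test_missing_api_key_raises(self):
        with self.assertRaises(ValueError):
            ClientStub(ConfigStub())

runner = unittest.TextTestRunner(stream=io.StringIO())
result = runner.run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestMissingApiKey)
)
print(result.wasSuccessful())  # → True
```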

@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (2)
README.md (2)

20-25: ⚠️ Potential issue | 🟡 Minor

Still missing a fence language.

This block is still untyped, so markdownlint will keep flagging MD040.

Proposed README fix
````diff
-```
+```text
 setuptools~=60.2.0
 requests~=2.28.1
 urllib3~=1.26.12
 jsons~=1.6.3
````
</details>

<details>
<summary>🤖 Prompt for AI Agents</summary>

Verify each finding against the current code and only fix it if needed.

In @README.md around lines 20 - 25, The fenced code block listing Python
packages (setuptools, requests, urllib3, jsons) is missing a fence language
which triggers markdownlint MD040; update that block by changing the opening
triple backticks to include a language token (e.g., "text" or "bash") so the
block becomes ```text (or ```bash) followed by the four package lines and the
closing ```, ensuring the fenced block around the
setuptools/requests/urllib3/jsons list is properly typed.


</details>

---

`462-483`: _⚠️ Potential issue_ | _🟠 Major_

**Use an asyncio-compatible await here.**

If `wait_until_ready_async()` returns a `concurrent.futures.Future`, Line 483 will fail at runtime because that object is not directly awaitable. In a FastAPI handler, wrap it for asyncio instead of blocking the event loop.  
  

<details>
<summary>Proposed README fix</summary>

```diff
+import asyncio
 from fastapi import FastAPI, Request
 from absmartly import ABsmartly, ContextConfig
 import uuid
@@
     ctx = sdk.create_context(context_config)
-    await ctx.wait_until_ready_async()
+    await asyncio.wrap_future(ctx.wait_until_ready_async())
```
</details>

```shell
#!/bin/bash
set -euo pipefail

# Verify what Context.wait_until_ready_async() returns.
rg -n -C3 'def wait_until_ready_async|concurrent\.futures\.Future|executor\.submit|return .*Future' --type py sdk

# Show the README call sites that use the method.
rg -n -C2 'wait_until_ready_async\(' README.md
```

Expected result: if `sdk/context.py` shows `wait_until_ready_async()` returning a `concurrent.futures.Future` or an executor-submitted future, the FastAPI example should use `asyncio.wrap_future(...)` rather than `await` directly.
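Under the same assumption (the method returns a `concurrent.futures.Future`), the bridging call can be demonstrated in isolation; `slow_ready` and the executor are illustrative stand-ins for the SDK's internal machinery:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=1)

def slow_ready():
    time.sleep(0.05)
    return "ready"

async def handler():
    # asyncio.wrap_future turns a concurrent.futures.Future into an
    # asyncio-awaitable without blocking the event loop.
    return await asyncio.wrap_future(executor.submit(slow_ready))

print(asyncio.run(handler()))  # → ready
```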

<details>
<summary>🤖 Prompt for AI Agents</summary>

```
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 462 - 483, The README's FastAPI example awaits
ctx.wait_until_ready_async() which may return a concurrent.futures.Future (not
awaitable); update the example to use an asyncio-compatible await by importing
asyncio and awaiting asyncio.wrap_future(ctx.wait_until_ready_async()) (or
otherwise converting the returned Future to an awaitable), keeping the
surrounding ContextConfig, sdk.create_context and ctx usage unchanged.
```

</details>

</blockquote></details>

</blockquote></details>

<details>
<summary>🤖 Prompt for all review comments with AI agents</summary>

Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In @README.md:

  • Around line 20-25: The fenced code block listing Python packages (setuptools,
    requests, urllib3, jsons) is missing a fence language which triggers
    markdownlint MD040; update that block by changing the opening triple backticks
    to include a language token (e.g., "text" or "bash") so the block becomes
    ```text (or ```bash) followed by the four package lines and the closing ```,
    ensuring the fenced block around the setuptools/requests/urllib3/jsons list is
    properly typed.
  • Around line 462-483: The README's FastAPI example awaits
    ctx.wait_until_ready_async() which may return a concurrent.futures.Future (not
    awaitable); update the example to use an asyncio-compatible await by importing
    asyncio and awaiting asyncio.wrap_future(ctx.wait_until_ready_async()) (or
    otherwise converting the returned Future to an awaitable), keeping the
    surrounding ContextConfig, sdk.create_context and ctx usage unchanged.

</details>

---

<details>
<summary>ℹ️ Review info</summary>

<details>
<summary>⚙️ Run configuration</summary>

**Configuration used**: Organization UI

**Review profile**: CHILL

**Plan**: Pro

**Run ID**: `93da8944-767f-4098-bdfb-ff21e4a1e7a8`

</details>

<details>
<summary>📥 Commits</summary>

Reviewing files that changed from the base of the PR and between 08d140cf9442bd90d6294f9374908001959b61da and d9d85442c26b8ecf7b52bc955c1c1089caaaadc1.

</details>

<details>
<summary>📒 Files selected for processing (1)</summary>

* `README.md`

</details>

</details>

<!-- This is an auto-generated comment by CodeRabbit for review status -->

joalves added 6 commits March 15, 2026 16:25
…stom_field_type

- Add finalize(), finalize_async(), is_finalized(), is_finalizing() as aliases
- Rename get_custom_field_type() to get_custom_field_value_type(), keep old as alias
- Standardize error messages: include unit type in unit errors, add trailing periods
…ready or finalized

get_treatment/peek_treatment return 0, get_variable_value/peek_variable_value return
default_value, get_variable_keys returns {}, get_experiments returns [], custom field
methods return None/[]. Write methods (track, set_unit, set_attribute) keep throwing.
- variable_keys now returns dict[str, list[str]] instead of dict[str, str]
- get_treatment/peek_treatment/get_variable_value return defaults when not ready instead of raising
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (3)
sdk/default_variable_parser.py (1)

14-19: ⚠️ Potential issue | 🟡 Minor

Return type annotation does not match actual return values.

The parse method declares Optional[dict] but json.loads(config) can return any JSON type (dict, list, str, int, float, bool, or None). This misaligns the type contract with the actual implementation.

Additionally, the error handling behaviour differs from DefaultAudienceDeserializer which returns None on invalid JSON, whereas this raises ValueError. Consider aligning error handling strategies across deserializers for consistency.

🔧 Proposed fix for return type
```diff
-from typing import Optional
+from typing import Any, Optional
 import json

 from sdk.context import Context
 from sdk.variable_parser import VariableParser


 class DefaultVariableParser(VariableParser):

     def parse(self,
               context: Context,
               experiment_name: str,
               variant_name: str,
-              config: str) -> Optional[dict]:
+              config: str) -> Optional[Any]:
         try:
             result = json.loads(config)
             return result
         except json.JSONDecodeError as e:
             raise ValueError(f"Invalid JSON in variant config: {e}") from e
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/default_variable_parser.py` around lines 14 - 19, The parse method's
return annotation Optional[dict] is incorrect because json.loads(config) can
produce any JSON type; update the annotation on parse (in class
DefaultVariableParser) to Optional[Any] or a JSON union type (e.g.,
Optional[Union[dict, list, str, int, float, bool, NoneType]]) and adjust the
error handling to match DefaultAudienceDeserializer by returning None on
json.JSONDecodeError instead of raising ValueError; also update the parse
method's docstring/type comments to reflect the broader JSON return type and the
consistent None-on-error behavior.
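A minimal sketch of the suggested alignment (hypothetical helper, not the SDK's actual signature): `json.loads` can yield any JSON type, and decode errors map to `None` as in `DefaultAudienceDeserializer`:

```python
import json
from typing import Any, Optional

def parse_config(config: str) -> Optional[Any]:
    """Parse a variant config string; json.loads may return a dict, list,
    str, int, float, bool, or None, and invalid JSON yields None (the
    None-on-error behaviour suggested for the deserializers)."""
    try:
        return json.loads(config)
    except json.JSONDecodeError:
        return None

print(parse_config('{"banner.size": "large"}'))  # → {'banner.size': 'large'}
print(parse_config('[1, 2, 3]'))                 # → [1, 2, 3]
print(parse_config('42'))                        # → 42
print(parse_config('not valid json'))            # → None
```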
test/test_context_canonical.py (1)

157-164: ⚠️ Potential issue | 🟡 Minor

Exercise the canonical custom_assignments path in the helper.

create_ready_context still only wires legacy cassignments via config.cassigmnents, so the canonical ContextConfig.custom_assignments path is not exercised.

Proposed helper update
```diff
     def create_ready_context(self, **kwargs):
         config = ContextConfig()
         config.units = kwargs.get('units', self.units)
         if 'overrides' in kwargs:
             config.overrides = kwargs['overrides']
+        if 'custom_assignments' in kwargs:
+            config.custom_assignments = kwargs['custom_assignments']
         if 'cassignments' in kwargs:
-            config.cassigmnents = kwargs['cassignments']
+            config.custom_assignments = kwargs['cassignments']
+            config.cassigmnents = kwargs['cassignments']
         data_future = kwargs.get('data_future', self.data_future_ready)
         return self.create_context(config, data_future)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_context_canonical.py` around lines 157 - 164, The helper
create_ready_context currently only wires legacy cassignments into the
misspelled config.cassigmnents and never exercises the canonical
ContextConfig.custom_assignments; update create_ready_context so that if kwargs
contains 'custom_assignments' it assigns that value to config.custom_assignments
(and optionally continue to support legacy 'cassignments' by mapping it to
config.custom_assignments as well), and fix the typo 'cassigmnents' to reference
the real attribute name; ensure the method still sets units and data_future as
before.
test/test_sdk.py (1)

55-64: ⚠️ Potential issue | 🟡 Minor

Clarify missing api_key behaviour and remove the unused local.

This test still constructs Client without asserting behaviour, and client is unused. Either assert that missing api_key is allowed (no-raise) or assert the expected failure mode.

Proposed test tidy-up (if missing api_key is allowed)
```diff
     def test_sdk_create_missing_api_key(self):
         client_config = ClientConfig()
         client_config.endpoint = "https://sandbox.test.io/v1"
         client_config.application = "website"
         client_config.environment = "dev"

         http_client = DefaultHTTPClient(DefaultHTTPClientConfig())
-        client = Client(client_config, http_client)
+        _ = Client(client_config, http_client)  # Should not raise at construction time

         self.assertIsNone(client_config.api_key)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_sdk.py` around lines 55 - 64, The test
test_sdk_create_missing_api_key constructs a Client but never asserts its
behaviour and leaves the local variable client unused; fix by either (A)
asserting that creating Client(ClientConfig without api_key, http_client)
succeeds (no exception) and then remove the unused client_config.api_key
assertion or use the created Client for a follow-up check (e.g., client is not
None or client.some_public_property), or (B) if missing api_key should fail,
change the test to expect the proper exception/validation error when calling
Client(...) (use the test framework's assertRaises) and remove the unused client
variable; locate code around test_sdk_create_missing_api_key, ClientConfig,
Client, DefaultHTTPClient and DefaultHTTPClientConfig to implement the chosen
assertion and eliminate the unused local.
🧹 Nitpick comments (5)
sdk/context.py (1)

279-284: Unused loop variable value in variant config parsing.

The loop iterates over variables.items() but only uses the key. Use _ or rename to indicate the value is intentionally unused.

🧹 Proposed fix
```diff
-                        for key, value in variables.items():
+                        for key in variables.keys():
                             if key in index_variables:
                                 if experiment_variables not in index_variables[key]:
                                     index_variables[key].append(experiment_variables)
                             else:
                                 index_variables[key] = [experiment_variables]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context.py` around lines 279 - 284, The loop over variables uses an
unused variable named value; update the iteration to indicate the value is
intentionally unused (e.g., change the loop signature from "for key, value in
variables.items()" to use a throwaway variable like "_" or iterate only keys) in
the block that updates index_variables with experiment_variables (references:
variables, index_variables, experiment_variables) so linters stop flagging the
unused variable.
test/test_context.py (1)

1642-1646: Unused loop variable key.

The loop iterates over res.items() but only uses experiments. Consider using _ to indicate the key is intentionally unused.

🧹 Proposed fix
```diff
-        for key, experiments in res.items():
+        for _, experiments in res.items():
             self.assertIsInstance(experiments, list)
             self.assertGreater(len(experiments), 0)
```
             self.assertGreater(len(experiments), 0)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_context.py` around lines 1642 - 1646, The test loop in
test_context.py currently unpacks res.items() as "for key, experiments in
res.items()" but never uses "key"; update the loop to either use a blank
identifier or iterate values directly to signal the unused variable: replace
"for key, experiments in res.items()" with "for _, experiments in res.items()"
or "for experiments in res.values()" and leave the rest of the assertions
(assertIsInstance/assertGreater) and the context.close() call unchanged.
sdk/default_audience_deserializer.py (1)

5-5: Unused import: DeserializationError.

The DeserializationError is imported but never used since the exception handling now catches the broad Exception type. Consider removing this unused import.

🧹 Proposed fix
```diff
 import jsons
-from jsons import DeserializationError

 from sdk.audience_deserializer import AudienceDeserializer
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/default_audience_deserializer.py` at line 5, Remove the unused import
DeserializationError from the top of sdk/default_audience_deserializer.py since
exception handling in the file catches the broad Exception type (so
DeserializationError is never referenced); delete the line "from jsons import
DeserializationError" and run a quick lint/flake8 check to confirm no other
references to DeserializationError remain in functions/methods within that
module.
test/test_context_canonical.py (1)

85-106: Avoid mutable class-level dictionaries in the shared test base.

These dictionaries are shared across all tests in the class hierarchy (RUF012), which can leak state if mutated. Prefer immutable mappings or per-test copies.

Refactor to immutable mappings
```diff
+from types import MappingProxyType
...
-    expectedVariants = {
+    expectedVariants = MappingProxyType({
         "exp_test_ab": 1,
         "exp_test_abc": 2,
         "exp_test_not_eligible": 0,
         "exp_test_fullon": 2,
         "exp_test_new": 1,
-    }
+    })

-    expectedVariables = {
+    expectedVariables = MappingProxyType({
         "banner.border": 1.0,
         "banner.size": "large",
         "button.color": "red",
         "submit.color": "blue",
         "submit.shape": "rect",
         "show-modal": True,
-    }
+    })

-    units = {
+    units = MappingProxyType({
         "session_id": "e791e240fcd3df7d238cfc285f475e8152fcc0ec",
         "user_id": "123456789",
         "email": "bleh@absmartly.com"
-    }
+    })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_context_canonical.py` around lines 85 - 106, The class-level
mutable dictionaries expectedVariants, expectedVariables, and units in
test/test_context_canonical.py can be mutated across tests; change them to
immutable defaults or ensure tests get per-test copies: replace the shared
mutable mappings with immutable equivalents (e.g., frozendict/tuple of pairs) or
initialize fresh dict copies in setUp()/each test before use (refer to
expectedVariants, expectedVariables, units) so no test mutates a shared
structure.
test/jsonexpr/operators/test_match.py (1)

35-37: This test name overstates what it proves.

It only checks that a simple regex still matches, so it will keep passing if the timeout guard regresses or the implementation switches away from the claimed mechanism. Please either rename it to the behaviour it asserts, or add a mocked regression that exercises the timeout branch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/jsonexpr/operators/test_match.py` around lines 35 - 37, The test named
test_match_uses_thread_pool_not_sigalrm overstates its guarantee because it only
calls self.operator.binary(self.evaluator, "hello world", "hello") and asserts
True; either rename the test to reflect that it only verifies a simple match
(e.g., test_simple_match_returns_true) or extend it to exercise the timeout
branch by mocking/stubbing the timeout guard used inside the match
implementation (mock the timeout mechanism or the regex call to simulate a
long-running match) and then assert the expected timeout behavior; locate the
test function test_match_uses_thread_pool_not_sigalrm and the call to
self.operator.binary / self.evaluator to implement the rename or add the mock
and timeout assertion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@sdk/internal/murmur32.py`:
- Around line 66-67: The rotate_left function incorrectly omits 32-bit masking
on the left-shifted portion and relies on wrong operator precedence; update
rotate_left so both the left-shifted (n << d) and right-shifted (n >> (32 - d))
portions are masked to 32 bits and the final combined value is also masked (use
& 0xFFFFFFFF) to ensure the result stays within 32 bits; locate the rotate_left
function and apply masks around the shifted operands and/or the final OR result
to fix the bug.
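A correct 32-bit rotate masks both shifted halves, as the item above describes. This is a sketch of the technique, not the SDK's exact code:

```python
def rotate_left_32(n: int, d: int) -> int:
    """Rotate a 32-bit value left by d bits, keeping the result in 32 bits."""
    n &= 0xFFFFFFFF
    # Mask the left-shifted half so it cannot grow past 32 bits,
    # then OR in the bits that wrapped around from the top.
    return ((n << d) & 0xFFFFFFFF) | (n >> (32 - d))

print(hex(rotate_left_32(0x80000000, 1)))  # high bit wraps to bit 0 → 0x1
print(hex(rotate_left_32(0x12345678, 8)))  # → 0x34567812
```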

In `@sdk/jsonexpr/operators/match_operator.py`:
- Around line 26-36: The current use of ThreadPoolExecutor in match_operator.py
around compiled.match with future.result(timeout=self.REGEX_TIMEOUT_SECONDS) is
ineffective because ThreadPoolExecutor.__exit__ waits for the worker and
future.cancel() cannot kill a running regex; replace the thread-based isolation
with a process-based one (e.g., concurrent.futures.ProcessPoolExecutor or
spawning a multiprocessing.Process) so the child can be forcefully terminated on
timeout, ensure the worker receives any required pattern/text (recompile in the
child if needed), use the same timeout value self.REGEX_TIMEOUT_SECONDS, and on
timeout explicitly terminate the process and return None instead of relying on
future.cancel().
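The process-based isolation described above can be sketched as follows. This assumes a Unix "fork" start method and illustrative helper names (`_match_worker`, `safe_match`); the pattern is re-evaluated in the child because each process gets its own objects:

```python
import multiprocessing as mp
import queue
import re

_ctx = mp.get_context("fork")  # assumption: Unix; "spawn" needs an importable worker module

def _match_worker(pattern: str, text: str, out) -> None:
    # Runs in the child process; report the boolean match result.
    out.put(bool(re.match(pattern, text)))

def safe_match(pattern: str, text: str, timeout: float = 1.0):
    """Run re.match in a child process so a runaway (ReDoS) pattern can be
    forcefully terminated; returns True/False, or None on timeout."""
    out = _ctx.Queue()
    proc = _ctx.Process(target=_match_worker, args=(pattern, text, out))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():
        proc.terminate()  # unlike future.cancel(), this actually stops the regex
        proc.join()
        return None
    try:
        return out.get(timeout=1.0)
    except queue.Empty:
        return None

print(safe_match(r"hello", "hello world"))         # fast pattern → True
print(safe_match(r"(a+)+$", "a" * 30 + "b", 0.5))  # catastrophic backtracking → None
```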

---

Duplicate comments:
In `@sdk/default_variable_parser.py`:
- Around line 14-19: The parse method's return annotation Optional[dict] is
incorrect because json.loads(config) can produce any JSON type; update the
annotation on parse (in class DefaultVariableParser) to Optional[Any] or a JSON
union type (e.g., Optional[Union[dict, list, str, int, float, bool, NoneType]])
and adjust the error handling to match DefaultAudienceDeserializer by returning
None on json.JSONDecodeError instead of raising ValueError; also update the
parse method's docstring/type comments to reflect the broader JSON return type
and the consistent None-on-error behavior.

In `@test/test_context_canonical.py`:
- Around line 157-164: The helper create_ready_context currently only wires
legacy cassignments into the misspelled config.cassigmnents and never exercises
the canonical ContextConfig.custom_assignments; update create_ready_context so
that if kwargs contains 'custom_assignments' it assigns that value to
config.custom_assignments (and optionally continue to support legacy
'cassignments' by mapping it to config.custom_assignments as well), and fix the
typo 'cassigmnents' to reference the real attribute name; ensure the method
still sets units and data_future as before.

In `@test/test_sdk.py`:
- Around line 55-64: The test test_sdk_create_missing_api_key constructs a
Client but never asserts its behaviour and leaves the local variable client
unused; fix by either (A) asserting that creating Client(ClientConfig without
api_key, http_client) succeeds (no exception) and then remove the unused
client_config.api_key assertion or use the created Client for a follow-up check
(e.g., client is not None or client.some_public_property), or (B) if missing
api_key should fail, change the test to expect the proper exception/validation
error when calling Client(...) (use the test framework's assertRaises) and
remove the unused client variable; locate code around
test_sdk_create_missing_api_key, ClientConfig, Client, DefaultHTTPClient and
DefaultHTTPClientConfig to implement the chosen assertion and eliminate the
unused local.

---

Nitpick comments:
In `@sdk/context.py`:
- Around line 279-284: The loop over variables uses an unused variable named
value; update the iteration to indicate the value is intentionally unused (e.g.,
change the loop signature from "for key, value in variables.items()" to use a
throwaway variable like "_" or iterate only keys) in the block that updates
index_variables with experiment_variables (references: variables,
index_variables, experiment_variables) so linters stop flagging the unused
variable.

In `@sdk/default_audience_deserializer.py`:
- Line 5: Remove the unused import DeserializationError from the top of
sdk/default_audience_deserializer.py since exception handling in the file
catches the broad Exception type (so DeserializationError is never referenced);
delete the line "from jsons import DeserializationError" and run a quick
lint/flake8 check to confirm no other references to DeserializationError remain
in functions/methods within that module.

In `@test/jsonexpr/operators/test_match.py`:
- Around line 35-37: The test named test_match_uses_thread_pool_not_sigalrm
overstates its guarantee because it only calls
self.operator.binary(self.evaluator, "hello world", "hello") and asserts True;
either rename the test to reflect that it only verifies a simple match (e.g.,
test_simple_match_returns_true) or extend it to exercise the timeout branch by
mocking/stubbing the timeout guard used inside the match implementation (mock
the timeout mechanism or the regex call to simulate a long-running match) and
then assert the expected timeout behavior; locate the test function
test_match_uses_thread_pool_not_sigalrm and the call to self.operator.binary /
self.evaluator to implement the rename or add the mock and timeout assertion.

In `@test/test_context_canonical.py`:
- Around line 85-106: The class-level mutable dictionaries expectedVariants,
expectedVariables, and units in test/test_context_canonical.py can be mutated
across tests; change them to immutable defaults or ensure tests get per-test
copies: replace the shared mutable mappings with immutable equivalents (e.g.,
frozendict/tuple of pairs) or initialize fresh dict copies in setUp()/each test
before use (refer to expectedVariants, expectedVariables, units) so no test
mutates a shared structure.
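The per-test-copy option can be sketched as follows; the fixture names and values here are hypothetical stand-ins for expectedVariants/expectedVariables/units:

```python
import copy
import unittest

# Module-level templates standing in for the shared class-level dicts.
_EXPECTED_VARIANTS = {"exp_test_ab": 1, "exp_test_abc": 2}
_UNITS = {"session_id": "placeholder-unit-value"}

class ContextTestCase(unittest.TestCase):
    def setUp(self):
        # Each test gets its own deep copy, so mutations never leak across tests.
        self.expected_variants = copy.deepcopy(_EXPECTED_VARIANTS)
        self.units = copy.deepcopy(_UNITS)

    def test_mutation_is_isolated(self):
        self.expected_variants["exp_test_ab"] = 99
        self.assertEqual(_EXPECTED_VARIANTS["exp_test_ab"], 1)  # template untouched
```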

In `@test/test_context.py`:
- Around line 1642-1646: The test loop in test_context.py currently unpacks
res.items() as "for key, experiments in res.items()" but never uses "key";
update the loop to either use a blank identifier or iterate values directly to
signal the unused variable: replace "for key, experiments in res.items()" with
"for _, experiments in res.items()" or "for experiments in res.values()" and
leave the rest of the assertions (assertIsInstance/assertGreater) and the
context.close() call unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 412f161b-91ed-40eb-a217-e3b506009e60

📥 Commits

Reviewing files that changed from the base of the PR and between d9d8544 and b6cffa5.

📒 Files selected for processing (14)
  • sdk/absmartly.py
  • sdk/absmartly_config.py
  • sdk/context.py
  • sdk/context_config.py
  • sdk/default_audience_deserializer.py
  • sdk/default_variable_parser.py
  • sdk/internal/murmur32.py
  • sdk/jsonexpr/operators/match_operator.py
  • test/jsonexpr/operators/test_match.py
  • test/test_context.py
  • test/test_context_canonical.py
  • test/test_default_audience_deserializer.py
  • test/test_murmur32.py
  • test/test_sdk.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • sdk/context_config.py
  • test/test_murmur32.py

Comment on lines +26 to +36
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(compiled.match, text)
    try:
        result = future.result(
            timeout=self.REGEX_TIMEOUT_SECONDS
        )
        return bool(result)
    except TimeoutError:
        logger.warning("Regex execution timeout (potential ReDoS)")
        future.cancel()
        return None

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
python - <<'PY'
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def slow():
    time.sleep(2)

start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow)
    try:
        future.result(timeout=0.1)
    except TimeoutError:
        print("cancel_returned=", future.cancel())

elapsed = time.monotonic() - start
print(f"elapsed={elapsed:.2f}s")
PY

Repository: absmartly/python3-sdk

Length of output: 101


🏁 Script executed:

fd "match_operator.py" | head -20

Repository: absmartly/python3-sdk

Length of output: 105


🏁 Script executed:

cat -n sdk/jsonexpr/operators/match_operator.py | sed -n '1,45p'

Repository: absmartly/python3-sdk

Length of output: 2078


The timeout path still blocks until the regex finishes.

future.result(timeout=...) only stops the caller from waiting. Once this branch returns, ThreadPoolExecutor.__exit__() calls shutdown(wait=True), and future.cancel() cannot stop a regex that is already running (it returns False for running tasks). A pathological regex pattern can therefore still hold this method open until the worker completes, making the ReDoS mitigation ineffective. This requires a killable isolation boundary such as a separate process rather than a thread.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/jsonexpr/operators/match_operator.py` around lines 26 - 36, The current
use of ThreadPoolExecutor in match_operator.py around compiled.match with
future.result(timeout=self.REGEX_TIMEOUT_SECONDS) is ineffective because
ThreadPoolExecutor.__exit__ waits for the worker and future.cancel() cannot kill
a running regex; replace the thread-based isolation with a process-based one
(e.g., concurrent.futures.ProcessPoolExecutor or spawning a
multiprocessing.Process) so the child can be forcefully terminated on timeout,
ensure the worker receives any required pattern/text (recompile in the child if
needed), use the same timeout value self.REGEX_TIMEOUT_SECONDS, and on timeout
explicitly terminate the process and return None instead of relying on
future.cancel().
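A minimal process-isolated sketch of what the prompt describes, assuming a one-second timeout (the SDK's REGEX_TIMEOUT_SECONDS may differ); unlike a thread, the worker process can be terminated mid-match:

```python
import re
from multiprocessing import Process, Queue
from queue import Empty

REGEX_TIMEOUT_SECONDS = 1.0  # assumed value; the SDK defines its own constant

def _match_worker(pattern, text, out):
    # Recompile in the child: only the pattern string crosses the boundary.
    out.put(re.compile(pattern).match(text) is not None)

def bounded_match(pattern, text, timeout=REGEX_TIMEOUT_SECONDS):
    out = Queue(maxsize=1)
    proc = Process(target=_match_worker, args=(pattern, text, out), daemon=True)
    proc.start()
    proc.join(timeout=timeout)
    if proc.is_alive():
        proc.terminate()  # a running thread cannot be killed; a process can
        proc.join()
        return None
    try:
        return out.get_nowait()
    except Empty:
        return None
```

The trade-off is per-call process startup cost, which is why a real fix might cache a worker pool or pre-validate patterns instead.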

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
sdk/context.py (1)

142-154: ⚠️ Potential issue | 🟠 Major

Handle already-completed failures synchronously.

When the provider returns an already-completed future with an exception, the RuntimeError raised in the callback (lines 152–154) is logged and swallowed by Future.add_done_callback() rather than propagated. This allows the Context to be constructed and returned to the caller despite the failed initialisation, with no indication of the error. In contrast, when the future is not yet done, the callback skips raising the error entirely, relying on the caller to check self.ready_future to detect failure. The asymmetric handling means already-failed futures silently construct a usable Context object.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context.py` around lines 142 - 154, The code currently registers a
done-callback (when_finished) but when data_future is already completed with an
exception the RuntimeError raised inside the callback gets swallowed; fix by
detecting an already-failed future immediately after checking
data_future.done(): if data_future.exception() is not None call
self.set_data_failed(exception), self.log_error(exception) and then raise a
RuntimeError("Failed to initialize ABSmartly Context") from that exception
synchronously (instead of only inside the callback), keeping the existing
callback logic for non-completed futures; reference symbols: data_future,
when_finished, set_data_failed, log_error, set_data, log_event.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@sdk/context_publisher.py`:
- Around line 9-14: The ContextPublisher class is intended as an abstract base
but doesn't inherit from abc.ABC, so make it inherit from ABC: update the class
declaration for ContextPublisher to extend ABC (importing ABC from abc) so the
`@abstractmethod` on publish(self, context, event: PublishEvent) ->
Future[Optional[ContextData]] is enforced and prevents direct instantiation when
publish is not implemented.
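The ABC change amounts to the following sketch; the PublishEvent/ContextData type hints from the real signature are omitted here for brevity:

```python
from abc import ABC, abstractmethod
from concurrent.futures import Future

class ContextPublisher(ABC):
    # Inheriting from ABC makes @abstractmethod enforceable: subclasses
    # must implement publish, and ContextPublisher() itself raises TypeError.
    @abstractmethod
    def publish(self, context, event) -> Future:
        ...

class NoopPublisher(ContextPublisher):
    def publish(self, context, event) -> Future:
        done: Future = Future()
        done.set_result(None)
        return done

try:
    ContextPublisher()
    abstract_enforced = False
except TypeError:
    abstract_enforced = True
```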

In `@sdk/context.py`:
- Around line 545-556: The current close() swallows exceptions from
close_async().result(), making finalize() report success even on publish
failures; change close() to still set self.close_error and call
self.log_error(e) but then re-raise the exception so callers see failures (i.e.,
after logging raise the caught exception). Ensure finalize() propagates that
exception by returning self.close() (unchanged) or otherwise not catching it,
and keep finalize_async() delegating to close_async(); reference the methods
close, close_async, finalize, finalize_async, close_error, and log_error when
making the change.
- Around line 212-229: get_attribute and get_attributes currently return
internal attribute values directly which allows callers to mutate lists/dicts
without locking; change them to return defensive copies: in get_attribute(name:
str) detect if the found value is a dict or list (or other mutable like set) and
return a shallow copy (e.g., value.copy() for dict, list(value) or set(value))
or use copy.deepcopy for nested safety, and in get_attributes() build the result
dict by copying each attribute value similarly before returning; keep the
existing read-lock usage (context_lock.acquire_read/release_read) and only copy
values while the read lock is held to avoid races, referencing the
get_attribute, get_attributes methods and the attributes collection.
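The copy-on-read idea can be sketched on a simplified store (Attribute here is a stand-in, and the read-lock handling the comment requires is reduced to a comment):

```python
import copy

class Attribute:
    def __init__(self, name, value):
        self.name = name
        self.value = value

class AttrStore:
    def __init__(self):
        self.attributes = []

    def set_attribute(self, name, value):
        self.attributes.append(Attribute(name, value))

    def get_attribute(self, name):
        # The real code would hold context_lock's read lock across this loop
        # and perform the copy before releasing it.
        result = None
        for attr in self.attributes:
            if attr.name == name:
                result = attr.value
        # Copy mutable values so callers cannot mutate internal state.
        return copy.deepcopy(result) if isinstance(result, (dict, list, set)) else result

store = AttrStore()
store.set_attribute("tags", ["beta"])
leaked = store.get_attribute("tags")
leaked.append("mutated")  # only the caller's copy changes
```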

---

Outside diff comments:
In `@sdk/context.py`:
- Around line 142-154: The code currently registers a done-callback
(when_finished) but when data_future is already completed with an exception the
RuntimeError raised inside the callback gets swallowed; fix by detecting an
already-failed future immediately after checking data_future.done(): if
data_future.exception() is not None call self.set_data_failed(exception),
self.log_error(exception) and then raise a RuntimeError("Failed to initialize
ABSmartly Context") from that exception synchronously (instead of only inside
the callback), keeping the existing callback logic for non-completed futures;
reference symbols: data_future, when_finished, set_data_failed, log_error,
set_data, log_event.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 92a621f8-3913-47a4-ba7e-4535b6c19919

📥 Commits

Reviewing files that changed from the base of the PR and between b6cffa5 and aa0c329.

📒 Files selected for processing (7)
  • sdk/absmartly.py
  • sdk/absmartly_config.py
  • sdk/context.py
  • sdk/context_event_handler.py
  • sdk/context_publisher.py
  • sdk/default_context_event_handler.py
  • sdk/default_context_publisher.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • sdk/absmartly_config.py

joalves added 3 commits March 18, 2026 13:52
- Fix rotate_left operator precedence bug in murmur32 (critical hash corruption fix)
- Make ContextPublisher inherit from ABC to enforce abstract method contract
- Fix return type annotations from Optional[dict] to Optional[Any] in variable parsers
- Add defensive copies in get_attribute/get_attributes to prevent external mutation
- Re-raise exceptions from close() so callers can detect publish failures
- Fix f-string without placeholders and use logger.exception in match_operator
- Add noqa: S324 suppression for non-cryptographic MD5 usage in test helper
- Strengthen test_concurrent_publish with behavioral assertions
- Add custom_assignments path to create_ready_context test helper
- Fix test_sdk unused variable and update tests to handle close() re-raise
Add context_publisher as the primary field in ABsmartlyConfig, replacing
context_event_handler. The old name is kept as a deprecated property that
delegates to context_publisher for backward compatibility. Export
ContextPublisher from the public SDK API.
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
sdk/context.py (1)

372-384: ⚠️ Potential issue | 🟡 Minor

set_timeout() can still schedule two timers.

self.timeout is None is checked before timeout_lock is held. Two concurrent track() / queue_exposure() calls can both observe None, and one timer becomes untracked. That stale timer can then fire after the queue has already been flushed or while shutdown is in progress.

Possible fix
     def set_timeout(self):
         if self.is_ready() and self.publish_delay >= 0:
-            if self.timeout is None:
-                try:
-                    self.timeout_lock.acquire_write()
-
-                    def flush():
-                        self.flush()
-
-                    self.timeout = threading.Timer(self.publish_delay, flush)
-                    self.timeout.start()
-                finally:
-                    self.timeout_lock.release_write()
+            try:
+                self.timeout_lock.acquire_write()
+                if self.timeout is None:
+                    def flush():
+                        self.flush()
+
+                    self.timeout = threading.Timer(self.publish_delay, flush)
+                    self.timeout.start()
+            finally:
+                self.timeout_lock.release_write()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context.py` around lines 372 - 384, Move the "timeout is None" check
inside the write lock and re-check state after acquiring timeout_lock in
set_timeout(); specifically, acquire self.timeout_lock.write, then if
self.is_ready() and self.publish_delay >= 0 and self.timeout is None create and
start the threading.Timer and assign it to self.timeout; also ensure you handle
any existing stale timer by cancelling it when scheduling a new one or during
shutdown (check and call cancel() on self.timeout if present before replacing)
so concurrent track()/queue_exposure() calls cannot create an untracked timer.
♻️ Duplicate comments (3)
sdk/jsonexpr/operators/match_operator.py (1)

26-36: ⚠️ Potential issue | 🔴 Critical

Thread-based timeout still fails to bound execution time.

On Line 26, the with ThreadPoolExecutor(...) context will wait for worker completion on exit. After timeout on Line 33, Line 35 future.cancel() does not stop an already-running regex task, so pathological patterns can still block this method and weaken the ReDoS guard.

Suggested direction (process-isolated timeout)
-from concurrent.futures import ThreadPoolExecutor, TimeoutError
+from multiprocessing import Process, Queue
+from queue import Empty
+
+def _regex_match_worker(pattern: str, text: str, out: Queue) -> None:
+    try:
+        out.put(bool(re.compile(pattern).match(text)))
+    except Exception:
+        out.put(None)
...
-                try:
-                    compiled = re.compile(pattern)
-                    with ThreadPoolExecutor(max_workers=1) as pool:
-                        future = pool.submit(compiled.match, text)
-                        try:
-                            result = future.result(
-                                timeout=self.REGEX_TIMEOUT_SECONDS
-                            )
-                            return bool(result)
-                        except TimeoutError:
-                            logger.warning("Regex execution timeout (potential ReDoS)")
-                            future.cancel()
-                            return None
+                try:
+                    out = Queue(maxsize=1)
+                    proc = Process(target=_regex_match_worker, args=(pattern, text, out), daemon=True)
+                    proc.start()
+                    proc.join(timeout=self.REGEX_TIMEOUT_SECONDS)
+                    if proc.is_alive():
+                        logger.warning("Regex execution timeout (potential ReDoS)")
+                        proc.terminate()
+                        proc.join()
+                        return None
+                    try:
+                        return out.get_nowait()
+                    except Empty:
+                        return None
#!/bin/bash
# Verify ThreadPool timeout behaviour: caller times out, but context manager still waits.
python - <<'PY'
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def slow():
    time.sleep(2)

start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(slow)
    try:
        fut.result(timeout=0.05)
    except TimeoutError:
        print("cancel_returned =", fut.cancel())
        print("elapsed_before_exit =", round(time.monotonic() - start, 3))

print("elapsed_total =", round(time.monotonic() - start, 3))
PY
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/jsonexpr/operators/match_operator.py` around lines 26 - 36, The current
ThreadPoolExecutor approach in match_operator.py (using ThreadPoolExecutor,
future.result with self.REGEX_TIMEOUT_SECONDS, and future.cancel()) doesn't
actually bound execution because the context manager waits for the thread to
finish and cancel() won't stop a running regex; replace the thread-based timeout
with a process-isolated execution (e.g., spawn a short-lived Process or use
ProcessPoolExecutor) to run compiled.match(text) so the parent can terminate the
worker process on timeout, update the timeout handling in the method that calls
compiled.match to kill/terminate the worker and return None on timeout, and
remove reliance on future.cancel() and ThreadPoolExecutor to ensure pathological
regexes cannot block; keep the same return semantics (True/False/None) and
reference REGEX_TIMEOUT_SECONDS for the timeout value.
test/test_context_canonical.py (1)

340-348: ⚠️ Potential issue | 🟡 Minor

Please exercise custom_assignments, not just the legacy alias.

Both of these regression tests still go through cassignments, so the backwards-compatibility path is covered but the new canonical field is not. Since create_ready_context() already accepts custom_assignments, switch one of these to the canonical name and keep the other as the legacy check.

Also applies to: 729-736

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_context_canonical.py` around lines 340 - 348, The test
test_treatment_queues_exposure_with_custom_assignment currently only exercises
the legacy alias cassignments; update it to pass the canonical field
custom_assignments to create_ready_context (e.g.,
create_ready_context(custom_assignments={"exp_test_ab": 2})) while leaving the
other test that uses cassignments in place so the backwards-compatibility path
remains covered; ensure assertions still check get_treatment("exp_test_ab"),
pending count, and that the exposure.custom flag is true after calling
context.close().
sdk/context.py (1)

212-245: ⚠️ Potential issue | 🟠 Major

The defensive-copy hardening is still incomplete.

get_attribute() / get_attributes() only copy dict and list, and set_attribute() still stores the caller’s object by reference. A caller can still mutate a set, or mutate the original dict/list after set_attribute(), and change self.attributes without the write lock or an _attrs_seq bump. That can leave audience-cache decisions and published attributes out of sync.

Possible fix
     def get_attribute(self, name: str):
         try:
             self.context_lock.acquire_read()
             result = None
             for attr in self.attributes:
                 if attr.name == name:
                     result = attr.value
-            return copy.deepcopy(result) if isinstance(result, (dict, list)) else result
+            return copy.deepcopy(result) if isinstance(result, (dict, list, set)) else result
         finally:
             self.context_lock.release_read()
@@
     def get_attributes(self):
         try:
             self.context_lock.acquire_read()
             result = {}
             for attr in self.attributes:
                 value = attr.value
-                result[attr.name] = copy.deepcopy(value) if isinstance(value, (dict, list)) else value
+                result[attr.name] = copy.deepcopy(value) if isinstance(value, (dict, list, set)) else value
             return result
         finally:
             self.context_lock.release_read()
@@
     def set_attribute(self, name: str, value: object):
         self.check_not_closed()
         attribute = Attribute()
         attribute.name = name
-        attribute.value = value
+        attribute.value = copy.deepcopy(value) if isinstance(value, (dict, list, set)) else value
         attribute.setAt = self.clock.millis()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context.py` around lines 212 - 245, get_attribute/get_attributes only
deepcopy dict/list and set_attribute stores caller objects by reference; fix by
doing a full defensive deepcopy on all values when storing and when returning.
Specifically, in set_attribute make attribute.value = copy.deepcopy(value) (do
the deepcopy before appending) so the stored Attribute in set_attribute is a
copy and cannot be mutated by the caller, and in get_attribute and
get_attributes always return copy.deepcopy(result) for any non-primitive result
(or simply deepcopy unconditionally) instead of checking isinstance against
dict/list; update references to get_attribute, get_attributes, and set_attribute
accordingly.
🧹 Nitpick comments (3)
test/test_context.py (1)

1648-1654: Rename unused loop variable to _key.

The loop variable key is not used within the loop body.

Suggested fix
-        for key, experiments in res.items():
+        for _key, experiments in res.items():
             self.assertIsInstance(experiments, list)
             self.assertGreater(len(experiments), 0)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_context.py` around lines 1648 - 1654, The loop in the test iterates
over context.get_variable_keys() with an unused loop variable named key; rename
that loop variable to _key (e.g., for _key, experiments in res.items():) so the
unused variable follows Python conventions and avoids linter warnings while
keeping the rest of the assertions and context.close() unchanged.
test/test_sdk.py (1)

166-168: Fragile access to base class via __bases__.

Using ABSmartlyConfig.__bases__[0]() to get ABsmartlyConfig is fragile and will break if the inheritance hierarchy changes. Consider importing ABsmartlyConfig directly.

Suggested fix
     def test_absmartly_deprecation_warning(self):
         import warnings
-        from sdk.absmartly import ABsmartly
+        from sdk.absmartly import ABsmartly
+        from sdk.absmartly_config import ABsmartlyConfig as ABsmartlyConfigNew
         mock_data_provider = Mock(spec=ContextDataProvider)
         mock_event_handler = Mock(spec=ContextEventHandler)

-        config_lowercase = ABSmartlyConfig.__bases__[0]()
+        config_lowercase = ABsmartlyConfigNew()
         config_lowercase.context_data_provider = mock_data_provider
         config_lowercase.context_event_handler = mock_event_handler
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_sdk.py` around lines 166 - 168, The test is instantiating the base
class via ABSmartlyConfig.__bases__[0]() which is fragile; instead import the
actual base class (e.g., ABsmartlyConfig) at top of the test file and replace
the dynamic instantiation with a direct one (e.g., config_lowercase =
ABsmartlyConfig()) so config_lowercase.context_data_provider and
config_lowercase.context_event_handler are set on a stable, explicitly
referenced class.
sdk/absmartly.py (1)

141-149: Remove redundant import.

The warnings module is already imported at the top of the file (line 1).

Suggested fix
 class ABSmartly(ABsmartly):
     def __init__(self, *args, **kwargs):
-        import warnings
         warnings.warn(
             "ABSmartly is deprecated, use ABsmartly instead",
             DeprecationWarning,
             stacklevel=2,
         )
         super().__init__(*args, **kwargs)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/absmartly.py` around lines 141 - 149, The inner import of warnings inside
ABSmartly.__init__ is redundant; remove the line "import warnings" from the
constructor and call warnings.warn(...) directly (preserve the deprecation
message, DeprecationWarning type, and stacklevel=2) so the class ABSmartly
simply issues the warning and delegates to super().__init__(*args, **kwargs).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@sdk/context.py`:
- Around line 356-365: The set_refresh_timer logic can race with close_async
causing refresh_async (which calls check_not_closed) to raise a RuntimeError
during shutdown; modify set_refresh_timer so the inner ref() wraps the call to
self.refresh_async() in a try/except that catches RuntimeError raised by
check_not_closed and treats it as a no-op (swallow it) while re-raising other
unexpected errors, and only schedules a new threading.Timer (assigning to
self.refresh_timer and starting it) if self.is_closed() and self.is_closing()
remain False after the call; reference set_refresh_timer, ref, refresh_async,
refresh_timer, is_closing, is_closed and check_not_closed in your change.
- Around line 546-552: The close() method duplicates the error emitted by
flush() because flush() already logs/emits EventType.ERROR before completing its
future exceptionally; to fix it, remove the extra logging in close() by not
calling self.log_error(e) (instead just set self.close_error = e and re-raise)
or guard the call so it only logs if the error wasn't already emitted (e.g.,
check self.close_error or an "already_logged" flag); update the close()
implementation that calls close_async().result() accordingly to avoid
double-emitting the same error.

In `@test/test_context_canonical.py`:
- Around line 982-1005: The tests test_publish_delay_triggers_after_exposure and
test_publish_delay_triggers_after_goal use fixed time.sleep(0.3) which is flaky;
replace the fixed sleep with a polling loop that waits up to a short deadline
(e.g. 1s) checking context.get_pending_count() (or the internal
context._publish_calls count if available) and breaks as soon as pending_count
reaches 0, asserting failure if the deadline elapses; use
ContextConfig/create_context/timeout names from the diff to locate the tests and
ensure you clear/close the context as before.
- Around line 785-823: Tests are directly accessing context.units and
context.attributes, bypassing the new locked getters; update the failing tests
to call the public API methods get_unit(), get_units(), get_attribute(), and
get_attributes() instead of reading context.units/context.attributes directly
(e.g. replace checks like "self.assertIn('session_id', context.units)" with
assertions against context.get_units(), replace length and last-value checks on
attributes with calls to context.get_attribute('user_age') or
context.get_attributes() and then assert length/last.value), and keep the
existing behavioral assertions and context.close() calls unchanged.
- Around line 643-700: The tests in this block call context.close() instead of
the new public API finalize() so they don't exercise the wrapper; update each
test (e.g., test_finalize_does_not_publish_when_empty,
test_finalize_publishes_pending_events, test_finalize_includes_exposure_data,
test_finalize_includes_goal_data, test_finalize_includes_attribute_data,
test_finalize_clears_queue, test_finalize_stops_refresh_timer) to call
context.finalize() in place of context.close(), and add at least one async test
that awaits context.finalize_async() (use the test framework's async test
support or asyncio.run/loop to await finalize_async()) to verify the
asynchronous path; keep assertions the same and ensure any setup (like
data_future for refresh_timer) remains compatible with async finalization.
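The polling replacement suggested for the publish-delay tests can be sketched as a small helper; `predicate` would be something like `lambda: context.get_pending_count() == 0` in the tests above:

```python
import time

def wait_until(predicate, timeout=1.0, interval=0.01):
    # Poll instead of a fixed sleep: return as soon as the condition holds,
    # and fail fast at the deadline instead of always paying the full wait.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one final check at the deadline
```

A test would then assert `self.assertTrue(wait_until(...))` rather than sleeping a fixed 0.3s, which both speeds up the happy path and tolerates slow CI machines.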

---

Outside diff comments:
In `@sdk/context.py`:
- Around line 372-384: Move the "timeout is None" check inside the write lock
and re-check state after acquiring timeout_lock in set_timeout(); specifically,
acquire self.timeout_lock.write, then if self.is_ready() and self.publish_delay
>= 0 and self.timeout is None create and start the threading.Timer and assign it
to self.timeout; also ensure you handle any existing stale timer by cancelling
it when scheduling a new one or during shutdown (check and call cancel() on
self.timeout if present before replacing) so concurrent track()/queue_exposure()
calls cannot create an untracked timer.

---

Duplicate comments:
In `@sdk/context.py`:
- Around line 212-245: get_attribute/get_attributes only deepcopy dict/list and
set_attribute stores caller objects by reference; fix by doing a full defensive
deepcopy on all values when storing and when returning. Specifically, in
set_attribute make attribute.value = copy.deepcopy(value) (do the deepcopy
before appending) so the stored Attribute in set_attribute is a copy and cannot
be mutated by the caller, and in get_attribute and get_attributes always return
copy.deepcopy(result) for any non-primitive result (or simply deepcopy
unconditionally) instead of checking isinstance against dict/list; update
references to get_attribute, get_attributes, and set_attribute accordingly.

In `@sdk/jsonexpr/operators/match_operator.py`:
- Around line 26-36: The current ThreadPoolExecutor approach in
match_operator.py (using ThreadPoolExecutor, future.result with
self.REGEX_TIMEOUT_SECONDS, and future.cancel()) doesn't actually bound
execution because the context manager waits for the thread to finish and
cancel() won't stop a running regex; replace the thread-based timeout with a
process-isolated execution (e.g., spawn a short-lived Process or use
ProcessPoolExecutor) to run compiled.match(text) so the parent can terminate the
worker process on timeout, update the timeout handling in the method that calls
compiled.match to kill/terminate the worker and return None on timeout, and
remove reliance on future.cancel() and ThreadPoolExecutor to ensure pathological
regexes cannot block; keep the same return semantics (True/False/None) and
reference REGEX_TIMEOUT_SECONDS for the timeout value.

In `@test/test_context_canonical.py`:
- Around line 340-348: The test
test_treatment_queues_exposure_with_custom_assignment currently only exercises
the legacy alias cassignments; update it to pass the canonical field
custom_assignments to create_ready_context (e.g.,
create_ready_context(custom_assignments={"exp_test_ab": 2})) while leaving the
other test that uses cassignments in place so the backwards-compatibility path
remains covered; ensure assertions still check get_treatment("exp_test_ab"),
pending count, and that the exposure.custom flag is true after calling
context.close().

---

Nitpick comments:
In `@sdk/absmartly.py`:
- Around line 141-149: The inner import of warnings inside ABSmartly.__init__ is
redundant; remove the line "import warnings" from the constructor and call
warnings.warn(...) directly (preserve the deprecation message,
DeprecationWarning type, and stacklevel=2) so the class ABSmartly simply issues
the warning and delegates to super().__init__(*args, **kwargs).

In `@test/test_context.py`:
- Around line 1648-1654: The loop in the test iterates over
context.get_variable_keys() with an unused loop variable named key; rename that
loop variable to _key (e.g., for _key, experiments in res.items():) so the
unused variable follows Python conventions and avoids linter warnings while
keeping the rest of the assertions and context.close() unchanged.

In `@test/test_sdk.py`:
- Around line 166-168: The test is instantiating the base class via
ABSmartlyConfig.__bases__[0]() which is fragile; instead import the actual base
class (e.g., ABsmartlyConfig) at top of the test file and replace the dynamic
instantiation with a direct one (e.g., config_lowercase = ABsmartlyConfig()) so
config_lowercase.context_data_provider and
config_lowercase.context_event_handler are set on a stable, explicitly
referenced class.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 9843dde3-98d0-4128-a577-88718c5ae5d6

📥 Commits

Reviewing files that changed from the base of the PR and between aa0c329 and 2a4b8fb.

📒 Files selected for processing (15)
  • sdk/__init__.py
  • sdk/absmartly.py
  • sdk/absmartly_config.py
  • sdk/context.py
  • sdk/context_publisher.py
  • sdk/default_variable_parser.py
  • sdk/internal/murmur32.py
  • sdk/jsonexpr/operators/match_operator.py
  • sdk/variable_parser.py
  • test/test_concurrency.py
  • test/test_context.py
  • test/test_context_canonical.py
  • test/test_named_param_init.py
  • test/test_sdk.py
  • test/test_variant_assigner.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • test/test_named_param_init.py
  • sdk/default_variable_parser.py

Comment on lines 356 to +365
     def set_refresh_timer(self):
         if self.refresh_interval > 0 and self.refresh_timer is None and not self.is_closing() and not self.is_closed():
             def ref():
-                self.refresh_async()
-            self.refresh_timer = threading.Timer(
-                self.refresh_interval,
-                ref)
-            self.refresh_timer.start()
+                if not self.is_closed() and not self.is_closing():
+                    self.refresh_async()
+                    if not self.is_closed() and not self.is_closing():
+                        self.refresh_timer = threading.Timer(
+                            self.refresh_interval,
+                            ref)
+                        self.refresh_timer.start()

⚠️ Potential issue | 🟠 Major

refresh_timer can still throw during a normal close.

The checks at Line 359 and Line 361 are racy with close_async(). If a shutdown flips the context into the closing state after the first check but before refresh_async() runs, the timer thread will hit check_not_closed() and raise RuntimeError('ABsmartly Context is finalizing.'). That close/finalizing race should be treated as a no-op instead.

Possible fix

```diff
             def ref():
-                if not self.is_closed() and not self.is_closing():
-                    self.refresh_async()
-                    if not self.is_closed() and not self.is_closing():
-                        self.refresh_timer = threading.Timer(
-                            self.refresh_interval,
-                            ref)
-                        self.refresh_timer.start()
+                if self.is_closed() or self.is_closing():
+                    return
+
+                try:
+                    self.refresh_async()
+                except RuntimeError:
+                    return
+
+                if not self.is_closed() and not self.is_closing():
+                    self.refresh_timer = threading.Timer(
+                        self.refresh_interval,
+                        ref)
+                    self.refresh_timer.start()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context.py` around lines 356 - 365, The set_refresh_timer logic can race
with close_async causing refresh_async (which calls check_not_closed) to raise a
RuntimeError during shutdown; modify set_refresh_timer so the inner ref() wraps
the call to self.refresh_async() in a try/except that catches RuntimeError
raised by check_not_closed and treats it as a no-op (swallow it) while
re-raising other unexpected errors, and only schedules a new threading.Timer
(assigning to self.refresh_timer and starting it) if self.is_closed() and
self.is_closing() remain False after the call; reference set_refresh_timer, ref,
refresh_async, refresh_timer, is_closing, is_closed and check_not_closed in your
change.

Comment on lines 546 to +552

```diff
     def close(self):
-        self.close_async().result()
+        try:
+            self.close_async().result()
+        except Exception as e:
+            self.close_error = e
+            self.log_error(e)
+            raise
```

⚠️ Potential issue | 🟡 Minor

close() now logs the same publish failure twice.

flush() already emits EventType.ERROR before completing its future exceptionally. Logging e again here duplicates the error event for one failed final publish.

Possible fix

```diff
     def close(self):
         try:
             self.close_async().result()
         except Exception as e:
             self.close_error = e
-            self.log_error(e)
             raise
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sdk/context.py` around lines 546 - 552, The close() method duplicates the
error emitted by flush() because flush() already logs/emits EventType.ERROR
before completing its future exceptionally; to fix it, remove the extra logging
in close() by not calling self.log_error(e) (instead just set self.close_error =
e and re-raise) or guard the call so it only logs if the error wasn't already
emitted (e.g., check self.close_error or an "already_logged" flag); update the
close() implementation that calls close_async().result() accordingly to avoid
double-emitting the same error.
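The "already_logged" guard the prompt mentions can be sketched with a tiny emitter that deduplicates by exception identity; the class and event shape are hypothetical, not the SDK's:

```python
class ErrorLogger:
    """Minimal sketch of an emit-once guard for duplicate error events."""

    def __init__(self):
        self.events = []
        self._logged = set()

    def log_error(self, exc):
        # The same exception instance (e.g. re-raised from flush() through
        # close()) is only emitted once.
        if id(exc) in self._logged:
            return
        self._logged.add(id(exc))
        self.events.append(("error", str(exc)))
```

Removing the redundant `log_error` call, as the possible fix does, is the simpler option; a guard like this only helps when the duplicate call sites cannot all be audited.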

Comment on lines +643 to +700

```python
    def test_finalize_does_not_publish_when_empty(self):
        self.set_up()
        context = self.create_ready_context()
        context.close()
        self.assertEqual(0, len(self.client._publish_calls))
        self.assertTrue(context.is_closed())

    def test_finalize_publishes_pending_events(self):
        self.set_up()
        context = self.create_ready_context()
        context.track("goal1", {"amount": 100})
        context.close()
        self.assertEqual(1, len(self.client._publish_calls))
        self.assertTrue(context.is_closed())

    def test_finalize_includes_exposure_data(self):
        self.set_up()
        context = self.create_ready_context()
        context.get_treatment("exp_test_ab")
        context.close()
        self.assertEqual(1, len(self.client._publish_calls))
        event = self.client._publish_calls[0]
        self.assertIsNotNone(event.exposures)
        self.assertEqual(1, len(event.exposures))

    def test_finalize_includes_goal_data(self):
        self.set_up()
        context = self.create_ready_context()
        context.track("goal1", {"amount": 100})
        context.close()
        event = self.client._publish_calls[0]
        self.assertIsNotNone(event.goals)
        self.assertEqual(1, len(event.goals))

    def test_finalize_includes_attribute_data(self):
        self.set_up()
        context = self.create_ready_context()
        context.set_attribute("attr1", "value1")
        context.track("goal1", {"amount": 100})
        context.close()
        event = self.client._publish_calls[0]
        self.assertIsNotNone(event.attributes)
        self.assertEqual(1, len(event.attributes))

    def test_finalize_clears_queue(self):
        self.set_up()
        context = self.create_ready_context()
        context.track("goal1", {"amount": 100})
        context.close()
        self.assertEqual(0, context.get_pending_count())

    def test_finalize_stops_refresh_timer(self):
        self.set_up()
        context = self.create_ready_context(data_future=self.data_future_refresh)
        self.assertIsNotNone(context.refresh_timer)
        context.close()
        self.assertIsNone(context.refresh_timer)
```

⚠️ Potential issue | 🟡 Minor

ContextFinalizeTests never actually hit finalize() / finalize_async().

Every case in this block still exercises close(), so a broken wrapper on the new public API would leave the suite green. Please convert these to finalize() and add at least one finalize_async() path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_context_canonical.py` around lines 643 - 700, The tests in this
block call context.close() instead of the new public API finalize() so they
don't exercise the wrapper; update each test (e.g.,
test_finalize_does_not_publish_when_empty,
test_finalize_publishes_pending_events, test_finalize_includes_exposure_data,
test_finalize_includes_goal_data, test_finalize_includes_attribute_data,
test_finalize_clears_queue, test_finalize_stops_refresh_timer) to call
context.finalize() in place of context.close(), and add at least one async test
that awaits context.finalize_async() (use the test framework's async test
support or asyncio.run/loop to await finalize_async()) to verify the
asynchronous path; keep assertions the same and ensure any setup (like
data_future for refresh_timer) remains compatible with async finalization.
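Since the diff's async API is Future-based (`close_async().result()`), a finalize_async() test can follow the same shape. The `FakeContext` below is a self-contained stand-in whose method names mirror the review; the real SDK context is assumed to behave analogously:

```python
import concurrent.futures
import unittest

class FakeContext:
    """Stand-in mirroring the Future-based API the diff uses."""

    def __init__(self):
        self._closed = False
        self._executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

    def finalize_async(self):
        # Return a Future, matching the close_async().result() pattern.
        return self._executor.submit(self._finalize)

    def _finalize(self):
        self._closed = True

    def is_closed(self):
        return self._closed

class FinalizeAsyncTest(unittest.TestCase):
    def test_finalize_async_completes(self):
        ctx = FakeContext()
        ctx.finalize_async().result(timeout=5)  # block until finalization completes
        self.assertTrue(ctx.is_closed())
```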

Comment on lines +785 to +823

```python
    def test_set_unit_before_ready(self):
        self.set_up()
        config = ContextConfig()
        context = self.create_context(config, self.data_future)
        self.assertFalse(context.is_ready())
        context.set_unit("session_id", "some-session")
        self.assertIn("session_id", context.units)
        context.close()

    def test_set_unit_throws_after_close(self):
        self.set_up()
        context = self.create_ready_context()
        context.close()
        with self.assertRaises(RuntimeError):
            context.set_unit("device_id", "device-123")


class ContextAttributeTests(ContextCanonicalTestBase):

    def test_set_attribute_before_ready(self):
        self.set_up()
        config = ContextConfig()
        config.units = self.units
        context = self.create_context(config, self.data_future)
        self.assertFalse(context.is_ready())
        context.set_attribute("user_age", 25)
        self.assertEqual(1, len(context.attributes))
        context.close()

    def test_get_attribute_returns_last_set_value(self):
        self.set_up()
        context = self.create_ready_context()
        context.set_attribute("user_age", 25)
        context.set_attribute("user_age", 30)
        age_attrs = [a for a in context.attributes if a.name == "user_age"]
        self.assertEqual(2, len(age_attrs))
        self.assertEqual(30, age_attrs[-1].value)
        context.close()
```

⚠️ Potential issue | 🟡 Minor

These assertions bypass the new locked getters.

The PR adds get_unit(), get_units(), get_attribute() and get_attributes(), but these tests still inspect context.units / context.attributes directly. That leaves the new public API effectively untested.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_context_canonical.py` around lines 785 - 823, Tests are directly
accessing context.units and context.attributes, bypassing the new locked
getters; update the failing tests to call the public API methods get_unit(),
get_units(), get_attribute(), and get_attributes() instead of reading
context.units/context.attributes directly (e.g. replace checks like
"self.assertIn('session_id', context.units)" with assertions against
context.get_units(), replace length and last-value checks on attributes with
calls to context.get_attributes('user_age') or context.get_attributes() and then
assert length/last.value), and keep the existing behavioral assertions and
context.close() calls unchanged.
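The locked getters the review refers to typically take a lock and hand back a copy, so callers never see (or mutate) in-flight state. The sketch below assumes that shape; the real get_unit()/get_units() signatures are taken from the review's description, not verified against the SDK:

```python
import threading

class UnitStore:
    """Sketch of lock-protected unit storage with copying getters."""

    def __init__(self):
        self._lock = threading.RLock()
        self._units = {}

    def set_unit(self, unit_type, uid):
        with self._lock:
            self._units[unit_type] = uid

    def get_unit(self, unit_type):
        with self._lock:
            return self._units.get(unit_type)

    def get_units(self):
        with self._lock:
            # Return a copy so callers cannot mutate internal state
            # outside the lock.
            return dict(self._units)
```

Testing through these getters, as the prompt asks, also verifies the copy semantics: mutating the returned dict must not leak back into the context.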

Comment on lines +982 to +1005

```python
    def test_publish_delay_triggers_after_exposure(self):
        self.set_up()
        config = ContextConfig()
        config.units = self.units
        config.publish_delay = 0.1
        context = self.create_context(config, self.data_future_ready)
        context.get_treatment("exp_test_ab")
        self.assertEqual(1, context.get_pending_count())
        self.assertIsNotNone(context.timeout)
        time.sleep(0.3)
        self.assertEqual(0, context.get_pending_count())
        context.close()

    def test_publish_delay_triggers_after_goal(self):
        self.set_up()
        config = ContextConfig()
        config.units = self.units
        config.publish_delay = 0.1
        context = self.create_context(config, self.data_future_ready)
        context.track("goal1", {"amount": 100})
        self.assertEqual(1, context.get_pending_count())
        self.assertIsNotNone(context.timeout)
        time.sleep(0.3)
        self.assertEqual(0, context.get_pending_count())
```

⚠️ Potential issue | 🟡 Minor

Fixed sleeps will make the publish-delay tests flaky.

time.sleep(0.3) assumes the timer thread and publish callback always complete within 300 ms, which is brittle on busy CI runners. Prefer waiting on a condition/event, or polling with a deadline against _publish_calls / pending_count.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/test_context_canonical.py` around lines 982 - 1005, The tests
test_publish_delay_triggers_after_exposure and
test_publish_delay_triggers_after_goal use fixed time.sleep(0.3) which is flaky;
replace the fixed sleep with a polling loop that waits up to a short deadline
(e.g. 1s) checking context.get_pending_count() (or the internal
context._publish_calls count if available) and breaks as soon as pending_count
reaches 0, asserting failure if the deadline elapses; use
ContextConfig/create_context/timeout names from the diff to locate the tests and
ensure you clear/close the context as before.
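The deadline-polling approach the prompt describes can be factored into a small reusable helper; this is a generic sketch (the helper name is hypothetical), which a test would call as `wait_until(lambda: context.get_pending_count() == 0)`:

```python
import time

def wait_until(predicate, timeout=1.0, interval=0.01):
    """Poll `predicate` until it returns True or the deadline elapses.

    Returns the final predicate result, so callers can assert on it
    directly instead of sleeping for a fixed, machine-dependent delay.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one last check at the deadline
```

The fast path returns as soon as the condition holds, so the suite stays quick on healthy runners while tolerating slow CI up to the deadline.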
