[SYNPY-1749] Allow quote, apostrophe and ellipsis in store_row_async #1316

Open
danlu1 wants to merge 19 commits into develop from
SYNPY-1749-allow-quote-apostrophe-in-store-rows

Conversation


@danlu1 danlu1 commented Feb 9, 2026

Problem:

A JSON serialization issue occurs when a DataFrame passed to store_row_async contains a list or dictionary with strings that include both double quotes and apostrophes.

Solution:

  1. Add a default value "escapechar": "\\" to to_csv_kwargs in store_rows_async.
  2. Pass to_csv_kwargs through to _stream_and_update_from_df so the supplied values reach downstream data processing.
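As a standalone sketch of why the escapechar default matters (plain pandas, not the synapseclient code path itself; the doublequote=False setting here is only to make the failure reproducible):

```python
import csv
import io

import pandas as pd

# Hypothetical cell value containing both a double quote and an apostrophe.
df = pd.DataFrame({"annotations": ['he said "hi", it\'s fine']})

# Without an escapechar, disabling quote-doubling makes the CSV writer
# fail on the embedded double quote.
try:
    df.to_csv(io.StringIO(), index=False, doublequote=False)
except csv.Error:
    pass  # "need to escape, but no escapechar set"

# With escapechar="\\", the embedded quote is backslash-escaped and the
# value survives a write/read round trip.
buf = io.StringIO()
df.to_csv(buf, index=False, doublequote=False, escapechar="\\")
buf.seek(0)
restored = pd.read_csv(buf, doublequote=False, escapechar="\\")
assert restored["annotations"][0] == df["annotations"][0]
```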

Testing:

Unit tests and integration tests have been added.

@danlu1 danlu1 requested a review from a team as a code owner February 9, 2026 20:01
@andrewelamb (Contributor)

@danlu1 Is this still WIP, or are you looking for reviews?


danlu1 commented Feb 10, 2026

@andrewelamb Sorry, I should have marked this as a draft.

@danlu1 danlu1 marked this pull request as draft February 10, 2026 00:48
@danlu1 danlu1 marked this pull request as ready for review February 18, 2026 18:36

danlu1 commented Feb 18, 2026

The integration test failures are in the recordset and submission modules and do not appear to be related to my changes.


@linglp linglp left a comment


I think overall it looks good. The tests can be consolidated to cover all the edge cases in fewer integration tests, which would improve performance, and the docstring should be updated to reflect the new state of the code now that json.dumps() has been removed. There is also some logic that can be simplified: sample_values is created but never actually used. The function name could also be more descriptive of what it actually does now (sanitizing special values rather than just converting dtypes).

@danlu1 danlu1 requested a review from linglp February 24, 2026 00:15

@BryanFauble BryanFauble left a comment


This is looking great, once we get the last few items handled (and develop merged in), I can approve!


@BryanFauble BryanFauble left a comment


Nice work on this fix -- the backslash-escaping approach for embedded quotes makes sense, and the Ellipsis/pd.NA handling is solid. I flagged one issue that I think needs to be addressed before merge, plus a few cleanup nits.

Note: This comment was drafted with AI assistance and reviewed by me for accuracy.

@danlu1 danlu1 requested review from BryanFauble and linglp March 9, 2026 19:03

@linglp linglp left a comment


Hi @danlu1! Thanks for your hard work. I think I found two bugs after a few rounds of testing myself:

Bug 1: Nested np.nan not handled

df = pd.DataFrame({"val": [[1.0, np.nan], [2.0, 3.0]]})

The nested np.nan would currently pass through to JSON serialization, which could cause issues since np.nan is not valid JSON.

This happens because convert_dtypes() only converts top-level np.nan to pd.NA; nested np.nan remains unchanged. And even though your code has:

if obj is pd.NA:

But this won't handle np.nan. The fix would just be:

def _reformat_special_values(obj):
    if pd.isna(obj):      # Catches pd.NA, np.nan, and None
        return None
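The difference between the identity check and pd.isna can be verified directly (standalone pandas, illustrating the suggestion above):

```python
import numpy as np
import pandas as pd

# pd.isna recognizes every scalar missing-value representation...
assert pd.isna(pd.NA) and pd.isna(np.nan) and pd.isna(None)
# ...while an identity check against pd.NA misses np.nan and None:
assert np.nan is not pd.NA and None is not pd.NA
# One caveat: on a list, pd.isna is elementwise, so the scalar check
# should run only after the container branches have recursed.
assert list(pd.isna([1.0, np.nan])) == [False, True]
```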

Bug 2: Top-level missing values not handled

def _serialize_json_value(x):
    if isinstance(x, (list, dict)):
        # pd.NA handling is only here, inside _reformat_special_values
        ...
    if x is ...:
        return "..."
    return x  # <-- Top-level pd.NA, np.nan, None just pass through

Top-level pd.NA (or np.nan) isn't converted to None. It only works now because .replace({pd.NA: None}) is called later.

df[col] = (
    df[col].replace({pd.NA: None}).astype(object)
)  # this will convert the int64 and float64 columns to object columns
sample_values = df[col].dropna()


dropna() + len() check is probably unnecessary.
Since apply() works fine on empty series and the function handles nulls, we could just apply the function directly:

    for col in df.columns:
        df[col] = df[col].apply(_serialize_json_value)

I tried this and all the unit tests pass.
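The empty-series claim is easy to check in isolation:

```python
import pandas as pd

# Series.apply on an empty Series simply returns an empty Series,
# so no dropna()/len() guard is needed before applying the sanitizer.
empty = pd.Series([], dtype=object)
result = empty.apply(lambda x: x * 2)
assert result.empty
```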

df[col] = df[col].apply(_serialize_json_value)
# restore the original dtypes of the column, especially for int64 and
# float64 columns, since apply() changes the dtype
df[col] = df[col].convert_dtypes()
df[col] = df[col].replace({pd.NA: None}).astype(object)
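The dtype round trip this diff relies on can be seen with a minimal standalone example (plain pandas, not PR code):

```python
import numpy as np
import pandas as pd

# convert_dtypes() promotes float64 with integral values to nullable
# Int64, turning np.nan back into pd.NA in the process.
s = pd.Series([1.0, np.nan]).convert_dtypes()
assert str(s.dtype) == "Int64"
assert s[1] is pd.NA

# The trailing replace() is what maps that reintroduced pd.NA back to a
# plain missing value before JSON serialization.
s = s.replace({pd.NA: None}).astype(object)
assert pd.isna(s[1])
```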


the .replace({pd.NA: None}) is doing two things:

  • Catching top-level pd.NA that _serialize_json_value missed
  • Converting any pd.NA reintroduced by convert_dtypes()

I think it might be cleaner to have _serialize_json_value handle top-level NaN like this:

def _serialize_json_value(x):
    if pd.isna(x):  # Handle top-level NA
        return None


@linglp linglp Mar 13, 2026


As an example, the pd.NA here in the dataframe is handled by df[col].replace({pd.NA: None}) rather than _serialize_json_value.

df = pd.DataFrame({
    "name": ["Alice", pd.NA, "Charlie"],
    "age": [25, 30, pd.NA]
})
df = convert_dtypes_to_json_serializable(df)

I am not sure if this is intended.

}).convert_dtypes()


Do we still need to call .convert_dtypes() here, since convert_dtypes_to_json_serializable already calls convert_dtypes() in one of its steps?

def _reformat_special_values(obj):
    if obj is ...:
        return "..."
    if obj is pd.NA:


I tried:

df = pd.DataFrame({"val": [[1.0, np.nan], [3.0, pd.NA], [None, None]]})
convert_dtypes_to_json_serializable(df)
print(df)

and then I saw:

            val
0    [1.0, nan]
1   [3.0, None]
2  [None, None]

As you can see, np.nan remains as nan. Should this be changed to `if pd.isna(obj)` to catch all the different representations of missing values, including `pd.NA`, `np.nan`, and `None`?
