
feat: add functions for add and replacing data directly with datafiles #723

Merged

zeroshade merged 1 commit into apache:main from agaddis02:main on Feb 19, 2026
Conversation

@agaddis02 (Contributor) commented on Feb 12, 2026

Context

If you want to write your own parquet files and use Iceberg only to handle the metadata, you are for the most part left with a single option: the `ReplaceDataFiles` function.

This function takes a list of existing files and a list of new file paths to override that previous data with.

It works fine for the most part, but it includes a scan, which means it does not simply take your word that your new parquet files match the table schema.

This scan proves problematic when you are writing files very quickly and leveraging multipart uploads. You know the location of every file and you know they are valid parquet files, but the commit can still return an error because, at commit time, a file might not yet be fully available.

The error looks something like this at commit time: `failed to replace data files: error encountered during file conversion: parquet: could not read 8 bytes from end of file` (the last 8 bytes of a parquet file are the footer length and magic number, which the scan needs in order to read the file's metadata).

Solution

We have tested this out in vendor code and opened a fork that adds a new function.

`ReplaceDataFiles` scans your file paths to ensure the schemas of those files match the schema of the table you are inserting them into.

We, and I would assume many people writing their own parquet files, don't need this. Our ingestion framework guarantees we will never produce an incorrect parquet file, and we have access to our Parquet schema and Arrow schema for the entirety of the ingestion.

So I can build data files directly and would much rather pass my own datafiles to this function, since I know the files will eventually be available and will be correct. All this does is tell the metadata where to find each file; there is no real harm in committing before a file is actually available, unless you query it right away and it happens to not be there yet.

This also speeds up commit time tremendously, since the library no longer needs to scan every file on every single commit.
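To make the intended usage concrete, here is a minimal sketch of committing pre-built `iceberg.DataFile` entries through a transaction. `AddDataFiles` is the function named later in this review; its exact signature, and the `NewTransaction`/`Commit` calls as used here, are assumptions for illustration rather than the confirmed shape of this PR's API.

```go
// Minimal sketch, not the confirmed API of this PR: AddDataFiles is named
// in the review below, but its exact signature is assumed here.
package ingest

import (
	"context"

	"github.com/apache/iceberg-go"
	"github.com/apache/iceberg-go/table"
)

// commitPrebuiltFiles stages DataFile entries built from schemas we already
// hold in memory, so no per-file parquet scan happens at commit time.
func commitPrebuiltFiles(ctx context.Context, tbl *table.Table, files []iceberg.DataFile) (*table.Table, error) {
	txn := tbl.NewTransaction()
	// Assumed call: validates what it can (partition spec, duplicate
	// paths) without opening the underlying parquet files.
	if err := txn.AddDataFiles(files); err != nil {
		return nil, err
	}
	return txn.Commit(ctx)
}
```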

@agaddis02 agaddis02 marked this pull request as ready for review February 12, 2026 22:16
@agaddis02 agaddis02 changed the title from "feat: add functions for add and replacing data directly with datafiles" to "#724 feat: add functions for add and replacing data directly with datafiles" on Feb 12, 2026
@agaddis02 (Contributor, Author) commented:

@rockwotj, @zeroshade & @subkanthi, this pull request has been updated based on the comments left on the other PR: #710.

Let me know if anything else needs to be added or changed. I believe this addresses the main points of concern: adding tests, explicitly warning users that this can be dangerous, and curbing that danger as much as possible by validating everything we can (without scanning the file).

@zeroshade (Member) left a comment:

I think the tests are missing some of the error cases, such as AddDataFiles or ReplaceDataFiles trying to add a duplicate file path.
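As a standalone illustration of the duplicate-path error case requested here, a sketch of how such a check can reject already-referenced paths without any file I/O; this is assumed logic mirroring the excerpt quoted further down this thread, not the library's exact code.

```go
// Assumed logic, not the library's exact code: reject incoming paths the
// table already references, without reading any file contents.
package main

import (
	"fmt"
	"strings"
)

func checkDuplicatePaths(existing map[string]struct{}, incoming []string) error {
	var dupes []string
	for _, p := range incoming {
		if _, ok := existing[p]; ok {
			dupes = append(dupes, p)
		}
	}
	if len(dupes) > 0 {
		return fmt.Errorf("cannot add files that are already referenced by table, files: %s",
			strings.Join(dupes, ", "))
	}
	return nil
}

func main() {
	existing := map[string]struct{}{"s3://bucket/data/0001.parquet": {}}
	// Adding the same path again should fail, with no file I/O involved.
	fmt.Println(checkDuplicatePaths(existing, []string{"s3://bucket/data/0001.parquet"}))
}
```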

@agaddis02 (Contributor, Author) commented:

That makes sense; I will add more test cases that try to reach all possible paths of the functions.

@agaddis02 (Contributor, Author) commented:

@zeroshade, updated to include tests that should cover all possible error paths of the new functions.

Also resolved conflicts.

@zeroshade (Member) left a comment:

Only one nitpick, but otherwise this looks good!

Comment thread table/transaction.go
@zeroshade zeroshade changed the title from "#724 feat: add functions for add and replacing data directly with datafiles" to "feat: add functions for add and replacing data directly with datafiles" on Feb 18, 2026
@zeroshade zeroshade merged commit a576506 into apache:main Feb 19, 2026
13 checks passed
@rockwotj (Contributor) commented:

Thank you @zeroshade. Any idea when the next release is? I can use the tip of main, but it's been since October and there are plenty of good things queued up for release (schema update APIs, this, etc.).

@zeroshade (Member) commented:

I was just thinking about that, @rockwotj. I'm thinking that once we get #709 merged, that'll be a good milestone for the next release.

Comment thread table/transaction.go
```go
func (t *Transaction) validateDataFilesToAdd(dataFiles []iceberg.DataFile, operation string) (map[string]struct{}, error) {
	currentSpec, err := t.meta.CurrentSpec()
	if err != nil || currentSpec == nil {
		return nil, fmt.Errorf("could not get current partition spec: %w", err)
```
Contributor commented:

err might be nil here, which would break the %w formatting.

Might be better to return a separate error: `no current partition spec found`.
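A sketch of the suggested fix, splitting the two failure modes so a nil err is never passed to %w (which would render as `%!w(<nil>)` in the message); this assumes the surrounding function from the excerpt above and an added `errors` import.

```go
currentSpec, err := t.meta.CurrentSpec()
if err != nil {
	return nil, fmt.Errorf("could not get current partition spec: %w", err)
}
if currentSpec == nil {
	// Separate error for the nil-spec case, as suggested above.
	return nil, errors.New("no current partition spec found")
}
```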


Comment thread table/transaction.go
Comment on lines +442 to +444
```go
	if partitionData == nil {
		partitionData = map[int]any{}
	}
```
Contributor commented:

Reading a nil map is fine, so we don't need this, right?
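A quick, runnable demonstration of the point: in Go, reading from a nil map behaves like reading from an empty map, so the nil check above only matters if the map is later written to (which would panic).

```go
package main

import "fmt"

func main() {
	var partitionData map[int]any // nil map

	v, ok := partitionData[0] // reading a nil map is safe
	fmt.Println(v, ok)        // prints: <nil> false

	// partitionData[0] = "x" // writing would panic: assignment to entry in nil map
}
```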

Comment thread table/transaction.go
```go
	}

	if len(referenced) > 0 {
		return fmt.Errorf("cannot add files that are already referenced by table, files: %s", referenced)
```
Contributor commented:

referenced is a slice, not a string. %v would be better, or convert it to a string with strings.Join.
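A runnable comparison of the two options: the default verbs print a []string bracketed and space-separated, while strings.Join produces a cleaner, comma-separated message.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	referenced := []string{"s3://bucket/a.parquet", "s3://bucket/b.parquet"}

	// Default slice formatting: files: [s3://bucket/a.parquet s3://bucket/b.parquet]
	fmt.Println(fmt.Errorf("files: %v", referenced))

	// Joined: files: s3://bucket/a.parquet, s3://bucket/b.parquet
	fmt.Println(fmt.Errorf("files: %s", strings.Join(referenced, ", ")))
}
```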

zeroshade pushed a commit that referenced this pull request Feb 20, 2026
related to #723

Signed-off-by: ferhat elmas <elmas.ferhat@gmail.com>
rockwotj pushed a commit to rockwotj/iceberg-go that referenced this pull request Mar 18, 2026
feat: add functions for add and replacing data directly with datafiles (apache#723)
Co-authored-by: Adam Gaddis <adamtyler@cloudflare.com>
rockwotj pushed a commit to rockwotj/iceberg-go that referenced this pull request Mar 18, 2026
related to apache#723

Signed-off-by: ferhat elmas <elmas.ferhat@gmail.com>
