Description
While using the csvutil library to decode CSV data, I've run into an issue when a CSV row contains more fields than the header defines, even with `csvReader.FieldsPerRecord = -1`. The library currently appears to require that every row match the number of fields in the header, even when some of those fields are not mapped to the struct.
Example
```go
package main

import (
	"bytes"
	"encoding/csv"
	"io"
	"log"

	"github.com/jszwec/csvutil"
)

type Record struct {
	Name string `csv:"name"`
}

func main() {
	reader := bytes.NewBufferString(`name,items
alice,apple
bob,banana,orange`)
	csvReader := csv.NewReader(reader)
	csvReader.LazyQuotes = true
	csvReader.FieldsPerRecord = -1

	dec, err := csvutil.NewDecoder(csvReader)
	if err != nil {
		log.Fatal(err)
	}

	for {
		var record Record
		if err := dec.Decode(&record); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		log.Printf("%+v", record)
	}
}
```
Output:

```
2023/08/06 15:33:52 {Name:alice}
2023/08/06 15:33:52 wrong number of fields in record
```
Expected Behavior
I expect the library to ignore extra fields in the CSV data that are not represented in the struct, rather than returning an error.
Actual Behavior
Decoding the row with extra fields (`bob,banana,orange`) fails with the error `wrong number of fields in record`.
Potential Solution
A possible enhancement would be an option that lets the library ignore unmapped fields in the CSV data, making it more robust when handling inconsistent input.
Additional Context
This behavior can pose challenges when working with real-world CSV data, where some rows may have inconsistent numbers of fields. Being able to gracefully handle such inconsistencies would be a valuable feature.
Thank you for this convenient library. Please let me know if further details or clarifications are required. Your attention to this matter is greatly appreciated!