@michaeloyer
Last active January 9, 2022 22:47
Example Showing how CSV Provider will use chunks of a stream while iterating
#r "nuget:FSharp.Data"
open System.IO
open System.Text
open FSharp.Data
type Csv = CsvProvider<"A,B,C\n1,2,3", CacheRows=false>
let csvStream =
let stream = new MemoryStream()
stream.Write(Encoding.UTF8.GetBytes("A,B,C\n"))
for i in 1..500 do
stream.Write(Encoding.UTF8.GetBytes($"{i},{i+1},{i+2}\n"))
stream.Position <- 0
stream
let mutable position: int64 = 0
for row in Csv.Load(csvStream).Rows do
if position <> csvStream.Position then
position <- csvStream.Position
printfn $"A = {row.A}; B = {row.B}; C = {row.C}; StreamPosition = {csvStream.Position}"
A = 1; B = 2; C = 3; StreamPosition = 1024
A = 112; B = 113; C = 114; StreamPosition = 2048
A = 197; B = 198; C = 199; StreamPosition = 3072
A = 283; B = 284; C = 285; StreamPosition = 4096
A = 368; B = 369; C = 370; StreamPosition = 5120
A = 453; B = 454; C = 455; StreamPosition = 5688
@HarryMcCarney commented Jan 9, 2022

This is very elegant. But doesn't it assume that the CSV is already in memory as a byte array?

When I execute the code below, it seems like Movements.Load is trying to load the whole file into memory before processing the first row. Or am I doing something wrong/stupid? The code works fine on smaller files.

let data = "some20gbdatafile.csv"
 let sr = new StreamReader(data)   
    try 
        for row in Movements.Load(sr).Rows do
            WriteRowsToDb row

@michaeloyer (Author)
Yes, my example does have the whole CSV in a MemoryStream, but only to show how the file is read in chunks (by printing the stream's position) without actually writing anything to my file system. You can swap in a StreamReader like you have, since Load has an overload that takes a TextReader.
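
Something along these lines should keep memory flat while it streams (the sample schema and row handler here are placeholders; substitute your real Movements definition and WriteRowsToDb):

#r "nuget:FSharp.Data"

open System.IO
open FSharp.Data

// Placeholder sample schema; your Movements provider will have its own
type Movements = CsvProvider<"A,B,C\n1,2,3", CacheRows=false>

let processFile (path: string) writeRowToDb =
    // StreamReader is a TextReader, so Load pulls from it lazily as Rows is enumerated
    use reader = new StreamReader(path)
    for row in Movements.Load(reader).Rows do
        writeRowToDb row

// e.g. processFile "some20gbdatafile.csv" WriteRowsToDb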

It turns out I'm the dummy today: I forgot that you also need to set CacheRows to false, since it actually defaults to true:

type Csv = CsvProvider<"A,B,C\n1,2,3", CacheRows=false>

I updated the example above. Hopefully that works; that's what I get for not creating a 20GB file myself to make extra sure it does what I think it does. 😆
