Hello all,
I'm stuck again. I'm in my fifth month of Ruby on Rails and getting
better, but...
I'm uploading a .csv file using Paperclip and FasterCSV, then processing
the file after the upload is complete.
Small files work fine, but when I tested a larger file I found that one
cell had 13,000 characters. Needless to say, ORA-01704 (string literal
too long) was thrown.
I process the file line by line.
- controller -
def proc_csv
  @import = Import.find(params[:id])
  puts @import.csv.path
  lines = parse_csv_file(@import.csv.path)
  lines.shift # comment this line out if your CSV file doesn't contain a header row
  if lines.size > 0
    @import.processed = lines.size
    lines.each do |line|
      case @import.datatype
      when "irb"
        new_irb(line)
      end
    end
    @import.save
    flash[:notice] = "CSV data processing was successful."
    redirect_to :action => "show", :id => @import.id
  else
    flash[:error] = "CSV data processing failed."
    render :action => "show", :id => @import.id
  end
end
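For anyone trying to reproduce this: parse_csv_file is just a thin
FasterCSV wrapper. A minimal sketch of such a helper (my real one may
differ in error handling) would be:

require 'fastercsv'

# Read the whole CSV file into an array of rows, where each row is an
# array of cell strings.
def parse_csv_file(path)
  FasterCSV.read(path)
end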
I've done some research and I know Oracle's CLOB datatype can hold data
that size, but SQL string literals are limited to 4,000 characters at a
time. I'm just wondering how to get around that limit.
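If switching the column to a CLOB is the right move, I'm guessing a
migration along these lines is what I'd need (the table and column
names here are placeholders for wherever the big cell ends up):

class ChangeNotesToClob < ActiveRecord::Migration
  def self.up
    # :text maps to CLOB under the Oracle adapter; :irbs/:notes are
    # made-up names for the table and column holding the large value
    change_column :irbs, :notes, :text
  end

  def self.down
    # VARCHAR2 columns in Oracle top out at 4,000 characters
    change_column :irbs, :notes, :string, :limit => 4000
  end
end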
From what I can find, Oracle's CLOB data type can store up to 4
gigabytes. I would hope that the Ruby Oracle database adapter takes
care of dealing with the CLOB field; you shouldn't have to worry about
that yourself.
Have you actually tried to store the 13 KB string in a CLOB field?
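If not, a quick console test like this should settle it (model and
attribute names are made up; the column is assumed to be :text, which
the Oracle adapter maps to CLOB):

# Build a record whose attribute holds a 13,000-character string
irb_record = Irb.new(:notes => "x" * 13_000)
irb_record.save!
irb_record.reload
puts irb_record.notes.length  # => 13000 if the CLOB round-trips intact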