I have these huge positional files (21.8 GB) that I need to load into a MySQL database. I'm using TOS DI for this, and I found out about tMysqlBulkExec thanks to advice from this forum. I was able to load a small test file into my table, but only the first column was loaded; the rest was 0's and nulls. I discovered that the positional file schema I created in the repository was not being used. Instead, the properties in the Advanced settings tab were used, so the job was separating the fields by ";". Just to complete the test, I converted the positional file into a delimited file and executed tMysqlBulkExec again, and it worked like a charm.
I already checked the documentation, but I cannot find where it says that only delimited files can be used with tMysqlBulkExec. My question is: do I have to add a step to my job that converts the positional file into a delimited one in order to use the bulk load?
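For reference, the pre-conversion step amounts to slicing each fixed-width record into fields and re-joining them with a delimiter. A minimal sketch in Python; the column widths and the semicolon separator here are assumptions for illustration, not your actual repository schema:

```python
# Convert a fixed-width (positional) file into a semicolon-delimited file.
# WIDTHS is a hypothetical layout -- replace it with your repository schema.
WIDTHS = [10, 5, 8]

def positional_to_delimited(line, widths=WIDTHS, sep=";"):
    """Slice one fixed-width record into fields and join with a delimiter."""
    fields, pos = [], 0
    for w in widths:
        fields.append(line[pos:pos + w].strip())
        pos += w
    return sep.join(fields)

def convert(src_path, dst_path):
    """Stream-convert a positional file line by line (no full file in memory)."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            dst.write(positional_to_delimited(line.rstrip("\n")) + "\n")
```

Streaming line by line matters for files of this size, since the whole 21.8 GB never has to fit in memory.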
Thank you Pedro.
I did as you suggested and it seems to be working. What I did was create a job with two components: tFileInputPositional and tMysqlOutputBulkExec. Is this what you meant, or is there another way to accomplish this?
How can I change the commit interval for this job? I mean, how can I set it to commit every 1 million rows?
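For context, "commit every N rows" is a standard batching pattern: accumulate inserts and commit once per batch so no single transaction grows unbounded. A sketch of the idea using Python's built-in sqlite3 (not MySQL, and not what Talend does internally; purely to illustrate the pattern, with an arbitrary COMMIT_EVERY value):

```python
import sqlite3

COMMIT_EVERY = 1_000_000  # commit interval, as in the old "Commit every" field

def load_rows(conn, rows):
    """Insert rows, committing every COMMIT_EVERY rows to bound transaction size."""
    cur = conn.cursor()
    for i, row in enumerate(rows, start=1):
        cur.execute("INSERT INTO t (a, b) VALUES (?, ?)", row)
        if i % COMMIT_EVERY == 0:
            conn.commit()
    conn.commit()  # flush the final partial batch

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT, b INTEGER)")
load_rows(conn, [("x", 1), ("y", 2)])
```

Note that a bulk load via LOAD DATA doesn't expose this knob the same way, which is why the option is missing from the bulk components.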
It just loaded the first file, inserting 9,811,613 rows in 1 hr 22 min on a MacBook Pro i7 with 4 GB of RAM.
I forgot to mention in my previous post that I checked the documentation, in particular the TOS Components 3.x reference, because it was the only version I was able to find. In that documentation there is an explanation for a field named "Commit every" in the Basic settings:
Number of rows to be completed before committing batches of rows together into the DB. This option ensures transaction quality (but not rollback) and above all better performance on executions.
Obviously that document is outdated, but there was a field for the commit...
Yes. The job should be like this.
We can't find a 'commit' option now, but you might try 'Custom the flush buffer size' to reduce memory usage.
The component tMysqlOutputBulkExec executes the command 'LOAD DATA LOCAL INFILE' for bulk loading.
There is no option that can replace the commit feature on this component now.
Maybe you can try increasing the JVM memory arguments for better performance.
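Since tMysqlOutputBulkExec boils down to a LOAD DATA LOCAL INFILE statement, it can help to see roughly what that statement looks like. A sketch that just formats the SQL in Python (the table and file names are placeholders; this does not connect to MySQL, and the real component may add further clauses):

```python
def load_data_statement(table, infile, sep=";", line_end="\\n"):
    """Build a LOAD DATA LOCAL INFILE statement for a delimited file."""
    return (
        f"LOAD DATA LOCAL INFILE '{infile}' "
        f"INTO TABLE {table} "
        f"FIELDS TERMINATED BY '{sep}' "
        f"LINES TERMINATED BY '{line_end}'"
    )

# Hypothetical table and file, matching the semicolon-delimited test file above.
print(load_data_statement("my_table", "/tmp/data.csv"))
```

Because the whole file is loaded in one server-side statement, there is no per-batch commit point for the client to control, which matches the missing 'Commit every' option.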