I have a Job Design that runs successfully with a small volume, but as soon as I try a larger volume it crashes with the error below. This happens both on a Windows machine running the job via TOS, and also after deploying on a Unix box.
It looks like the records (1 million) are being read OK from the input sources, but that the tMap may be unable to cope. I have used the 'store on disk' option and have experimented with the 'Max buffer size (nb of rows)' setting. I have also tried setting the Xmx parameter higher in both the TOS preferences and the Unix wrapper script, again without success.
Any suggestions appreciated.
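For reference, this is roughly how I am raising the heap in the Unix wrapper script. The class name and classpath below are placeholders standing in for whatever the generated script actually contains; only the -Xms/-Xmx flags are the change I made:

```shell
#!/bin/sh
# Hypothetical excerpt from the generated job wrapper script.
# Only JAVA_OPTS is my edit; the java invocation is a placeholder
# for the real generated command line.
JAVA_OPTS="-Xms512m -Xmx2048m"

# java $JAVA_OPTS -cp "$ROOT_PATH/classes:..." recon_dn001schedule.Recon_Dn001schedule "$@"
echo "$JAVA_OPTS"
```

Note that the crash log shows a 32-bit Client VM on Windows, so the heap I can actually request there is capped well below 2 GB regardless of what I put in -Xmx.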
Starting job Recon_Dn001schedule at 10:43 22/07/2010.
[statistics] connecting to socket on port 3572
Buffer marked at index (1-Lookup) 2000000, to avoid a heap space memory error try to increase the JVM Xmx parameter.
# A fatal error has been detected by the Java Runtime Environment:
# java.lang.OutOfMemoryError: requested 4092 bytes for char in C:\BUILD_AREA\jdk6_20\hotspot\src\share\vm\utilities\stack.inline.hpp. Out of swap space?
# Internal Error (allocation.inline.hpp:39), pid=4240, tid=2812
# Error: char in C:\BUILD_AREA\jdk6_20\hotspot\src\share\vm\utilities\stack.inline.hpp
# JRE version: 6.0_20-b02
# Java VM: Java HotSpot(TM) Client VM (16.3-b01 mixed mode windows-x86 )
# An error report file with more information is saved as:
# If you would like to submit a bug report, please visit:
Job Recon_Dn001schedule ended at 10:44 22/07/2010. [exit code=1]