(All Linux machines)
I have a job1.sh script that runs a main job with 5 subjobs. The 5th and final subjob is composed of 4 asynchronous subjobs. Async subjob 4 uses a tMap to join a main MySQL table with a lookup MySQL table. There are two outputs, each of which ultimately leads to a tSSH component.
To start with, each tSSH component connected to the target machine and simply ran the 'ls' command - successful.
I then designed the target-machine ETL scripts and tested them - no errors. (Async subjob 4 passes a record id via the tSSH command to the target machines as a parameter to their .sh scripts, so I ran each target-machine ETL script with a manually supplied parameter.)
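For reference, the shape of the command a tSSH component would send is roughly the following sketch. The host, script path, and record id are placeholders I made up for illustration, not values from the actual job; `echo` stands in for actually invoking ssh.

```shell
# Hypothetical sketch of the command string a tSSH component sends.
# TARGET, SCRIPT, and RECORD_ID are placeholders, not real values.
RECORD_ID="12345"                 # would come from the tMap output row
TARGET="etl@target-host"          # hypothetical user/host
SCRIPT="/opt/etl/target_etl.sh"   # hypothetical path on the called machine

# Quote the parameter so it survives the remote shell's word splitting;
# echo is used here instead of actually running ssh.
echo ssh "$TARGET" "$SCRIPT '$RECORD_ID'"
```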
I am stuck combining the two.
The target ETL script fires, and I have it output some variables so I can see the record id that was passed before it gets into the main execution - the variables look good.
The target ETL script has two subjobs. Before these two subjobs, a connection is made to a database and a tMysqlInput -> tFlowToIterate -> tMysqlInput -> tJavaRow process runs. The first subjob uses a bash script to check whether a database exists, and exits if it doesn't. The second checks whether another database exists, and creates it if not.
The bash script that checks whether the database exists ALWAYS exits - as if the script were being run on the calling machine rather than the called machine. I have checked the machine's IP by adding some output to the script: it is the called machine's IP.
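For comparison, the existence check I'd expect on the called machine looks something like this sketch. The function names and the `MYSQL_CMD` override are my own assumptions for testability, not the original script.

```shell
# Hypothetical database-existence check (not the original script).
# MYSQL_CMD lets a wrapper or test substitute the mysql client.
db_exists() {
  local name="$1"
  # SHOW DATABASES LIKE prints the name when present, nothing otherwise
  ${MYSQL_CMD:-mysql -N -B} -e "SHOW DATABASES LIKE '${name}'" | grep -qx "$name"
}

check_or_exit() {
  if db_exists "$1"; then
    echo "database $1 exists"
  else
    echo "database $1 missing, exiting" >&2
    return 1
  fi
}
```

The important property is that `db_exists` must run against the *called* machine's MySQL server; if the command string is expanded on the caller before ssh delivers it, the check can silently test the wrong host.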
I have also tried using tSystem with an ssh ip 'command' invocation, with no change. (If I execute the same ssh ip 'command' by hand from the calling machine, the ETL script on the called machine runs just fine.)
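One classic gotcha with ssh host "command" is quoting: anything inside double quotes is expanded by the calling machine's shell before ssh ever runs, so a command string built with double quotes can end up testing the caller's environment instead of the target's. A minimal local demonstration of the difference, using `sh -c` as a stand-in for the remote shell (the `run_remote` helper is hypothetical):

```shell
# sh -c parses the command string in a child shell, much as the
# sshd-spawned shell on the called machine parses the string ssh delivers.
MARKER="caller"
run_remote() { MARKER="callee" sh -c "$1"; }   # hypothetical stand-in for ssh

# Double quotes: $MARKER expands on the caller side before the string
# is handed over, so the child prints "caller".
run_remote "echo $MARKER"

# Single quotes: the string arrives intact and the child expands it,
# so it prints "callee" - the behaviour you usually want over ssh.
run_remote 'echo $MARKER'
```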
So combining the two ETL jobs (local and remote) appears to be the problem... The DB connections are registered under different names, which should eliminate them as a factor.
Is anybody doing this sort of thing and willing to share any gotchas?
Last edited by pgtips (2011-04-08 21:00:43)