I am trying to initialize a global variable that will be used as part of the file name (in tFileOutputDelimited) created within a log catcher subjob. That is, this all occurs within a tLogCatcher subjob, and this subjob can be 'called' many times -- hence the need for uniquely named log files.
tSetGlobalVar (global var set here)
tLogCatcher->tMap->tFileOutputDelimited (use of global var as part of file name to make file name distinct. File contains default schema/contents sent by tLogCatcher)
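The flow above can be sketched in plain Java. This is only an illustration of the idea, not Talend-generated code: `globalMap` here is a stand-in for Talend's job-level `Map<String, Object>`, and the key name `"fileSuffix"` is an assumed example, not something from the original post.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

public class UniqueSuffixSketch {
    // Stand-in for Talend's globalMap (a job-scoped Map<String, Object>).
    static Map<String, Object> globalMap = new HashMap<>();

    public static void main(String[] args) {
        // What a tSetGlobalVar entry might compute: a random 6-digit suffix.
        // The key name "fileSuffix" is an assumption for illustration.
        String suffix = String.format("%06d", new Random().nextInt(1_000_000));
        globalMap.put("fileSuffix", suffix);

        // A filename expression as it might appear in tFileOutputDelimited:
        String fileName = "log_" + ((String) globalMap.get("fileSuffix")) + ".csv";
        System.out.println(fileName); // e.g. log_048213.csv
    }
}
```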
It works for all files except the first. The files -- ordered by creation date from first to last -- look like this (NOTE: the global variable is the random 6 digits in the file name):
Why does this not work for the first file? That is, the "null" file is there because the global variable is not populated yet; on subsequent runs it works fine. The null file does contain the tLogCatcher data, as do all the subsequent files.
I guess I need to know: within a tLogCatcher subjob, how can I ensure that the global variable is set before I send the tLogCatcher default information to the file whose name includes that variable?
This is because when the subjob starts, the file gets created before the flow actually runs. So at initialization, the first file won't have a proper name. A workaround would be to use an onSubjobOk link, like this:
(first subjob)
    |  onSubjobOk
    v
tBufferInput -> tFileOutputDelimited
This way the file name in the tFileOutputDelimited will be built only once the second subjob starts.
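The "null" in the first file's name is simply what Java string concatenation produces when a globalMap key has not been set yet. A minimal sketch (the key name `"fileSuffix"` is a hypothetical example):

```java
import java.util.HashMap;
import java.util.Map;

public class NullNameSketch {
    public static void main(String[] args) {
        // Stand-in for globalMap, empty because tSetGlobalVar has not run yet.
        Map<String, Object> globalMap = new HashMap<>();

        // The filename expression is evaluated at subjob start;
        // a missing key concatenates as the literal string "null".
        String fileName = "log_" + globalMap.get("fileSuffix") + ".csv";
        System.out.println(fileName); // prints "log_null.csv"
    }
}
```

Moving the file output into a second subjob, triggered by onSubjobOk, means the expression is only evaluated after the variable exists.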
Ok. That was close. I can now get the files as desired -- that is, no "null" file. However, each subsequent file appends the data (or at least some data) from the previous file. I would like each file and its contents to be distinct.
Any suggestions? And thanks for your help -- very cool!
First off, sorry about the bad typing in my previous post... :-)
In a nutshell, each file should have 1 row, the output from the tLogCatcher when it is 'called'.
Instead, except for the first file, each subsequent file has more than 1 row -- in fact, it contains the previous rows from the previous calls to tLogCatcher.
It appears that the tLogCatcher output is being 'buffered' as well and reappearing in subsequent files. The 'append' check box is not checked, by the way. Is there any way to 'flush' the buffer contents?