In a Big Data batch Spark job, you can write custom Java code in tJava against your input RDD. Is it possible to feed multiple RDDs into the same tJava component, so the custom code can work with different data sets loaded earlier in the job? If so, could someone share an example?
Last edited by jpmauss (2017-01-12 04:35:52)
After testing a few different options, I figured out one that works well: write Java/Spark code inside the tJava custom code that reads a text file from HDFS into a new RDD, so I can work with that data set alongside the RDD that arrives as the component's input. Make sure to add the Java RDD imports under Advanced settings.
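A minimal sketch of what that custom code could look like inside the tJava component. This assumes the Talend-generated job exposes the input as a variable named `rdd_tJava_1` (the actual generated name depends on your job; check the generated code), and the HDFS path is illustrative. It is a fragment meant to run inside the job, not a standalone program:

```java
// Sketch of tJava custom code in a Big Data batch (Spark) job.
// Assumptions: the input RDD variable is named rdd_tJava_1 (Talend
// generates this name; verify it in your job's generated code), and
// the HDFS path below is a placeholder.
//
// Imports to add under Advanced settings:
//   import org.apache.spark.api.java.JavaRDD;
//   import org.apache.spark.api.java.JavaSparkContext;

// Reuse the SparkContext the job already created, via the input RDD.
JavaSparkContext jsc =
    JavaSparkContext.fromSparkContext(rdd_tJava_1.context());

// Read a second data set from HDFS into a new RDD.
JavaRDD<String> lookup = jsc.textFile("hdfs:///path/to/other/dataset.txt");

// Both data sets are now available to the custom code, e.g.:
JavaRDD<String> combined =
    rdd_tJava_1.map(row -> row.toString()).union(lookup);
```

Reusing the existing context (rather than constructing a new one) matters because a Spark job allows only one active SparkContext, and the Talend job has already created it.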