Important! This site has been replaced. All content here is read-only. Please visit our brand-new community at https://community.talend.com/. We look forward to hearing from you there!



#1 Re: Talend Data Preparation - Configuration, usage and feedback » Select rows and how to export it? » 2016-04-15 09:49:59

Hello,
Talend reads and processes data in rows and columns.
You can read all the data from Excel and, based on your business logic, use either tFilterRow or tMap to reject invalid records and collect the valid ones.
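As a plain-Java sketch of that valid/reject split (the validation rule and names below are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the valid/reject split done by tFilterRow or a tMap filter:
// one predicate, valid rows go one way and rejects go the other.
public class FilterDemo {
    // Example rule only: a record is valid if it is non-null and non-blank.
    static boolean isValid(String row) {
        return row != null && !row.trim().isEmpty();
    }

    public static void main(String[] args) {
        List<String> valid = new ArrayList<>();
        List<String> rejects = new ArrayList<>();
        for (String row : Arrays.asList("alice", " ", "bob", null)) {
            if (isValid(row)) valid.add(row); else rejects.add(row);
        }
        System.out.println("valid=" + valid + " rejects=" + rejects);
    }
}
```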
Let me know your business use case and we can plan further.
Vaibhav

#2 Re: Open Data Integration - Usage, Operation » Picked up JAVA_TOOL_OPTIONS: -Djava.vendor="New Oracle" » 2016-04-15 06:29:40

Hello,
I am also receiving the same message after each successful execution of a program.

Picked up JAVA_TOOL_OPTIONS: -Djava.vendor="Sun Microsystems Inc."

I am using Talend Open Studio for Big Data 6.1.1.20151214_1323.
When I run java -version, I get the following output:
Picked up JAVA_TOOL_OPTIONS: -Djava.vendor="Sun Microsystems Inc."
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
How can I disable this?
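From what I can tell, that line is printed by the JVM itself whenever the JAVA_TOOL_OPTIONS environment variable is set, so clearing that variable in the OS environment should remove the message. A quick plain-Java way to confirm whether it is set:

```java
// The "Picked up JAVA_TOOL_OPTIONS: ..." notice is emitted by the JVM
// at startup whenever the JAVA_TOOL_OPTIONS environment variable is set.
// It is informational, not an error; unsetting the variable silences it.
public class ToolOptionsCheck {
    // Returns a human-readable status for the variable.
    static String status() {
        String opts = System.getenv("JAVA_TOOL_OPTIONS");
        return (opts == null)
            ? "JAVA_TOOL_OPTIONS is not set; the JVM will print no notice."
            : "JAVA_TOOL_OPTIONS is set to: " + opts;
    }

    public static void main(String[] args) {
        System.out.println(status());
    }
}
```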
Thanks
Vaibhav

#3 Re: Data Integration - Non technical discussions » Talend certification exam examples » 2015-04-17 11:05:42

Hi,

I recently completed the Talend Open Studio certification. How difficult is it to get the Enterprise certification, and which sections does it cover?

Thanks
Vaibhav

#4 Re: ESB - Product usage, architecture, user scenarios, suggestions & feedback » [resolved] ESB 5.6.1 - javax.mail.AuthenticationFailedException - cMail » 2015-03-30 13:04:59

Hi,

I don't think this is a version-related issue. As per your error, it is related to an authentication failure:

javax.mail.AuthenticationFailedException: [AUTHENTICATIONFAILED] Invalid credentials (Failure)


Can you try putting the credentials directly in place of the context variables? That may help you identify the exact root cause of the issue.


Thanks
Vaibhav

#5 Re: Open Data Integration - Usage, Operation » Rollback whole data if i have billions of data to be transferre if err » 2015-03-20 11:18:00

What was your earlier performance count, and what are you expecting? Can you post a screenshot of the current job?

Vaibhav

#6 Re: Open Data Integration - Usage, Operation » Rollback whole data if i have billions of data to be transferre if err » 2015-03-20 06:13:56

Hi,
Use the following flow:

tPrejob --> tOracleConnection (uncheck auto-commit)

tOracleInput (table) --> tMap --> tOracleOutput (table)
    |-- OnSubjobError --> tDie
    |-- OnSubjobOk --> tOracleCommit

This ensures that if the job fails due to some issue, the job ends without committing; if it succeeds, all the data is committed at once.

But this is not a good approach: the database has to hold billions of uncommitted records, and if memory is not sufficient it will throw memory exceptions.

Another approach is to identify a marker in the table and, if there is an error, delete the records after that marker.
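The all-or-nothing behaviour the first flow relies on can be illustrated with a toy model in plain Java (no database involved; the Txn class below is made up):

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of commit/rollback semantics: rows are buffered in
// `pending` until commit(); an error before commit() discards the batch,
// so a failed load leaves nothing persisted.
public class TxnDemo {
    static class Txn {
        private final List<String> pending = new ArrayList<>();
        private final List<String> committed = new ArrayList<>();
        void write(String row) { pending.add(row); }
        void commit() { committed.addAll(pending); pending.clear(); }
        void rollback() { pending.clear(); }
        List<String> table() { return committed; }
    }

    public static void main(String[] args) {
        Txn txn = new Txn();
        try {
            for (String row : new String[] {"row 1", "BAD", "row 3"}) {
                if (row.equals("BAD")) throw new RuntimeException("load failed: " + row);
                txn.write(row);
            }
            txn.commit();     // only reached if every row loaded cleanly
        } catch (RuntimeException e) {
            txn.rollback();   // failure: nothing is persisted
        }
        System.out.println("rows visible after failed load: " + txn.table().size()); // 0
    }
}
```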

Thanks
Vaibhav

#7 Re: Data Integration on Subscription - Configuration, usage and feedback » Problem to execute a stored procedure with sybase on Talend Integratio » 2014-12-18 16:43:43

Remove the OnComponentOk from tSybaseSP_1 and connect tFixedFlowInput to tSybaseRow_1 with OnSubjobOk, then check again.

Remove the OnComponentOk from tSybaseRow_2 and connect it to the commit component with OnSubjobOk.


Vaibhav

#8 Re: Open Data Integration - Usage, Operation » [resolved] How to lookup if a database record exists or not(to get a primary key) » 2014-12-18 16:40:35

For a simplified approach that will also help you debug and analyze the data:
- use a first tMap to look up against the company, and send reject records to a CSV
- use another tMap to look up against the brand, and use an upsert operation based on the key column

Vaibhav

#9 Re: Data Integration - Installation » [resolved] Adding Header and Footer in Multiple csv file » 2014-12-18 16:37:27

Hi Tamil,
- Before the data, insert the header using a tFixedFlowInput component.
- After the data, insert the footer using a tFixedFlowInput component.
- Use OnSubjobOk links judiciously to chain header, data, and footer.
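A plain-Java sketch of the resulting header -> data -> footer sequence (column names and footer format are made up):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Equivalent of header -> data -> footer chained with OnSubjobOk:
// three writes to the same file, in order.
public class HeaderFooterWriter {
    static List<String> buildCsv(List<String> dataRows) {
        List<String> lines = new ArrayList<>();
        lines.add("id;name;amount");              // header (first tFixedFlowInput)
        lines.addAll(dataRows);                   // data (main flow)
        lines.add("rowcount=" + dataRows.size()); // footer (second tFixedFlowInput)
        return lines;
    }

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempFile("demo", ".csv");
        Files.write(out, buildCsv(List.of("1;a;10", "2;b;20")));
        System.out.println(String.join("\n", Files.readAllLines(out)));
    }
}
```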

Vaibhav

#10 Re: Data Integration on Subscription - Configuration, usage and feedback » Problem to execute a stored procedure with sybase on Talend Integratio » 2014-12-18 14:52:06

Can you remove that OnComponentOk, connect with OnSubjobOk from tFixedFlowInput instead, and check again?

Vaibhav

#11 Re: Data Integration on Subscription - Configuration, usage and feedback » Problem to execute a stored procedure with sybase on Talend Integratio » 2014-12-18 13:31:20

Hi,

These are common problems when you use date parameters. I think there is an issue with the date format: the date format generated by Talend is not accepted by the database. You can connect a tLogRow after the DB output component, or connect a reject link, and you should see the reason the records are rejected; most likely it is the incorrect date format.
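A plain-Java sketch of the usual fix: render the date in the exact pattern the target database expects before writing (the pattern below is only an example):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.GregorianCalendar;

// Formats a date into an explicit, database-friendly pattern so the
// receiving side never has to guess the format.
public class DateFormatDemo {
    static String forDb(Date d) {
        // Example pattern; use whatever the target database expects.
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(d);
    }

    public static void main(String[] args) {
        // Month is 0-based in GregorianCalendar: 11 = December.
        Date d = new GregorianCalendar(2014, 11, 18, 13, 31, 20).getTime();
        System.out.println(forDb(d)); // 2014-12-18 13:31:20
    }
}
```

In a Talend job the same thing is typically done via the date pattern on the schema column rather than hand-written code.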

Thanks
Vaibhav

#12 Re: Data Integration on Subscription - Configuration, usage and feedback » [resolved] Select Query time consuming » 2014-12-18 13:22:48

Hi RSH,

The reason for the slow query is the inner joins and filters on the data; it totally depends on how you write your query.
But if you go the ETL way, i.e. using Talend, take the following approach:
- read the input data using a tXxxDBInput component
- filter unwanted records at the source
- use tMap to look up recordsets
- use the filesystem to store the lookup data
- use multiple tMaps if your joins are complex

Try this approach.

Thanks
Vaibhav

#13 Re: Open Data Integration - Usage, Operation » call a dotnet dll with talend context variables » 2014-12-18 11:31:14

Hi Aat,

You can find all these job-related parameters when you capture a particular job's logs and stats. You can use those logs and play around with them. Have you checked this?

Vaibhav

#14 Re: Open Data Integration - Usage, Operation » [resolved] Updating staging table After successful database operation » 2014-12-18 11:28:36

Hi Siddharth,

You can add one more output and connect it to your stored procedure.

Vaibhav

#15 Re: Big Data - Configuration, usage and feedback » Break the Greenplum data in chunks » 2014-12-18 11:24:35

Hi Maya,
Create one job: read the table from Greenplum and write the contents into a flat file using tFileOutputDelimited.
In the advanced settings there is a check box for splitting the output into multiple files after a given number of lines. When you select this option, you can provide 200 as the row count, so when you execute the job it will create as many files as needed, each holding 200 records.
Another option is to use a variable with a tLoop component. In the WHERE clause, use BETWEEN and increment the range on each loop iteration; this way it will iterate over 200 records at a time. You can think of using a tBuffer component for this data iteration.
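The BETWEEN idea from the second option can be sketched in plain Java (the column name and totals below are examples):

```java
import java.util.ArrayList;
import java.util.List;

// Generates (start, end) row ranges of a fixed chunk size, as you would
// feed into a BETWEEN clause inside a tLoop iteration.
public class ChunkRanges {
    static List<int[]> ranges(int totalRows, int chunkSize) {
        List<int[]> out = new ArrayList<>();
        for (int start = 1; start <= totalRows; start += chunkSize) {
            out.add(new int[] {start, Math.min(start + chunkSize - 1, totalRows)});
        }
        return out;
    }

    public static void main(String[] args) {
        // Prints three ranges: 1-200, 201-400, 401-450.
        for (int[] r : ranges(450, 200)) {
            System.out.println("WHERE rownum BETWEEN " + r[0] + " AND " + r[1]);
        }
    }
}
```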

Try one of these.

Thanks
Vaibhav

#16 Re: Open Data Integration - Usage, Operation » [resolved] How to lookup if a database record exists or not(to get a primary key) » 2014-12-18 11:13:42

Hi Sid,

I think you need to review your business logic for the look-ups:
- Do you really need both inner joins, i.e. a WHERE-style match on both Brand and Company, to push data forward?
- Would going step by step, i.e. two tMaps one after another, give correct results?
- Consider taking a left outer join and replacing the value with -1 (or something similar) for the records which do not match the lookup.
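The left-outer-join idea from the last point, sketched in plain Java (a map stands in for the lookup flow; all names and values are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Left outer join with a default: look each row's company up in a map
// and fall back to -1 when there is no match, instead of rejecting.
public class LeftJoinDemo {
    static int companyId(Map<String, Integer> lookup, String company) {
        return lookup.getOrDefault(company, -1);  // -1 marks "no match"
    }

    public static void main(String[] args) {
        Map<String, Integer> lookup = new HashMap<>();
        lookup.put("Acme", 10);
        System.out.println(companyId(lookup, "Acme"));    // 10
        System.out.println(companyId(lookup, "Unknown")); // -1
    }
}
```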

Please try describing your business requirement.

Thanks
Vaibhav

#17 Re: Open Data Integration - Usage, Operation » [resolved] How to lookup if a database record exists or not(to get a primary key) » 2014-12-11 06:18:00

Hi Siddharth,

The above approach should work for multiple lookups as well. But with multiple lookups, a condition applies to each inner join, and when one of the conditions is not satisfied, the row is rejected.
Can you show the actual main and lookup data, so we can verify what is happening?

Vaibhav

#20 Re: Open Data Integration - Usage, Operation » Update field based on value in CSV » 2014-12-09 09:29:57

In that case, you can tweak the if-then-else expression above to get the desired results.

Vaibhav

#21 Re: Open Data Integration - Usage, Operation » Update field based on value in CSV » 2014-12-09 09:27:02

Oh, this is string concatenation.
Use the following for your Email record:

"email_address=" + row5.Email + ",email_address_caps=" + StringHandling.UPCASE(row5.Email) + ",primary_address=1"

and similarly for your secondary email.
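The same concatenation as a plain-Java method, with String.toUpperCase() standing in for Talend's StringHandling.UPCASE routine:

```java
// Builds the "email_address=...,email_address_caps=...,primary_address=1"
// string for one email value.
public class EmailConcat {
    static String emailRecord(String email) {
        return "email_address=" + email
             + ",email_address_caps=" + email.toUpperCase()
             + ",primary_address=1";
    }

    public static void main(String[] args) {
        System.out.println(emailRecord("john@example.com"));
        // email_address=john@example.com,email_address_caps=JOHN@EXAMPLE.COM,primary_address=1
    }
}
```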


I think you get the idea.
Thanks
Vaibhav

#22 Re: Open Data Integration - Usage, Operation » Update field based on value in CSV » 2014-12-09 09:20:25

Hi John,

What is the logic for selecting a primary/secondary address (0/1)?
I don't see any business logic in your description; can you explain how to determine whether a given email ID is primary or secondary?
Once you know this, you can use an expression like:

row5.Email == null && row5.SecondaryEmail != null ? 1 : 0

The expression above says: if Email is null and SecondaryEmail is not null, then the address is secondary (1); otherwise it is primary (0).
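The same expression as a small plain-Java method, to show how it evaluates on sample values:

```java
// tMap-style ternary: 1 means the address is secondary, 0 means primary.
public class AddressFlag {
    static int flag(String email, String secondaryEmail) {
        return (email == null && secondaryEmail != null) ? 1 : 0;
    }

    public static void main(String[] args) {
        System.out.println(flag(null, "jane@example.com")); // 1
        System.out.println(flag("john@example.com", null)); // 0
    }
}
```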
Try the above.
Thanks
Vaibhav

#23 Re: Open Data Integration - Usage, Operation » SQL Server single user connection » 2014-12-09 08:17:12

What error message is displayed? Do you get any message at all? If you comment out the SET SINGLE_USER line, can you execute the job?

Vaibhav

#24 Re: Open Data Integration - Usage, Operation » [resolved] How to send the values to next Job using context? » 2014-12-09 08:09:18

What is the need to use tMap? I don't see any validation in the current screenshot.
You can have a flow like:

tFileList --iterate--> tRunJob

Set the child job's context.FileName parameter from tFileList's current file name, and use context.FileName in the child job to print the file name.

Vaibhav

#25 Re: Open Data Integration - Usage, Operation » [resolved] How to send the values to next Job using context? » 2014-12-09 07:32:47

Hi Srikanth,

There are multiple ways to do this:
- use a context variable and transmit the context to the child job
- set the context value in the tRunJob parameters
- use a getter/setter routine
- use a hash variable

You can try any one of these; all will work.

Vaibhav
