AWS

Name | Description
---|---
cAWSConnection | Establishes a connection to Amazon Web Services for data storage and retrieval.
cAWSS3 | Stores and retrieves objects from/to Amazon Simple Storage Service (S3).
cAWSSES | Sends emails with Amazon Simple Email Service (SES).
cAWSSNS | Sends messages to an Amazon Simple Notification Service (SNS) topic.
cAWSSQS | Sends and receives messages to/from Amazon Simple Queue Service (SQS). The AWS SQS FIFO feature for queues is supported.
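
The cAWSSQS entry above notes support for FIFO queues. As a rough illustration of what a FIFO send involves under the hood, the sketch below calls the AWS SDK for Java v2 directly; the queue name, message group ID, and deduplication ID are placeholder assumptions, not values taken from this documentation.

```java
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.GetQueueUrlRequest;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class SqsFifoSendSketch {
    public static void main(String[] args) {
        try (SqsClient sqs = SqsClient.builder().region(Region.EU_WEST_1).build()) {
            // Resolve the queue URL from its name ("orders.fifo" is a placeholder).
            String queueUrl = sqs.getQueueUrl(
                    GetQueueUrlRequest.builder().queueName("orders.fifo").build()).queueUrl();

            // FIFO queues require a message group ID; the explicit deduplication ID can be
            // omitted when content-based deduplication is enabled on the queue.
            sqs.sendMessage(SendMessageRequest.builder()
                    .queueUrl(queueUrl)
                    .messageBody("{\"orderId\": 42}")
                    .messageGroupId("orders")
                    .messageDeduplicationId("order-42")
                    .build());
        }
    }
}
```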

Big Data

Name | Description
---|---
tBigQueryBulkExec | Transfers given data to Google BigQuery.
tBigQueryInput | Performs the queries supported by Google BigQuery.
tBigQueryOutput | Transfers the data provided by its preceding component to Google BigQuery.
tBigQueryOutputBulk | Creates a .txt or .csv file for large data sets so that you can process the data according to your needs before transferring it to Google BigQuery.
tBigQuerySQLRow | Connects to Google BigQuery and performs queries to select data from tables row by row, or to create or delete tables in Google BigQuery.
tCassandraBulkExec | Improves performance during Insert operations to a Cassandra column family.
tCassandraClose | Closes a connection to a Cassandra server so as to release occupied resources.
tCassandraConnection | Enables the reuse of the connection it creates to a Cassandra server.
tCassandraOutput | Writes data into or deletes data from a column family of a Cassandra keyspace.
tCassandraOutputBulk | Prepares a large SSTable that you can process according to your needs before loading it into a column family of a Cassandra keyspace.
tCassandraOutputBulkExec | Improves performance during Insert operations to a column family of a Cassandra keyspace.
tCassandraRow | Acts on the actual DB structure or on the data, depending on the nature of the query and the database.
tDBFSConnection | Connects to a given DBFS (Databricks Filesystem) system so that the other DBFS components can reuse the connection it creates to communicate with this DBFS.
tDBFSGet | Copies files from a given DBFS (Databricks Filesystem) system, pastes them into a user-defined directory and, if need be, renames them.
tDBFSPut | Connects to a given DBFS (Databricks Filesystem) system, copies files from a user-defined directory, pastes them into this system and, if need be, renames these files.
tDynamoDBOutput | Creates, updates or deletes data in an Amazon DynamoDB table.
tGSBucketCreate | Creates a new bucket which you can use to organize data and control access to data in Google Cloud Storage.
tGSBucketDelete | Deletes an empty bucket in Google Cloud Storage so as to release occupied resources.
tGSBucketExist | Checks the existence of a bucket in Google Cloud Storage so that further operations can be performed.
tGSBucketList | Retrieves a list of buckets from all projects or one specific project in Google Cloud Storage.
tGSClose | Closes an active connection to Google Cloud Storage in order to release the occupied resources.
tGSConnection | Provides the authentication information for making requests to the Google Cloud Storage system and enables the reuse of the connection it creates to Google Cloud Storage.
tGSCopy | Copies or moves objects within a bucket or between buckets in Google Cloud Storage.
tGSDelete | Deletes the objects which match the specified criteria in Google Cloud Storage so as to release the occupied resources.
tGSGet | Retrieves objects which match the specified criteria from Google Cloud Storage and outputs them to a local directory.
tGSList | Retrieves a list of objects from Google Cloud Storage one by one.
tGSPut | Uploads files from a local directory to Google Cloud Storage so that you can manage them with Google Cloud Storage.
tHBaseClose | Closes an HBase connection you have established in your Job.
tHBaseConnection | Establishes an HBase connection to be reused by other HBase components in your Job.
tHCatalogInput | Reads data from an HCatalog-managed Hive database and sends the data to the component that follows.
tHCatalogLoad | Reads data directly from HDFS and writes this data into an established HCatalog-managed table.
tHCatalogOperation | Prepares the HCatalog-managed database/table/partition to be processed.
tHCatalogOutput | Receives data from its incoming flow and writes this data into an HCatalog-managed table.
tHDFSCompare | Compares two files in HDFS and, based on the read-only schema, generates a row flow that presents the comparison information.
tHDFSConnection | Connects to a given HDFS so that the other Hadoop components can reuse the connection it creates to communicate with this HDFS.
tHDFSCopy | Copies a source file or folder into a target directory in HDFS and removes this source if required.
tHDFSDelete | Deletes a file located on a given Hadoop distributed file system (HDFS).
tHDFSExist | Checks whether a file exists in a specific directory in HDFS.
tHDFSGet | Copies files from a Hadoop distributed file system (HDFS), pastes them into a user-defined directory and, if need be, renames them.
tHDFSInput | Extracts the data in an HDFS file for other components to process.
tHDFSList | Retrieves a list of files or folders based on a filemask pattern and iterates over each unit.
tHDFSOutput | Writes the data flows it receives into a given Hadoop distributed file system (HDFS).
tHDFSOutputRaw | Transfers data of different formats, such as hierarchical data, in the form of a single column into a given HDFS file system.
tHDFSProperties | Creates a single-row flow that displays the properties of a file processed in HDFS.
tHDFSPut | Connects to a Hadoop distributed file system to load large-scale files into it with optimized performance.
tHDFSRename | Renames the selected files or the specified directory on HDFS.
tHDFSRowCount | Reads a file in HDFS row by row in order to determine the number of rows this file contains.
tHiveClose | Closes a connection to a Hive database.
tHiveConnection | Establishes a Hive connection to be reused by other Hive components in your Job.
tHiveCreateTable | Creates Hive tables that fit a wide range of Hive data formats.
tHiveLoad | Writes data of different formats into a given Hive table or exports data from a Hive table to a directory.
tHiveRow | Acts on the actual DB structure or on the data without handling the data itself, depending on the nature of the query and the database.
tImpalaClose | Closes a connection to an Impala database.
tImpalaConnection | Establishes an Impala connection to be reused by other Impala components in your Job.
tImpalaCreateTable | Creates Impala tables that fit a wide range of Impala data formats.
tImpalaInput | Executes select queries to extract the corresponding data and sends the data to the component that follows.
tImpalaLoad | Writes data of different formats into a given Impala table or exports data from an Impala table to a directory.
tImpalaOutput | Executes the action defined on the data contained in the table, based on the flow incoming from the preceding component in the Job.
tImpalaRow | Acts on the actual DB structure or on the data (although without handling the data).
tMapRDBClose | Closes a MapRDB connection you have established in the same Job.
tMapRDBConnection | Establishes a MapRDB connection to be reused by other MapRDB components in the same Job.
tMarkLogicBulkLoad | Imports local files into a MarkLogic server database in bulk mode using the MarkLogic Content Pump (MLCP) tool.
tMarkLogicClose | Closes an active connection to a MarkLogic database to release the occupied resources.
tMarkLogicConnection | Opens a connection to a MarkLogic database that can then be reused by other MarkLogic components.
tMarkLogicInput | Searches document content in a MarkLogic database based on a string query.
tMarkLogicOutput | Creates, updates or deletes document content in a MarkLogic database.
tMongoDBBulkLoad | Imports data files in different formats (CSV, TSV or JSON) into the specified MongoDB database so that the data can be further processed.
tMongoDBClose | Closes a connection to the MongoDB database.
tMongoDBConnection | Creates a connection to a MongoDB database that can be reused by other components.
tMongoDBGridFSDelete | Automates the delete action on specific files in MongoDB GridFS.
tMongoDBGridFSGet | Connects to a MongoDB GridFS system to copy files from it.
tMongoDBGridFSList | Retrieves a list of files based on a query.
tMongoDBGridFSProperties | Obtains information about the properties of given files selected based on a query.
tMongoDBGridFSPut | Connects to a MongoDB GridFS system to load files into it.
tMongoDBOutput | Executes the action defined on the collection in the MongoDB database.
tMongoDBRow | Executes the commands and functions of the MongoDB database.
tNeo4jBatchOutput | Receives data from the preceding component and writes the data into a local Neo4j database.
tNeo4jBatchOutputRelationship | Receives data from the preceding component and writes relationships in bulk into a local Neo4j database.
tNeo4jBatchSchema | Defines the schema of a local Neo4j database.
tNeo4jClose | Closes an active connection to a Neo4j database in embedded mode.
tNeo4jConnection | Opens a connection to a Neo4j database to be reused by other Neo4j components.
tNeo4jImportTool | Uses the Neo4j Import Tool to create a Neo4j database and import large amounts of data in bulk from CSV files into this database.
tNeo4jInput | Reads data from Neo4j and sends the data in the output flow.
tNeo4jRow | Executes the stated Cypher query on the specified Neo4j database.
tNeo4jv4Close | Closes a connection to a Neo4j version 4.x database.
tNeo4jv4Connection | Establishes a connection to a Neo4j version 4.x database for later use.
tNeo4jv4Input | Reads data from Neo4j version 4.x and sends the data in the output flow.
tNeo4jv4Output | Receives data from the preceding component and writes the data into a Neo4j version 4.x database.
tNeo4jv4Row | Executes the stated Cypher query on the specified Neo4j version 4.x database.
tSqoopExport | Defines the arguments required by Sqoop for transferring data to an RDBMS.
tSqoopImport | Defines the arguments required by Sqoop for writing the data of your interest into HDFS.
tSqoopImportAllTables | Defines the arguments required by Sqoop for writing all of the tables of a database into HDFS.
tSqoopMerge | Performs an incremental import that updates an older dataset with newer records. The file types of the newer and the older datasets must be the same.
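
The tHDFS* components above wrap file operations against a Hadoop cluster. As a rough sketch of the kind of call a put/exist-style component performs, the snippet below uses the Hadoop FileSystem API; the NameNode URI and the paths are placeholder assumptions.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPutSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The NameNode URI is a placeholder; a Job would normally take it from the component settings.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        Path local = new Path("/tmp/customers.csv");
        Path remote = new Path("/user/talend/in/customers.csv");

        // Roughly what a "put" style component does: copy a local file into HDFS,
        // overwriting the target if it already exists (delSrc=false, overwrite=true).
        fs.copyFromLocalFile(false, true, local, remote);

        // And an "exist" style check on the target path.
        System.out.println("Exists on HDFS: " + fs.exists(remote));
        fs.close();
    }
}
```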

Business

Name | Description
---|---
tBonitaDeploy | Deploys a specific Bonita process to a Bonita Runtime.
tBonitaInstantiateProcess | Starts an instance of a specific process deployed in a Bonita Runtime engine.
tJIRAInput | Retrieves issue information based on a JQL query, or project information based on a specified project ID, from JIRA.
tJIRAOutput | Inserts, updates, or deletes issue or project information in JIRA.
tLDAPAttributesInput | Analyses each object found via the LDAP query and lists a collection of attributes associated with the object.
tLDAPClose | Closes a connection to the LDAP directory server so as to release occupied resources.
tLDAPConnection | Creates a connection to an LDAP directory server.
tLDAPInput | Executes an LDAP query based on the given filter and corresponding to the schema definition, then passes the field list to the next component via a Row > Main link.
tLDAPOutput | Executes an LDAP query based on the given filter and corresponding to the schema definition, then passes the field list to the next component via a Row > Main link.
tLDAPRenameEntry | Renames one or more entries in a specific LDAP directory.
tMarketoBulkExec | Imports leads or custom objects into Marketo from a local file in the REST API mode.
tMarketoCampaign | Retrieves campaign records and data related to activities and campaign changes from Marketo.
tMarketoConnection | Opens a connection to Marketo that can then be reused by other Marketo components.
tMarketoInput | Retrieves lead records, activity history, lead changes, and custom object related data from Marketo.
tMarketoListOperation | Adds/removes one or more leads to/from a list in Marketo. It also helps you verify the existence of one or more leads in a list in Marketo.
tMarketoOutput | Writes lead records or custom object records from the incoming data flow into Marketo.
tMicrosoftCrmInput | Extracts data from a Microsoft CRM database based on conditions set on specific columns.
tMicrosoftCrmOutput | Writes data into a Microsoft CRM database.
tNetsuiteConnection | Creates a connection to the NetSuite SOAP server so that other NetSuite components in the Job can reuse the connection.
tNetsuiteInput | Invokes the NetSuite SOAP service and retrieves data according to the conditions you specify.
tNetsuiteOutput | Invokes the NetSuite SOAP service and inserts, updates, or removes data on the NetSuite SOAP server.
tNetSuiteV2019Connection | Creates a connection to a NetSuite SOAP server by leveraging NetSuite v2019 features so that other NetSuite V2019 components in the Job can reuse the connection.
tNetSuiteV2019Input | Invokes the NetSuite SOAP service and retrieves data according to the conditions you specify by leveraging NetSuite v2019 features.
tNetSuiteV2019Output | Invokes the NetSuite SOAP service and inserts, updates, or removes data on the NetSuite SOAP server by leveraging NetSuite v2019 features.
tSalesforceBulkExec | Bulk-loads data from a given file into a Salesforce object.
tSalesforceConnection | Opens a connection to Salesforce.
tSalesforceEinsteinBulkExec | Loads data into Salesforce Analytics Cloud from a local file.
tSalesforceEinsteinOutputBulkExec | Improves performance during data operations to Salesforce Analytics Cloud.
tSalesforceGetDeleted | Collects data deleted during a specific period of time from a Salesforce object.
tSalesforceGetServerTimestamp | Retrieves the current date of the Salesforce server presented in a timestamp format.
tSalesforceGetUpdated | Collects data updated during a specific period of time from a Salesforce object.
tSalesforceInput | Retrieves data from a Salesforce object based on a query.
tSalesforceOutput | Inserts, updates, upserts, or deletes data in a Salesforce object.
tSalesforceOutputBulk | Generates the file to be processed by the tSalesforceBulkExec component for bulk processing.
tSalesforceOutputBulkExec | Bulk-loads data from a given file into a Salesforce object.
tSAPADSOInput | Retrieves data of an active ADSO (Advanced Data Store Object) from an SAP BW system on an SAP HANA database or through SAP Java Connector.
tSAPBapi | Extracts data from or loads data to an SAP server using multiple input/output parameters or the document type parameter.
tSAPCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus improves performance.
tSAPConnection | Commits a whole Job's data in one go to the SAP system as one transaction.
tSAPDataSourceOutput | Writes Data Source objects into an SAP BW Data Source system.
tSAPDataSourceReceiver | Retrieves data requests stored on the Talend SAP RFC server and related to a specific Data Source system.
tSAPDSOInput | Retrieves DSO data from an SAP BW system.
tSAPDSOOutput | Creates or updates DSO data in an SAP BW table.
tSAPIDocInput | Extracts an IDoc data set that is used for asynchronous transactions between SAP systems or between an SAP system and another application.
tSAPIDocOutput | Uploads an IDoc data set in XML format to an SAP system.
tSAPIDocReceiver | Extracts data from SAP IDocs stored on an SAP server.
tSAPInfoCubeInput | Retrieves InfoCube data from an SAP BW system.
tSAPInfoObjectInput | Retrieves InfoObject data from an SAP BW system.
tSAPInfoObjectOutput | Writes InfoObject data into an SAP BW system.
tSAPODPInput | Extracts business data from the ERP part of SAP (SAP Business application, SAP on HANA, SAP R/3, and S/4HANA) through ODP (Operational Data Provisioning).
tSAPRollback | Cancels the transaction commit in the connected SAP system.
tSAPTableInput | Reads data from an SAP table on an SAP server.
tServiceNowConnection | Opens a connection to a ServiceNow instance that can then be reused by other ServiceNow components.
tServiceNowInput | Accesses ServiceNow and retrieves data from it.
tServiceNowOutput | Performs the defined action on the data on ServiceNow.
tWorkdayInput | Retrieves data of a Workday client based on a query or the Workday client report.

Business Intelligence

Name | Description
---|---
tSplunkEventCollector | Sends event data to Splunk through the Splunk HTTP Event Collector.
tBarChart | Generates a bar chart from the input data to ease technical analysis.
tDB2SCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table.
tDB2SCDELT | Addresses Slowly Changing Dimension needs through SQL queries (server-side processing mode), and logs the changes into a dedicated DB2 SCD table.
tGreenplumSCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table.
tInformixSCD | Tracks and shows changes which have been made to dedicated Informix SCD tables.
tJasperOutput | Creates a report in rich formats using Jaspersoft's iReport.
tJasperOutputExec | Creates a report in rich formats using Jaspersoft's iReport and offers a performance gain as it functions as a combination of an input component and a tJasperOutput component.
tJDBCSCDELT | Tracks data changes in a source database table using the SCD (Slowly Changing Dimensions) Type 1 and/or Type 2 method and writes both the current and historical data into a specified SCD dimension table.
tLineChart | Reads data from an input flow and transforms the data into a line chart in a PNG image file to ease technical analysis.
tMSSqlSCD | Tracks and reflects changes in a dedicated SCD table in a Microsoft SQL Server or Azure SQL database.
tMysqlSCD | Reflects and tracks changes in a dedicated MySQL SCD table.
tMysqlSCDELT | Reflects and tracks changes in a dedicated MySQL SCD table through SQL queries.
tNetezzaSCD | Reflects and tracks changes in a dedicated Netezza SCD table.
tOracleSCD | Reflects and tracks changes in a dedicated Oracle SCD table.
tOracleSCDELT | Reflects and tracks changes in a dedicated Oracle SCD table through SQL queries.
tPostgresPlusSCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table.
tPostgresPlusSCDELT | Addresses Slowly Changing Dimension needs through SQL queries (server-side processing mode), and logs the changes into a dedicated PostgresPlus SCD table.
tPostgresqlSCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table.
tPostgresqlSCDELT | Addresses Slowly Changing Dimension needs through SQL queries (server-side processing mode), and logs the changes into a dedicated PostgreSQL SCD table.
tSybaseSCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table.
tSybaseSCDELT | Addresses Slowly Changing Dimension needs through SQL queries (server-side processing mode), and logs the changes into a dedicated Sybase SCD table.
tTeradataSCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table.
tTeradataSCDELT | Addresses Slowly Changing Dimension needs through SQL queries (server-side processing mode), and logs the changes into a dedicated Teradata SCD table.
tVerticaSCD | Tracks and reflects data changes in a dedicated Vertica SCD table.
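
The SCD components in this family track changes with the Slowly Changing Dimension Type 1 and Type 2 methods. The sketch below shows the core of a Type 2 update with plain JDBC; the connection URL, table, and column names are placeholder assumptions, and real dimension tables usually also carry a surrogate key and a version number.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ScdType2Sketch {
    public static void main(String[] args) throws Exception {
        // Connection details and table/column names are placeholders.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/dwh", "user", "pass")) {
            String customerId = "C042";
            String newCity = "Berlin";

            // Type 2: close the current version of the dimension row if the tracked attribute changed...
            try (PreparedStatement close = con.prepareStatement(
                    "UPDATE dim_customer SET scd_end = CURRENT_DATE, scd_active = 0 " +
                    "WHERE customer_id = ? AND scd_active = 1 AND city <> ?")) {
                close.setString(1, customerId);
                close.setString(2, newCity);
                int expired = close.executeUpdate();

                // ...and insert a new active version, keeping the old row as history.
                if (expired > 0) {
                    try (PreparedStatement insert = con.prepareStatement(
                            "INSERT INTO dim_customer (customer_id, city, scd_start, scd_end, scd_active) " +
                            "VALUES (?, ?, CURRENT_DATE, NULL, 1)")) {
                        insert.setString(1, customerId);
                        insert.setString(2, newCity);
                        insert.executeUpdate();
                    }
                }
            }
        }
    }
}
```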

Cloud

Name | Description
---|---
tAmazonAuroraClose | Closes an active connection to an Amazon Aurora database instance to release the occupied resources.
tAmazonAuroraCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, which improves performance.
tAmazonAuroraConnection | Opens a connection to an Amazon Aurora database instance that can then be reused by other Amazon Aurora components.
tAmazonAuroraInput | Reads an Amazon Aurora database and extracts fields based on a query.
tAmazonAuroraInvalidRows | Checks Amazon Aurora database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). Only MySQL is supported.
tAmazonAuroraOutput | Writes, updates, makes changes or suppresses entries in an Amazon Aurora database.
tAmazonAuroraRollback | Rolls back any changes made in the Amazon Aurora database to prevent a partial transaction commit if an error occurs.
tAmazonAuroraRow | Executes query statements on a specified Amazon Aurora database table.
tAmazonAuroraValidRows | Checks Amazon Aurora database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). Only MySQL is supported.
tAmazonEMRListInstances | Lists the details about the instance groups in a cluster on Amazon EMR (Elastic MapReduce).
tAmazonEMRManage | Launches or terminates a cluster on Amazon EMR (Elastic MapReduce).
tAmazonEMRResize | Adds or resizes a task instance group in a cluster on Amazon EMR (Elastic MapReduce).
tAmazonMysqlClose | Closes the transaction committed in the connected DB.
tAmazonMysqlCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, which improves performance.
tAmazonMysqlConnection | Opens a connection to the specified database that can then be reused by other Amazon MySQL components.
tAmazonMysqlInput | Reads a database and extracts fields based on a query.
tAmazonMysqlOutput | Writes, updates, makes changes or suppresses entries in a database.
tAmazonMysqlRollback | Cancels the transaction commit in the connected database and avoids committing part of a transaction involuntarily.
tAmazonMysqlRow | Executes the stated SQL query on the specified database.
tAmazonOracleClose | Closes the transaction committed in the connected database.
tAmazonOracleCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, which improves performance.
tAmazonOracleConnection | Opens a connection to the specified database that can then be reused by other Amazon Oracle components.
tAmazonOracleInput | Reads a database and extracts fields based on a query.
tAmazonOracleOutput | Writes, updates, makes changes or suppresses entries in a database.
tAmazonOracleRollback | Cancels the transaction commit in the connected database and avoids committing part of a transaction involuntarily.
tAmazonOracleRow | Executes the stated SQL query on the specified database.
tAmazonRedshiftManage | Manages Amazon Redshift clusters and snapshots.
tAzureAdlsGen2Input | Retrieves data from an ADLS Gen2 file system of an Azure storage account and passes the data to the subsequent component connected to it through a Main>Row link.
tAzureAdlsGen2Output | Uploads incoming data to an ADLS Gen2 file system of an Azure storage account in the specified format.
tAzureStorageConnection | Uses authentication and the protocol information to create a connection to the Microsoft Azure Storage system that can then be reused by other Azure Storage components.
tAzureStorageContainerCreate | Creates a new storage container used to hold Azure blobs (Binary Large Objects) for a given Azure storage account.
tAzureStorageContainerDelete | Automates the removal of a given blob container from the space of a specific storage account.
tAzureStorageContainerExist | Automates the verification of whether a given blob container exists or not within a storage account.
tAzureStorageContainerList | Lists all containers in a given Azure storage account.
tAzureStorageDelete | Deletes blobs from a given container for an Azure storage account according to the specified blob filters.
tAzureStorageGet | Retrieves blobs from a given container for an Azure storage account according to the specified filters applied on the virtual hierarchy of the blobs, and then writes the selected blobs to a local folder.
tAzureStorageInputTable | Retrieves a set of entities that satisfy the specified filter criteria from an Azure storage table.
tAzureStorageList | Lists blobs in a given container according to the specified blob filters.
tAzureStorageOutputTable | Performs the defined action on a given Azure storage table and inserts, replaces, merges or deletes entities in the table based on the incoming data from the preceding component.
tAzureStoragePut | Uploads local files into a given container for an Azure storage account.
tAzureStorageQueueCreate | Creates a new queue under a given Azure storage account.
tAzureStorageQueueDelete | Deletes a specified queue permanently under a given Azure storage account.
tAzureStorageQueueInput | Retrieves one or more messages from the front of an Azure queue.
tAzureStorageQueueInputLoop | Runs an endless loop to retrieve messages from the front of an Azure queue.
tAzureStorageQueueList | Returns all queues associated with the given Azure storage account.
tAzureStorageQueueOutput | Adds messages to the back of an Azure queue.
tAzureStorageQueuePurge | Purges messages in an Azure queue.
tAzureSynapseBulkExec | Loads data into an Azure SQL Data Warehouse table from either Azure Blob Storage or Azure Data Lake Storage.
tAzureSynapseClose | Closes an active connection to an Azure SQL Data Warehouse database.
tAzureSynapseCommit | Commits a global transaction in one go instead of committing on every row or every batch, and thus improves performance.
tAzureSynapseConnection | Opens a connection to an Azure SQL Data Warehouse database.
tAzureSynapseInput | Reads data and extracts fields based on a query from an Azure SQL Data Warehouse database.
tAzureSynapseOutput | Writes, updates, makes changes or suppresses entries in an Azure SQL Data Warehouse database.
tAzureSynapseRollback | Cancels the transaction commit in the connected Azure SQL Data Warehouse database to prevent a partial transaction commit if an error occurs.
tAzureSynapseRow | Executes an SQL query stated on an Azure SQL Data Warehouse database.
tBoxConnection | Creates a Box connection that the other Box components can reuse.
tBoxCopy | Copies or moves a given folder or file from Box.
tBoxDelete | Removes a given folder or file from Box.
tBoxGet | Downloads a selected file from a Box account.
tBoxList | Lists the files stored in a specified directory in Box.
tBoxPut | Uploads files to a Box account.
tCloudStart | Starts instances on Amazon EC2 (Amazon Elastic Compute Cloud).
tCloudStop | Changes the status of a launched instance on Amazon EC2 (Amazon Elastic Compute Cloud).
tCosmosDBBulkLoad | Imports data files in different formats (CSV, TSV or JSON) into the specified Cosmos database so that the data can be further processed.
tCosmosDBConnection | Creates a connection to a CosmosDB database that can be reused by other components.
tCosmosDBInput | Retrieves certain documents from a Cosmos database collection by supplying a query document containing the fields the desired documents should match.
tCosmosDBOutput | Inserts, updates, upserts or deletes documents in a Cosmos database collection based on the incoming flow from the preceding component in the Job.
tCosmosDBRow | Executes the commands of the Cosmos database.
tDropboxConnection | Creates a Dropbox connection to a given account that the other Dropbox components can reuse.
tDropboxDelete | Removes a given folder or file from Dropbox.
tDropboxGet | Downloads a selected file from a Dropbox account to a specified local directory.
tDropboxList | Lists the files stored in a specified directory on Dropbox.
tDropboxPut | Uploads data to Dropbox from either a local file or a given data flow.
tGoogleCloudConfiguration | Provides the connection configuration to Google Cloud Platform for a Spark Job.
tGoogleDataprocManage | Creates or deletes a Dataproc cluster in the Global region on Google Cloud Platform.
tGoogleDriveConnection | Opens a Google Drive connection that can be reused by other Google Drive components.
tGoogleDriveCopy | Creates a copy of a file/folder in Google Drive.
tGoogleDriveCreate | Creates a new folder in Google Drive.
tGoogleDriveDelete | Deletes a file/folder in Google Drive.
tGoogleDriveGet | Gets a file's content and downloads the file to a local directory.
tGoogleDriveList | Lists all files, or folders, or both files and folders in a specified Google Drive folder, in the domain, including both Shared Drive and My Drive, and all shared drives.
tGoogleDrivePut | Uploads data from a data flow or a local file to Google Drive.
tGSBucketCreate | Creates a new bucket which you can use to organize data and control access to data in Google Cloud Storage.
tGSBucketDelete | Deletes an empty bucket in Google Cloud Storage so as to release occupied resources.
tGSBucketExist | Checks the existence of a bucket in Google Cloud Storage so that further operations can be performed.
tGSBucketList | Retrieves a list of buckets from all projects or one specific project in Google Cloud Storage.
tGSClose | Closes an active connection to Google Cloud Storage in order to release the occupied resources.
tGSConnection | Provides the authentication information for making requests to the Google Cloud Storage system and enables the reuse of the connection it creates to Google Cloud Storage.
tGSCopy | Copies or moves objects within a bucket or between buckets in Google Cloud Storage.
tGSDelete | Deletes the objects which match the specified criteria in Google Cloud Storage so as to release the occupied resources.
tGSGet | Retrieves objects which match the specified criteria from Google Cloud Storage and outputs them to a local directory.
tGSList | Retrieves a list of objects from Google Cloud Storage one by one.
tGSPut | Uploads files from a local directory to Google Cloud Storage so that you can manage them with Google Cloud Storage.
tMarketoBulkExec | Imports leads or custom objects into Marketo from a local file in the REST API mode.
tMarketoCampaign | Retrieves campaign records and data related to activities and campaign changes from Marketo.
tMarketoConnection | Opens a connection to Marketo that can then be reused by other Marketo components.
tMarketoInput | Retrieves lead records, activity history, lead changes, and custom object related data from Marketo.
tMarketoListOperation | Adds/removes one or more leads to/from a list in Marketo. It also helps you verify the existence of one or more leads in a list in Marketo.
tMarketoOutput | Writes lead records or custom object records from the incoming data flow into Marketo.
tNetsuiteConnection | Creates a connection to the NetSuite SOAP server so that other NetSuite components in the Job can reuse the connection.
tNetsuiteInput | Invokes the NetSuite SOAP service and retrieves data according to the conditions you specify.
tNetsuiteOutput | Invokes the NetSuite SOAP service and inserts, updates, or removes data on the NetSuite SOAP server.
tRedshiftBulkExec | Loads data into Amazon Redshift from Amazon S3, an Amazon EMR cluster, Amazon DynamoDB, or remote hosts.
tRedshiftClose | Closes the transaction committed in the connected DB.
tRedshiftCommit | Provides a gain in performance by committing a global transaction in one go, using a unique connection, instead of committing on every row or every batch.
tRedshiftConnection | Opens a connection to the specified database that can then be reused by other Redshift components.
tRedshiftInput | Reads data from a database and extracts fields based on a query so that you may apply changes to the extracted data.
tRedshiftOutputBulk | Prepares a delimited/CSV file that can be used by tRedshiftBulkExec to feed Amazon Redshift.
tRedshiftOutputBulkExec | Executes the Insert action on the data provided.
tRedshiftRollback | Cancels the transaction commit in the Redshift database to avoid committing part of a transaction involuntarily.
tRedshiftRow | Acts on the actual DB structure or on the data (although without handling data), depending on the nature of the query and the database.
tRedshiftUnload | Unloads data on Amazon Redshift to files on Amazon S3.
tS3BucketCreate | Creates a bucket on Amazon S3.
tS3BucketDelete | Deletes an empty bucket from Amazon S3.
tS3BucketExist | Verifies if the specified bucket exists on Amazon S3.
tS3BucketList | Lists all the buckets on Amazon S3.
tS3Close | Shuts down a connection to Amazon S3, thus releasing the network resources.
tS3Connection | Establishes a connection to Amazon S3 to store and retrieve data.
tS3Copy | Copies an Amazon S3 object from a source bucket to a destination bucket.
tS3Delete | Deletes a file from Amazon S3.
tS3Get | Retrieves a file from Amazon S3.
tS3List | Lists the files on Amazon S3 based on the bucket/file prefix settings.
tS3Put | Uploads data onto Amazon S3 from a local file or from cache memory via the streaming mode.
tSalesforceBulkExec | Bulk-loads data from a given file into a Salesforce object.
tSalesforceConnection | Opens a connection to Salesforce.
tSalesforceEinsteinBulkExec | Loads data into Salesforce Analytics Cloud from a local file.
tSalesforceEinsteinOutputBulkExec | Improves performance during data operations to Salesforce Analytics Cloud.
tSalesforceGetDeleted | Collects data deleted during a specific period of time from a Salesforce object.
tSalesforceGetServerTimestamp | Retrieves the current date of the Salesforce server presented in a timestamp format.
tSalesforceGetUpdated | Collects data updated during a specific period of time from a Salesforce object.
tSalesforceInput | Retrieves data from a Salesforce object based on a query.
tSalesforceOutput | Inserts, updates, upserts, or deletes data in a Salesforce object.
tSalesforceOutputBulk | Generates the file to be processed by the tSalesforceBulkExec component for bulk processing.
tSalesforceOutputBulkExec | Bulk-loads data from a given file into a Salesforce object.
tServiceNowConnection | Opens a connection to a ServiceNow instance that can then be reused by other ServiceNow components.
tServiceNowInput | Accesses ServiceNow and retrieves data from it.
tServiceNowOutput | Performs the defined action on the data on ServiceNow.
tSnowflakeBulkExec | Loads data from files in a folder into a Snowflake table. The folder can be in an internal Snowflake stage, an Amazon Simple Storage Service (Amazon S3) bucket, or an Azure container.
tSnowflakeClose | Closes an active Snowflake connection to release the occupied resources.
tSnowflakeCommit | Provides a gain in performance by committing a global transaction in one go instead of committing on every row or every batch.
tSnowflakeConnection | Opens a connection to Snowflake that can then be reused by other Snowflake components.
tSnowflakeOutput | Uses the data incoming from its preceding component to insert, update, upsert or delete data in a Snowflake table.
tSnowflakeOutputBulk | Writes incoming data to files generated in a folder. The folder can be in an internal Snowflake stage, an Amazon Simple Storage Service (Amazon S3) bucket, or an Azure container.
tSnowflakeOutputBulkExec | Writes incoming data to files generated in a folder and then loads the data into a Snowflake database table. The folder can be in an internal Snowflake stage, an Amazon Simple Storage Service (Amazon S3) bucket, or an Azure container.
tSnowflakeRollback | Cancels the transaction commit in the Snowflake database to avoid committing part of a transaction involuntarily.
tSnowflakeRow | Executes the stated SQL command on a specified Snowflake database.
tSQSConnection | Opens a connection to Amazon Simple Queue Service that can then be reused by other SQS components.
tSQSInput | Retrieves one or more messages, with a maximum limit of ten messages, from an Amazon SQS (Simple Queue Service) queue.
tSQSMessageChangeVisibility | Changes the visibility timeout of a specified message in an Amazon SQS (Simple Queue Service) queue.
tSQSMessageDelete | Deletes a specified message from an Amazon SQS (Simple Queue Service) queue.
tSQSOutput | Delivers one or more messages to an Amazon SQS (Simple Queue Service) queue.
tSQSQueueAttributes | Gets attributes for a specified Amazon SQS (Simple Queue Service) queue.
tSQSQueueCreate | Creates a new Amazon SQS (Simple Queue Service) queue.
tSQSQueueDelete | Deletes an Amazon SQS (Simple Queue Service) queue.
tSQSQueueList | Iterates over and lists the URLs of Amazon SQS (Simple Queue Service) queues in a specified region.
tSQSQueuePurge | Purges messages in an Amazon SQS (Simple Queue Service) queue.
tWorkdayInput | Retrieves data of a Workday client based on a query or the Workday client report.
tZendeskInput | Reads tickets or requests from a Zendesk server.
tZendeskOutput | Writes tickets or requests to a Zendesk server.
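
Several of the components above (tS3Put, tSnowflakeOutputBulk, tRedshiftBulkExec) stage or load data through Amazon S3. As a rough illustration of the upload step, the sketch below uses the AWS SDK for Java v2; the bucket, key, and local path are placeholder assumptions.

```java
import java.nio.file.Paths;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class S3PutSketch {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.builder().region(Region.EU_WEST_1).build()) {
            // Upload a local CSV file to s3://my-bucket/exports/customers.csv.
            s3.putObject(
                    PutObjectRequest.builder()
                            .bucket("my-bucket")
                            .key("exports/customers.csv")
                            .build(),
                    RequestBody.fromFile(Paths.get("/tmp/customers.csv")));
        }
    }
}
```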
Connectivity | ||||||||
![]() |
cAMQP | Exchanges messages between a Route and a JMS provider using the AMQP broker. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cAWSConnection | Establishes a connection to Amazon Web Services for data storage and retrieval. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cAWSS3 | Stores and retrieves objects from/to Amazon's Simple Storage Service (S3) |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cAWSSES | Sends emails with Amazon's Simple Email Service (SES). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cAWSSNS | Sends messages to an Amazon's Simple Notification topic. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cAWSSQS | Sends and receives messages to/from Amazon's Simple Queue Service (SQS). The AWS SQS FIFO feature for queues is supported. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cFile | Provides access to file systems, allowing files to be processed by any other components or messages from other components to be saved to the disk. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cFtp | Provides access to remote file systems over the FTP, FTPS and SFTP protocols. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cHttp | Provides HTTP-based endpoints for consuming and producing HTTP resources. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cJMS | Exchanges messages between a Route and a JMS provider. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cKafka | Communicates with Apache Kafka message broker. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cMail | Sends or receives mails in a Route. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cMessagingEndpoint | Allows two applications to communicate by either sending or receiving messages. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cMQConnectionFactory | Encapsulates a set of configuration parameters to connect to an MQ server. The connection can be called by multiple cJMS, cWMQ, cAMQP or cMQTT components in a Route. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cMQTT | Sends messages to, or consumes messages from, MQTT-compliant message brokers. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cREST | Provides integration with Apache CXF for connecting to JAX-RS services. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cSOAP | Provides integration with Apache CXF for connecting to JAX-WS services. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cWMQ | Exchanges messages between a Route and a JMS provider using WMQ. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Core | ||||||||
![]() |
cDirect | Produces and consumes messages synchronously in different threads within a single CamelContext. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cDirectVM | Produces and consumes messages synchronously in different threads within a single CamelContext and across CamelContexts in the same JVM. You can use this mechanism to communicate across Web applications. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cExchangePattern | Sets the message exchange mode in a Route. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cJavaDSLProcessor | Implements producers and consumers of message exchanges or implements a message translator using the Java Domain Specific Language (DSL). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cMessagingEndpoint | Allows two applications to communicate by either sending or receiving messages. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cSEDA | Produces and consumes messages asynchronously in different threads within a single CamelContext. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cSetBody | Sets the message body in the Route. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cSetHeader | Sets headers or customizes the default headers, if any, on each message sent to it for subsequent message processing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cSetProperty | Sets properties for each message sent to it for subsequent message processing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cVM | Produces and consumes messages asynchronously in different threads across CamelContexts. You can use this mechanism to communicate across Web applications. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Custom | ||||||||
![]() |
cBean | Invokes a Java bean that is stored in the Code node of the Repository or registered by a cBeanRegister. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cBeanRegister | Registers a Java bean in the registry to be used in message exchanges. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cConfig | Sets the CamelContext using Java code. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cProcessor | Implements consumers of message exchanges or implements a Message Translator. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Custom_Code | ||||||||
![]() |
tGroovy | Broadens the functionality of the Job, using the Groovy language, which is a simplified Java syntax. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGroovyFile | Broadens the functionality of Jobs using the Groovy language which is a simplified Java syntax. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJava | Extends the functionalities of a Job using custom Java commands. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJavaFlex | Provides a Java code editor that lets you enter personalized code in order to integrate it into the program. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJavaRow | Provides a code editor that lets you enter the Java code to be applied to each row of the flow. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tLibraryLoad | Loads usable Java libraries into a Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSetDynamicSchema | Sets a dynamic schema that can be reused by components in the subsequent subJob or subJobs to retrieve data from unknown columns. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSetGlobalVar | Facilitates the process of defining global variables. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Data Quality | ||||||||
![]() |
tMatchIndex | Indexes a clean and deduplicated data set in ElasticSearch for continuous matching purposes. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMatchIndexPredict | Compares a new data set with a lookup data set stored in ElasticSearch, using tMatchIndex. tMatchIndexPredict outputs unique records and suspect duplicates in separate files. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMatchModel | Generates the matching model that is used by the tMatchPredict component to automatically predict the labels for the suspect pairs and groups records which match the label(s) set in the component properties. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMatchPairing | Enables you to compute pairs of suspect duplicates from any source data including large volumes in the context of machine learning on Spark. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMatchPredict | Labels suspect records automatically and groups suspect records which match the label(s) set in the component properties. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Databases | ||||||||
![]() |
tAccessBulkExec | Offers gains in performance when carrying out Insert operations in an Access database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAccessClose | Closes an active connection to the Access database so as to release occupied resources. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAccessCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAccessConnection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tAccessInput | Reads a database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAccessOutput | Writes, updates, makes changes or suppresses entries in a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAccessOutputBulk | Prepares the file which contains the data used to feed the Access database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAccessOutputBulkExec | Executes an Insert action on the data provided, in an Access database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAccessRollback | Cancels the transaction commit in the connected database to avoid committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAccessRow | Executes the stated SQL query on the specified database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonAuroraClose | Closes an active connection to an Amazon Aurora database instance to release the occupied resources. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonAuroraCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonAuroraConnection | Opens a connection to an Amazon Aurora database instance that can then be reused by other Amazon Aurora components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonAuroraInput | Reads an Amazon Aurora database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonAuroraInvalidRows | Checks Amazon Aurora database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). Only MySQL is supported. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonAuroraOutput | Writes, updates, makes changes or suppresses entries in an Amazon Aurora database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonAuroraRollback | Rolls back any changes made in the Amazon Aurora database to prevent partial transaction commit if an error occurs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonAuroraRow | Executes query statements on a specified Amazon Aurora database table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonAuroraValidRows | Checks Amazon Aurora database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). Only MySQL is supported. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonMysqlClose | Closes the transaction committed in the connected DB. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonMysqlCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonMysqlConnection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tAmazonMysqlInput | Reads a database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonMysqlOutput | Writes, updates, makes changes or suppresses entries in a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonMysqlRollback | Cancels the transaction commit in the connected database to avoid committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonMysqlRow | Executes the stated SQL query on the specified database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonOracleClose | Closes the transaction committed in the connected database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonOracleCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonOracleConnection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tAmazonOracleInput | Reads a database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonOracleOutput | Writes, updates, makes changes or suppresses entries in a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonOracleRollback | Cancels the transaction commit in the connected database to avoid committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonOracleRow | Executes the stated SQL query on the specified database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAmazonRedshiftManage | Manages Amazon Redshift clusters and snapshots. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAS400CDC | Addresses data extraction and transportation needs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAS400Close | Closes the transaction committed in the connected database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAS400Commit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAS400Connection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tAS400Input | Reads a database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAS400LastInsertId | Obtains the primary key value of the record that was last inserted in an AS/400 table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAS400Output | Writes, updates, makes changes or suppresses entries in a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAS400Rollback | Cancels the transaction commit in the connected database to avoid committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAS400Row | Executes the stated SQL query on the specified database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAzureSynapseBulkExec | Loads data into an Azure SQL Data Warehouse table from either Azure Blob Storage or Azure Data Lake Storage. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAzureSynapseClose | Closes an active connection to an Azure SQL Data Warehouse database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAzureSynapseCommit | Commits a global transaction in one go instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAzureSynapseConnection | Opens a connection to an Azure SQL Data Warehouse database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAzureSynapseInput | Reads data and extracts fields based on a query from an Azure SQL Data Warehouse database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAzureSynapseOutput | Writes, updates, makes changes or suppresses entries in an Azure SQL Data Warehouse database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAzureSynapseRollback | Cancels the transaction commit in the connected Azure SQL Data Warehouse database to prevent partial transaction commit if an error occurs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAzureSynapseRow | Executes an SQL query stated on an Azure SQL Data Warehouse database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tBigQueryConfiguration | Provides the connection configuration to Google BigQuery and Google Cloud Storage for a Spark Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCassandraConfiguration | Enables the reuse of the connection configuration to a Cassandra server in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCassandraInput | Extracts the desired data from a standard or super column family of a Cassandra keyspace so as to apply changes to the data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCassandraLookupInput | Extracts the desired data from a standard or super column family of a Cassandra keyspace so as to apply changes to the data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCosmosDBBulkLoad | Imports data files in different formats (CSV, TSV or JSON) into the specified Cosmos database so that the data can be further processed. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCosmosDBConnection | Creates a connection to a CosmosDB database and reuses that connection in other components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCosmosDBInput | Retrieves certain documents from a Cosmos database collection by supplying a query document containing the fields the desired documents should match. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCosmosDBOutput | Inserts, updates, upserts or deletes documents in a Cosmos database collection based on the incoming flow from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCosmosDBRow | Executes the commands of the Cosmos database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCreateTable | Creates a table for a specific type of database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2BulkExec | Executes the Insert action on the provided data and offers gains in performance during Insert operations to a DB2 database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2CDC | Extracts the changes done to the source operational data and makes them available to the target system(s) using database CDC views. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2Close | Closes a transaction committed in the connected DB. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2Commit | Commits a global transaction in one go instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2Connection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tDB2Input | Executes a DB query with a strictly defined order which must correspond to the schema definition. Then tDB2Input passes on the field list to the next component via a Row > Main link. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2Output | Executes the action defined on the table and/or on the data contained in the table, based on the flow incoming from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2Rollback | Avoids committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2Row | Acts on the actual DB structure or on the data (although without handling data) depending on the nature of the query and the database. The SQLBuilder tool helps you easily write your SQL statements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2SCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2SCDELT | Addresses Slowly Changing Dimension needs through SQL queries (server-side processing mode), and logs the changes into a dedicated DB2 SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2SP | Offers a convenient way to call the database stored procedures. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBBulkExec | Offers gains in performance while executing the Insert operations on a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBCDC | Extracts only the changes made to the source operational data and makes them available to the target system(s) using database CDC views. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBCDCOutput | Synchronizes data changes in database of the selected database type in the CDC mode. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBClose | Closes the transaction committed in a connected database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBColumnList | Iterates on all columns of a given database table and lists column names. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBCommit | Validates the data processed through the Job into the connected database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBConnection | Opens a connection to a database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBInput | Extracts data from a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBInvalidRows | Checks database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBLastInsertId | Obtains the primary key value of the record that was last inserted in a database table by a user. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBOutput | Writes, updates, makes changes or suppresses entries in a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBOutputBulk | Writes a file with columns based on the defined delimiter and the standards of the selected database type. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBOutputBulkExec | Executes the Insert action in a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBRollback | Cancels the transaction commit in a connected database to avoid committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBRow | Executes the stated SQL query onto a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBSCD | Reflects and tracks changes in a dedicated database SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBSCDELT | Reflects and tracks changes in a dedicated SCD table through SQL queries. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBSP | Calls a database stored procedure. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBSQLRow | Acts on the actual DB structure or on the data (although without handling data) depending on the nature of the query and the database. The SQLBuilder tool helps you easily write your SQL statements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBTableList | Lists the names of specified database tables using a SELECT statement based on a WHERE clause. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDBValidRows | Checks database rows against Data Quality patterns (regular expression). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDeltaLakeClose | Closes an active DeltaLake connection to release the occupied resources. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDeltaLakeConnection | Opens a connection to the specified database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDeltaLakeInput | Extracts the latest version or a given snapshot of records from the Delta Lake layer of your Data Lake system and sends the data to the next component for further processing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDeltaLakeOutput | Writes records in the Delta Lake layer of your Data Lake system in the Parquet format. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDeltaLakeRow | Acts on the actual DB structure or on the data (although without handling data) using the SQLBuilder tool to easily write your SQL statements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDynamoDBConfiguration | Stores connection information and credentials to be reused by other DynamoDB components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDynamoDBInput | Retrieves data from an Amazon DynamoDB table and sends them to the component that follows for transformation. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDynamoDBLookupInput | Executes a database query with a strictly defined order which must correspond to the schema definition. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExasolBulkExec | Quickly imports data into an EXASolution database table using the IMPORT command provided by the EXASolution database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExasolClose | Closes an active connection to an EXASolution database instance to release the occupied resources. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExasolCommit | Validates the data processed through the Job into the connected EXASolution database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExasolConnection | Opens a connection to an EXASolution database instance that can then be reused by other EXASolution components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExasolInput | Retrieves data from an EXASolution database based on a query with a strictly defined order which corresponds to the schema definition, and passes the data to the next component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExasolOutput | Writes, updates, modifies or deletes data in an EXASolution database by executing the action defined on the table and/or on the data in the table, based on the flow incoming from the preceding component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExasolRollback | Cancels the transaction commit in the connected EXASolution database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExasolRow | Executes SQL queries on an EXASolution database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFirebirdClose | Closes a transaction with a Firebird database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFirebirdCommit | Commits a global transaction instead of doing so on every row or every batch, thus providing a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFirebirdConnection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tFirebirdInput | Executes a database query on a Firebird database with a strictly defined order which must correspond to the schema definition, then passes on the field list to the next component via a Main row link. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFirebirdOutput | Executes the action defined on the table in a Firebird database and/or on the data contained in the table, based on the flow incoming from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFirebirdRollback | Cancels the transaction committed in the connected Firebird database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFirebirdRow | Executes the stated SQL query on the specified Firebird database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumBulkExec | Improves performance when loading data in a Greenplum database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumClose | Closes a connection to the Greenplum database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumCommit | Commits a global transaction in one go instead of repeating the operation for every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumConnection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tGreenplumGPLoad | Bulk loads data into a Greenplum table either from an existing data file, an input flow, or directly from a data flow in streaming mode through a named-pipe. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumInput | Reads a database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumOutput | Executes the action defined on the table and/or on the data of a table, according to the input flow from the previous component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumOutputBulk | Prepares the file to be used as parameter in the INSERT query to feed the Greenplum database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumOutputBulkExec | Provides performance gains during Insert operations to a Greenplum database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumRollback | Avoids committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumRow | Acts on the actual DB structure or on the data (although without handling data), depending on the nature of the query and the database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumSCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHBaseConfiguration | Enables the reuse of the connection configuration to HBase in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHBaseInput | Reads data from a given HBase database and extracts columns of selection. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHBaseLookupInput | Provides lookup data to the main flow of a streaming Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHBaseOutput | Writes columns of data into a given HBase database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHiveClose | Closes connection to a Hive database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHiveConfiguration | Enables the reuse of the connection configuration to Hive in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHiveConnection | Establishes a Hive connection to be reused by other Hive components in your Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHiveCreateTable | Creates Hive tables that fit a wide range of Hive data formats. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHiveInput | Extracts data from Hive and sends the data to the component that follows. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHiveLoad | Writes data of different formats into a given Hive table, or exports data from a Hive table to a directory. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHiveOutput | Connects to a given Hive database and writes the data it receives into a given Hive table or a directory in HDFS. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHiveRow | Acts on the actual DB structure or on the data without handling data itself, depending on the nature of the query and the database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHiveWarehouseConfiguration | Enables the reuse of the Hive Warehouse Connector connection configuration to Hive in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHiveWarehouseInput | Extracts data from Hive and sends the data to the component that follows using Hive Warehouse Connector. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHiveWarehouseOutput | Connects to a given Hive database and writes the received data into a given Hive table or a directory in HDFS using Hive Warehouse Connector. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHSQLDbInput | Executes a DB query with a strictly defined order which must correspond to the schema definition and then it passes on the field list to the next component via a Main row link. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHSQLDbOutput | Executes the action defined on the table and/or on the data contained in the table, based on the flow incoming from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHSQLDbRow | Acts on the actual DB structure or on the data (although without handling data), depending on the nature of the query and the database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixBulkExec | Executes Insert operations in Informix databases. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixCDC | Extracts the data from a source system which has changed since the last extraction and transports it to another/other system(s). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixClose | Closes connection to Informix databases. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixCommit | Makes a global commit just once instead of committing every row or batch of rows separately. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixConnection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tInformixInput | Reads a database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixOutput | Executes the action defined on the table and/or on the data contained in the table, based on the flow incoming from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixOutputBulk | Prepares the file to be used as a parameter in the INSERT query used to feed Informix databases. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixOutputBulkExec | Carries out Insert operations in Informix databases using the data provided. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixRollback | Prevents involuntary transaction commits by canceling transactions in connected databases. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixRow | Acts on the actual DB structure or on the data (although without handling data) thanks to the SQLBuilder tool, which helps you easily write your SQL statements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixSCD | Tracks and shows changes which have been made to dedicated Informix SCD tables. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInformixSP | Centralizes and calls multiple and complex queries in a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJavaDBInput | Reads a database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJavaDBOutput | Executes the action defined on the table and/or on the data contained in the table, based on the flow incoming from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJavaDBRow | Acts on the actual database structure or on the data (although without handling data) using the SQLBuilder tool to easily write your SQL statements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCClose | Closes an active JDBC connection to release the occupied resources. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCColumnList | Lists all column names of a given JDBC table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCCommit | Commits a global transaction in one go instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCConfiguration | Stores connection information and credentials to be reused by other JDBC components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCConnection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tJDBCInput | Reads any database using a JDBC API connection and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCLookupInput | Reads a database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCOutput | Executes the action defined on the data contained in the table, based on the flow incoming from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCRollback | Avoids committing part of a transaction accidentally by canceling the transaction committed in the connected database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCRow | Acts on the actual DB structure or on the data (although without handling data) using the SQLBuilder tool to easily write your SQL statements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCSCDELT | Tracks data changes in a source database table using SCD (Slowly Changing Dimensions) Type 1 method and/or Type 2 method and writes both the current and historical data into a specified SCD dimension table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCSP | Centralizes multiple or complex queries in a database in order to call them easily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCTableList | Lists the names of a given set of JDBC tables using a select statement based on a Where clause. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKuduConfiguration | Enables the reuse of the connection configuration to Cloudera Kudu in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKuduInput | Retrieves data from a Cloudera Kudu table and sends them to the component that follows for transformation. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKuduOutput | Creates, updates or deletes data in a Cloudera Kudu table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRDBConfiguration | Stores connection information and credentials to be reused by other MapRDB components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRDBInput | Reads data from a given MapRDB database and extracts columns of selection. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRDBLookupInput | Provides lookup data to the main flow of a streaming Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRDBOutput | Writes columns of data into a given MapRDB database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMaxDBInput | Reads a database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMaxDBOutput | Writes, updates, makes changes or suppresses entries in a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMaxDBRow | Acts on the actual DB structure or on the data (although without handling data), depending on the nature of the query and the database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBConfiguration | Stores connection information and credentials to be reused by other MongoDB components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBInput | Retrieves records from a collection in the MongoDB database and transfers them to the following component for display or storage. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBLookupInput | Executes a database query with a strictly defined order which must correspond to the schema definition. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlBulkExec | Offers gains in performance while executing the Insert operations to a Microsoft SQL Server database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlCDC | Extracts the changes made to the source operational data and makes them available to the target system(s) using database CDC views. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlClose | Closes a transaction in the MSSql databases. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlColumnList | Lists all column names of a given MSSql table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlConnection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tMSSqlInput | Executes a DB query with a strictly defined order which must correspond to the schema definition. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlInvalidRows | Extracts DB rows that match a given data quality business rule. You can then implement any required correction. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlLastInsertId | Retrieves the last primary keys added by a user to an MSSql table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlOutput | Executes the action defined on the table and/or on the data contained in the table, based on the flow incoming from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlOutputBulk | Prepares the file to be used as parameter in the INSERT query to feed the MSSql database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlOutputBulkExec | Offers gains in performance during Insert operations to a Microsoft SQL Server database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlRollback | Cancels the transaction commit in the MSSql database and thus avoids committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlRow | Acts on the actual DB structure or on the data (although without handling data). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlSCD | Tracks and reflects changes in a dedicated SCD table in a Microsoft SQL Server or Azure SQL database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlSP | Offers a convenient way to centralize multiple or complex queries in a database and calls them easily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlTableList | Lists the names of a given set of MSSql tables using a select statement based on a Where clause. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlValidRows | Extracts DB rows that match a given data quality business rule. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlBulkExec | Offers gains in performance while executing the Insert operations on a MySQL or Aurora database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlCDC | Extracts only the changes made to the source operational data and makes them available to the target system(s) using database CDC views. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlClose | Closes the transaction committed in a Mysql database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlColumnList | Iterates on all columns of a given Mysql table and lists column names. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlConfiguration | Stores connection information and credentials to be reused by other MySQL components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlConnection | Opens a connection to the specified MySQL database for reuse in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlInput | Executes a DB query with a strictly defined order which must correspond to the schema definition. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMySQLInvalidRows | Checks MySQL database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlLastInsertId | Obtains the primary key value of the record that was last inserted in a Mysql table by a user. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlLookupInput | Reads a MySQL database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlOutput | Writes, updates, makes changes or suppresses entries in a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlOutputBulk | Writes a file with columns based on the defined delimiter and the MySQL or Aurora standards. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlOutputBulkExec | Executes the Insert action in the specified MySQL or Aurora database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlRollback | Cancels the transaction commit in the connected MySQL database to avoid committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlRow | Executes the stated SQL query on the specified MySQL database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlSCD | Reflects and tracks changes in a dedicated MySQL SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlSCDELT | Reflects and tracks changes in a dedicated MySQL SCD table through SQL queries. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlSP | Calls a MySQL database stored procedure. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlTableList | Lists the names of a given set of Mysql tables using a select statement based on a Where clause. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMySQLValidRows | Checks MySQL database rows against Data Quality patterns (regular expression). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNetezzaBulkExec | Offers gains in performance while carrying out the Insert operations to a Netezza database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNetezzaClose | Closes the transaction committed in the connected Netezza database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNetezzaCommit | Validates the data processed through the Job into the connected Netezza database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNetezzaConnection | Opens a connection to a Netezza database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNetezzaInput | Reads a database and extracts fields from a Netezza database based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNetezzaNzLoad | Inserts data into a Netezza database table using Netezza's nzload utility. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNetezzaOutput | Writes, updates, makes changes or suppresses entries in a Netezza database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNetezzaRollback | Cancels the transaction committed in the connected Netezza database to avoid committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNetezzaRow | Executes the stated SQL query on the specified Netezza database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNetezzaSCD | Reflects and tracks changes in a dedicated Netezza SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleBulkExec | Offers gains in performance during operations performed on data of an Oracle database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleCDC | Extracts source system data that has changed since the last extraction and transports it to another/other system(s). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleCDCOutput | Synchronizes data changes in the Oracle XStream CDC mode. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleClose | Closes the transaction committed in the connected Oracle database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleCommit | Validates the data processed through the Job into the connected Oracle database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleConfiguration | Stores connection information and credentials to be reused by other Oracle components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleConnection | Opens a connection to the specified Oracle database for reuse in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleInput | Reads an Oracle database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleInvalidRows | Checks Oracle database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleLookupInput | Reads a database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleOutput | Writes, updates, makes changes or suppresses entries in an Oracle database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleOutputBulk | Writes a file with columns based on the defined delimiter and the Oracle standards. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleOutputBulkExec | Executes the Insert action in the specified Oracle database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleRollback | Cancels the transaction commit in the connected Oracle database to avoid committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleRow | Executes the stated SQL query on the specified Oracle database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleSCD | Reflects and tracks changes in a dedicated Oracle SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleSCDELT | Reflects and tracks changes in a dedicated Oracle SCD table through SQL queries. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleSP | Calls an Oracle database stored procedure. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleTableList | Lists the names of specified Oracle tables using a SELECT statement based on a WHERE clause. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleValidRows | Checks Oracle database rows against Data Quality patterns (regular expression). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tParseRecordSet | Parses a recordset rather than individual records from a table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusBulkExec | Improves performance during Insert operations to a PostgresPlus database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusClose | Closes the transaction committed in the connected PostgresPlus database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus improves performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusConnection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tPostgresPlusInput | Executes a DB query with a strictly defined order which must correspond to the schema definition. Then it passes on the field list to the next component via a Main row link. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusOutput | Executes the action defined on the table and/or on the data contained in the table, based on the flow incoming from the preceding component in the job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusOutputBulk | Prepares the file to be used as parameter in the INSERT query to feed the PostgresPlus database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusOutputBulkExec | Improves performance during Insert operations to a PostgresPlus database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusRollback | Avoids committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusRow | Acts on the actual DB structure or on the data (although without handling data), depending on the nature of the query and the database. The SQLBuilder tool helps you easily write your SQL statements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusSCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusSCDELT | Addresses Slowly Changing Dimension needs through SQL queries (server-side processing mode), and logs the changes into a dedicated PostgresPlus SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlBulkExec | Improves performance while carrying out the Insert operations to a Postgresql database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlCDC | Addresses data extraction and transportation needs by extracting only the changes made to the source operational data and making them available to the target system(s) using database CDC views. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlClose | Closes the transaction committed in the connected Postgresql database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus improves performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlConnection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tPostgresqlInput | Executes a DB query with a strictly defined order which must correspond to the schema definition. Then it passes on the field list to the next component via a Main row link. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlInvalidRows | Extracts DB rows that do not match a given data quality pattern. You can then implement any required correction. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlOutput | Executes the action defined on the table and/or on the data contained in the table, based on the flow incoming from the preceding component in the job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlOutputBulk | Prepares the file to be used as parameters in the INSERT query to feed the Postgresql database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlOutputBulkExec | Improves performance during Insert operations to a Postgresql database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlRollback | Avoids committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlRow | Acts on the actual DB structure or on the data (although without handling data), depending on the nature of the query and the database. The SQLBuilder tool helps you easily write your SQL statements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlSCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlSCDELT | Addresses Slowly Changing Dimension needs through SQL queries (server-side processing mode), and logs the changes into a dedicated Postgresql SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlValidRows | Extracts DB rows that match a given data quality pattern. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftBulkExec | Loads data into Amazon Redshift from Amazon S3, Amazon EMR cluster, Amazon DynamoDB, or remote hosts. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftClose | Closes the transaction committed in the connected DB. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftCommit | Commits a global transaction in one go instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftConfiguration | Reuses the connection configuration to a Redshift database in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftConnection | Opens a connection to the specified database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
|
![]() |
tRedshiftInput | Reads data from a database and extracts fields based on a query so that you may apply changes to the extracted data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftLookupInput | Reads a Redshift database and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftOutput | Writes, updates, modifies or deletes the data in a database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftOutputBulk | Prepares a delimited/CSV file that can be used by tRedshiftBulkExec to feed Amazon Redshift. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftOutputBulkExec | Executes the Insert action on the data provided. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftRollback | Cancels the transaction commit in the Redshift database to avoid committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftRow | Acts on the actual DB structure or on the data (although without handling data), depending on the nature of the query and the database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftUnload | Unloads data on Amazon Redshift to files on Amazon S3. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaClose | Closes a connection to a SAP HANA database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaConnection | Establishes a SAP HANA connection to be reused by other SAP HANA components in your Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaInput | Executes a database query with a defined command which must correspond to the schema definition. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaInvalidRows | Checks SAP HANA database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaOutput | Executes the action defined on the table and/or on the data contained in the table, based on the flow incoming from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaRollback | Avoids committing part of a transaction involuntarily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaRow | Acts on the actual database structure or on the data (although without handling data). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaUnload | Offloads large volumes of data from the SAP HANA database to a third-party system. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaValidRows | Checks SAP HANA database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreBulkExec | Loads data from a file into a table of a database connected through JDBC API. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreClose | Closes an active SingleStore connection to release the occupied resources. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreCommit | Commits a global transaction in one go instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreConnection | Opens a connection to the specified SingleStore database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreInput | Reads any database using a JDBC API connection and extracts fields based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreOutput | Executes the action defined on the data contained in the table, based on the flow incoming from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreOutputBulk | Prepares the bulk file to be used as a parameter to feed the database connected. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreOutputBulkExec | Provides performance gain when loading data from a file into a table of a database connected through JDBC API. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreRollback | Avoids committing part of a transaction accidentally by canceling the transaction committed in the connected database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreRow | Acts on the actual DB structure or on the data (although without handling data) using the SQLBuilder tool to write your SQL statements easily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreSP | Centralizes multiple or complex queries in a database in order to call them easily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSnowflakeConfiguration | Stores connection information and credentials to be reused by other Snowflake components in the Apache Spark Batch framework. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSnowflakeInput | Reads data from a Snowflake table into the data flow of your Job based on an SQL query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLiteClose | Closes a transaction committed in the connected DB. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLiteCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLiteConnection | Opens a connection to the database for a current transaction. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLiteInput | Executes a DB query with a defined command which must correspond to the schema definition. It passes on rows to the next component via a Main row link. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLiteOutput | Executes the action defined on the table and/or on the data contained in the table, based on the flow incoming from the preceding component in the job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLiteRollback | Cancels the transaction committed in the SQLite database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLiteRow | Executes the defined query onto the specified database and uses the parameters bound with the column. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseBulkExec | Improves performance during Insert operations to a Sybase database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseCDC | Extracts source system data that has changed since the last extraction and transports it to one or more other systems. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseClose | Closes a transaction committed in the connected database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseConnection | Opens a connection to the database for a current transaction. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseInput | Executes a DB query with a strictly defined order which must correspond to the schema definition. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseIQBulkExec | Loads data into a Sybase database table from a flat file or other database table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseIQOutputBulkExec | Improves performance during Insert operations to a Sybase IQ database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseOutput | Executes the action defined on the table and/or on the data contained in the table, based on the flow incoming from the preceding component in the job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseOutputBulk | Prepares the file to be used as a parameter in the INSERT query to feed the Sybase database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseOutputBulkExec | Improves performance during Insert operations to a Sybase database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseRollback | Cancels the transaction committed in the Sybase database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseRow | Acts on the actual DB structure or on the data (although without handling data). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseSCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseSCDELT | Addresses Slowly Changing Dimension needs through SQL queries (server-side processing mode), and logs the changes into a dedicated Sybase SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseSP | Calls a Sybase database stored procedure. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataCDC | Extracts source system data that has changed since the last extraction and transports it to one or more other systems using the CDC Trigger mode. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataClose | Closes the transaction committed in the connected DB. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataCommit | Commits a global transaction in one go, using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataConfiguration | Defines a connection to Teradata and enables the reuse of the connection configuration in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataConnection | Opens a connection to the specified Teradata database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataFastExport | Exports data batches from a Teradata table to a customer system or to a smaller database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataFastLoad | Executes a database query according to a strict order which must be the same as the one in the schema. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataFastLoadUtility | Executes a database query according to a strict order which must be the same as the one in the schema. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataInput | Executes a DB query with a strictly defined order which must correspond to the schema definition. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataLookupInput | Executes a database query with a strictly defined order which must correspond to the schema definition. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataMultiLoad | Executes a database query according to a strict order which must be the same as the one in the schema. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataOutput | Executes the action defined on the table and/or on the data contained in the table, based on the flow incoming from the preceding component in the job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataRollback | Cancels the transaction commit in the Teradata database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataRow | Acts on the actual DB structure or on the data (although without handling data). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataSCD | Addresses Slowly Changing Dimension needs, regularly reading a source of data and logging the changes into a dedicated SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataSCDELT | Addresses Slowly Changing Dimension needs through SQL queries (server-side processing mode), and logs the changes into a dedicated Teradata SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataTPTExec | Offers high performance in inserting data from an existing file to a table in a Teradata database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataTPTUtility | Writes the incoming data to a file and then loads the data from the file to a Teradata database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataTPump | Inserts, updates, or deletes data in the Teradata database with the TPump loading utility which allows near-real-time data to be achieved in the data warehouse. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerticaBulkExec | Loads data into a Vertica database table from a local file using the Vertica COPY SQL statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerticaClose | Closes an active connection to a Vertica database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerticaCommit | Commits a global transaction in one go using a unique connection, instead of committing on every row or every batch, and thus provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerticaConnection | Opens a connection to the specified Vertica database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerticaInput | Retrieves data from a Vertica database table based on a SQL query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerticaOutput | Inserts, updates, deletes, or copies data from an incoming flow into a Vertica database table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerticaOutputBulk | Prepares a file to be used by the tVerticaBulkExec component to feed a Vertica database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerticaOutputBulkExec | Receives data from a preceding component, writes data into a local file, and loads data into a Vertica database from the file using the Vertica COPY SQL statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerticaRollback | Cancels the transaction commit in the Vertica database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerticaRow | Executes a Vertica SQL statement against a database table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerticaSCD | Tracks and reflects data changes in a dedicated Vertica SCD table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Databases NoSQL | ||||||||
![]() |
tCassandraBulkExec | Improves performance during Insert operations to a Cassandra column family. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCassandraClose | Disconnects a connection to a Cassandra server so as to release occupied resources. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCassandraConnection | Enables the reuse of the connection it creates to a Cassandra server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCassandraOutput | Writes data into or deletes data from a column family of a Cassandra keyspace. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCassandraOutputBulk | Prepares an SSTable of large size and processes it according to your needs before loading this SSTable into a column family of a Cassandra keyspace. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCassandraOutputBulkExec | Improves performance during Insert operations to a column family of a Cassandra keyspace. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCassandraRow | Acts on the actual DB structure or on the data, depending on the nature of the query and the database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCosmosDBSQLAPIInput | Retrieves data from a Cosmos database collection through SQL API. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCosmosDBSQLAPIOutput | Inserts, updates, upserts or deletes documents in a Cosmos database collection based on the incoming flow from the preceding component through SQL API. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCouchbaseDCPInput | Queries the documents from the Couchbase database, under the Database Change Protocol (DCP), a streaming protocol. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCouchbaseDCPOutput | Upserts documents in the Couchbase database based on the incoming flat data from preceding components, under the Database Change Protocol (DCP), a streaming protocol. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCouchbaseInput | Queries the documents from the Couchbase database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCouchbaseOutput | Upserts documents in the Couchbase database based on the incoming flat data from preceding components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDynamoDBOutput | Creates, updates or deletes data in an Amazon DynamoDB table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHBaseClose | Closes an HBase connection you have established in your Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHBaseConnection | Establishes an HBase connection to be reused by other HBase components in your Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRDBClose | Closes a MapRDB connection you have established in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRDBConnection | Establishes a MapRDB connection to be reused by other MapRDB components in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapROjaiInput | Reads documents from a MapR-DB database to load the data in a given Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapROjaiOutput | Inserts, replaces or deletes documents in a MapR-DB database to be used as a document database, based on the incoming flow from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMarkLogicBulkLoad | Imports local files into a MarkLogic server database in bulk mode using the MarkLogic Content Pump (MLCP) tool. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMarkLogicClose | Closes an active connection to a MarkLogic database to release the occupied resources. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMarkLogicConnection | Opens a connection to a MarkLogic database that can then be reused by other MarkLogic components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMarkLogicInput | Searches document content in a MarkLogic database based on a string query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMarkLogicOutput | Creates, updates or deletes document content in a MarkLogic database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBBulkLoad | Imports data files in different formats (CSV, TSV or JSON) into the specified MongoDB database so that the data can be further processed. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBClose | Closes a connection to the MongoDB database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBConnection | Creates a connection to a MongoDB database so that the connection can be reused by other components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBGridFSDelete | Automates the delete action over specific files in MongoDB GridFS. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBGridFSGet | Connects to a MongoDB GridFS system to copy files from it. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBGridFSList | Retrieves a list of files based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBGridFSProperties | Obtains information about the properties of given files selected based on a query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBGridFSPut | Connects to a MongoDB GridFS system to load files into it. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBOutput | Executes the action defined on the collection in the MongoDB database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMongoDBRow | Executes the commands and functions of the MongoDB database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jBatchOutput | Receives data from the preceding component and writes the data into a local Neo4j database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jBatchOutputRelationship | Receives data from the preceding component and writes relationships in bulk into a local Neo4j database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jBatchSchema | Defines the schema of a local Neo4j database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jClose | Closes an active connection to a Neo4j database in embedded mode. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jConnection | Opens a connection to a Neo4j database to be reused by other Neo4j components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jImportTool | Uses Neo4j Import Tool to create a Neo4j database and import large amounts of data in bulk from CSV files to this database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jInput | Reads data from Neo4j and sends data in the output flow. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jRow | Executes the stated Cypher query on the specified Neo4j database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jv4Close | Closes a connection to a Neo4j version 4.x database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jv4Connection | Establishes a connection to a Neo4j version 4.x database for later use. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jv4Input | Reads data from Neo4j version 4.x and sends data in the output flow. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jv4Output | Receives data from the preceding component and writes the data into a Neo4j version 4.x database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNeo4jv4Row | Executes the stated Cypher query on the specified Neo4j version 4.x database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Data_Privacy | ||||||||
![]() |
tDataDecrypt | Decrypts data encrypted with the tDataEncrypt component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDataEncrypt | Protects data by transforming it into unreadable cipher text. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDataMasking | Hides original data with random characters or figures to protect the actual data while having a functional substitute for occasions when it is not advisable to show sensitive real data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDataShuffling | Shuffles the data in an input table to protect the actual data while having a functional data set. Data will remain usable for purposes such as testing and training. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDataUnmasking | Unmasks data masked with the tDataMasking component to retrieve the original data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDuplicateRow | Creates duplicates with meaningful data for data quality functional testing purposes. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPatternMasking | Masks data that follows a specific pattern and can transform the original data in a consistent manner, if needed. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPatternUnmasking | Unmasks data masked with the tPatternMasking component to retrieve the original data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Data_Quality | ||||||||
![]() |
tAddCRCRow | Provides a unique ID which helps improve the quality of processed data. CRC stands for Cyclical Redundancy Checking. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAddressRowCloud | Verifies and formats international addresses in the Cloud by using online services. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tBatchAddressRowCloud | Uses batch processing to parse address data and get formatted addresses quickly, accurately and without installing any software. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tChangeFileEncoding | Transforms the character encoding of a given file and generates a new file with the transformed character encoding. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDqReportRun | Launches the analyses listed in a report and saves the results in the data quality data mart. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDuplicateRow | Creates duplicates with meaningful data for data quality functional testing purposes. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFindRegexlibExpressions | Returns a dataset holding information about all of the regular expressions that match the request sent to the web server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFirstnameMatch | Matches first names against a reference index in order to standardize data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFuzzyMatch | Compares a column from the main flow with a reference column from the lookup flow and outputs the main flow data displaying the distance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFuzzyUniqRow | Compares columns in the input flow by using a defined matching method and collects the encountered duplicates. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGenKey | Generates a functional key from the input columns, by applying different types of algorithms on each column and grouping the computed results in one key, then outputs this key with the input columns. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGoogleAddressRow | Converts human-readable addresses into geographic coordinates and other geographic information. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGoogleGeocoder | Converts human-readable addresses into geographic coordinates. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGoogleMapLookup | Obtains detailed geographic information using geographic coordinates and address information. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tIntervalMatch | Returns a value based on a Join relation. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJapaneseNumberNormalize | Normalizes Japanese numbers (kansuji) to regular Arabic numbers. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJapaneseTokenize | Splits Japanese text into tokens. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJapaneseTransliterate | Converts textual data in Japanese to kana and Latin scripts. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tLastRegexlibExpressions | Returns a dataset holding information about the N most recent regular expressions added to the library and that match the query at http://regexlib.com. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tLoqateAddressRow | Parses, verifies, cleanses, standardizes, transliterates, and formats international addresses. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMatchGroup | Creates groups of similar data records in any source data including large volumes of data by using one or several match rules. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMelissaDataAddress | Verifies if an address is properly formatted and corrects any formatting or spelling errors in each row. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlInvalidRows | Extracts DB rows that match a given data quality business rule. You can then implement any required correction. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlValidRows | Extracts DB rows that match a given data quality business rule. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMultiPatternCheck | Checks all existing data in multiple columns against a given Java regular expression. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMySQLInvalidRows | Checks MySQL database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMySQLValidRows | Checks MySQL database rows against Data Quality patterns (regular expression). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleInvalidRows | Checks Oracle database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleValidRows | Checks Oracle database rows against Data Quality patterns (regular expression). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPatternCheck | Gives two output flows: Matching Data and Non-Matching Data. The first collects all data that match a given pattern, and the second collects all data that do not match a given pattern. You can then implement any required corrections. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPatternExtract | Outputs all data that match a given pattern. You can then implement any required operation on the extracted data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPersonator | Ensures the quality of a US and Canadian contact database by checking, verifying, moving and appending contact data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlInvalidRows | Extracts DB rows that do not match a given data quality pattern; you can then implement any required correction. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlValidRows | Extracts DB rows that match a given data quality pattern. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tQASAddressRow | Corrects any formatting or spelling errors and gives the verification status for each row. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tQASBatchAddressRow | Corrects any formatting or spelling errors, adds missing data and gives the verification status for each row. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRecordMatching | Ensures the data quality of any source data against a reference data source. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tReplaceList | Cleanses all files before further processing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tReservoirSampling | Extracts a random sample of data from a big data set. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRuleSurvivorship | Creates the single representation of an entity according to business rules and can create a master copy of data for Master Data Management. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaInvalidRows | Checks SAP HANA database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaValidRows | Checks SAP HANA database rows against specific Data Quality patterns (regular expression) or Data Quality rules (business rule). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSchemaComplianceCheck | Ensures the data quality of any source data against a reference data source. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tStandardizePhoneNumber | Standardizes phone numbers according to given formats. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tStandardizeRow | Normalizes the incoming data in a separate XML or JSON data flow to separate or standardize the rule-compliant data from the non-compliant data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tStem | Enables you to standardize data in columns before matching this data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSurviveFields | Centralizes data from various and heterogeneous sources to create a master copy of data for MDM. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSynonymOutput | Creates a Lucene index and feeds it with entries and the related synonyms it receives. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSynonymSearch | Searches a given index for the reference entries matching the data you input. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tThresholdViolationAlert | Alerts to any threshold violations regarding the thresholds set on indicators in different quality analyses created in the Studio. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTransliterate | Converts strings from many languages of the world to a standard set of characters (Universal Coded Character Set, UCS). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tVerifyEmail | Verifies if email addresses comply with specific rules and corrects addresses that do not match the rules by using the content from specific columns. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Deprecated | ||||||||
![]() |
tBlockedFuzzyJoin | Helps ensure the data quality of any source data against a reference data source. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFlumeInput | Acts as an interface to integrate Flume and the Spark Streaming Job developed with the Studio to continuously read data from a given Flume agent. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFlumeOutput | Acts as an interface to integrate Flume and the Spark Streaming Job developed with the Studio to continuously send data to a given Flume agent. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFuzzyJoin | Joins two tables by doing a fuzzy match on several columns, comparing columns from the main flow with reference columns from the lookup flow and outputting the main flow data and/or the rejected data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tQASAddressIncomplete | Gives two output flows: Incomplete and Reject. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tQASAddressUnknown | Gives one output flow: Unknown, which collects all addresses that do not match deliverable results in the QuickAddress data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tQASAddressVerified | Gives three output flows: Verified, Interaction required, and Reject. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tUniservBTGeneric | Executes a process created with the Uniserv product DQ Batch Suite. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tUniservRTConvertName | Analyzes the name elements in an address. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tUniservRTMailBulk | Creates the index pool for duplicate search. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tUniservRTMailOutput | Synchronizes the index pool that is used for duplicate search. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tUniservRTMailSearch | Searches for duplicate values based on a given input record and adds additional data to each record. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tUniservRTPost | Improves the quality of addresses, which is extremely important for CRM and e-business as it is directly related to postage and advertising costs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
DotNET | ||||||||
![]() |
tDotNETInstantiate | Invokes the constructor of a .NET object that is intended for later reuse. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDotNETRow | Facilitates data transformation by utilizing custom or built-in .NET classes. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
ElasticSearch | ||||||||
![]() |
tElasticSearchConfiguration | Enables the reuse of the connection configuration to ElasticSearch in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tElasticSearchInput | Reads documents from a given Elasticsearch system based on a user-defined query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tElasticSearchLookupInput | Executes an ElasticSearch query with a strictly defined order which must correspond to the schema definition. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tElasticSearchOutput | Writes datasets into a given Elasticsearch system. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
ELT | ||||||||
![]() |
tAccessConnection | Opens a connection to the specified Access database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAS400Connection | Opens a connection to the specified AS/400 database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCombinedSQLAggregate | Provides a set of metrics based on values or calculations. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCombinedSQLFilter | Filters data by reorganizing, deleting or adding columns based on the source table, and filters the given data source using the filter conditions. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCombinedSQLInput | Extracts fields from a database table based on its schema definition. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCombinedSQLOutput | Inserts records from the incoming flow to an existing database table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDB2Connection | Opens a connection to the specified DB2 database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDeltaLakeConnection | Opens a connection to the specified database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTGreenplumInput | Adds as many Input tables as required for the most complicated Insert statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTGreenplumMap | Uses the tables provided as input to feed the parameter in the built statement. The statement can include inner or outer joins to be implemented between tables or between one table and its aliases. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTGreenplumOutput | Executes the SQL Insert, Update and Delete statements on the Greenplum database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTHiveInput | Replicates the schema of the input Hive table, which will be used by the tELTHiveMap component that follows. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTHiveMap | Builds graphically the Hive QL statement in order to transform data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTHiveOutput | Works alongside tELTHiveMap to write data into the Hive table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTInput | Adds as many Input tables as required for the SQL statement to be executed. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTMap | Uses the tables provided as input to feed the parameter in the built SQL statement. The statement can include inner or outer joins to be implemented between tables or between one table and its aliases. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTMSSqlInput | Adds as many Input tables as required for the most complicated Insert statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTMSSqlMap | Uses the tables provided as input to feed the parameter in the built statement. The statement can include inner or outer joins to be implemented between tables or between one table and its aliases. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTMSSqlOutput | Executes the SQL Insert, Update and Delete statements on the MSSql database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTMysqlInput | Adds as many Input tables as required for the most complicated Insert statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTMysqlMap | Uses the tables provided as input to feed the parameter in the built statement. The statement can include inner or outer joins to be implemented between tables or between one table and its aliases. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTMysqlOutput | Executes the SQL Insert, Update and Delete statements on the Mysql database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTNetezzaInput | Allows you to add as many Input tables as required for the most complicated Insert statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTNetezzaMap | Uses the tables provided as input, to feed the parameter in the built statement. The statement can include inner or outer joins to be implemented between tables or between one table and its aliases. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTNetezzaOutput | Performs the action (insert, update or delete) on data in the specified Netezza table through the SQL statement generated by the tELTNetezzaMap component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTOracleInput | Provides the Oracle table schema that will be used by the tELTOracleMap component to generate the SQL SELECT statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTOracleMap | Builds the SQL SELECT statement using the table schema(s) provided by one or more tELTOracleInput components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTOracleOutput | Performs the action (insert, update, delete, or merge) on data in the specified Oracle table through the SQL statement generated by the tELTOracleMap component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTOutput | Carries out the action on the table specified and inserts the data according to the output schema defined in the ELT Mapper. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTPostgresqlInput | Provides the Postgresql table schema that will be used by the tELTPostgresqlMap component to generate the SQL SELECT statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTPostgresqlMap | Builds the SQL SELECT statement using the table schema(s) provided by one or more tELTPostgresqlInput components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTPostgresqlOutput | Performs the action (insert, update or delete) on data in the specified Postgresql table through the SQL statement generated by the tELTPostgresqlMap component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTSAPInput | Provides the SAP table schema that will be used by the tELTSAPMap component to generate the SQL SELECT statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTSAPMap | Builds the SQL SELECT statement using the table schema(s) provided by one or more tELTSAPInput components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTSybaseInput | Provides the Sybase table schema that will be used by the tELTSybaseMap component to generate the SQL SELECT statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTSybaseMap | Builds the SQL SELECT statement using the table schema(s) provided by one or more tELTSybaseInput components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTSybaseOutput | Performs the action (insert, update or delete) on data in the specified Sybase table through the SQL statement generated by the tELTSybaseMap component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTTeradataInput | Provides the Teradata table schema that will be used by the tELTTeradataMap component to generate the SQL SELECT statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTTeradataMap | Builds the SQL SELECT statement using the table schema(s) provided by one or more tELTTeradataInput components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTTeradataOutput | Performs the action (insert, update or delete) on data in the specified Teradata table through the SQL statement generated by the tELTTeradataMap component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTVerticaInput | Provides the Vertica table schema that will be used by the tELTVerticaMap component to generate the SQL SELECT statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTVerticaMap | Builds the SQL SELECT statement using the table schema(s) provided by one or more tELTVerticaInput components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tELTVerticaOutput | Performs the action (insert, update or delete) on data in the specified Vertica table through the SQL statement generated by the tELTVerticaMap component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExasolConnection | Opens a connection to an EXASolution database instance that can then be reused by other EXASolution components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFirebirdConnection | Opens a connection to the specified Firebird database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGreenplumConnection | Opens a connection to the specified Greenplum database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCConnection | Opens a connection to the specified database through JDBC that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMSSqlConnection | Opens a connection to the specified Microsoft SQL Server database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlConnection | Opens a connection to the specified MySQL database for reuse in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNetezzaConnection | Opens a connection to a Netezza database to be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleConnection | Opens a connection to the specified Oracle database for reuse in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresPlusConnection | Opens a connection to the specified PostgresPlus database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostgresqlConnection | Opens a connection to the specified Postgresql database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSAPHanaConnection | Establishes a SAP HANA connection to be reused by other SAP HANA components in your Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSingleStoreConnection | Opens a connection to the specified SingleStore database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLiteConnection | Opens a connection to the database for a current transaction. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLTemplate | Executes the common database actions or customized SQL statement templates, for example to drop/create a table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLTemplateAggregate | Provides a set of metrics based on values or calculations. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLTemplateCommit | Commits a global action in one go using a single connection, instead of doing so for every row or every batch of rows separately. This provides a gain in performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLTemplateFilterColumns | Homogenizes schemas by reorganizing, deleting or adding new columns. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLTemplateFilterRows | Sets row filters for any given data source, based on a WHERE clause. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLTemplateMerge | Merges data into a database table directly on the DBMS by creating and executing a MERGE statement. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSQLTemplateRollback | Cancels the transaction committed in the SQLTemplate database. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSybaseConnection | Opens a connection to the database for a current transaction. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataConnection | Opens a connection to the specified Teradata database that can then be reused in the subsequent subJob or subJobs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
ESB | ||||||||
![]() |
tESBConsumer | Calls the defined method from the invoked Web service and returns the class as defined, based on the given parameters. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tESBProviderFault | Serves a Job cycle result as a Fault message of the Web service in case of a request response communication style. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tESBProviderRequest | Wraps a Job as a web service. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tESBProviderResponse | Serves a Job cycle result as a response message. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRESTClient | Interacts with RESTful Web service providers by sending HTTP and HTTPS requests using CXF (JAX-RS) and getting the corresponding responses. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRESTRequest | Receives GET/POST/PUT/PATCH/DELETE requests from the clients on the server end. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRESTResponse | Returns a specific HTTP status code to the client end as a response to the HTTP and/or HTTPS requests. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRouteFault | Sends messages from a Data Integration Job to a Mediation Route and marks the message as fault. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRouteInput | Accepts messages in a Data Integration Job from a Mediation Route. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRouteOutput | Sends messages from a Data Integration Job to a Mediation Route. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Exception Handling | ||||||||
![]() |
cErrorHandler | Processes errors in the message routing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cIntercept | Intercepts each message sub-route and redirects it in another sub-route without modifying the original one. When this detour is complete, message routing to the originally intended target endpoints continues. This can be useful at testing time to simulate error handling. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cOnException | Catches the exceptions defined and triggers certain actions which are then performed on these exceptions and the message routing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cTry | Offers the Java equivalent exception handling abilities by building Try/Catch/Finally blocks to isolate the part of your Route likely to generate an error, catch the errors, and execute final instructions regardless of the errors. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
File | ||||||||
![]() |
tAdvancedFileOutputXML | Writes an XML file with separated data values according to an XML tree structure. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tApacheLogInput | Reads the access-log file for an Apache HTTP server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAvroInput | Extracts records from any given Avro format files for other components to process the records. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAvroOutput | Receives data flows from the processing component placed ahead of it and writes the data into Avro format files in a given distributed file system. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAvroStreamInput | Listens on a given directory, reads data from Avro files once they are created and sends this data to the component that follows. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tChangeFileEncoding | Transforms the character encoding of a given file and generates a new file with the transformed character encoding. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCreateTemporaryFile | Creates a temporary file in a specified directory. This component allows you to either keep the temporary file or delete it after the Job execution. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileArchive | Creates a new zip, gzip, or tar.gz archive file from one or more files or folders. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileCompare | Compares two files and provides comparison data based on a read-only schema. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileCopy | Copies a source file or folder into a target directory. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileDelete | Deletes files from a given directory. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileExist | Checks if a file exists or not. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputARFF | Reads an ARFF file row by row to split them up into fields and then sends the fields as defined in the schema to the next component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputDelimited | Reads a delimited file row by row to split them up into fields and then sends the fields as defined in the schema to the next component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputExcel | Reads an Excel file row by row to split them up into fields and then sends the fields as defined in the schema to the next component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputFullRow | Reads a file row by row and sends complete rows of data as defined in the schema to the next component via a Row link. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputJSON | Extracts JSON data from a file and transfers the data to a file, a database table, etc. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputLDIF | Reads an LDIF file row by row to split them up into fields and sends the fields as defined in the schema to the next component using a Row connection. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputMail | Reads the standard key data of a given MIME or MSG email file. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputMSDelimited | Reads the data structures (schemas) of a multi-structured delimited file and sends the fields as defined in the different schemas to the next components using Row connections. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputMSPositional | Reads the data structures (schemas) of a multi-structured positional file and sends the fields as defined in the different schemas to the next components using Row connections. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputMSXML | Reads the data structures (schemas) of a multi-structured XML file and sends the fields as defined in the different schemas to the next components using Row connections. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputORC | Extracts records from a given ORC format file and sends the data to the next component for further processing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputParquet | Extracts records from a given Parquet format file and sends the data to the next component for further processing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputPositional | Reads a positional file row by row to split them up into fields based on a given pattern and then sends the fields as defined in the schema to the next component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputProperties | Reads a text file row by row and separates the fields according to the model key = value. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputRaw | Reads all data in a raw file and sends it to a single output column for subsequent processing by another component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputRegex | Reads a file row by row to split them up into fields using regular expressions and sends the fields as defined in the schema to the next component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileInputXML | Reads an XML structured file row by row to split them up into fields and sends the fields as defined in the schema to the next component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileList | Iterates a set of files or folders in a given directory based on a filemask pattern. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputARFF | Writes an ARFF file that holds data organized according to the defined schema. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputDelimited | Outputs the input data to a delimited file according to the defined schema. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputExcel | Writes an MS Excel file with separated data values according to a defined schema. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputJSON | Receives data and rewrites it in a JSON structured data block in an output file. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputLDIF | Writes or modifies an LDIF file with data separated in respective entries based on the schema defined, or else deletes content from an LDIF file. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputMSDelimited | Creates a complex multi-structured delimited file, using data structures (schemas) coming from several incoming Row flows. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputMSPositional | Creates a complex multi-structured file, using data structures (schemas) coming from several incoming Row flows. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputMSXML | Creates a complex multi-structured XML file, using data structures (schemas) coming from several incoming Row flows. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputORC | Receives records from the processing component placed ahead of it and writes the records into ORC format files. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputParquet | Receives records from the processing component placed ahead of it and writes the records into Parquet format files. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputPositional | Writes a file row by row according to the length and the format of the fields or columns in a row. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputProperties | Writes a configuration file, of the type .ini or .properties, containing text data organized according to the model key = value. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputRaw | Provides data coming from another component, in the form of a single column of output data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileOutputXML | Writes an XML file with separated data values according to a defined schema. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileProperties | Creates a single row flow that displays the main properties of the processed file. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileRowCount | Opens a file and reads it row by row in order to determine the number of rows inside. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileStreamInputDelimited | Reads data continuously, row by row, to split it into fields, then sends fields defined in its schema to the next Job component, via a Row > Main link. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileStreamInputFullRow | Reads data in a newly-created file row by row and sends each entire row as one single field to the next Job component, via a Row > Main link. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileStreamInputJSON | Extracts JSON data from a file, then transfers the data to, for instance, a file or a database table. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileStreamInputParquet | Extracts records from a given Parquet format file for other components to process the records. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileStreamInputPositional | Listens on a given directory for new files, reads data from them row by row and extracts fields based on a specific pattern. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileStreamInputRegex | Listens on a given directory for new files, then reads data from these files, row by row, in order to split the data into fields using regular expressions. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileStreamInputXML | Opens a structured XML file and reads it row by row to split the data into fields, then sends these fields as defined in the Schema to the next component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileTouch | Creates an empty file or, if the specified file already exists, updates its date of modification and of last access while keeping the contents unchanged. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileUnarchive | Decompresses an archive file for further processing, in one of the following formats: *.tar.gz, *.tgz, *.tar, *.gz and *.zip. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGPGDecrypt | Calls the gpg -d command to decrypt a GnuPG-encrypted file and saves the decrypted file in the specified directory. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSCompare | Compares two files in HDFS and, based on the read-only schema, generates a row flow that presents the comparison information. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSConnection | Connects to a given HDFS so that the other Hadoop components can reuse the connection it creates to communicate with this HDFS. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSCopy | Copies a source file or folder into a target directory in HDFS and removes this source if required. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSDelete | Deletes a file located on a given Hadoop distributed file system (HDFS). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSExist | Checks whether a file exists in a specific directory in HDFS. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSGet | Copies files from the Hadoop distributed file system (HDFS), pastes them in a user-defined directory and, if need be, renames them. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSInput | Extracts the data in an HDFS file for other components to process it. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSList | Retrieves a list of files or folders based on a filemask pattern and iterates on each unit. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSOutput | Writes data flows it receives into a given Hadoop distributed file system (HDFS). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSOutputRaw | Transfers data of different formats such as hierarchical data in the form of a single column into a given HDFS file system. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSProperties | Creates a single row flow that displays the properties of a file processed in HDFS. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSPut | Connects to Hadoop distributed file system to load large-scale files into it with optimized performance. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSRename | Renames the selected files or specified directory on HDFS. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSRowCount | Reads a file in HDFS row by row in order to determine the number of rows this file contains. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNamedPipeClose | Closes a named-pipe at the end of a process. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNamedPipeOpen | Opens a named-pipe for writing data into it. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNamedPipeOutput | Writes data into an existing open named-pipe. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPivotToColumnsDelimited | Fine-tunes the selection of data to output. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSqoopExport | Defines the arguments required by Sqoop for transferring data to an RDBMS. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSqoopImport | Defines the arguments required by Sqoop for writing the data of your interest into HDFS. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSqoopImportAllTables | Defines the arguments required by Sqoop for writing all of the tables of a database into HDFS. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSqoopMerge | Performs an incremental import that updates an older dataset with newer records. The file types of the newer and the older datasets must be the same. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Internet | ||||||||
![]() |
tCyberarkInput | Retrieves the password of an application from a CyberArk vault at runtime. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileFetch | Retrieves a file through the given protocol (HTTP, HTTPS, FTP, or SMB). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFTPClose | Closes an active FTP connection to release the occupied resources. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFTPConnection | Opens an FTP connection to transfer files in a single transaction. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFTPDelete | Deletes files or folders in a specified directory on an FTP server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFTPFileExist | Checks if a file or a directory exists on an FTP server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFTPFileList | Lists all files and folders directly under a specified directory based on a filemask pattern. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFTPFileProperties | Retrieves the properties of a specified file on an FTP server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFTPGet | Downloads files to a local directory from an FTP directory. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFTPPut | Uploads files from a local directory to an FTP directory. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFTPRename | Renames files in an FTP directory. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFTPTruncate | Truncates files in an FTP directory. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHttpRequest | Sends an HTTP request to the server and outputs the response information locally. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJBossESBInput | Retrieves a message from a JBossESB server to process it as a flow that can be used in a Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJBossESBOutput | Transforms the data used in a Job into a JBossESB message. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKafkaConnection | Opens a reusable Kafka connection. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKafkaCreateTopic | Creates a Kafka topic that the other Kafka components can use. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKafkaInput | Transmits messages you need to process to the components that follow in the Job you are designing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKafkaOutput | Publishes messages into a Kafka system. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRStreamsCommit | Connects to a given tMapRStreamsInput to perform a consumer offset commit. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRStreamsConnection | Opens a reusable connection to a given MapR Streams cluster so that the other MapR Streams components can reuse this connection. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRStreamsCreateStream | Creates a MapR Streams stream or topic that the other MapR Streams components can use. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRStreamsInput | Transmits messages to the Job that runs transformations over these messages. Only MapR V5.2 onwards is supported by this component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRStreamsOutput | Publishes messages into a MapR Streams system. Only MapR V5.2 onwards is supported by this component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMicrosoftMQInput | Retrieves the first message in a given Microsoft message queue (only the String type is supported). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMicrosoftMQOutput | Writes a defined column of the given inflow data to a Microsoft message queue (only the String type is supported). |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMomCommit | Commits data on the MQ Server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMomConnection | Opens a connection to the MQ Server for communication. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMomInput | Fetches a message from a queue on a Message-Oriented Middleware (MOM) system and passes it on to the next component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMomMessageIdList | Fetches a message ID list from a queue on a Message-Oriented middleware system and passes it to the next component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMomOutput | Adds a message to a Message-Oriented Middleware system queue in order for it to be fetched asynchronously. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMomRollback | Cancels the transaction commit in the MQ Server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPOP | Fetches one or more email messages from a server using the POP3 or IMAP protocol. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRabbitMQClose | Closes a connection to a message queue. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRabbitMQConnection | Establishes a connection to a message queue for later use. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRabbitMQInput | Reads messages from a message queue and passes the messages in the output flow. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRabbitMQOutput | Receives data from the preceding component as messages and adds the messages to queues in the specified way. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tREST | Serves as a REST Web service client. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRSSInput | Reads RSS feeds using their URLs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRSSOutput | Creates and writes XML files that hold RSS or Atom feeds. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSCPClose | Closes an active SCP connection. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSCPConnection | Opens an SCP connection to transfer files in one transaction. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSCPDelete | Removes a file from the defined SCP server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSCPFileExists | Verifies the existence of a file on the defined SCP server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSCPFileList | Lists files from the defined SCP server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSCPGet | Copies files from the defined SCP server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSCPPut | Copies files to the defined SCP server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSCPRename | Renames file(s) on the defined SCP server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSCPTruncate | Removes data from file(s) on the defined SCP server via an SCP connection. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSendMail | Notifies recipients about a particular state of a Job or possible errors. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSetKerberosConfiguration | Sets the relevant information for Kerberos authentication. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSetKeystore | Sets the authentication keystore type, either PKCS 12 or JKS. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSetProxy | Sets the relevant information for proxy setup. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSOAP | Calls a method via a Web service in order to retrieve the values of the parameters defined in the component editor. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSocketInput | Opens the socket port and listens for the incoming data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSocketOutput | Sends out the data from the incoming flow to a listening socket port. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSVNLogInput | Retrieves the information of a specified revision or range of revisions from an SVN repository. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWebService | Calls a method via a Web service in order to retrieve the values of the parameters defined in the component editor. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWebServiceInput | Invokes a Method through a Web service. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tXMLRPCInput | Invokes a method through a Web service for the described purpose. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Logs & Errors | ||||||||
![]() |
tAssert | Generates a boolean evaluation of the Job execution status and provides the Job status messages to tAssertCatcher. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAssertCatcher | Generates a data flow consolidating the status information of a Job execution and transfers the data into defined output files. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tChronometerStart | Operates as a chronometer device that starts calculating the processing time of one or more subJobs in the main Job, or that starts calculating the processing time of part of your subJob. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tChronometerStop | Operates as a chronometer device that stops calculating the processing time of one or more subJobs in the main Job, or that stops calculating the processing time of part of your subJob. tChronometerStop displays the total execution time. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDie | Triggers the tLogCatcher component for exhaustive logging before killing the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFlowMeter | Counts the number of rows processed in the defined flow, so this number can be caught by the tFlowMeterCatcher component for logging purposes. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFlowMeterCatcher | Operates as a log function triggered by the use of a tFlowMeter component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tLogCatcher | Operates as a log function triggered by one of three events (a Java exception, tDie, or tWarn) to collect and transfer log data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tStatCatcher | Gathers the Job processing metadata at the Job level and at the component level and transfers the log data to the subsequent component for display or storage. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWarn | Triggers a warning, often caught by the tLogCatcher component, for exhaustive logging. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Machine Learning | ||||||||
![]() |
tALSModel | Generates a matrix associating users, rankings and products, based on given user-product interaction data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tClassify | Predicts which class an element belongs to, based on the classifier model generated by a model training component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tClassifySVM | Predicts which class an element belongs to, based on the classifier model generated by tSVMModel. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDecisionTreeModel | Analyzes feature vectors usually prepared and provided by tModelEncoder to generate a classifier model that is used by tPredict to classify given elements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGradientBoostedTreeModel | Analyzes feature vectors usually prepared and provided by tModelEncoder to generate a classifier model that is used by tPredict to classify given elements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKMeansModel | Analyzes incoming datasets by applying the K-Means algorithm. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKMeansStrModel | Analyzes incoming datasets in near real-time by applying the K-Means algorithm. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tLinearRegressionModel | Builds a linear regression model using a training dataset. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tLogisticRegressionModel | Analyzes feature vectors usually pre-processed by tModelEncoder to generate a classifier model that is used by tPredict to classify given elements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tModelEncoder | Performs featurization operations to transform data into the format expected by the model training components such as tLogisticRegressionModel or tKMeansModel. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNaiveBayesModel | Generates a classifier model that is used by tPredict to classify given elements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPredict | Predicts the situation of an element. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPredictCluster | Predicts the cluster of an element. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRandomForestModel | Analyzes feature vectors, usually prepared and provided by tModelEncoder, to generate a classifier model that is used by tPredict to classify given elements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRecommend | Recommends products to users known to this model, based on the user-product recommender model generated by tALSModel. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSVMModel | Generates an SVM-based classifier model that can be used by tPredict to classify given elements. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Messaging | ||||||||
![]() |
tJMSInput | Creates an interface between a Java application and a Message-Oriented middleware system. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJMSOutput | Creates an interface between a Java application and a Message-Oriented middleware system. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKafkaCommit | Saves the current state of the tKafkaInput to which it is connected. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKafkaInputAvro | Transmits Avro-formatted messages you need to process to its following component in the Job you are designing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKinesisInput | Acts as consumer of an Amazon Kinesis stream to pull messages from this Kinesis stream. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKinesisInputAvro | Acts as consumer of an Amazon Kinesis stream to pull messages from this Kinesis stream. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKinesisOutput | Acts as data producer to put data to an Amazon Kinesis stream for real-time ingestion. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRStreamsInputAvro | Transmits messages in the Avro format to the Job that runs transformations over these messages. Only MapR V5.2 onwards is supported by this component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMQTTInput | Acts as consumer of an MQTT topic to stream messages from this topic. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMQTTOutput | Acts as publisher to an MQTT topic to stream messages to this topic in real time. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPubSubInput | Connects to the Google Cloud PubSub service that transmits messages to the components that run transformations over these messages. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPubSubInputAvro | Connects to Google Cloud Pub/Sub to receive messages in the Avro format for the components that run transformations over these messages. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPubSubOutput | Receives messages serialized into byte arrays by its preceding component and issues these messages into a given PubSub service. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Misc | ||||||||
![]() |
tAddLocationFromIP | Replaces IP addresses with geographical locations. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tBufferInput | Retrieves data buffered via a tBufferOutput component, for example, to process it in another subJob. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tBufferOutput | Collects data in a buffer in order to access it later, for example via a Web service. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tContextDump | Copies the context setup of the current Job to a flat file, a database table, etc., which can then be used by tContextLoad. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tContextLoad | Loads a context from a flow. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFixedFlowInput | Generates a fixed flow from internal variables. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tLogRow | Displays data or results in the Run console to monitor data processed. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMemorizeRows | Memorizes a sequence of rows that passes through and allows the following component(s) to perform operations of your choice on the memorized rows. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMsgBox | Opens a dialog box with an OK button requiring action from the user. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRowGenerator | Creates an input flow in a Job for testing purposes, in particular for boundary test sets. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tServerAlive | Validates the status of the connection to a specified host. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSocketTextStreamInput | Creates a textual input stream by connecting to a network. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Miscellaneous | ||||||||
![]() |
cLog | Logs message exchanges in a Route. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cStop | Stops a message routing to which it is connected. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Orchestration | ||||||||
![]() |
cDelayer | Delays the delivery of messages. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cLoop | Processes messages repetitively and possibly in different ways. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cTimer | Schedules message exchanges in a Route. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCollector | Feeds the parallel execution processes with the threads generated by tPartitioner. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDepartitioner | Assembles the outputs of the parallel execution processes so that tRecollector can capture those outputs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFileList | Iterates a set of files or folders in a given directory based on a filemask pattern. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFlowToIterate | Reads data line by line from the input flow and stores the data entries in iterative global variables. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tForeach | Creates a loop on a list for an iterate link. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tInfiniteLoop | Executes a task or a Job automatically, based on a loop. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tIterateToFlow | Transforms non-processable data into a processable flow. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tLoop | Executes a task or a Job automatically, based on a loop. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tParallelize | Manages complex Job systems. It executes several subJobs simultaneously and synchronizes the execution of a subJob with other subJobs within the main Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPartitioner | Partitions the input data before tCollector can transfer them to the parallel execution processes. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPostjob | Triggers a task required after the execution of a Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPrejob | Triggers a task required for the execution of a Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRecollector | Outputs the results of the parallel execution processes assembled by tDepartitioner. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRunJob | Manages complex Job systems which need to execute one Job after another. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSleep | Identifies possible bottlenecks using a time break in the Job for testing or tracking purposes. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWaitForFile | Iterates on a directory and triggers the next component when the defined condition is met. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWaitForSocket | Triggers a Job based on a defined condition. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWaitForSqlData | Iterates on a given connection for insertion or deletion of rows and triggers a subJob when a condition linked to SQL data presence is met. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Processing | ||||||||
![]() |
tAggregateRow | Receives a flow and aggregates it based on one or more columns. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tAggregateSortedRow | Aggregates the sorted input data for each output column based on a set of operations. Each output column is configured with as many rows as required, the operations to be carried out, and the input column from which the data will be taken, for better data aggregation. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tBRMS | Applies Drools business rules to an incoming flow and writes the output data to an XML file. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCacheIn | Offers faster access to the persistent data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCacheOut | Persists the input RDDs depending on the specific storage level you define in order to offer faster access to these datasets later. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tConvertType | Converts one Java type to another automatically, thus avoiding compilation errors. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDenormalize | Denormalizes the input flow based on one column. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDenormalizeSortedRow | Synthesizes the sorted input flow to save memory. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExternalSortRow | Sorts input data based on one or several columns, by sort type and order, using an external sort application. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExtractDelimitedFields | Generates multiple columns from a delimited string column. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExtractDynamicFields | Parses a Dynamic column to create standard output columns. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExtractJSONFields | Extracts the desired data from JSON fields based on the JSONPath or XPath query. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExtractPositionalFields | Extracts data and generates multiple columns from a formatted string using positional fields. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExtractRegexFields | Extracts data and generates multiple columns from a formatted string using regex matching. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExtractXMLField | Reads the XML structured data from an XML field and sends the data as defined in the schema to the following component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFilterColumns | Homogenizes schemas by reordering columns, removing unwanted columns, or adding new columns. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tFilterRow | Filters input rows by setting one or more conditions on the selected columns. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHConvertFile | Uses structures to perform a conversion from one representation to another, as a Spark Batch execution. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHMap | Executes transformations (called maps) between different sources and destinations by harnessing the capabilities of Talend Data Mapper, available in the Mapping perspective. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHMapFile | Runs a map where input and output structures may differ, as a Spark batch execution. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHMapInput | Runs a map where input and output structures may differ, as a Spark batch execution, and sends the data for use by a downstream component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHMapRecord | Runs a map where input and output structures may differ, as a Spark streaming execution. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJoin | Performs inner or outer joins between the main data flow and the lookup flow. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMap | Transforms and routes data from single or multiple sources to single or multiple destinations. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tNormalize | Normalizes the input flow following the SQL standard to help improve data quality and thus ease data updates. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tPartition | Allows you to visually define how an input dataset is partitioned. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tReplace | Carries out a search-and-replace operation on the defined input columns to cleanse data before further processing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tReplicate | Duplicates the incoming schema into two identical output flows. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRules | Uses business rules defined in a Drools file of .xls or .drl format in order to filter data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSample | Returns a sample subset of the data being processed. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSampleRow | Selects rows according to a list of single lines and/or a list of groups of lines. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSortRow | Sorts input data to help create metrics and classification tables. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSplitRow | Splits one input row into several output rows. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSqlRow | Performs SQL queries over input datasets. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSurviveFields | Centralizes data from various and heterogeneous sources to create a master copy of data for MDM. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTop | Sorts data and outputs the first several rows of the sorted data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTopBy | Groups and sorts data and outputs the first several rows of the data in each group. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tUniqRow | Ensures the data quality of the input or output flow in a Job by filtering out duplicate entries. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tUnite | Centralizes data from various and heterogeneous sources. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWindow | Applies a given Spark window on the incoming RDDs and sends the window-based RDDs to its following component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWriteAvroFields | Transforms the incoming data into Avro files. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWriteDelimitedFields | Converts records into byte arrays. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWriteDynamicFields | Creates a dynamic schema from input columns in the component. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWriteJSONField | Transforms the incoming data into JSON fields and transfers them to a file, a database table, etc. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWritePositionalFields | Converts records into byte arrays. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWriteXMLField | Reads an input XML file and extracts the structure to insert it in defined fields of the output XML file. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWriteXMLFields | Converts records into byte arrays. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tXMLMap | Transforms and routes data from single or multiple sources to single or multiple destinations. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Routing | ||||||||
![]() |
cAggregate | Combines a number of messages together into a single message. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cDynamicRouter | Routes a message or messages to different endpoints on specified conditions. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cIdempotentConsumer | Identifies messages that have already been sent to the receiver and eliminates them. Messages are still sent by the sender but are ignored by the receiver at the delivery stage. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cLoadBalancer | Distributes the messages it receives to multiple endpoints according to the load balancing policy. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cMessageFilter | Filters the content of messages according to the specified criterion and routes the filtered messages to the specified output channel. All messages that do not match the criteria will be dropped. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cMessageRouter | Creates different channels for each filtered message type according to specified conditions so that messages can later on be treated more accurately in each new channel. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cMulticast | Routes messages to a number of endpoints at one go and processes them in different ways. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cPipesAndFilters | Splits message routing into a series of independent processing stages. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cRecipientList | Routes messages to a number of dynamically specified recipients. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cRoutingSlip | Routes the message consecutively through a series of processing steps, with the sequence of steps unknown at design time and variable for each message. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cSplitter | Splits a message into several sub-messages so that they can be handled and treated differently in individual routes. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cThrottler | Limits the number of messages flowing to a specific endpoint in order to prevent it from getting overloaded. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cWireTap | Wiretaps messages to a user-defined URI while they are sent to their original endpoint. cWireTap also allows you to populate a new message to this wiretap URI concurrently. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Storage | ||||||||
![]() |
tAzureFSConfiguration | Provides authentication information for Spark to connect to a given Azure file system. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tBigQueryConfiguration | Provides the connection configuration to Google BigQuery and Google Cloud Storage for a Spark Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCassandraConfiguration | Enables the reuse of the connection configuration to a Cassandra server in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDynamoDBConfiguration | Stores connection information and credentials to be reused by other DynamoDB components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tGSConfiguration | Provides the connection configuration to Google Cloud Storage for a Spark Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHBaseConfiguration | Enables the reuse of the connection configuration to HBase in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHDFSConfiguration | Enables the reuse of the connection configuration to HDFS in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJDBCConfiguration | Stores connection information and credentials to be reused by other JDBC components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tKuduConfiguration | Enables the reuse of the connection configuration to Cloudera Kudu in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMapRDBConfiguration | Stores connection information and credentials to be reused by other MapRDB components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMysqlConfiguration | Stores connection information and credentials to be reused by other MySQL components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tOracleConfiguration | Stores connection information and credentials to be reused by other Oracle components. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRedshiftConfiguration | Reuses the connection configuration to a Redshift database in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tS3Configuration | Reuses the connection configuration to S3 in the same Job. The Spark cluster to be used reads this configuration to eventually connect to S3. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSnowflakeConfiguration | Stores connection information and credentials to be reused by other Snowflake components in the Apache Spark Batch framework. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTachyonConfiguration | Defines a connection to Tachyon storage system and enables the reuse of the configuration in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tTeradataConfiguration | Defines a connection to Teradata and enables the reuse of the connection configuration in the same Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
System | ||||||||
![]() |
tRunJob | Manages complex Job systems which need to execute one Job after another. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSetEnv | Adds variables temporarily to the system environment during the execution of a Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSSH | Establishes communication with a distant server and securely returns sensitive information. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tSystem | Calls other system processing commands, already up and running in a larger Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Talend | ||||||||
![]() |
cTalendJob | Exchanges messages between a Data Integration Job and a Mediation Route. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Talend Cloud | ||||||||
![]() |
tJobFailure | Throws an exception and prompts a message when an error occurs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJobLog | Collects and shows exception data during the execution of the Job in or the task in . |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tJobReject | Receives data rejected after task processing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Talend Data Preparation | ||||||||
![]() |
tDataprepRun | Applies a preparation made using Talend Data Preparation in a standard Data Integration Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDatasetInput | Creates a flow with data from a dataset. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDatasetOutput | Creates a dataset in Talend Data Preparation. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Talend Data Stewardship | ||||||||
![]() |
tDataStewardshipTaskDelete | Connects to Talend Data Stewardship and deletes the data stored in campaigns in the form of tasks. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDataStewardshipTaskInput | Connects to Talend Data Stewardship and retrieves the data stored in campaigns in the form of tasks. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDataStewardshipTaskOutput | Connects to Talend Data Stewardship and loads data into campaigns in the form of tasks. The tasks must have the same schema defined in the campaign. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Talend MDM | ||||||||
![]() |
tMDMBulkLoad | Uses bulk mode to write XML structured master data into the MDM server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMClose | Terminates an open MDM server connection after the execution of the preceding subJob. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMCommit | Commits all changes to the database made within the scope of a transaction in MDM. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMConnection | Opens an MDM server connection for convenient reuse in the current Job or transaction. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMDelete | Deletes master data records from specific entities in the MDM Hub. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMInput | Reads data in an MDM Hub and thus makes it possible to process this data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMOutput | Writes data into or removes data from the MDM server. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMReceive | Decodes a context parameter holding MDM XML data and transforms it into a flat schema. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMRestInput | Reads data through the REST API from the MDM Hub for further processing. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMRollback | Rolls back any changes made in the database rather than definitively committing them, for example to prevent partial commits if an error occurs. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMRouteRecord | Helps the Event Manager identify the changes you have made to your data so that correlative actions can be triggered. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMSP | Offers a convenient way to centralize multiple or complex queries in an MDM Hub and to call stored procedures easily. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMTriggerInput | Reads the XML message (Document type) sent by MDM and passes the information to the component that follows. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMTriggerOutput | Receives an XML flow (Document type) from the preceding component in the Job. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tMDMViewSearch | Retrieves the MDM records from an MDM hub by applying filtering criteria you have created in a specific view and outputs the results in an XML structure. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Technical | ||||||||
![]() |
tBoundedStreamInput | Provides a data stream for the component to be tested and is suitable for use in a test case only. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tCollectAndCheck | Shows and validates the result of a component test. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHashInput | Reads from the cache memory data loaded by tHashOutput to offer high-speed data feed, facilitating transactions involving a large amount of data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tHashOutput | Loads data to the cache memory to offer high-speed access, facilitating transactions involving a large amount of data. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Testing | ||||||||
![]() |
cDataset | Creates a new dataset or references an existing dataset to send or receive messages. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cMock | Simulates message generation and message endpoints for testing Routes and mediation rules. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Transformation | ||||||||
![]() |
cContentEnricher | Uses a consumer or producer to obtain additional data, respectively intended for event messaging and request/reply messaging. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cConvertBodyTo | Converts the message body to the given class type. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cFlatPack | Processes fixed-width or delimited files or messages using the FlatPack library. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
cMap | Executes transformations (called maps) between different sources and destinations by harnessing the capabilities of Talend Data Mapper, available in the Mapping perspective. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Unstructured | ||||||||
![]() |
tEDIFACTtoXML | Transforms an EDIFACT message file into the XML format for better readability to users and compatibility with processing tools. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tExtractEDIField | Reads the EDI structured data from an EDIFACT message file, generates an XML according to the EDIFACT family and the EDIFACT type, extracts data by parsing the generated XML using the XPath queries manually defined or coming from the Repository wizard, and finally sends the data to the next component via a Row connection. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
Webservice | ||||||||
![]() |
tRestWebServiceLookupInput | Retrieves messages from a Representational State Transfer (REST) Web service provider and gets responses accordingly. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tRestWebServiceOutput | Serves as a Representational State Transfer (REST) Web service client that continuously sends HTTP requests to a REST Web service provider in real time and gets the responses. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
XML | ||||||||
![]() |
tAdvancedFileOutputXML | Writes an XML file with separated data values according to an XML tree structure. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tDTDValidator | Helps control the data and structure quality of the file to be processed. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tEDIFACTtoXML | Transforms an EDIFACT message file into the XML format for better readability to users and compatibility with processing tools. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tWriteXMLField | Reads an input XML file and extracts the structure to insert it in defined fields of the output XML file. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tXMLMap | Transforms and routes data from single or multiple sources to single or multiple destinations. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tXSDValidator | Helps control the data and structure quality of the file or flow to be processed. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |
tXSLT | Helps transform one data structure into another structure. |
![]() |
![]() |
![]() |
![]() |
![]() |
![]() |