
Sometimes, for whatever reason, a Talend job does not connect to a backend service layer - database, web service, FTP, Salesforce, Dropbox - on the first try. In those cases, the job can be modified to retry the connection before failing. This post shows a simple design for handling intermittent connection failures.
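As a rough sketch of the pattern outside of Talend's generated code - the retry count, wait time, and JDBC details below are illustrative assumptions - a retry loop in Java might look like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class RetryConnect {
        // Hypothetical settings: 3 attempts, 5-second pause between tries
        static final int MAX_ATTEMPTS = 3;
        static final long WAIT_MS = 5000;

        public static Connection connectWithRetry(String url, String user, String pass)
                throws SQLException, InterruptedException {
            SQLException last = null;
            for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                try {
                    return DriverManager.getConnection(url, user, pass);
                } catch (SQLException e) {
                    last = e; // remember the failure and retry after a pause
                    System.err.println("Attempt " + attempt + " failed: " + e.getMessage());
                    if (attempt < MAX_ATTEMPTS) Thread.sleep(WAIT_MS);
                }
            }
            throw last; // all attempts failed; let the job fail now
        }
    }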

Every once in a while, one runs into a situation where one cannot connect to a database from a tool or application. When this happens, the best way to isolate the issue is to try connecting to the same database with a quick Java application using standard JDBC.
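A minimal test harness might look like the following (the PostgreSQL URL and credentials are placeholders; substitute your own driver JAR and connection details):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class JdbcTest {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials; substitute your database's values
            String url = "jdbc:postgresql://dbhost:5432/mydb";
            try (Connection con = DriverManager.getConnection(url, "user", "password")) {
                System.out.println("Connected: " + con.getMetaData().getDatabaseProductName());
            }
        }
    }

If this bare-bones program connects but the tool does not, the problem is in the tool's configuration rather than the database or the network.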

Starting with Talend 5.6.2, it is possible to create metadata connections for NoSQL databases and Hadoop platforms using the metadata feature in the Studio. Even better, the Studio now allows automatic discovery of these properties from the Hadoop site-*.xml configuration files.

Out of the box, Talend uses the open source jTDS driver to connect to MS SQL Server databases. This driver, however, does not support connecting to an AlwaysOn-enabled database. A generic JDBC driver has to be used as a workaround.
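As an illustration of the workaround, Microsoft's own JDBC driver (loaded through Talend's generic JDBC components) can target an availability group listener; the multiSubnetFailover option is the relevant setting, and the host, database, and credentials below are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class AlwaysOnTest {
        public static void main(String[] args) throws Exception {
            // Point at the availability group listener, not an individual replica;
            // multiSubnetFailover speeds up failover across subnets
            String url = "jdbc:sqlserver://aglistener:1433"
                    + ";databaseName=mydb;multiSubnetFailover=true";
            try (Connection con = DriverManager.getConnection(url, "user", "password")) {
                System.out.println("Connected to " + con.getCatalog());
            }
        }
    }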

Starting with Talend 5.6.1, Talend released a patch updating the Hive components to connect to Hadoop clusters configured for HA (High Availability). In HA mode, instead of configuring a fixed Hive host and port in the component, we specify the ZooKeeper quorum, which in turn 'discovers' the active Hive host.
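A HiveServer2 JDBC URL using ZooKeeper service discovery typically takes the following shape; the quorum hosts and credentials are placeholders, and the namespace must match the cluster's hive.server2.zookeeper.namespace setting:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class HiveHaTest {
        public static void main(String[] args) throws Exception {
            // The ZooKeeper quorum replaces a fixed Hive host:port; the driver
            // asks ZooKeeper which HiveServer2 instance is currently active
            String url = "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/default"
                    + ";serviceDiscoveryMode=zooKeeper"
                    + ";zooKeeperNamespace=hiveserver2";
            try (Connection con = DriverManager.getConnection(url, "hive", "")) {
                System.out.println("Connected to the active HiveServer2");
            }
        }
    }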

The 'Not implemented by the DistributedFileSystem FileSystem implementation' error occasionally rears its head when debugging Talend Big Data jobs. This cryptic message actually means that your job has JARs from different versions of Hadoop on its classpath.
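One quick diagnostic - a sketch, not Talend-specific - is to ask a Hadoop class which JAR it was actually loaded from; if the location is not the Hadoop version you expect, the classpath is mixed:

    import org.apache.hadoop.fs.FileSystem;

    public class WhichJar {
        public static void main(String[] args) {
            // Prints the JAR that supplied the FileSystem class at runtime
            System.out.println(
                FileSystem.class.getProtectionDomain().getCodeSource().getLocation());
        }
    }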

Data warehousing and ETL processes usually repeat common patterns across different data domains (databases, tables, subject areas, etc.). One such pattern is copying data from a transactional system to Hadoop or another data platform (Teradata, Oracle) to create 'images' of those systems for downstream processing. Because these processes are repeated many times over in the design and construction of data warehouses, it is best to create repeatable patterns that reduce future technical debt in support, maintenance, and update costs.

Wednesday, 30 September 2015 19:51

Introducing Talend 6.0

Talend 6 was released in September 2015, and with it come a number of new and important features and updates, including product name changes.

Talend's Hive components have a number of somewhat confusing options that can be tricky to understand when making connections to a Hadoop cluster: selecting between HiveServer1 and HiveServer2, Embedded vs. Standalone modes, and which ports to connect to. We explore these options in this post, pulling in information from the Hive wiki, Talend Support, and other sources.
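For orientation, the two servers use different JDBC driver classes and URL schemes; a brief summary (hivehost and the usual default port 10000 are placeholders):

    public class HiveUrls {
        // HiveServer1 (legacy) - Talend's "HiveServer1" option
        static final String HS1_DRIVER = "org.apache.hadoop.hive.jdbc.HiveDriver";
        static final String HS1_URL    = "jdbc:hive://hivehost:10000/default";

        // HiveServer2 - Talend's "HiveServer2" option in Standalone mode
        static final String HS2_DRIVER = "org.apache.hive.jdbc.HiveDriver";
        static final String HS2_URL    = "jdbc:hive2://hivehost:10000/default";

        // Embedded mode runs Hive in-process, so no host or port is specified
        static final String EMBEDDED_URL = "jdbc:hive2://";
    }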

In a previous post, we presented the steps for downloading and configuring DBVisualizer to connect to Hive. That connection was made using a Hive host name in a Hadoop cluster with a single NameNode. In this post, we look at connecting to Hive in a Hadoop cluster configured for HA (High Availability), meaning it has multiple Hive hosts (as well as NameNodes, Resource Managers, and so on), where one Hive host is active while the others are passive.

Thursday, 04 June 2015 15:36

Talend Studio: Use (Java) JRE or JDK?

Talend Studio (essentially a customized Eclipse IDE) requires that Java be installed on the client in order for the Studio to function - run jobs and so on. Most often when installing Talend, the decision of how to install Java is a no-brainer: click through on java.com and you're done. But that doesn't always work, depending on what you're doing in your Talend job.
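The distinction that usually matters is that a JDK bundles a Java compiler while a JRE does not, and Talend generates and compiles Java code. One way to check which you are running is to probe for the system compiler:

    import javax.tools.ToolProvider;

    public class JdkCheck {
        public static void main(String[] args) {
            // Returns null on a bare JRE; non-null means a JDK compiler is available
            boolean isJdk = ToolProvider.getSystemJavaCompiler() != null;
            System.out.println(isJdk ? "Running on a JDK" : "JRE only - no compiler");
        }
    }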

Friday, 29 May 2015 21:10

Talend 5.x and Cassandra CQL 2 & 3

For quite some time, Talend has included a family of components for the Apache Cassandra NoSQL database. However, recent versions of Talend 5.x do not yet generate code that connects to Cassandra using the newer Cassandra Query Language (CQL) version 3. CQL v3 is not backward compatible with CQL v2 and differs from it in numerous ways.
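For comparison, a minimal sketch of a CQL 3 connection using the DataStax Java driver (the contact point is a placeholder) would be:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;

    public class Cql3Test {
        public static void main(String[] args) {
            // Connect over the CQL 3 native protocol and run a trivial query
            try (Cluster cluster = Cluster.builder()
                     .addContactPoint("cassandra-host").build();
                 Session session = cluster.connect()) {
                ResultSet rs = session.execute(
                    "SELECT release_version FROM system.local");
                Row row = rs.one();
                System.out.println("Cassandra version: "
                    + row.getString("release_version"));
            }
        }
    }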

This is part 2 in a series about downloading and processing LivePerson chat data. LivePerson is a leading business chat, online messaging, marketing, and analytics platform that's integrated into many online sales channels and websites. In part 1 of the series, we looked at how to test connectivity to the LivePerson API. In this part, we move on to developing the solution using Talend Data Integration Studio.

LivePerson is a leading business chat, online messaging, marketing, and analytics platform that's integrated into many online sales channels and websites. It enables companies to proactively engage online visitors who, based on their site navigation patterns, are most likely to convert into customers. LivePerson captures rich information about site visits (IP addresses, navigation paths, location information...) including detailed chat transcripts...

Thursday, 19 February 2015 15:54

Hive Primers for SQL Users

Hive continues to gain prominence within the Hadoop ecosystem, despite the introduction of new tools in the ever-expanding Hadoop universe in the form of new Apache projects and incubators. Hive is a data warehouse platform built on Apache Hadoop that uses a SQL-like language (called HQL, or Hive Query Language)...

Accessing secured web services from a Talend job requires that the JVM trust the service's certificate via a trust store file (.jks). Failing to do so results in a javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed. The solution is to configure the Talend job to reference a .jks trust store when accessing the service.
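Underneath, this comes down to the standard javax.net.ssl system properties; a minimal sketch (the path and password are placeholders):

    public class TrustStoreSetup {
        public static void main(String[] args) {
            // Point the JVM at a trust store holding the service's CA chain
            System.setProperty("javax.net.ssl.trustStore", "/path/to/truststore.jks");
            System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
            // ... the web service call then happens in the same JVM ...
        }
    }

In a Talend job the same properties can be supplied as JVM arguments (-Djavax.net.ssl.trustStore=...) in the job's run configuration.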

As pointed out in the previous post on this issue, there are ways of dealing with invalid line breaks in text columns. The solution presented here involves tweaking the structure of the data being landed in the file, then applying some simple logic to remove the invalid breaks, as sketched below.
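A rough sketch of that logic, assuming for illustration a pipe-delimited landing file with a known column count: keep appending physical lines until a row holds the full set of delimiters, replacing embedded breaks with a space.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class FixLineBreaks {
        // Assumed layout: pipe-delimited file with 5 columns -> 4 delimiters
        static final int EXPECTED_PIPES = 4;

        public static void main(String[] args) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader("landed.txt"))) {
                StringBuilder row = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    row.append(line);
                    if (count(row, '|') >= EXPECTED_PIPES) {
                        System.out.println(row); // row is complete; emit it
                        row.setLength(0);
                    } else {
                        row.append(' '); // embedded break: glue to the next line
                    }
                }
                if (row.length() > 0) System.out.println(row);
            }
        }

        static int count(CharSequence s, char c) {
            int n = 0;
            for (int i = 0; i < s.length(); i++) if (s.charAt(i) == c) n++;
            return n;
        }
    }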

The most common line break character, or row separator, across most O/S platforms is "\n". Data (especially in files) is written with a line break character at the end of each line to tell the application parsing or reading it that the line is complete. But there are situations where the data within a row itself contains a line break character, resulting in incorrect parsing and presentation of the data.

Analytic functions compute an aggregate value based on a group of rows. Two common examples are the LEAD and LAG functions, which let you access the NEXT and PREVIOUS row values in a dataset (essentially, finding the value in a row a specified number of rows away from the current row).
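To make the offset idea concrete, here is a tiny plain-Java illustration of LAG/LEAD semantics over an ordered list (the SQL functions do the equivalent within each ordered partition):

    import java.util.List;

    public class LeadLagDemo {
        // LAG: the value 'offset' rows before index i, or null at the edge
        static Integer lag(List<Integer> values, int i, int offset) {
            return (i - offset >= 0) ? values.get(i - offset) : null;
        }

        // LEAD: the value 'offset' rows after index i, or null at the edge
        static Integer lead(List<Integer> values, int i, int offset) {
            return (i + offset < values.size()) ? values.get(i + offset) : null;
        }

        public static void main(String[] args) {
            List<Integer> sales = List.of(10, 20, 30, 40);
            System.out.println(lag(sales, 2, 1));  // 20 (previous row)
            System.out.println(lead(sales, 2, 1)); // 40 (next row)
        }
    }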

While it's perfectly fine to interact with Hive databases using the command line (the Hive shell), it's easier to display and visualize a large number of columns using a GUI. Of the many GUI options that exist, one tool that does the job pretty well is DBVisualizer. It's great not only for Hive but for most popular RDBMSs as well. And it doesn't hurt that DBVisualizer is free!
