Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded

Different causes for OOM

Everyone in Java development faces java.lang.OutOfMemoryError now and then. OutOfMemoryError in Java is a problem that may be due to:

  • Improper memory management by the Java code
  • Insufficient hardware memory available to process the code
  • Programming mistakes
  • Memory leak

Types of OOM

Any of the above reasons can result in an OutOfMemoryError. The following three types of OOM errors are common:

  1. java.lang.OutOfMemoryError: Java heap space
  2. java.lang.OutOfMemoryError: PermGen space
  3. java.lang.OutOfMemoryError: GC overhead limit exceeded
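For illustration, here is a minimal standalone snippet (not from the original post) that triggers the first error when run with a small heap, e.g. java -Xmx64m HeapFiller:

import java.util.ArrayList;
import java.util.List;

public class HeapFiller {
    public static void main(String[] args) {
        List<byte[]> hog = new ArrayList<byte[]>();
        while (true) {
            hog.add(new byte[1024 * 1024]); // keep 1 MB blocks reachable until the heap is exhausted
        }
    }
}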

OOM in Talend

I observed the last of the above errors during a flat-file data migration to a database using Talend Open Studio. The files ranged from 10 MB to 500 MB in size. Initially the job worked well, but once it started on larger files of around 100 MB, this error popped up. The available hardware memory (RAM) was 16 GB, but the default memory allocated by Talend was -Xmx1024m (1 GB). Increasing -Xmx to 10240m might have solved the issue, but this "GC overhead limit exceeded" error is related to garbage collection itself. After researching the issue, I came across a very descriptive article on garbage collection at http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#available_collectors

Resolution

The following workaround solved the problem in Talend without raising the memory limit to higher figures.

Add a new option under Window -> Preferences -> Talend -> Run/Debug: -XX:-UseGCOverheadLimit
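With this option in place, the JVM command line that Talend generates for the job carries the flag next to the heap setting; conceptually (the job class name below is hypothetical):

java -Xmx1024m -XX:-UseGCOverheadLimit -cp <job_classpath> myproject.flatfile_load_0_1.flatfile_load

Note that -XX:-UseGCOverheadLimit only disables the check that turns excessive GC time into an OutOfMemoryError; if a job is genuinely short of memory, raising -Xmx remains the more direct fix.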

ETL JOB FOR DOWNLOADING AND UNTARRING A TAR FILE FROM FTP

etlpic1

 


1. To download a file from FTP, we first have to create a connection by providing the FTP server credentials (host URL, username, password, port number, and the connection type, i.e. FTP or SFTP). Component name: tFTPConnection
2. The next task is to provide the FTP file path (mention the FTP location of the files). Component name: tFTPFileList
3. Once done, we mention where we want to put the file (the local system path where the FTP files should be saved). Component name: tFTPGet. A plain-Java sketch of this download step follows below.
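For reference, here is a minimal plain-Java sketch of what this download step does, written in tJava snippet style using Apache Commons Net (the host, credentials and paths are placeholders, not values from the original job):

import java.io.FileOutputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

FTPClient ftp = new FTPClient();
ftp.connect("ftp.example.com", 21);        // host and port, as configured in tFTPConnection
ftp.login("username", "password");         // credentials, as configured in tFTPConnection
ftp.enterLocalPassiveMode();
ftp.setFileType(FTP.BINARY_FILE_TYPE);     // tar archives must be transferred in binary mode
FileOutputStream out = new FileOutputStream("/local/path/data.tar.gz");
ftp.retrieveFile("/remote/path/data.tar.gz", out);  // remote path, as listed by tFTPFileList
out.close();
ftp.logout();
ftp.disconnect();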

 filedownload

ETL JOB TO PROCESS TAR FILES AND ITERATE THEM ONE BY ONE

1. To untar a tar file there is a tFileArchive component, but instead of that I am using a GzipCompressor stream via Java code in a tJava component.
2. Here we just need to drag and drop a tJava component and, in it, provide the location of the tar file and the path where you want to untar it.

// Imports to declare in the tJava component's Advanced settings
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import org.apache.commons.compress.archivers.tar.TarArchiveEntry;
import org.apache.commons.compress.archivers.tar.TarArchiveInputStream;
import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;

File dest = new File(dirName);

// Open the gzip-compressed tar file as a stream of tar entries
TarArchiveInputStream tarIn = new TarArchiveInputStream(
        new GzipCompressorInputStream(
                new BufferedInputStream(new FileInputStream(TarName))));

TarArchiveEntry tarEntry = tarIn.getNextTarEntry();

while (tarEntry != null) {
    // create a file with the same name as the tarEntry
    File destPath = new File(dest, tarEntry.getName());
    System.out.println("working: " + destPath.getCanonicalPath() + " --- Tar Entry");

    context.csvloc = "" + destPath.getParentFile();
    System.out.println("\nCSV FILE Location ::::" + context.csvloc + "\n");

    if (!(destPath.getParentFile().exists())) {
        System.out.println("Dest: " + dest);
        destPath.getParentFile().mkdirs();
    }

    if (tarEntry.isDirectory()) {
        System.out.println("Creating directory: " + tarEntry.getName());
        destPath.mkdirs();
    } else {
        destPath.createNewFile();
        byte[] btoRead = new byte[2048];
        BufferedOutputStream bout = new BufferedOutputStream(new FileOutputStream(destPath));
        int len; // number of bytes read in each pass
        while ((len = tarIn.read(btoRead)) != -1) {
            bout.write(btoRead, 0, len);
        }
        bout.close();
        btoRead = null;
    }

    tarEntry = tarIn.getNextTarEntry();
} // end of while loop

tarIn.close();

(This code can locate a tar file in the given folder as well as untar it into the specified folder path.)

Here "dirName" denotes the location where the tar file is present and "TarName" denotes the name of the tar file.
3. For iteration, connect the tFTPGet component to this tJava component with an Iterate link. This way the tJava component gets one tar file at a time and processes it, as sketched below.
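Inside the tJava component, the file delivered on each iteration is read from globalMap. A minimal sketch, assuming the iterate link comes from a tFTPFileList instance named with the default suffix _1 (the exact key name should be verified in the component's Outline view):

// Hypothetical: read the file path for the current iteration from globalMap
String TarName = (String) globalMap.get("tFTPFileList_1_CURRENT_FILEPATH");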

So, finally, the flow is similar to the picture below.

etlpic

D3 DayHour Charts integration with Jaspersoft

This blog talks about D3 DayHour Charts integration with Jaspersoft

All the reports are developed using iReport 5.5 Professional and JasperReports Server 5.5.

As the HTML component of JasperReports Server does not load any scripts, we loaded the script in one of the decorator pages (a JSP page). The page is located at:

C:\Jaspersoft\jasperreports-server-5.5\apache-tomcat\webapps\jasperserver-pro\WEB-INF\decorators\decorator.jsp

In that page we included the scripts which we want to load, adding the following code to the JSP page at line 46:

<script type="text/javascript" language="JavaScript" src="${pageContext.request.contextPath}/scripts/d3.v3.min.js"></script>

The script to be added should be kept at location:

C:\Jaspersoft\jasperreports-server-5.5\apache-tomcat\webapps\jasperserver-pro\scripts

Meaning of DayHour Chart:

This chart represents the functioning of a particular thing during different hours of different days. These graphs can be used to view variations across different situations.

DayHour Jaspersoft Integration

Integration with JasperServer:

The data we use for developing the calendar view can be fetched from any database. The data fetched from the database is stored in a variable, which is then accessed in the HTML component. This process makes the report dynamic instead of static. Parameters can also be added to the report and used in the query and/or the HTML component.

Generally, for these types of charts, we pass a variable containing the required data: a date, an hour, and a value associated with that particular date and hour. The string is built in JSON format so that, when accessed in a script tag, it can easily be converted to a JSON object using the eval function.
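For illustration, the variable might hold a string like the following (the day/hour/value field names are illustrative; they just need to match what your D3 script expects):

[{"day": 1, "hour": 1, "value": 16}, {"day": 1, "hour": 2, "value": 20}, {"day": 2, "hour": 1, "value": 5}]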

Any variable/parameter can be accessed as shown below:

"<script> var arr ="+$V{variable1}+" </script>"

Parameter in query:

Select * from table_name

where date between $P{parameter1}  and $P{parameter2}

 

The sample code of static chart can be found at:

http://bl.ocks.org/tjdecke/5558084

The steps for integrating it with JasperReports Server were discussed in my previous blog (D3 Integrating with Jasperserver).

 

 

Avi Dawra

Helical IT Solutions

Make Batch count for Incremental loading in Talend (TOS_5.3)

This blog will talk about how to make Batch count for Incremental loading in Talend (TOS_5.3).

First of all, we have the t_source and t_target tables.
Both tables (t_source, t_target) have data like this:

before_execute_job

Objective

INSERT into t_source

We inserted one record into t_source:
insert into t_source(id,name,city,s_date) values (111,'vic','del','2014-03-01 01:02:00');

UPDATE t_source

We updated one record in t_source:
update t_source set name='don', s_date='2014-02-01 01:02:00' where id = 109;

DELETE from t_source

We deleted one record from t_source:
delete from t_source where id = 108;

Finally, we have the following records in the t_source and t_target tables:

update_Tsource

We want to make a batch count in Talend (TOS).

We created one job…

test

Details of Job

Subjob (1)

We fetch max(id) from the t_target table and store it in a context variable:

context.last_id = input_row.id;

Subjob (2)

We fetch min(id) and max(id) from t_source and store them in context variables:

context.sr_max_id = input_row.max_id;

context.sr_min_id = input_row.min_id;

Subjob (3)

We select from t_source:

select * from t_source where id > "+context.last_id+" order by id

and insert into the t_target table (the primary key is id).

Subjob(4)

We count the rows between the min and max primary keys of t_source:

select count(*) as batch_count from t_source where id between "+context.sr_min_id+" and "+context.sr_max_id+"

and store the result in a context variable in order to calculate the batch count.

The batch size is defined by context.MT_COUNT (5 here); context.max_count and context.min_count are 0 before the job executes.

context.count = input_row.count;

System.out.println("Count of primary key from source " + context.UPLOAD_FILE_NAME + " Table : " + context.count);

// Number of batches: row count divided by the batch size, plus a safety margin
int x = (context.count / context.MT_COUNT) + 3;
context.batch_count = x;

System.out.println("Batch Count : " + context.batch_count);

// Initialise the first batch window
context.max_count = context.MT_COUNT;
context.min_count = context.sr_min_id;
context.max_count = context.sr_min_id + context.max_count;

SubJob (5)

We iterate context.batch_count times; each iteration runs another job (test123).

1. Test123 Job

test123

a. SubJob (5.1)

We print the batch window, min_count to max_count:

System.out.println("Batch " + Numeric.sequence("s1",1,1) + ": Count of " + context.min_count + " to " + context.max_count);

b. SubJob (5.2)

We select rows from t_source between the window's primary keys:

select * from t_source where id >= "+context.min_count+" and id <= "+context.max_count+" order by id

and collect the data into a buffer output.

c. SubJob (5.3)

We compare the buffer input (t_source) with t_target using an inner join in tMap; any rows in the join's reject output are updated into t_target.

t_target SQL query: select * from t_target where id >= "+context.min_count+" and id <= "+context.max_count+" order by id

d. SubJob (5.4)

We compare t_target with the buffer input (t_source) using a left outer join in tMap, filter on t_source.id == 0 (i.e. no match in the source), and delete any resulting rows from t_target.

t_target SQL query: select * from t_target where id >= "+context.min_count+" and id <= "+context.max_count+" order by id

Then a tJavaRow advances the window (min and max of the primary key):

context.min_count = input_row.id;
context.max_count = context.min_count + context.MT_COUNT;
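Taken together, subjobs 4 and 5 implement a simple sliding window over the primary key. A plain-Java sketch of that logic, with illustrative values standing in for the context variables (a simplification, not the job's exact code):

long count = 23, MT_COUNT = 5, srMinId = 100;   // illustrative values
long batchCount = (count / MT_COUNT) + 3;        // as computed in subjob 4
long min = srMinId;                              // context.min_count
long max = min + MT_COUNT;                       // context.max_count
for (long b = 1; b <= batchCount; b++) {
    System.out.println("Batch " + b + ": ids " + min + " to " + max);
    // ... select, compare and sync rows where id >= min and id <= max ...
    min = max;                                   // advance the window (done by the tJavaRow above)
    max = min + MT_COUNT;
}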

Results

We executed the job with the context variable MT_COUNT defined as 5.

extecion

 

Finally, we have the following records in t_source and t_target:

afterexecited

Thanks & regards

Vishwanth suraparaju

Senior ETL Developer

Using Pentaho Schema Workbench (PSW)

Schema:
A schema defines a multi-dimensional database. It contains a logical model, consisting of cubes, hierarchies, and members, and a mapping of this model onto a physical model.
The logical model consists of the constructs used to write queries in MDX language: cubes, dimensions, hierarchies, levels, and members.
The physical model is the source of the data which is presented through the logical model. It is typically a star schema, which is a set of tables in a relational database; later, we shall see examples of other kinds of mappings.

Schema files :
Mondrian schemas are represented in an XML file. An example schema, containing almost all of the constructs we discuss here, is supplied as demo/FoodMart.xml in the Mondrian distribution. The dataset to populate this schema is also in the distribution.

The structure of the XML document is as follows:

<Schema>
    <Cube>
        <Table>
            <AggName>
                <!-- aggregate table elements -->
            </AggName>
        </Table>
        <Dimension>
            <Hierarchy>
            </Hierarchy>
        </Dimension>
    </Cube>
</Schema>

You can see an example:
Step-1: Open Schema Workbench.
Step-2: Go to File -> New -> Schema.
Step-3: Click on the schema and set its name, e.g. foodmart.
Step-4: Right-click on the schema and select Add cube.
Step-5: Click on the new cube and set its name, e.g. sales, or whatever you want.
Step-6: Right-click on the cube, select Add table, and set the schema (e.g. public) and the name of the database table (e.g. sales_fact_1997).
Step-7: Right-click on the cube again and select Add dimension:
<Dimension type="StandardDimension" visible="true" foreignKey="store_id" name="Dimension Test">
Double-click on the dimension to see its hierarchy and set it: <Hierarchy name="Hierarchy Test" visible="true" hasAll="true">
Step-8: Inside the hierarchy, right-click on it, select Add table, and provide the table and schema names, e.g.: <Table name="store" schema="public">
Step-9: The hierarchy needs a level, so right-click on the hierarchy and select Add level:
<Level name="Level Test" visible="true" table="store" column="store_country" nameColumn="store_country" uniqueMembers="false">
Step-10: Right-click on the cube, select Add measure, and set it: <Measure name="Measure Creation" column="customer_id" datatype="Integer" aggregator="count" visible="true">
Step-11: According to your requirements you can create lots of cubes; the procedure is the same.
Step-12: Now publish the schema you created to the server.
Step-13: Click Options -> Connection and set:
connection name = foodmart
connection type: whichever you want, e.g. PostgreSQL
Host Name: 192.168.2.9
Database Name: foodmart
Port Number: 5432
Username: postgres
Password: postgres
Access: Native (JDBC)
Click the Test button; if no problem comes up, the connection is done, then click OK.
Step-14: To publish to a local server, type localhost:7080/pentaho in the browser and use the username and password that were provided.
Step-15: In Schema Workbench go to File -> Publish and set:
server URL: http://localhost:7080/pentaho/
user: Admin
password: password
Pentaho or JNDI data source: foodmart, then click Publish.
A "connection successful" message will be shown on screen.
Step-16: Now type localhost:7080/pentaho in the browser, click File -> New -> JPivot View, then select the schema and cube you created. Click through the icons to see all the output types.
Step-17: This is the process to create a schema and deploy it on a local server.
The following schema (Schema1.xml) will execute properly:
<Schema name="foodmart">
  <Dimension type="StandardDimension" visible="true" highCardinality="false" name="Store">
    <Hierarchy visible="true" hasAll="true" primaryKey="store_id">
      <Table name="store" schema="public">
      </Table>
      <Level name="Store Country" visible="true" column="store_country" type="String" uniqueMembers="false" levelType="Regular" hideMemberIf="Never">
      </Level>
      <Level name="Store State" visible="true" column="store_state" type="String" uniqueMembers="false" levelType="Regular" hideMemberIf="Never">
      </Level>
      <Level name="Store City" visible="true" column="store_city" type="String" uniqueMembers="false" levelType="Regular" hideMemberIf="Never">
      </Level>
      <Level name="Store Name" visible="true" column="store_name" type="String" uniqueMembers="false" levelType="Regular" hideMemberIf="Never">
        <Property name="Store Type" column="store_type" type="String">
        </Property>
        <Property name="Store Manager" column="store_manager" type="String">
        </Property>
        <Property name="Store Sqft" column="store_sqft" type="Numeric">
        </Property>
      </Level>
    </Hierarchy>
  </Dimension>
  <Cube name="Sales" visible="true" cache="true" enabled="true">
    <Table name="sales_fact_1997" schema="public">
    </Table>
    <DimensionUsage source="Store" name="Store" visible="true" foreignKey="store_id" highCardinality="false">
    </DimensionUsage>
    <Measure name="Unit Sales" column="unit_sales" formatString="Standard" aggregator="sum" visible="true">
    </Measure>
  </Cube>
  <Cube name="Warehouse" visible="true" cache="true" enabled="true">
    <Table name="inventory_fact_1997" schema="public">
    </Table>
    <DimensionUsage source="Store" name="Store" visible="true" foreignKey="store_id" highCardinality="false">
    </DimensionUsage>
    <Measure name="Store Invoice" column="store_invoice" aggregator="sum" visible="true">
    </Measure>
  </Cube>
</Schema>
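Once the schema is published, a minimal MDX query against the Sales cube above (for example in the JPivot view) might look like:

SELECT {[Measures].[Unit Sales]} ON COLUMNS,
       [Store].[Store Country].Members ON ROWS
FROM [Sales]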

For any questions on Pentaho Schema Workbench, please get in touch with us @ Helical IT Solutions

Best Practices when designing & using iReport /Jaspersoft

This blog talks about the best practices which should be followed when creating reports using iReport or Jasper studio, deploying the same on Jaspersoft server, nomenclature to be used etc.

 

1) Report Margins:

When you develop reports for dashboards, it is advisable to keep all the margins at 0 pixels.

By default margins will be
Left margin         20
Right margin       20
Top margin         20
Bottom margin    20

Change the values to 0

Left margin         0
Right margin       0
Top margin         0
Bottom margin  0

Why?
Because, when the margins are set to 0, report panels fit well when designing dashboards.
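In the JRXML this corresponds to the margin attributes on the root jasperReport element, for example (other attributes omitted):

<jasperReport ... leftMargin="0" rightMargin="0" topMargin="0" bottomMargin="0" ...>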

 

2) Bands to keep the components

Do not keep table or crosstab components in the Detail band. Keep all such components either in the Title band or in the Summary band, as required. It is advisable to create custom bands for the different charts if you need to develop a report with multiple charts.

Why it is not recommended to keep the components in Detail band?

The Detail band loops until the end of the rows/data for the fields; hence, if you keep any other component there, it will run inside that loop and give you unexpected iReport behaviour with bad output.

3) Parameter Naming conventions

It is advisable to use good naming conventions for parameters. For example, a parameter name could be param_paramName or p_paramName.

Eg : 1)  p_startDate 2) p_endDate

Other Naming conventions
You can apply the same convention when you create input controls, data source names, custom band names and data set names in iReport and the Jasper repository respectively.

Why ?

It makes it easy to differentiate the variables, parameters, group names, etc.

4) Remove the bands which you are not going to use in iReport

5) Variables and Parameter usage in iReport

Make use of internal parameters for the report; for the summation of columns it is recommended to use variables.

6) Jasper Project Folder Structure

Project Name
    Archive (keep a backup of JRXMLs here, with a version number, if you are going to update/modify them)
    Resources
        Input Controls (all your parameter names for the project/various reports)
        Data Sources (useful when you have multiple databases in your project)
        Files (keep all your data source files here, e.g. Excel, CSV, XML, etc.)
    JRXMLs (whatever JRXMLs you are creating, keep them in this folder)
    Sub Reports (keep all your subreports in this folder and refer to them from here wherever needed)
    Images (keep all your images in this folder, for easy understanding)
    Reports (keep all your reports in this folder)
    Dashboards (save all your dashboards here)
    Temp (for temporary files)
    Test (experiment during report development in this folder)

Note that if your project has a lot of reports for different sections/departments, it is advisable to subdivide the Reports folder into further folders.
For example:
Reports
    A.Department-1
         1.Report Name
         2.Report Name
    B.Department-2
        1.Report Name
         2.Report Name

NOTE:

When you upload a JRXML to the server it is recommended to write a description of the report. By reading it, everyone can easily understand the purpose of the report/visualization.

7) Export / Import Utility

Command line utility to import/export/update folders/reports from the jasper server is given below.

Importing
js-import --input-zip <filename.zip>
Ex: js-import --input-zip "E:\Work Space\Unified\Unified Reports\<file name>"

Updating
js-import --input-zip "E:\Work Space\Unified\Unified Reports\<file name>" --update

Exporting
js-export <location of the folder in jasper server> --output-zip <location of exporting folder>/<exporting_filename.zip>
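For example, on Windows (the repository URI, paths and file names below are placeholders; newer server versions spell the repository location with an explicit --uris flag, so check the administration guide for your version):

js-export.bat --uris /reports/MyProject --output-zip C:\exports\myproject.zip
js-import.bat --input-zip C:\exports\myproject.zip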

8) Bands

Title band:

• Every report must have a name; give the name of the report in this band.

• A blue background with a white font is preferable for titles.

• Company logos are recommended to be placed on the left side of the title band, under the title of the report.

Page Header:

• The page header carries page numbers and date-type information. It is recommended to give page header information for long reports with heavy text.

Column Header:

• This band is used for giving column headers to the fields. You can change the font style and size, and set borders, background colours, etc.

Detail band:

• The Detail band is used to display the output of the report using the fields fetched by the query.

• You need to drag and drop the required fields into the Detail band and format them accordingly.

• The Detail band runs in a loop, so we should keep only fields in this band rather than any other component like a table, crosstab or chart.

Column Footer:

• This band is used to find the total, max or min of the columns from the Detail band.

• You need to create a variable for this and drag that variable under the column where you want to see the sum, max or min.

Page Footer:

• The page footer is used to place page numbers, confidential-type text for the company, etc.

Summary:

• The summary of the report is placed in the Summary band.

• Generally we keep the chart, table and crosstab components here to summarize the report.

9) Why should we keep input controls and data sources in the resources folder?

  

Input controls in repository:

Create all your input controls in the resources folder, because then you need not create the same input controls for every report; you just link the existing input control from the repository folder.

Data sources in repository:
It is considered a best practice to create data source connections in a folder called resources and use these data sources for the reports. It will reduce report development time. You need not re-create database connections from iReport once the connection exists in the repository.

For any Jaspersoft, iReport, Jasper Studio, JasperReports Server or open source DWBI requirement, please get in touch: [email protected], www.helicaltech.com

Best practices to be followed while developing jasper report using ireport / jasper studio

Software used: iReport / Jasper Studio, JasperReports Server

A) First, before creating a report, keep in mind the following things:

1. Set the page (report) properties.

E.g.: page height, width, left/right margins, orientation.

2. Set the same properties for palette elements like text fields and static text.

E.g.: font style, size, horizontal/vertical alignment, position type, stretch type.

 

B) Check JasperReports Server version compatibility between the environment where you are developing and the one where you will deploy.

Steps: In the iReport tool,

go to the Tools menu -> Options -> iReport -> General -> Compatibility -> select the version.

C) While uploading the report to the JasperReports Server repository:

1. Check the input controls and data source.

2. Usually, make one folder on the server named resources; in it, create your input controls as well as your data source.

3. For adding a subreport onto the server, also upload the subreport JRXML.

 

D) For importing and exporting reports from JasperReports Server:

1. JasperReports Server should be stopped when using the import and export utilities. This is very important for the import utility, to avoid issues with caches, configuration, and security.

2. All command line options start with two dashes (--).

3. You must specify either a directory or a zip file to export to or import from.

4. Make sure the output location specified for an export is writable by the user running the command.

 

Use the command-line export and import utility in JasperReports Server.

Before importing, go into the buildomatic directory of the Jasper installation. Location: C:\Program Files\jasperreports-server-5.0\buildomatic

 

For importing JS data:

Windows: js-import --input-zip <filename.zip>
Ex: js-import --input-zip "E:\Work Space\Unified\Unified Reports\<file name>"

Update command:

js-import --input-zip "E:\Work Space\Unified\Unified Reports\<file name>" --update

 

For Exporting Jasper Reports:

Exporting a project folder from the JasperReports community server:

 

1) Open PuTTY.

Give the username and password.

 

2) Go to the location of jasperreports-server-cp-5.0.0 (or any version).

Example:

[email protected]:cd /opt/jasperreports-server-cp-5.0.0

 

3) Navigate to buildomatic folder

Example:

[email protected]:cd /opt/jasperreports-server-cp-5.0.0/buildomatic

 

4) Execute the js-export.sh file with the destination path.

 

Syntax:

[email protected]:/opt/jasperreports-server-cp-5.0.0/buildomatic# ./js-export <location of the folder in jasper server> --output-zip <location of exporting folder>/<exporting_filename.zip>

How to resolve JasperReports Server version compatibility problems when migrating reports from one version to another

This blog will talk about how to make the report compatible with another version where it is being deployed.

Software used: iReport, JasperReports Server (any version)

Solution:-

Step 1) Open your report (.jrxml) in iReport.

Jasper Version compatibility issue

Step 2) Go to the Tools menu -> Options -> iReport -> General -> Compatibility.

Jasper Version compatibility issue2

Step 3) Select the JasperReports version you want.

Here, e.g.: JasperReports 4.1.3

Step 4) Go to the report's .jrxml file, make a small change (such as setting a band height), save it, and update it in the server Repository Navigator.

Step 5) Check it on your JasperReports Server.

Logging using Talend


Introduction: In this article, we will discuss different methods of logging in Talend Open Studio. Talend is one of the most widely used open source data integration tools on the market. Talend mainly uses three types of logging:

  1. Statistics – job execution statistics at the component and job level
  2. Error – job-level errors, warnings and exceptions
  3. Meter logging – data flow details inside the job

The best approach is to log at the project level. To enable project-level logging in Talend Open Studio, go to File -> Project properties and enable or disable the check boxes to start default logging at the project level. See the screenshot below.

talend logging 1

If you enable logging at the project level, every new job created will inherit these settings. There are further settings and options once project-level logging is enabled. See the screenshot below.

talend logging 2

You can choose to log the information to the console, a file, or a database. If you select the file and/or database options, you then need to set a few more default parameters, such as:

talend logging 3

For file names, you can prefix or postfix the file name with a Talend date-time stamp function; otherwise the job will write into the same file on every execution and flush out the earlier data. For databases, you can point to an existing database; if there is no such database on the target server, this scenario fails. For generic JDBC you need to provide the above parameters; if you select a specific database such as MySQL, provide the username, password and other required parameter values.

If you enable project-level logging, there is no need to separately use the components tLogCatcher (to log the errors), tFlowMeter (to catch the data flow) and tStatCatcher (to catch the statistics). Talend throws errors or exceptions whenever they occur and displays the complete trace on the console. tLogCatcher, if used with tDie or tWarn, would catch those messages, which can then be redirected to the required database or file as needed. To do this we need to set up, implement and test all of the above components in the job.
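As a reference point, when tLogCatcher is used, its output row carries the standard log fields (moment, pid, project, job, context, priority, type, origin, message, code). A minimal tJavaRow sketch that prints such a row, assuming the standard tLogCatcher schema on the input:

// Assumes the standard tLogCatcher schema fields on input_row
System.out.println(input_row.moment + " [" + input_row.type + "/" + input_row.priority + "] "
        + input_row.job + " (" + input_row.origin + "): "
        + input_row.message + " (code " + input_row.code + ")");
output_row.message = input_row.message;  // pass the message through to the next component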

The advantage of this approach is that you get brief error information in the logs table, which Talend creates automatically. In addition, Talend also prints its error trace on the console. The downside is that the console trace is not stored in the log table.

Problem: In both of the above approaches, Talend does not store or redirect its console trace to a database or file.

Jaspersoft System Integration (SI) Partner Helical IT Solutions

Helical, a fast growing Business Intelligence (BI) organisation offering open source BI solutions across verticals, has been appointed by Jaspersoft as a System Integrator Partner. As per the tie-up, Helical IT Solutions will be a System Integration Partner for Jaspersoft in the India/South Asia region.

As a part of this appointment, Helical IT Solutions will provide services for Jaspersoft’s BI suite including reporting, dashboards, ad-hoc and OLAP analysis, ETL/data integration. Helical IT Solutions’ knowledge of both Business Intelligence applications and the Jaspersoft BI platform is bound to ensure successful development and deployment of BI solutions.

Taking on this appointment, Mr Nikhilesh Tiwari, Founder, Helical IT Solutions, shared, “We are extremely happy and delighted with this tie-up and regard this as a great achievement for our organisation. We will definitely look forward to this collaboration with Jaspersoft to be beneficial for both the companies.”

 

Mr Nitin Sahu, Co-founder at Helical IT Solutions, added, “We have been working on the Jaspersoft BI platform for a long time and we are happy to be their SI. With our technical strength and partnership with Jaspersoft, we are hopeful of surpassing our customers’ expectations.”

 

Mr. Royce Buñag, Vice President Asia Pacific at Jaspersoft, said, “Helical IT Solutions has impressed us with their knowledge of BI solutions. They have already shown themselves to be a valuable partner with current and ongoing customer engagements. We are delighted that they have also agreed to be one of the sponsors for the upcoming JasperWorld event in Bangalore, which is a great way for us to showcase the collaboration to customers. We look forward to a long and successful partnership.”

Jaspersoft’s open source business intelligence suite is the world’s most widely used BI software, with more than 8 million total downloads worldwide and more than 10,000 commercial customers in 96 countries. Jaspersoft provides a web-based, open and modular approach to the evolving business intelligence needs of the enterprise. It has 90,000 registered members working on more than 350 projects, which represents the world’s largest business intelligence community.

Helical IT Solutions is an open source DWBI company with expertise in providing simple, practical & affordable solutions suitable for business users, from the CEO and CXOs to line managers and every end user of the enterprise. With a quick turnaround time, the company can provide mobile BI solutions, on-premises or hosted SaaS solutions, catering to every type of need. Helical offers services on the entire BI stack, ranging from ETL, DW, data mining and analytics to complete BI solutions. They also provide integration of disparate data sources and offer powerful interactive tools like balanced scorecards, personalized dashboards, key performance indicators, automated alerts, graphical mining, cross-tab reporting and more.

The press release got published at many places which includes

IndiaInfoline
http://www.indiainfoline.com/Markets/News/Jaspersoft-appoints-Helical-IT-Solutions-for-system-integration-partner/5810655543

Electronics For You
http://www.efytimes.com/e1/fullnews.asp?edid=119811

Light-Reading
http://www.lightreading.in/lightreadingindia/news-wire-feed/285912/jaspersoft-appoints-helical-solutions-integration-partner

INfotechLead
http://infotechlead.com/2013/10/29/jaspersoft-appoints-helical-system-integration-partner-south-asia/

Channeltimes
http://www.channeltimes.com/story/jaspersoft-appoints-helical-it-solutions-as-partner-for-bi-solution/

Techvorm
http://www.techvorm.com/helical-ties-jaspersoft/

Silobreaker
http://news.silobreaker.com/jaspersoft-appoints-helical-it–5_2267202378157523078

Digisecrets
http://www.digisecrets.com/news/jaspersoft-appoints-helical-it-solutions-as-their-system-integration-partner-for-indiasouth-asia-region/

CIOL
http://www.ciol.com/ciol/news/199620/jaspersoft-appoints-helical-integration-partner