JDBC Connection Pooling in Jaspersoft

How JDBC Resources and Connection Pools Work Together in JasperReports Server


JasperReports Server can access any database using the JDBC (Java Database Connectivity) API. We can configure two types of data sources in the repository:

  • JDBC data source – Establishes a direct connection to the database server using its JDBC driver. JasperReports Server configures and manages the connections to the database. By default, the maximum number of simultaneous connections for each data source is five.
  • JNDI data source – Calls the JNDI (Java Naming and Directory Interface) service of the application server to locate a database connection. The application server manages the connections to the database and shares them between all applications that it hosts. The configuration of the application server determines the number of connections that are shared.


At runtime, here’s what happens when JasperReports Server connects to a database:


  1. JasperReports Server gets the JDBC resource (data source) associated with the database by making a call through the JNDI API.

Given the resource’s JNDI name, the naming and directory service locates the JDBC resource. Each JDBC resource specifies a connection pool.

  2. Via the JDBC resource, the server gets a database connection.

Behind the scenes, the server retrieves a physical connection from the connection pool that corresponds to the database. The pool defines connection attributes such as the database URL, user name, and password.

  3. Now that it is connected to the database, the server can read the database.

The server accesses the database by making calls to the JDBC API. The JDBC driver translates the application’s JDBC calls into the protocol of the database server.

The server does not have to establish and validate a connection every single time it uses one, which improves performance. When a user runs a report, the server requests a connection from the pool. When the thread finishes using the connection, it returns it to the pool so that it can be used by other threads.

Once the report is rendered and the user performs no further action on it (such as passing a parameter), the connection sits idle. Connection pooling enables an idle connection to be reused by another thread. When the user stops accessing the database, the server automatically returns the connection to the pool after a certain time. Once it is back in the pool, the connection is available again.
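The borrow-and-return cycle described above can be sketched as a tiny generic pool. This is only a conceptual illustration (the function and variable names here are made up), not JasperReports Server’s actual pooling code:

```javascript
// Minimal connection-pool sketch: connections are created lazily up to a
// maximum (five, matching the JasperReports Server default), handed out on
// acquire(), and recycled on release() instead of being closed.
function createPool(connect, maxSize) {
    const idle = [];   // connections waiting to be reused
    let total = 0;     // physical connections created so far

    return {
        acquire() {
            if (idle.length > 0) {
                return idle.pop();   // reuse an idle connection
            }
            if (total < maxSize) {
                total += 1;
                return connect();    // create a new physical connection
            }
            throw new Error("pool exhausted: " + maxSize + " connections in use");
        },
        release(conn) {
            idle.push(conn);         // back to the pool, not closed
        },
        stats() {
            return { total: total, idle: idle.length };
        }
    };
}

// Usage: the "connection" here is a stub standing in for a real JDBC connection.
const pool = createPool(() => ({ query: sql => "rows for: " + sql }), 5);
const conn = pool.acquire();
conn.query("SELECT 1");
pool.release(conn);
```

Note that releasing only returns the connection to the idle list; the physical connection stays open, which is exactly why repeated report runs avoid the cost of reconnecting.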

First encounter with Visualise.js

Visualize.js is a JavaScript framework introduced in JasperReports Server 5.6 for integrating JasperReports Server reports and visualizations inside web applications. It is available only with the commercial editions of JasperReports Server 5.6.

Prior to 5.6, there were two methods for embedding: the simple HTTP/iframe integration method and the REST web APIs. The iframe approach, though simple, gave little control and limited branding options, because it loads the JasperReports Server UI itself inside the iframe; it does, however, support embedding of Ad Hoc/self-service BI. The REST APIs provided greater control, access to the repository services, and greater control over the look and feel, but they required more coding and offered limited interactivity. Visualize.js builds on the REST APIs of JasperReports Server and provides a simple JavaScript API that can be used along with CSS to create web pages that seamlessly embed interactive reports with a few lines of code.

The API ranges from basics such as authentication, report rendering, resource search, and accessing input controls, to handling report events, errors, and report hyperlinks. It also provides a way to interact with the JIVE UI components, which enables higher interactivity.

After having a look at what Visualize.js has to offer, let’s see what it takes to embed a simple report using visualize.js. Below is a JavaScript snippet that renders a report into a div tag inside the web page. This report takes one parameter, the store ID.

HTML Page contents

<script type="text/javascript" src="http://localhost:8081/jasperserver-pro/client/visualize.js"></script>
<div id="container">Embedded a Sample Report</div>
<div id="testreport"></div>

JavaScript code using visualize.js

visualize({
    auth: {
        name: "joeuser",
        password: "joeuser",
        organization: ""
    }
}, function (v) {
    // Report API
    v("#testreport").report({
        resource: "/public/MyReports/Test1",
        params: { "store_id": ["22"] },   // parameter name and value are illustrative
        error: handleError
    });
    // showing error
    function handleError(err) {
        alert(err.message);
    }
});

1. Loading the visualize.js script

<script type="text/javascript" src="http://localhost:8081/jasperserver-pro/client/visualize.js"></script>

This is the script tag that loads visualize.js from its location on the running instance of your JasperReports Server 5.6 (commercial edition).

2. An HTML container for the report
The div with ID “testreport” is the container for the report. This element is passed to the visualize.js function when the report is rendered.

3. Authentication with JasperReports Server
visualize is the function in visualize.js that establishes a connection with the JasperReports Server (JRS) instance hosting the report to be embedded. Its first parameter is an object containing the configuration required for connectivity: the URL of the JRS instance (server) and the authentication properties (auth).
The example above shows only the authentication properties. When the server URL is not provided, it defaults to the server from which the visualize.js script was requested.

4. Rendering the report
The code for rendering the report is in the callback function passed to the visualize call. ‘v’ is the client for the JRS instance with which the session has been established.

        v("#testreport").report({
            resource: "/public/MyReports/Test1",
            params: { "store_id": ["22"] },   // parameter name and value are illustrative
            error: handleError
        });

As can be seen, the selector of the container element in the HTML (#testreport) is passed to v. The properties are fairly self-explanatory; “handleError” is a function that handles any errors raised by the report API.

5. Passing parameters to the report
As seen above, the “params” key in the argument to the report function is used to pass all the parameters to the report. The value must always be an array, even when a single value is passed.
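Because every value must be an array, it can be convenient to normalize scalar parameters before calling the report function. The helper below is a hypothetical sketch, not part of the visualize.js API:

```javascript
// Wrap every scalar parameter value in an array, as the report API expects.
// Values that are already arrays are passed through untouched.
function toReportParams(raw) {
    const params = {};
    for (const key of Object.keys(raw)) {
        const value = raw[key];
        params[key] = Array.isArray(value) ? value : [String(value)];
    }
    return params;
}

// A scalar store id becomes a single-element array:
const params = toReportParams({ store_id: 22 });
// → { store_id: ["22"] }
```

The resulting object can then be passed directly as the params property of the report call.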

And it is done!

Hope this gives you a basic understanding of Visualize.js. A few more blog posts will follow, exploring other APIs of Visualize.js. Till then, happy embedding!

– Shraddha
Helical IT Solutions

Understanding and Usage of Mondrian

This blog talks about the different layers of the Mondrian engine, its components, and an introduction to ROLAP.

Mondrian Architecture


Layers of a Mondrian system

A Mondrian OLAP System consists of four layers; these are as follows:

  1. Presentation layer

The presentation layer determines what the end-user sees on his or her monitor, and how he or she can interact to ask new questions. There are many ways to present multidimensional datasets, including pivot tables, pie, line and bar charts, and advanced visualization tools such as clickable maps and dynamic graphics. These might be written in Swing or JSP, charts rendered in JPEG or GIF format, or transmitted to a remote application via XML. What all of these forms of presentation have in common is the multidimensional ‘grammar’ of dimensions, measures and cells in which the presentation layer asks the question, and the OLAP server returns the answer.

  2. Dimensional layer

The second layer is the dimensional layer. The dimensional layer parses, validates and executes MDX queries. A query is evaluated in multiple phases. The axes are computed first, then the values of the cells within the axes. For efficiency, the dimensional layer sends cell-requests to the aggregation layer in batches. A query transformer allows the application to manipulate existing queries, rather than building an MDX statement from scratch for each request. And metadata describes the dimensional model, and how it maps onto the relational model.

  3. Star layer

The third layer is the star layer, and is responsible for maintaining an aggregate cache. An aggregation is a set of measure values (‘cells’) in memory, qualified by a set of dimension column values. The dimensional layer sends requests for sets of cells. If the requested cells are not in the cache, or derivable by rolling up an aggregation in the cache, the aggregation manager sends a request to the storage layer.

  4. Storage layer

The storage layer is an RDBMS. It is responsible for providing aggregated cell data, and members from dimension tables.

These components can all exist on the same machine, or can be distributed between machines. Layers 2 and 3, which comprise the Mondrian server, must be on the same machine. The storage layer could be on another machine, accessed via remote JDBC connection. In a multi-user system, the presentation layer would exist on each end-user’s machine (except in the case of JSP pages generated on the server).

ROLAP Defined

Pentaho Analysis is built on the Mondrian relational online analytical processing (ROLAP) engine. ROLAP relies on a multidimensional data model that, when queried, returns a dataset that resembles a grid. The rows and columns that describe and bring meaning to the data in that grid are dimensions, and the hard numerical values in each cell are the measures or facts. In Pentaho Analyzer, dimensions are shown in yellow and measures are in blue.

ROLAP requires a properly prepared data source in the form of a star or snowflake schema that defines a logical multidimensional database and maps it to a physical database model. Once you have your initial data structure in place, you must design a descriptive layer for it in the form of a Mondrian schema, which consists of one or more cubes, hierarchies, and members. Only when you have a tested and optimized Mondrian schema is your data prepared on a basic level for end-user tools like Pentaho Analyzer and JPivot.


Workflow Overview

To prepare data for use with the Pentaho Analysis (and Reporting, to a certain extent) client tools, you should follow this basic workflow:

Design a Star or Snowflake Schema

The entire process starts with a data warehouse. The end result should be a data model in the star or snowflake schema pattern. You don’t have to worry too much about getting the model exactly right on your first try. Just cover all of your anticipated business needs; part of the process is coming back to the data warehouse design step and making changes to your initial data model after you’ve discovered what your operational needs are.

Populate the Star/Snowflake Schema

Once your data model is designed, the next step is to populate it with actual data, thereby creating your data warehouse. The best tool for this job is Pentaho Data Integration, an enterprise-grade extract, transform, and load (ETL) application.

Build a Mondrian Schema

Now that your initial data warehouse project is complete, you must build a Mondrian schema to organize and describe it in terms that Pentaho Analysis can understand. This is also accomplished through Pentaho Data Integration by using the Agile BI plugin. Just connect to your data warehouse and auto-populate your schema with the Modeler graphical interface. Alternatively, you can use Pentaho Schema Workbench to create an analysis schema through a manual process.

Initial Testing

At this point you should have a multi-dimensional data structure with an appropriate metadata layer. You can now start using Pentaho Analyzer and JPivot to drill down into your data and see if your first attempt at data modeling was successful. In all likelihood, it will need some adjustment, so take note of all of the schema limitations that you’re unhappy with during this initial testing phase.

Do not be concerned with performance issues at this time — just concentrate on the completeness and comprehensiveness of the data model.

Adjust and Repeat Until Satisfied

Use the notes you took during the testing phase to redesign your data warehouse and Mondrian schema appropriately. Adjust hierarchies and relational measure aggregation methods. Create virtual cubes for analyzing multiple fact tables by conforming dimensions. Re-test the new implementation and continue to refine the data model until it matches your business needs perfectly.

Test for Performance

Once you’re satisfied with the design and implementation of your data model, you should try to find performance problems and address them by tuning your data warehouse database, and by creating aggregation tables. The testing can only be reasonably done by hand, using Pentaho Analyzer and/or JPivot. Take note of all of the measures that take an unreasonably long time to calculate. Also, enable SQL logging and locate slow-performing queries, and build indexes for optimizing query performance.

Create Aggregation Tables

Using your notes as a guide, create aggregation tables in Pentaho Aggregation Designer to store frequently computed analysis views. Re-test and create new aggregation tables as necessary.

If you are working with a relatively small data warehouse or a limited number of dimensions, you may not have a real need for aggregation tables. However, be aware of the possibility that performance issues may come up in the future.

Check in with your users occasionally to see if they have any particular concerns about the speed of their BI content.

Deploy to Production

Your data warehouse and Mondrian schema have been created, tested, and refined. You’re now ready to put it all into production. You may need to personally train or purchase Pentaho training for anyone in your organization who needs to create traditional reports, dashboards, or Analyzer reports with Pentaho’s client tools.

Conditional Formatting in Jaspersoft’s OLAP view (Mondrian jPivot)

  1. Open {js-install}\apache-tomcat\webapps\jasperserver-pro\jpivot\table\mdxtable.css and {js-install}\apache-tomcat\webapps\jasperserver-pro\j\mdxtable.css.
  2. In these files, find the following section:

/* data cells */

td.property-span {
font-family: Arial, Helvetica, Tahoma, sans-serif;
font-size: 80%;
/*color : Black;*/
color : #033669;
text-align: right;
/*border-bottom: solid 1px #000033;
border-right: solid 1px #000033;
border-top: solid 1px #000033;
border-left: solid 1px #000033;*/
}

  3. Add 3 styles for 3 different colors as shown below.


  4. Then a little further down you will see some blocks that look like this. This one in particular is for green:

td.cell-gruen {
background-color: #66FF66;
}

So now just copy this block and create the 3 new styles, named to match the style names referenced from the MDX queries below:

td.cell-below_avg_red {
background-color: #FF0000;
}

td.cell-avg_orange {
background-color: #FFBF00;
}

td.cell-above_avg_green {
background-color: #04B404;
}


  5. Now, within the MDX query you can reference these styles as shown below:

a) with member [Measures].[Profit] as '([Measures].[Store Sales] - [Measures].[Store Cost])', FORMAT_STRING = CASE WHEN (([Measures].[Profit] <= 300000.0) AND ([Measures].[Profit] > 100000.0)) THEN "|#|style=above_avg_green" WHEN (([Measures].[Profit] <= 100000.0) AND ([Measures].[Profit] > 50000.0)) THEN "|#|style=avg_orange" ELSE "|#|style=below_avg_red" END select {[Measures].[Store Sales], [Measures].[Profit]} ON COLUMNS, {[Product].CurrentMember.Children} ON ROWS from [Sales]


b) with member [Measures].[Profit] as '([Measures].[Store Sales] - [Measures].[Store Cost])', FORMAT_STRING = CASE WHEN (([Measures].[Profit] <= 101.0) AND ([Measures].[Profit] > 75.0)) THEN "|#|style=above_avg_green" WHEN (([Measures].[Profit] <= 75.0) AND ([Measures].[Profit] > 50.0)) THEN "|#|style=avg_orange" ELSE "|#|style=below_avg_red" END

select {([Time].[2012].[Q1], [Measures].[Profit]), ([Time].[2012].[Q2], [Measures].[Profit]), ([Time].[2012].[Q3], [Measures].[Profit]), ([Time].[2012].[Q4], [Measures].[Profit])} ON COLUMNS,

{[Product].[Drink].[Beverages].[Carbonated Beverages].[Soda].[Excellent].[Excellent Cola], [Product].[Drink].[Beverages].[Carbonated Beverages].[Soda].[Excellent].[Excellent Cream Soda], [Product].[Drink].[Beverages].[Carbonated Beverages].[Soda].[Excellent].[Excellent Diet Cola], [Product].[Drink].[Beverages].[Carbonated Beverages].[Soda].[Excellent].[Excellent Diet Soda], [Product].[Drink].[Beverages].[Pure Juice Beverages].[Juice].[Excellent].[Excellent Apple Juice], [Product].[Drink].[Beverages].[Pure Juice Beverages].[Juice].[Excellent].[Excellent Berry Juice], [Product].[Drink].[Beverages].[Pure Juice Beverages].[Juice].[Excellent].[Excellent Cranberry Juice], [Product].[Drink].[Beverages].[Pure Juice Beverages].[Juice].[Excellent].[Excellent Orange Juice]} ON ROWS

from [Sales]


Passing the extra parameter from login page Spring Security

We have implemented spring security with database using hibernate in previous post.
You can have a look at the previous post from the following link http://helicaltech.com/spring-security-with-hibernate/
We had seen spring security supports username and password from login page by default.
In this post we will learn how to pass extra parameter from login page to spring security.

We already know how to add the Spring and Spring Security jars to the application classpath using Maven by adding dependencies to the pom.xml file. By default, Spring submits the login page to j_spring_security_check, which uses UsernamePasswordAuthenticationFilter for authenticating the user; this filter extends the AbstractAuthenticationProcessingFilter class.

Below are the steps for the implementation.

Step 1. Create a class that extends UsernamePasswordAuthenticationFilter and overrides the two methods attemptAuthentication and obtainUsername.

public Authentication attemptAuthentication(HttpServletRequest req, HttpServletResponse res)
        throws AuthenticationException { ... }

protected String obtainUsername(HttpServletRequest req) { ... }

The attemptAuthentication method takes HttpServletRequest and HttpServletResponse arguments and returns an Authentication object. obtainUsername takes an HttpServletRequest argument and returns a String containing the username and the extra parameter passed by the login page, separated by a delimiter.
Below is the class with the business logic; I chose the class name ExtInpuParam.

public class ExtInpuParam extends UsernamePasswordAuthenticationFilter {

    private static final Logger logger = LoggerFactory.getLogger(ExtInpuParam.class);

    private String extraParameter = "j_organization";
    private String delimiter = ":";

    @Override
    public Authentication attemptAuthentication(HttpServletRequest request,
            HttpServletResponse response) throws AuthenticationException {
        logger.debug("Attempting for authentication. " + "j_username = "
                + request.getParameter("j_username") + ", j_password = "
                + request.getParameter("j_password"));
        return super.attemptAuthentication(request, response);
    }

    @Override
    protected String obtainUsername(HttpServletRequest request) {
        String username = request.getParameter(getUsernameParameter());
        logger.debug("username = " + username);
        String extraInput = request.getParameter(getExtraParameter()) == null ? ""
                : request.getParameter(getExtraParameter());
        logger.debug("extraInput = " + extraInput);
        String combinedUsername = "";
        String extParam = extraInput.trim();
        logger.debug("extParam = " + extParam);
        if (extParam.length() == 0) {
            combinedUsername = username;
        } else {
            combinedUsername = username + getDelimiter() + extraInput;
        }
        logger.debug("returning combinedUsername = " + combinedUsername);
        return combinedUsername;
    }

    /**
     * @return The parameter name which will be used to obtain the extra input
     *         from the login request
     */
    public String getExtraParameter() {
        return this.extraParameter;
    }

    /**
     * @param extraParameter
     *            The parameter name which will be used to obtain the extra
     *            input from the login request
     */
    public void setExtraParameter(String extraParameter) {
        this.extraParameter = extraParameter;
    }

    /**
     * @return The delimiter string used to separate the username and extra
     *         input values in the string returned by obtainUsername()
     */
    public String getDelimiter() {
        return this.delimiter;
    }

    /**
     * @param delimiter
     *            The delimiter string used to separate the username and extra
     *            input values in the string returned by obtainUsername()
     */
    public void setDelimiter(String delimiter) {
        this.delimiter = delimiter;
    }
}

I passed the extra parameter name j_organization from the login page. The obtainUsername method adds the delimiter “:” and returns the username combined with the extra parameter, separated by “:”. In the UserDetails class you will then get both the username and the extra parameter.
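The combine-and-split contract between obtainUsername() and the UserDetails lookup can be sketched as follows. It is shown in JavaScript purely as a language-neutral illustration (the sample values are made up); the Java filter above remains the authoritative version:

```javascript
// Combine username and extra parameter the same way obtainUsername() does:
// skip the delimiter entirely when the extra input is blank.
function combineUsername(username, extraInput, delimiter) {
    const extParam = extraInput.trim();
    return extParam.length === 0 ? username : username + delimiter + extraInput;
}

// On the UserDetails side, split the combined value back apart.
function splitUsername(combined, delimiter) {
    const idx = combined.indexOf(delimiter);
    return idx === -1
        ? { username: combined, extra: "" }
        : { username: combined.slice(0, idx), extra: combined.slice(idx + 1) };
}

combineUsername("joeuser", "helical", ":");   // "joeuser:helical"
splitUsername("joeuser:helical", ":");        // { username: "joeuser", extra: "helical" }
```

Because a blank extra input produces a plain username with no delimiter, the split side must also handle the no-delimiter case, as shown.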

Step 2. Add j_organization instance field with getter and setter in user model class.

Step 3.
Please refer to the code in the below snapshots for the steps to be followed for Step No 3.
spring security

spring security 2

Step 4. Login.jsp page

<form name='loginForm' action='j_spring_security_check' method='POST'>
    User: <input type='text' name='j_username' />
    Password: <input type='password' name='j_password' />
    Organization: <input type='text' name='j_organization' />
    <input type='submit' value='Login' />
</form>


Muqtar Ahmed
Helical IT Solutions

Creating HTML Table with JSON Data dynamically in HDI(Helical Dashboard Insights)

This blog will teach you how to create an HTML table with JSON data dynamically in HDI (Helical Dashboard Insights).

To create an HTML table with JSON data dynamically in HDI, we need two files:

  1. Demo.EFW
  2. template.html

In the “Demo.EFW” file, we have to reference the template.html file:


<?xml version="1.0" encoding="UTF-8" ?>
<efw>
    <title>HDI Demo</title>
    <description>Demo Dashboard</description>
    <template>template.html</template>
    <visible>true</visible>
</efw>

In the “template.html” file, we have to declare:

  1. the portion of the dashboard in which we want the table to be shown
  2. the JSON data
  3. a script that automatically adds the JSON data to the table


Template.html : –

<div id="myTable"></div>

<style type="text/css">
td, th {
    padding: 1px 4px;
}
</style>

<!-- d3 is assumed to be available; include it if HDI does not already load it -->
<script src="http://d3js.org/d3.v3.min.js"></script>

<script type="text/javascript">
var data = [{"id": "1", "Name": "Rupam", "address": "Hyderabad"}];

var peopleTable = tabulate(data, ["id", "Name", "address"]);

function tabulate(data, columns) {
    var table = d3.select("#myTable").append("table")
            .attr("style", "margin-left: 250px"),
        thead = table.append("thead"),
        tbody = table.append("tbody");

    // append the header row
    thead.append("tr")
        .selectAll("th")
        .data(columns)
        .enter()
        .append("th")
        .text(function(column) { return column; });

    // create a row for each object in the data
    var rows = tbody.selectAll("tr")
        .data(data)
        .enter()
        .append("tr");

    // create a cell in each row for each column
    var cells = rows.selectAll("td")
        .data(function(row) {
            return columns.map(function(column) {
                return {column: column, value: row[column]};
            });
        })
        .enter()
        .append("td")
        .attr("style", "font-family: Courier")
        .html(function(d) { return d.value; });

    return table;
}
</script>




Rupam Bhardwaj

Helical IT Solutions

Calling Static jrxml Files in HDI

This blog will teach you how to call static jrxml files in HDI (Helical Dashboard Insights).

To call a static jrxml file from HDI, we need 3 files:

  1. an “.EFW” extension file
  2. an “.html” file
  3. the required “.jrxml” file

E.g., here I have integrated a “SkillChart” report in Jasper, and now I want to call its jrxml through HDI.

So now , I have

  1. “Skill Chart 0.1.jrxml” file ,
  2. “SkillChart.EFW” extension file in which I have given its template file name,
  3. The Template file i.e, “skillChart.html”

       “SkillChart.EFW” code:


“skillChart.html” code :


Rupam bhardwaj

Helical IT Solutions


Securing Organization Details from Other Organization Admins in Liferay

In Liferay, the Administrator of any organization is able to delete, update, or change the user details of any other organization, which is not secure. The user details of one organization should not be visible to the admin of any other organization.

To avoid the above scenario, we need to change the permissions and role of the organization admin.

Role Keywords

Administrator: An Administrator can create organizations, assign or add users to an organization, and update or remove users and organizations. In other words, the Administrator has full authority over the Liferay portal.

Organization Administrator: An Organization Administrator is able to see and edit organization pages, but he is not able to edit his own private and public pages.


How can we change the role of the organization admin

  1. Login as Admin (Administrator role).
  2. Go to Control Panel.
  3. Select the Users and Organizations menu from the panel on the left side of the window.

Liferay user and role management


4. Select the organization in which you want to change the role of the admin.
E.g. selecting the New York organization

Liferay user and role management

We can see there are 2 users in the New York organization: NYC (Admin) and NY1USER (Organization User). Here NYC is the Administrator for this organization.

5. Click Actions (NYC in this example) -> Edit

liferay user and role management

6. Select the Roles tab from the User Information section under the Admin menu [NYC Admin], on the right side of the window.

liferay 4

7. We can see the current role of this user under Regular Roles (currently Administrator).

liferay 5

8. Remove Administrator from the Regular Roles and assign a new role from the “+Select” option. When we click the Select hyperlink, a pop-up window appears. In this window we can see 2 roles: Administrator and Power User. [It is not that only the above-mentioned roles can be seen; other regular roles may appear too.]

liferay 6

9. Select Power User.


10. Now, from the Organization Roles view, click the “Select” hyperlink and select an organization role from the pop-up window.

liferay 7

11. Select Organization Administrator.

liferay 8

Now we can see that the user has two roles: Power User and Organization Administrator.

12. Click Save.

Now, when the same organization admin logs into his/her account, the user cannot access any other organization’s details from the control panel.


Before applying changes to organization administrator role.

liferay 9

After applying changes to organization administrator role.

liferay 10

He is not able to see any organization except his own.

Finally, we have secured each organization’s details from the admins of other organizations.

Exception in thread “main” java.lang.OutOfMemoryError: GC overhead limit exceeded

Different causes for OOM

Everyone in Java development faces java.lang.OutOfMemoryError now and then. An OutOfMemoryError in Java may be due to:

  • Improper memory management by the Java code
  • Insufficient hardware memory available to process the code
  • Programming mistakes
  • Memory leak

Types of OOM

Any of the above reasons can result in an OutOfMemoryError. The following three types of OOM errors are common:

  1. java.lang.OutOfMemoryError: Java heap space
  2. java.lang.OutOfMemoryError: PermGen space
  3. java.lang.OutOfMemoryError: GC overhead limit exceeded

OOM in Talend

The last of the above errors is the one I observed during a flat-file data migration to a database using Talend Open Studio. File sizes were in the range of 10 MB to 500 MB. Initially the job worked well, but it failed with this error once larger files of around 100 MB were processed. The available hardware memory (RAM) was 16 GB, but the default memory allocated by Talend was Xmx1024m (1 GB). Increasing Xmx to 10240m might have masked the issue, but this “GC overhead limit exceeded” error is specifically related to garbage collection. While researching the issue, I came across a very descriptive article on garbage collection at http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#available_collectors


Following workaround solved the problem in Talend without increasing the memory limit to higher figures.

Add a new option under Window -> Preferences -> Talend -> Run/Debug: -XX:-UseGCOverheadLimit
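For reference, the resulting VM arguments line in Talend’s Run/Debug preferences might look like the fragment below; the heap sizes are illustrative, and only the final flag is the workaround itself:

```
-Xms256M -Xmx1024M -XX:-UseGCOverheadLimit
```

The minus sign after -XX: turns the option off, so the JVM no longer aborts with this error when it spends nearly all of its time in garbage collection while reclaiming very little memory.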

Creating CrossTab in HDI (Helical Dashboard Insights)

This blog will teach you how to create Cross Tab in HDI (Helical Dashboard Insights).

CrossTab uses PivotTable.js, which is a JavaScript pivot table library with drag-and-drop functionality. The basic function of ‘PivotTable.js’ is to turn a data set into a summary table and then optionally add a true 2-D drag’n’drop UI that allows a user to manipulate this summary table.

The main function defined in ‘PivotTable.js’ is pivotUI(), and it is implemented as a jQuery plugin.

pivotUI() takes 3 parameters:

1) Input: the input is an array of objects, an array of arrays, a function, or a jQuery object referencing a table. Here we are using JSON data as the input.

2) Options: an object with the following keys:

  • aggregators: dictionary of generators for aggregation functions in dropdown, defaulting to common aggregators.
  • rows: array of strings, attribute names to prepopulate in row area, default is [ ]
  • cols: array of strings, attribute names to prepopulate in cols area, default is [ ]
  • vals: array of strings, attribute names to prepopulate in vals area, default is [ ] (gets passed to aggregator generating function)
  • aggregatorName: string, aggregator to prepopulate in dropdown (key to aggregators object), default is first key in aggregators .

3) Locale: a string defaulting to en, which controls the default locale for things like number formatting and error messages. Regardless of this setting, you can still override the default aggregators (which control number formatting) and the error message strings. If this is set to something other than en, you will have to load a locale-specific ‘language pack’ that creates this locale object before calling pivotUI().
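Putting the three parameters together, a minimal pivotUI() call could look like the sketch below. The container id and sample data are made up, and the snippet assumes jQuery and PivotTable.js have already been loaded on the page:

```javascript
// Input: an array of objects; options: the keys described above; locale: "en".
var data = [
    { precinct: "P1", candidate: "A", votes: 120 },
    { precinct: "P1", candidate: "B", votes: 80 }
];

var options = {
    rows: ["precinct"],           // attributes prepopulated in the row area
    cols: ["candidate"],          // attributes prepopulated in the column area
    vals: ["votes"],              // passed to the aggregator-generating function
    aggregatorName: "Integer Sum" // aggregator preselected in the dropdown
};

// Guarded so the sketch is harmless outside a browser page.
if (typeof $ !== "undefined" && $.fn && $.fn.pivotUI) {
    $("#crosstab").pivotUI(data, options, "en");
}
```

With this configuration, the summary table sums the votes measure for each precinct/candidate combination, which is exactly the crosstab shape built in the steps below.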

For adding the CrossTab in HDI, the following steps should be followed:

1) EFW file: the EFW file contains the title, author, description, template name, and visibility of the dashboard.
2) HTML file: the HTML file name should be the same as that specified in the EFW file under the Template section.
At the top of the HTML file we specify the links to the external libraries; here ‘PivotTable.js’ is used as the external library.


We have to declare one CrossTab component in the “CrossTab.html” file, and in this component we need to provide the link to the file where the CrossTab chart properties are defined.

3) EFWD file: the EFWD file contains the data source connection properties, such as the connection id and connection type. It also contains the URL of the database connection and the user name and password for the database.

The data source details used in our demo are shown below:

        <Connection id="1" type="sql.jdbc">
            <!-- JDBC URL, user name and password go here -->
        </Connection>

The Data Map contains a map id, a connection, and a connection type. The map id is the same as that specified in the EFWVF file. The query for the Data Map is specified in the <Query> tags, and any parameters in the <Parameter> tags.

<DataMap id="2" connection="1" type="sql">
    <Name>Query for CrossTab Component</Name>
    <Query>
        <![CDATA[
        SELECT distinct r.precinct, cd.candidate, sum(fact.votes) votes
        FROM Voting_Summary as fact, region as r, contest as ct, candidate as cd
        WHERE fact.region_id = r.region_id and
              fact.contest_id = ct.contest_id and
              fact.candidate_id = cd.candidate_id
        GROUP BY ct.contest, r.precinct, cd.candidate;
        ]]>
    </Query>
</DataMap>


4) EFWVF file:
In the EFWVF file we first set the chart id, and then set the chart properties between the corresponding tags, such as the chart name, chart type, and chart data source.
In the script we set up the CrossTab as below:

$("#crosstab").pivotUI(data, {        // container selector is illustrative
    rows: ["precinct"],
    cols: ["candidate"],
    aggregatorName: "Integer Sum",
    vals: ["votes"]
});

By following these steps, we are able to see the output, which is as follows:

Nitin Uttarwar
Helical IT Solutions