Drill Down in Same Panel in Pentaho CDE


First, create a dashboard having three panels, as in the example shown below:

[Screenshot: dashboard layout with three panels]

In this example we are going to learn how to drill down from the bar chart in the second panel to the table component in the third panel, showing the relevant data.
On clicking any bar representing a category in the bar chart in the second panel, the detailed report of that department should open in the table component in the third panel.
Here are the instructions to achieve the drill-down from the bar chart to the table component:

1) Set the clickable option = 'True' in the properties of the bar chart.
2) In the click action of the bar chart, write the code:

[Screenshot: bar chart click-action code]
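The screenshot above holds the actual code; as a minimal sketch, assuming a CCC bar chart whose category axis carries the department name, the click action typically looks like this:

function(scene) {
    // Read the category value (the department) of the clicked bar;
    // scene.atoms.category.value is the usual CCC accessor.
    var department = scene.atoms.category.value;

    // Fire a change so every component listening on 'department' refreshes.
    Dashboards.fireChange('department', department);
}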

In this code we pass a value from the bar chart to the table through the parameter 'department', i.e., on clicking the bar representing a department, we get the detailed data of that department in the table.

3) In the components layer, add a 'Simple parameter' component named department:

[Screenshot: simple parameter component]

4) In the properties of the table component, add the listener and the parameter department respectively:

[Screenshot: table component listeners]

[Screenshot: table component parameters]

5) In the SQL query of the table component, add the parameter in the WHERE clause, or wherever you want the data to be filtered on the basis of the parameter:

[Screenshot: parameterized SQL query]
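The screenshot holds the actual query; as a sketch, with a hypothetical employee_details table, the parameterized query would look like:

-- hypothetical table and column names for illustration
select employee_name, designation, salary
from employee_details
where department = ${department}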

6) In the same window, add the parameter 'department'.

[Screenshot: query parameter list]

7) Save the dashboard and click on Preview. You will be able to see the table component change as you click on different departments in the bar chart.

Thank you,
Nisha Sahu

Map Reduce in MongoDB


This blog will teach you how to write map-reduce in MongoDB.

Map-reduce is a data processing paradigm that condenses large volumes of data into aggregated results.

To use map-reduce in MongoDB, there is a command called mapReduce.

The mapReduce() function fetches data from a collection (table), and the map function produces the result set as (key, value) pairs.

The reduce() function then takes the (key, value) pairs and reduces all the documents (data) sharing the same key.

E.g.: let's say I have two collections (tables) named:

  1. Emp_test
  2. Dept_Test

Now, to create the collections in MongoDB, use the queries below:

db.createCollection("Emp_test")

db.createCollection("Dept_Test")

To insert data into the Emp_test collection:

db.Emp_test.insert({ "name": { "first": "ABC", "last": "DEF" }, "city": "Hyd", "department": 1 })

db.Emp_test.insert({ "name": { "first": "GHI", "last": "JKL" }, "city": "Pune", "department": 2 })

To insert data into the Dept_Test collection:

db.Dept_Test.insert({ "_id": 1, "department": "SALESMAN" })

db.Dept_Test.insert({ "_id": 2, "department": "CLERK" })
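You can verify the inserts with find():

db.Emp_test.find().pretty()
db.Dept_Test.find().pretty()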

Now the requirement is to display FirstName, LastName and DepartmentName.

For this, we need to use map-reduce:

# 1 : Create map functions for both the collections.

var mapEmp_test = function () {
    var output = { departmentid: this.department, firstname: this.name.first, lastname: this.name.last, department: null };
    emit(this.department, output);
};

var mapDept_Test = function () {
    var output = { departmentid: this._id, firstname: null, lastname: null, department: this.department };
    emit(this._id, output);
};

# 2 : Write the reduce logic to merge the required fields:

var reduceF = function (key, values) {
    var outs = { firstname: null, lastname: null, department: null };
    values.forEach(function (v) {
        if (outs.firstname == null) { outs.firstname = v.firstname; }
        if (outs.lastname == null) { outs.lastname = v.lastname; }
        if (outs.department == null) { outs.department = v.department; }
    });
    return outs;
};

# 3 : Store the result into a different collection called emp_dept_test

result = db.Emp_test.mapReduce(mapEmp_test, reduceF, { out: { reduce: "emp_dept_test" } })

result = db.Dept_Test.mapReduce(mapDept_Test, reduceF, { out: { reduce: "emp_dept_test" } })

# 4 : Write the following command to see the merged result:

db.emp_dept_test.find()
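The merged documents should come out roughly like this (a sketch; the exact field order may differ):

{ "_id" : 1, "value" : { "firstname" : "ABC", "lastname" : "DEF", "department" : "SALESMAN" } }
{ "_id" : 2, "value" : { "firstname" : "GHI", "lastname" : "JKL", "department" : "CLERK" } }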

How to use Custom Component in Jaspersoft Studio:


This blog will teach you how to use a custom component in Jaspersoft Studio.

A custom component allows the BI developer to enhance the functionality of the JasperReports engine by adding custom visualization components.

By using custom components, we can develop things like tables, charts, etc.

Steps to create a custom component:

# 1. Go to File -> New -> Others

[Screenshot: New wizard]

# 2. Select Custom Component

[Screenshot: Custom Component selection]

Then click Next.

# 3. You will get 3 samples; select any one of them and give your project name.

Then click Next.

[Screenshot: d3Charts sample]

# 4. On the left side, you can see that a folder has been created with the same name as mentioned above.

[Screenshot: Project Explorer]

# 5. Right-click on build.js -> Build Component

# 6. After that, in the same folder, double-click "d3_Circle_sample.jrxml" and preview it.

You will get the output as:

[Screenshot: output]

Thanks,

Rupam Bhardwaj

 

Introduction to “Helical Insights”

 

Helical Insight is our own BI tool.

Helical Insight consists of 5 different layers:

[Diagram: Helical Insight layer architecture]

1) Templating Layer: In this layer the dashboard is defined. It is the end-user interaction layer and is related to the JavaScript framework layer.

2) JavaScript Framework Layer: All the interaction with the templating layer is done by this layer. It also communicates with the data layer and the visualization layer. The combination of the templating layer and the JavaScript framework layer forms the front end of a dashboard.

3) Data Layer: The main role of the data layer is to provide all data-related information required by the front end.

4) Visualization Layer: This layer generates a visualization and provides it to the front end.

5) Background Services: This layer manages the communication between the front end, the data layer and the visualization layer.

Helical Insight uses several different file extensions:

First, create your own folder, which can contain multiple dashboards.

1) .EFW file: This file is required for recognition of the dashboard; it contains metadata about the dashboard. In this file we can define the template file (.html/.js) we require in the <template> tag.

<?xml version="1.0" encoding="UTF-8" ?>
<efw>
    <title>HDI Demo On LocalHost</title>
    <author>Sayali</author>
    <description>Sample Dashboard</description>
    <icon>images/image.ico</icon>
    <template>test.html</template>
    <visible>true</visible>
    <style>clean</style>
</efw>

2) .EFWD file: This file contains the data-related definitions: the data connection (data source) and the related queries.

<EFWD>
    <DataSources>
        <Connection id="1" type="sql.jdbc">
            <Driver>com.mysql.jdbc.Driver</Driver>
            <Url>jdbc:mysql://192.168.2.9:3306/output_db_1216</Url>
            <User>devuser</User>
            <Pass>devuser</Pass>
        </Connection>
    </DataSources>

    <DataMaps>
        <DataMap id="1" connection="1" type="sql">
            <Name>Sql Query on SampleData - Jdbc</Name>
            <Query>
                <![CDATA[
                    select distinct sector as sector, sum(promo_value) as val
                    from Subbrand_Level
                    where promo_value > 0 and sector in (${sector})
                    group by sector;
                ]]>
            </Query>
            <Parameters>
                <Parameter name="sector" type="collection" default='""'/>
            </Parameters>
        </DataMap>
    </DataMaps>
</EFWD>

3) .EFWVF file: This file defines the visualization of the dashboard. It contains the charts, tables, etc. It is an .xml file which is used while writing JavaScript chart components.

<charts>
    <chart id="1">
        <prop>
            <name>Pie chart</name>
            <type>custom</type>
            <datasource>1</datasource>
            <script>
                <![CDATA[
                    console.log(data);
                ]]>
            </script>
        </prop>
    </chart>
</charts>

 

4) Template file (.html): It is used to define the components used in the dashboard. To set a variable, it requires some component configuration, Dashboard.setVariable(), and it calls Dashboard.init().

var component = {};

var components = [];

Dashboard.init(components);
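As a rough sketch of how a component might be wired to the EFWVF chart above (the property names here are assumptions and may differ between HDI versions):

// Hypothetical component configuration, assuming chart id 1 from the
// .EFWVF file above and a <div id="chart_1"> placeholder in this template.
var component = {
    name: "pieChart",                      // assumed component name
    type: "chart",                         // assumed component type
    vf: { id: "1", file: "sample.efwvf" }, // hypothetical reference to the EFWVF chart
    htmlElementId: "#chart_1",             // where the visualization renders
    executeAtStart: true
};

var components = [component];
Dashboard.init(components);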

 

 

 

Thanks,

Sayali Mahale

Creating HTML Table with JSON Data dynamically in HDI (Helical Dashboard Insights)

This blog will teach you how to create an HTML table with JSON data dynamically in HDI (Helical Dashboard Insights).

To create an HTML table with JSON data dynamically in HDI, we need two files:

  1. Demo.EFW
  2. template.html

In the "Demo.EFW" file, we have to call the template.html file.

Demo.EFW:

<?xml version="1.0" encoding="UTF-8" ?>
<efw>
    <title>HDI Demo</title>
    <author>Rupam</author>
    <description>Demo Dashboard</description>
    <icon>images/image.ico</icon>
    <template>template.html</template>
    <visible>true</visible>
    <style>clean</style>
</efw>

In the "template.html" file, we have to declare:

  1. the portion of the dashboard in which we want the table to be shown
  2. the JSON data
  3. the script that will automatically add the JSON data to the table

 

Template.html:

<div id="myTable"></div>

<style type="text/css">
    td, th {
        padding: 1px 4px;
    }
</style>

<script>
    var data = [{"id": "1", "Name": "Rupam", "address": "Hyderabad"}];

    var peopleTable = tabulate(data, ["id", "Name", "address"]);

    function tabulate(data, columns) {
        var table = d3.select("#myTable").append("table")
                .attr("style", "margin-left: 250px"),
            thead = table.append("thead"),
            tbody = table.append("tbody");

        // append the header row
        thead.append("tr")
            .selectAll("th")
            .data(columns)
            .enter()
            .append("th")
            .text(function(column) { return column; });

        // create a row for each object in the data
        var rows = tbody.selectAll("tr")
            .data(data)
            .enter()
            .append("tr");

        // create a cell in each row for each column
        var cells = rows.selectAll("td")
            .data(function(row) {
                return columns.map(function(column) {
                    return {column: column, value: row[column]};
                });
            })
            .enter()
            .append("td")
            .attr("style", "font-family: Courier")
            .html(function(d) { return d.value; });

        return table;
    }
</script>
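Note that this template assumes the d3 library is already available on the page; if your HDI install does not load it for you, include it in the template yourself, for example:

<script src="https://d3js.org/d3.v3.min.js"></script>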

 

Rupam Bhardwaj

Helical IT Solutions

Calling Static jrxml files in HDI

This blog will teach you how to call static jrxml files in HDI (Helical Dashboard Insights).

To call a static jrxml file from HDI, we need 3 files:

  1. “.EFW” extension file
  2. “.html” file
  3. Required “.jrxml” file

E.g.: here I have integrated a "SkillChart" in Jasper, and now I want to call its jrxml through HDI.

So now, I have:

  1. the "Skill Chart 0.1.jrxml" file,
  2. the "SkillChart.EFW" extension file, in which I have given its template file name,
  3. the template file, i.e. "skillChart.html"

The "SkillChart.EFW" code:

[Screenshot: SkillChart.EFW code]
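The screenshot carries the actual file; following the EFW format shown earlier, it would look roughly like this (title and description are assumptions):

<?xml version="1.0" encoding="UTF-8" ?>
<efw>
    <title>Skill Chart</title>
    <author>Rupam</author>
    <description>Static jrxml demo</description>
    <icon>images/image.ico</icon>
    <template>skillChart.html</template>
    <visible>true</visible>
    <style>clean</style>
</efw>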

The "skillChart.html" code:

[Screenshot: skillChart.html code]

Rupam Bhardwaj

Helical IT Solutions

INSTALL LIFERAY ON TOMCAT USING WAR

In my previous blog, I shared how to install Liferay on an existing Tomcat using the Liferay source code. You can find my previous blog here: http://helicaltech.com/install-liferay-existing-tomcat-7/

This blog will talk about how to install Liferay on an existing Tomcat using the WAR.

For this section, I will refer to your Tomcat installation folder as $TOMCAT_HOME. Before you begin, make sure that you have downloaded the latest Liferay WAR file. If you haven't, you can download it from http://www.liferay.com/downloads/liferay-portal/additional-files (find the "Download Wars" section, and the portal dependencies files in the "Dependencies" section).

After downloading, you will have a liferay-portal-6.1.x-<date>.war and a liferay-portal-dependencies-6.1.x-<date>.zip.

If you already have Liferay on your machine, you don't need to download the Liferay portal dependencies; you can use the same Liferay global library as your portal-dependencies files.

Follow these steps to install the Liferay WAR in Tomcat:

Step-1

Create folder $TOMCAT_HOME/lib/ext.

Step-2

Extract the Liferay dependencies file to $TOMCAT_HOME/lib/ext.

The best way to get the appropriate versions of these files is: if you have Liferay on your machine, copy all .jar files from $LIFERAY_HOME/lib/ext to $TOMCAT_HOME/lib/ext (if you go this route, ignore Step-3 and Step-4)

or

Download the Liferay source code and get them from there. Once you have downloaded the Liferay source, unzip it into a temporary folder and copy the following jars from $LIFERAY_SOURCE/lib/development to $TOMCAT_HOME/lib/ext:

activation.jar

jms.jar

jta.jar

jutf7.jar

mail.jar

persistence.jar

resin.jar

script-10.jar

 

Step-3

Make sure the JDBC driver for your database is accessible by Tomcat: copy the JDBC driver for your version of the database server to $TOMCAT_HOME/lib/ext.

 

Step-4

Liferay requires an additional jar to manage transactions. You may find this .jar here: http://www.oracle.com/technetwork/java/javaee/jta/index.html.

Step-5

Now, Edit $TOMCAT_HOME/conf/catalina.properties file. Change this line

common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,${catalina.home}/lib,${catalina.home}/lib/*.jar

to

common.loader=${catalina.base}/lib,${catalina.base}/lib/*.jar,${catalina.home}/lib,${catalina.home}/lib/*.jar,${catalina.home}/lib/ext,${catalina.home}/lib/ext/*.jar

Step-6

Create setenv.bat in the $TOMCAT_HOME/bin folder and add the lines below. The JRE check assumes the JRE bundled with the Liferay Tomcat bundle at %CATALINA_HOME%/jre6; point it at your own JRE location if yours differs:

if exist "%CATALINA_HOME%/jre6/win" (
    if not "%JAVA_HOME%" == "" (
        set JAVA_HOME=
    )

    set "JRE_HOME=%CATALINA_HOME%/jre6/win"
)

set "JAVA_OPTS=%JAVA_OPTS% -Dfile.encoding=UTF8 -Djava.net.preferIPv4Stack=true -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false -Duser.timezone=GMT -Xmx1024m -XX:MaxPermSize=256m"

 

Step-7

I am deploying Liferay in the $TOMCAT_HOME/webapps/ROOT folder, so we need to create the directory $TOMCAT_HOME/conf/Catalina/localhost and create a ROOT.xml file in it. Edit this file and populate it with the following contents to set up the portal web application:

<Context path="" crossContext="true">

 
    <!-- JAAS -->
 
    <!--<Realm
       className="org.apache.catalina.realm.JAASRealm"
       appName="PortalRealm"
       userClassNames="com.liferay.portal.kernel.security.jaas.PortalPrincipal"
       roleClassNames="com.liferay.portal.kernel.security.jaas.PortalRole"
    />-->
 
    <!--
    Uncomment the following to disable persistent sessions across reboots.
    -->
 
    <!--<Manager pathname="" />-->
 
    <!--
    Uncomment the following to not use sessions. See the property
    "session.disabled" in portal.properties.
    -->
 
    <!--<Manager className="com.liferay.support.tomcat.session.SessionLessManagerBase" />-->

</Context>

 

Step-8

Now, Deploy Liferay.

If you are manually installing Liferay on a clean Tomcat server, delete the contents of the $TOMCAT_HOME/webapps/ROOT directory; this undeploys the default Tomcat home page. Then extract the liferay-portal-6.1.x-<date>.war file to $TOMCAT_HOME/webapps/ROOT.
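On Linux, that amounts to something like the following (adjust the WAR path to wherever you downloaded it):

cd $TOMCAT_HOME/webapps/ROOT
rm -rf ./*
# a .war file is a zip archive, so unzip extracts it
unzip /path/to/liferay-portal-6.1.x-<date>.war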

Step-9

Start Tomcat by executing $TOMCAT_HOME/bin/startup.sh

Congratulations on successfully installing and deploying Liferay on Tomcat!

For any confusion, please get in touch with us at Helical IT Solutions

Creating Candlestick Chart in iReport / Jaspersoft / Jasper Report

This blog will talk about how to create candlestick chart in Jaspersoft.

PREREQUISITE SOFTWARE:

  • Jaspersoft (any version)
  • iReport tool (design)
  • database software (e.g. MySQL)
  • Java
  • Eclipse (if required)

 

WHAT IS A CANDLESTICK CHART?

The candlestick techniques we use today originated in the style of technical charting used by the Japanese for over 100 years before the West developed the bar and point-and-figure analysis systems. In the 1700s, a Japanese man named Homma, a trader in the futures market, discovered that, although there was a link between price and the supply and demand of rice, the markets were strongly influenced by the emotions of traders.

 

HOW TO READ A CANDLESTICK CHART?

In order to create a candlestick chart, you must have a data set that contains open, high, low and close values for each time period you want to display. The hollow or filled portion of the candlestick is called “the body” (also referred to as “the real body”). The long thin lines above and below the body represent the high/low range and are called “shadows” (also referred to as “wicks” and “tails”). The high is marked by the top of the upper shadow and the low by the bottom of the lower shadow.

[Image: candlestick chart anatomy]

 

FORMATION:

STEPS:
1. Create a report in iReport Designer: select a Blank A4 report.
Ex:
File > New > Blank A4
(here, e.g., DemoOfCandlestickchart -> NEXT -> FINISH)

2. Delete all bands except the summary band.

3. Go to the Palette window -> select Chart -> select MultiAxis chart -> select TimeSeries chart -> OK

4. Right-click on the MultiAxis chart -> select Add Existing Chart -> select Candlestick chart -> OK
(add two candlestick charts into the MultiAxis chart)

While writing the query, keep in mind that a candlestick chart requires 5 input values for each chart:

  • High value
  • Low value
  • Open value
  • Close value
  • Volume value

 

Example:

 

select
    avg(0) as avg,
    max(0) as max,
    min(0) as min,
    stddev(0) as std_dev,
    'dummy' as _label,
    '1-1-1111' as _date
from dual
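As a more realistic sketch, assuming a hypothetical stock_prices table with one row per trading period, the chart query would look something like:

-- hypothetical table and column names for illustration
select trade_date, open_price, high_price, low_price, close_price, volume
from stock_prices
order by trade_date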

 

· Add a new dataset -> write the query -> add it

· Go to Report Inspector -> Summary band -> select the first candlestick chart -> right-click on it -> select Chart Data -> go to the chart data

Note: Generally, when creating a single candlestick chart, the first candlestick's High and Close values are the same, and the second chart's Low and Close values are the same.

[Screenshot: candlestick chart data configuration]

 

5. Similarly to the instructions above, set the data for the other chart.
E.g.:

[Screenshot: second candlestick chart data configuration]

Pentaho 5.0.1 CE integration with MySQL 5.5 (Windows or Linux)


Parts

  1. Creating databases
  2. Modifying configuration files
  3. Stopping HSQL db start up

Creating databases

Command to execute the script files:

mysql> source D:\biserver-ce\data\mysql5\create_jcr_mysql.sql

Similarly, execute the remaining .sql files (i.e., create_quartz_mysql.sql and create_repository_mysql.sql).

  1. Check the databases created using the show databases command at the MySQL prompt.
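For completeness, the remaining script runs and the check look like this:

mysql> source D:\biserver-ce\data\mysql5\create_quartz_mysql.sql
mysql> source D:\biserver-ce\data\mysql5\create_repository_mysql.sql
mysql> show databases;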

 

Modifying configuration files

1. applicationContext-spring-security-hibernate.properties.

Edit the file pentaho-solutions\system\applicationContext-spring-security-hibernate.properties.

Original code

jdbc.driver=org.hsqldb.jdbcDriver

jdbc.url=jdbc:hsqldb:hsql://localhost:9001/hibernate

jdbc.username=hibuser

jdbc.password=password

hibernate.dialect=org.hibernate.dialect.HSQLDialect

Modified code

jdbc.driver=com.mysql.jdbc.Driver

jdbc.url=jdbc:mysql://localhost:3306/hibernate

jdbc.username=hibuser

jdbc.password=password

hibernate.dialect=org.hibernate.dialect.MySQLDialect

  2. hibernate-settings.xml

Edit the file pentaho-solutions\system\hibernate\hibernate-settings.xml.

Original code

<config-file>system/hibernate/hsql.hibernate.cfg.xml</config-file>

Modified code

<config-file>system/hibernate/mysql5.hibernate.cfg.xml</config-file>

 

3. mysql5.hibernate.cfg.xml

Location of the file: pentaho-solutions\system\hibernate\mysql5.hibernate.cfg.xml

You do not need to change any code in this file; just check that everything is correct:

<property name="connection.driver_class">com.mysql.jdbc.Driver</property>

<property name="connection.url">jdbc:mysql://localhost:3306/hibernate</property>

<property name="dialect">org.hibernate.dialect.MySQL5InnoDBDialect</property>

<property name="connection.username">hibuser</property>

<property name="connection.password">password</property>

4. quartz.properties

Location of the file: pentaho-solutions\system\quartz\quartz.properties

Original Code

org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.PostgreSQLDelegate

Modified Code

org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate

5. context.xml

Location of the file: tomcat\webapps\pentaho\META-INF\context.xml

Original Code

<Resource name="jdbc/Hibernate" auth="Container" type="javax.sql.DataSource"

factory="org.apache.commons.dbcp.BasicDataSourceFactory" maxActive="20" maxIdle="5"

maxWait="10000" username="hibuser" password="password"

driverClassName="org.hsqldb.jdbcDriver" url="jdbc:hsqldb:hsql://localhost/hibernate"

validationQuery="select count(*) from INFORMATION_SCHEMA.SYSTEM_SEQUENCES" />

 

<Resource name="jdbc/Quartz" auth="Container" type="javax.sql.DataSource"

factory="org.apache.commons.dbcp.BasicDataSourceFactory" maxActive="20" maxIdle="5"

maxWait="10000" username="pentaho_user" password="password"

driverClassName="org.hsqldb.jdbcDriver" url="jdbc:hsqldb:hsql://localhost/quartz"

validationQuery="select count(*) from INFORMATION_SCHEMA.SYSTEM_SEQUENCES"/>

Modified Code

<Resource name="jdbc/Hibernate" auth="Container" type="javax.sql.DataSource"

factory="org.apache.commons.dbcp.BasicDataSourceFactory" maxActive="20" maxIdle="5"

maxWait="10000" username="hibuser" password="password"

driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/hibernate"

validationQuery="select 1" />

 

<Resource name="jdbc/Quartz" auth="Container" type="javax.sql.DataSource"

factory="org.apache.commons.dbcp.BasicDataSourceFactory" maxActive="20" maxIdle="5"

maxWait="10000" username="pentaho_user" password="password"

driverClassName="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/quartz"

validationQuery="select 1"/>

Imp Note:

Delete the pentaho.xml file at the location below:

tomcat\conf\Catalina\localhost\pentaho.xml

Reason: on startup, Pentaho creates pentaho.xml as a copy of context.xml.

6. repository.xml

Location of the file: pentaho-solutions\system\jackrabbit\repository.xml.

"Comment this code" means wrapping it in an XML comment (<!-- everything here -->).

"Activate this code" means removing the comment markers.

i) FileSystem part

Comment this code

<FileSystem>

<param name="path" value="${rep.home}/repository"/>

</FileSystem>

Activate this code

<FileSystem>

<param name="driver" value="com.mysql.jdbc.Driver"/>

<param name="url" value="jdbc:mysql://localhost:3306/jackrabbit"/>

<param name="user" value="jcr_user"/>

<param name="password" value="password"/>

<param name="schema" value="mysql"/>

<param name="schemaObjectPrefix" value="fs_repos_"/>

</FileSystem>

ii) DataStore part

Comment this code

<DataStore/>

Activate this code

<DataStore class="org.apache.jackrabbit.core.data.db.DbDataStore">

   <param name="url" value="jdbc:mysql://localhost:3306/jackrabbit"/>

   <param name="user" value="jcr_user"/>

   <param name="password" value="password"/>

   <param name="databaseType" value="mysql"/>

   <param name="driver" value="com.mysql.jdbc.Driver"/>

   <param name="minRecordLength" value="1024"/>

   <param name="maxConnections" value="3"/>

   <param name="copyWhenReading" value="true"/>

   <param name="tablePrefix" value=""/>

   <param name="schemaObjectPrefix" value="ds_repos_"/>

 </DataStore>

iii) Security part in the FileSystem Workspace part

Comment this code

<FileSystem>

<param name="path" value="${wsp.home}"/>

</FileSystem>

Activate this code

<FileSystem>

<param name="driver" value="com.mysql.jdbc.Driver"/>

<param name="url" value="jdbc:mysql://localhost:3306/jackrabbit"/>

<param name="user" value="jcr_user"/>

<param name="password" value="password"/>

<param name="schema" value="mysql"/>

<param name="schemaObjectPrefix" value="fs_ws_"/>

</FileSystem>

iv) PersistenceManager part

Comment this code

<PersistenceManager>

<param name="url" value="jdbc:h2:${wsp.home}/db"/>

<param name="schemaObjectPrefix" value="${wsp.name}_"/>

</PersistenceManager>

Activate this code

<PersistenceManager>

<param name="url" value="jdbc:mysql://localhost:3306/jackrabbit"/>

<param name="user" value="jcr_user" />

<param name="password" value="password" />

<param name="schema" value="mysql"/>

<param name="schemaObjectPrefix" value="${wsp.name}_pm_ws_"/>

</PersistenceManager>

v) FileSystem Versioning part

Comment this code

<FileSystem>

<param name="path" value="${rep.home}/version" />

</FileSystem>

 

Activate this code

<FileSystem>

<param name="driver" value="com.mysql.jdbc.Driver"/>

<param name="url" value="jdbc:mysql://localhost:3306/jackrabbit"/>

<param name="user" value="jcr_user"/>

<param name="password" value="password"/>

<param name="schema" value="mysql"/>

<param name="schemaObjectPrefix" value="fs_ver_"/>

</FileSystem>

vi) PersistenceManager Versioning part

 

Comment this code:

 

<PersistenceManager>

<param name="url" value="jdbc:h2:${rep.home}/version/db"/>

<param name="schemaObjectPrefix" value="version_"/>

</PersistenceManager>

Activate this code:

<PersistenceManager>

<param name="url" value="jdbc:mysql://localhost:3306/jackrabbit"/>

<param name="user" value="jcr_user" />

<param name="password" value="password" />

<param name="schema" value="mysql"/>

<param name="schemaObjectPrefix" value="pm_ver_"/>

</PersistenceManager>

Stopping HSQL db start up

In the web.xml file:

Comment out or delete this code (commenting is preferable):

<!-- [BEGIN HSQLDB DATABASES] -->

<context-param>

<param-name>hsqldb-databases</param-name>

<param-value>sampledata@../../data/hsqldb/sampledata,hibernate@../../data/hsqldb/hibernate,quartz@../../data/hsqldb/quartz</param-value>

</context-param>

<!-- [END HSQLDB DATABASES] -->

 

Also comment out this code:

<!-- [BEGIN HSQLDB STARTER] -->

<listener>

<listener-class>org.pentaho.platform.web.http.context.HsqldbStartupListener</listener-class>

</listener>

<!-- [END HSQLDB STARTER] -->

You are now done integrating Pentaho 5.0.1 CE with MySQL 5.5.

Now log in to the Pentaho server.

URL: http://localhost:8080/pentaho

Username/Password: Admin/password

NOTE:

  • You will not find any samples working, because you have not installed the sample data.
  • The examples available in Pentaho are developed on the sample data, so you need to execute the .sql file for the sample data and create the new sample database connections.

Helical IT Solutions

Helical IT Solutions Launches “Helical Scrunch” – Press Release


Helical Brings an Innovative Product to Solve ETL Problems- “Helical Scrunch”

  • One of a kind, Helical Scrunch will reduce the time, effort & resource requirement by approx. 30-70%
  • High-end visualization & control of ETL jobs, status, errors, data flow, configurations, etc.

Reinforcing clients' faith in the company for bringing innovative products that make business more profitable and processes easier, Helical IT Solutions has launched its ambitious product HELICAL SCRUNCH into the market to solve the existing ETL issues faced by companies.

Nitin Sahu, Co-founder, Helical IT Solutions said, "We are really excited to launch Helical Scrunch, which will further lessen the complications which exist in ETL solutions, thus resulting in savings in time and resource requirements and the creation of much better, high-quality enterprise ETLs. We have been working on this product for the past 3 months. This will be a new revolution in the way ETLs are created and used".
He further explained, "ETL jobs are generally created for data migration, creation of data marts and data warehouses, data integration, data replication, data cleansing, etc. Though this work can be handled by database SQL, ETL tools are used because of their ease of use, built-in objects like aggregators, easy debugging, good auditing capabilities, etc. Yet ETLs have many restrictions, like very low visibility & control for an ETL admin, no reusability of ETL scripts, no standardization, poor error handling and logging, etc. Keeping all these restrictions of ETL tools in mind, Helical IT Solutions has come up with a custom framework (known as Helical Scrunch) that works on top of an ETL tool, thus removing all the restrictions of the same."
What is ETL?
ETL is the shortened form of EXTRACT, TRANSFORM, LOAD. In ETL, data gets extracted from a source, then this data gets changed (transformed) as per specific needs, and the transformed data gets loaded into another system, mostly known as the target system.
Problem Definition: Though there are many ETL tools available in the market, using them comes with its own inherent problems, some of which are highlighted below:
Best Practices: Each and every developer does ETL development according to his own logic and method of development; hence, more often than not, best practices are not followed. These best practices relate to error handling, naming conventions, QA, QC, etc.

Lack of standardization: Not following best practices on logging, error handling, naming conventions, documentation, etc. leads to a lack of standardization between the different ETL jobs developed by different ETL developers.

Lack of control for the end user: Generally, in any ETL, an end user or IT administrator is not able to see and monitor what exactly is happening. He has absolutely no control over the jobs, flags, status, etc.

Lack of reusability: Generally, an ETL job is designed to tackle a specific problem, not with reusability or the long-term picture in mind. So whenever there is any change, ETL job creation starts from scratch.

Lack of monitoring: An end IT user or business user has no way to monitor the progress of job execution: the real-time progress, logs, errors encountered if any, etc.

Lack of visualization: Lack of visualization in ETL tools results in an end user having no control over, or visibility into, the history of job executions: what jobs were executed, what jobs are executing, what error is being thrown, etc.

Helical Scrunch
Pluggable: Helical Scrunch has been designed in such a way that the different features are pluggable (like the logging module, visualization module, status and notification module, etc.). This gives the developer the freedom to select which modules are to be present.
Reusability: Helical Scrunch has been designed to make sure that the jobs created are reusable. Having standardized naming conventions, features, documents, etc. further goes a long way in making sure that the jobs are reusable.
Control: Helical Scrunch provides extensive control to an end user/IT admin via a web interface. The control is very exhaustive, and includes controlling and changing ETL configurations without opening the ETL job, monitoring data flows, controlling what to execute and what not to execute, etc.
Visualization: Helical Scrunch also provides, via the web interface, extensive report and dashboard capabilities. The reporting capabilities empower the user with a real-time view of the project status, errors encountered, data transfer, data flow monitoring, which jobs have executed, which jobs are executing, etc. There is also the ability to select a date range for viewing the different parameters. Visualization helps in monitoring and analysis.
Alerting and Notification: The alerting and notification feature of Helical Scrunch helps in creating different kind of alerts. These alerts and notifications are configurable. These alerts can be set on certain events or thresholds. Notifications could include email alerts etc.
Reusability: Helical Scrunch has been designed in such a way that the jobs created are following standard conventions, best practices, naming conventions etc and thus they can be easily edited and highly reusable.
Extensibility: The architecture has been designed in such a way that easily any new module or feature can be added, which makes the entire framework highly extensible. Thus it can accommodate any new requirement or business logic or feature.
Logging: Helical Scrunch is designed in such a way to make sure that absolutely all the ETL work is properly logged. The extensive logging mechanism helps in visualization, identifying, alerting and taking corrective action.
Advantages:
Time Saving: Generally, whenever any ETL job is designed, a lot of work is involved in designing reusable components like error logging, custom logging, defining naming conventions, designing proper architecture etc. By usage of Helical Scrunch, a company can reduce ETL development time by 30-70%, and with a much better quality.
Resource Saving: Usage of this can lead to a reduction in the number of resources required to execute the same project. ETL architects are not required at all, and ETL developers are not needed to create generic ETL jobs or implement ETL architectures. The only work involved is the business logic implementation. 30-70% fewer ETL resources might be required for the same work.
Quality of output: The ETL jobs created using this will be of a much higher quality.
Interactivity: A user interface for controlling of jobs, putting configurations, having a view of what is happening, reports, analysis etc gives a lot of interactivity and information to the end users.
Team Productivity: Usage of Helical Scrunch can lead to the ETL team becoming effective and productive right from day one. Helical Scrunch takes care of everything else, like nomenclature, standardization, creation of jobs, etc. Hence, the ETL team need only focus on the problem at hand, which is the actual implementation of the business logic.
About Helical IT Solutions

Helical IT Solutions is an open source DWBI company with expertise in providing simple, practical & affordable solutions suitable for business users, from the CEO, CXOs and line managers to every end user of the enterprise. With a quick turnaround time, the company can provide mobile BI solutions, on-premises or hosted SaaS solutions, hence catering to every type of need. Helical offers services on the entire BI stack: ETL, DW, data mining, analytics and BI solutions. They also provide integration of disparate data sources and offer powerful interactive tools like balanced scorecards, personalized dashboards, key performance indicators, automated alerts, graphical mining, cross-tab reporting and more! At present, Helical's impressive client list includes Unified Social Media, Vortecy Energy Consulting, Sage Human Capital – HR Business Intelligence, Predikto, hCentive – HealthCare insurance and many more. Check www.helicaltech.com for more details.