How to Use Google Maps in Jaspersoft Studio

The report in this example was developed in Jaspersoft Studio 6.3.1 Final and deployed on Jaspersoft Server 6.3.0 Enterprise Edition.

Step 1: Create a report in Jaspersoft Studio. In this example the report is called GoogleMapsTest.


1- Delete all bands except the Title and Summary bands. Use the query "SELECT 1 AS one" as the main dataset query.

Step 2: Add the map component to the report.


1- Add a text element to the Title band and insert any relevant text into it.

2- Drag the Map component from the Palette onto the Summary band of the report.

3- Add a dataset named "Markers" to the report, and use the following query in it.

Query:

SELECT 'NorthCharleston' AS CityName, '-80.02842' AS Longitude, '32.92896' AS Latitude
UNION
SELECT 'GREENVILLE' AS CityName, '-82.33626515' AS Longitude, '34.79360263' AS Latitude
UNION
SELECT 'ZIBO' AS CityName, '117.8351787' AS Longitude, '37.17124911' AS Latitude
UNION
SELECT 'Mumbai' AS CityName, '72.9227767' AS Longitude, '19.10662726' AS Latitude

Step 3: Add markers to the map.


1- Click on the map and select the Markers tab in the Properties view.

2- Attach the dataset created above to the markers via the Dataset tab.

Step 4: Add marker properties.


1- To add marker properties, click the Add button on the right side and insert the following details.

i.) Longitude & Latitude: To set the longitude and latitude, click the properties icon at the right-hand corner of the input text field. Check the "Use Expression" box and provide the latitude and longitude fields in their respective input boxes.
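For example, if the dataset fields are declared as String (the Markers query above selects the coordinates as quoted literals), the expressions might look like this (a sketch, not the only valid form):

new Float($F{Latitude})     // latitude expression
new Float($F{Longitude})    // longitude expression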


ii.) Adding a mouse-over (information that appears when hovering over the marker): insert the text into the "Title" box in the window.


iii.) Adding pop-up window content (content shown after clicking the marker): use the "Info Window" text boxes.


Step 5: Deploy the report to Jaspersoft Server.

Publish the report to the server.


And voila!

Thank You

Ravi Bhatta

 

 

Jaspersoft 6.2 Features and Upgrades

Dear Jaspersoft Users,

In this blog we will look at the new features in the latest version of Jaspersoft, 6.2.

a. Advanced multi-tenant server administration capabilities

In its latest release, Jaspersoft provides more options for administering a multi-tenant environment. An administrator can now export resources for a particular user, including data sources, domains, ad hoc views, reports, dashboards, scheduled report jobs, other resource files, sub-organizations, dependencies, permissions, attributes, and values.

The snapshot below shows the UI for this functionality.

(Screenshot: resource export options)

When resources exported for one user are imported for another, the second user may or may not have the same access rights and privileges. Appropriate warning messages appear accordingly, as shown below.

(Screenshot: import warnings)

 

b. Dashboard Improvements

Jaspersoft keeps making small incremental improvements to its dashboarding experience (even though there is still a long way to go). The latest release adds the option to place images on a dashboard; an image can also be fetched from a web link.

It is now also possible to export a dashboard to PDF, image, or ODT format; PhantomJS libraries are used to achieve this. Input filters can also appear as a pop-up on top of the dashboard.

(Screenshot: Jaspersoft dashboard)

Also, with the latest version it is possible to drill down within the same panel by clicking on any portion of the chart.

 

c. Advanced charting customization options

The Jaspersoft enterprise edition uses Highcharts, which exposes many chart customization options via its APIs. The latest release adds a UI to drive these options, covering customizations such as color, orientation, labels, reset option and position, title text, subtext, legend, panning, and hover behavior.

There is also a "more information" hyperlink that opens a Jaspersoft wiki page with further details on customization.

(Screenshot: chart customization options)

 

d. New charts

A new treemap chart has been added to Jaspersoft's ad hoc reporting feature. The snapshot below shows a treemap chart.

Jaspersoft Treemap Chart

e. Master scheduler view

The latest version adds a page where the logged-in user can see how many jobs have been scheduled and can selectively enable or disable them. A user cannot see the details of other users' scheduled email jobs.

f. Reporting Improvements

There have been improvements at the Jaspersoft Studio (report design) level as well. Jaspersoft Studio now offers a custom visualization component, so any JavaScript visualization (D3 charts, for example) can be embedded and used.

There is also GeoJSON support in TIBCO GeoAnalytics and geo maps.

Multiple elements can be added inside a frame, and resizing the frame now resizes the elements inside it. The same can be done for tables so that columns fit the table element. Previously resizing was a very tedious job, so this is a welcome improvement.

Jaspersoft 6.3 studio

 

Please get in touch with us at [email protected] for any Jaspersoft, Pentaho, ETL related requirements or queries.

Thank you

 

Creating a project in SpagoBI Studio

SpagoBI Studio is an Eclipse-based report designer built on BIRT. Creating a project in SpagoBI Studio is easy, and it is recommended because all your reports and documents will be organized in a better way.

Let’s start creating a project in SpagoBI Studio:

  1. Start SpagoBI Studio.
  2. Click File > New and select Project.
  3. Search for Report Project (under Business Intelligence and Reporting Tools).
  4. Assign the project name SpagoBI Tutorial.
  5. Now you should see SpagoBI Tutorial under Project Explorer.
  6. Click Yes to open the Report Design perspective.

Project Creation

Thanks & Regards
Venkatanaveen Dasari™

Map Reduce in MongoDB

Map-reduce is a data processing paradigm for condensing large volumes of data into useful aggregated results.

For map-reduce operations, MongoDB provides the mapReduce database command.

The mapReduce command allows you to run map-reduce aggregation operations over a collection. The mapReduce command has the following prototype form:


db.runCommand(
     {
               mapReduce: <collection>,
               map: <function>,
               reduce: <function>,
               finalize: <function>,
               out: <output>,
               query: <document>,
               sort: <document>,
               limit: <number>,
               scope: <document>,
               verbose: <boolean>
     }
)

 

Pass the name of the collection (the <collection> placeholder) to the mapReduce command; it is used as the source of documents for the map-reduce operation.

The command also accepts the following parameters:

mapReduce: The name of the collection on which you want to perform map-reduce. This collection is filtered using query before being processed by the map function.
map: A JavaScript function that associates or "maps" a value with a key and emits the key and value pair.
reduce: A JavaScript function that "reduces" to a single object all the values associated with a particular key.
out: Specifies where to output the result of the map-reduce operation. You can either output to a collection or return the result inline.
query: Optional. Specifies the selection criteria, using query operators, for determining the documents input to the map function.
sort: Optional. Sorts the input documents. This option is useful for optimization. For example, specify the sort key to be the same as the emit key so that there are fewer reduce operations. The sort key must be in an existing index for this collection.
limit: Optional. Specifies a maximum number of documents for the input into the map function.
finalize: Optional. Follows the reduce method and modifies the output.
scope: Optional. Specifies global variables that are accessible in the map, reduce, and finalize functions.
verbose: Optional. Specifies whether to include timing information in the result. verbose defaults to true.

 

The following is a prototype usage of the mapReduce command:

var mapFunction = function() { ... };
var reduceFunction = function(key, values) { ... };
db.runCommand(
{
      mapReduce: <input-collection>,
      map: mapFunction,
      reduce: reduceFunction,
      out: { merge: <output-collection> },
      query: <query>
}
)

Requirements for the map function:
The map function is responsible for transforming each input document into zero or more documents. It can access the variables defined in the scope parameter, and it has the following prototype.

function(){
     ...
     emit(key,value);
}

The map function has the following requirements:

  • In the map function, reference the current document as this within the function.
  • The map function should not access the database for any reason.
  • The map function should be pure, i.e. have no impact outside of the function (no side effects).
  • A single emit can only hold half of MongoDB's maximum BSON document size.
  • The map function may optionally call emit(key, value) any number of times to create an output document associating key with value.

The following map function calls emit(key, value) either 0 or 1 times, depending on the value of the input document's status field:

function(){   
   if(this.status=='A')       
      emit(this.cust_id,1);
}

The following map function may call emit(key, value) multiple times, depending on the number of elements in the input document's items field:

function() {
    this.items.forEach(function(item) { emit(item.sku, 1); });
}

Requirements for the Reduce Function
The reduce function has the following prototype:


     function(key,values){
         ...
         return result;
}

The reduce function exhibits the following behaviors:

  • The reduce function should not access the database, even to perform read operations.
  • The reduce function should not affect the outside system.
  • MongoDB will not call the reduce function for a key that has only a single value. The values argument is an array whose elements are the value objects that are "mapped" to the key.
  • MongoDB can invoke the reduce function more than once for the same key. In this case, the previous output from the reduce function for that key becomes one of the input values to the next invocation of the reduce function for that key.
  • The reduce function can access the variables defined in the scope parameter.
  • The inputs to reduce must not be larger than half of MongoDB's maximum BSON document size. This requirement may be violated when large documents are returned and then joined together in subsequent reduce steps.

Because it is possible to invoke the reduce function more than once for the same key, the following properties need to be true:

  • The type of the return object must be identical to the type of the value emitted by the map function.
  • The reduce function must be associative. The following statement must be true:
    reduce(key, [C, reduce(key, [A, B])]) == reduce(key, [C, A, B])
  • The reduce function must be idempotent. Ensure that the following statement is true:
    reduce(key, [reduce(key, valuesArray)]) == reduce(key, valuesArray)
  • The reduce function should be commutative: the order of the elements in valuesArray should not affect the output, so that the following statement is true:
    reduce(key, [A, B]) == reduce(key, [B, A])
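A simple sum is one reduce function that satisfies all three properties (Array.sum is a helper available in the mongo shell):

var reduceFunction = function(key, values) {
    // Summation is associative and commutative, and re-reducing a
    // previously reduced total leaves it unchanged (idempotent).
    return Array.sum(values);
};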

Requirements for the finalize Function

The finalize function has the following prototype:

 function(key,reducedValue){
          ...
          return modifiedObject;
}

The finalize function receives as its arguments a key value and the reducedValue from the reduce function. Be aware that:

  • The finalize function should not access the database for any reason.
  • The finalize function should be pure, or have no impact outside of the function (i.e. side effects.)
  • The finalize function can access the variables defined in the scope parameter.

out Options

You can specify the following options for the out parameter:

Output to a Collection

This option outputs to a new collection, and is not available on secondary members of replica sets.

out:<collectionName>
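Besides a plain collection name, out also accepts document forms. The common actions are summarized below (the collection name "results" is illustrative):

out: "results"              // replace the contents of the collection
out: { merge: "results" }   // overwrite matching keys, keep the rest
out: { reduce: "results" }  // re-run reduce when a key already exists
out: { inline: 1 }          // return the results in memory instead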

Map-Reduce Examples:

Consider two collections (tables) named:

  • Employee
  • Department

Now, to create the collections in MongoDB, use the queries below:

db.createCollection("Employee")
db.createCollection("Department")

To insert data into the Employee collection:

db.Employee.insert({ "name" : { "first" : "John", "last" : "Backus" }, "city" : "Hyd", "department" : 1 })

db.Employee.insert({ "name" : { "first" : "Merry", "last" : "Desuja" }, "city" : "Pune", "department" : 2 })

To insert data into the Department collection:

db.Department.insert({ "_id" : 1, "department" : "Manager" })

db.Department.insert({ "_id" : 2, "department" : "Accountant" })

Now the requirement is to display FirstName, LastName, and DepartmentName.

For this, we need to use map-reduce:

Create a map function for each collection.

//map function for Employee
var mapEmployee = function () {
var output= {departmentid : this.department,firstname:this.name.first, lastname:this.name.last , department:null}
     emit(this.department, output);               
};

//map function for Department
var mapDepartment = function () {
var output= {departmentid : this._id,firstname:null, lastname:null , department:this.department}
     emit(this._id, output);              
 };

Write the reduce logic to combine the required fields:



var reduceF = function(key, values) {

var outs = {firstname:null, lastname:null , department:null};

values.forEach(function(v){
      if(outs.firstname ==null){outs.firstname = v.firstname }                   
      if(outs.lastname ==null){outs.lastname = v.lastname    }
      if(outs.department ==null){ outs.department = v.department }                         
 });   
 return outs;
};

Store the results of both map-reduce runs in a common collection called emp_dept_test:


result = db.Employee.mapReduce(mapEmployee, reduceF, { out: { reduce: 'emp_dept_test' } })
result = db.Department.mapReduce(mapDepartment, reduceF, { out: { reduce: 'emp_dept_test' } })

Write the following command to get the combined result:

db.emp_dept_test.find()

The output of the query gives the combined result:


{
    "_id" : 1,
    "value" : {
        "firstname" : "John",
        "lastname" : "Backus",
        "department" : "Manager"
    }
}

{
    "_id" : 2,
    "value" : {
        "firstname" : "Merry",
        "lastname" : "Desuja",
        "department" : "Accountant"
    }
}

 

- By
Nitin Uttarwar
Helical IT Solutions

Sorting Data in MySQL Based on the Field Value of a Column

This blog will teach you how to sort data in MySQL based on the field value of a column.

We can sort data based on the field value of a column by using ORDER BY FIELD() in MySQL.

Database : world

Table name : country

If I execute the query below:

SELECT DISTINCT Continent FROM country

I get the output:

(Screenshot: the unsorted list of continents)

Let's say we want the data in the specific order "Asia, Africa, North America, South America, Antarctica, Oceania, Europe".

A simple ORDER BY cannot produce this: neither ascending nor descending order gives the required output.

So let's use the query below:

SELECT DISTINCT Continent FROM country
ORDER BY FIELD(Continent, 'Asia', 'Africa', 'North America',
               'South America', 'Antarctica', 'Oceania', 'Europe');

And we will get the output as required:

(Screenshot: continents in the requested order)
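This works because FIELD(str, str1, str2, ...) returns the 1-based position of str in the list (or 0 if it is absent), and ORDER BY then sorts on that position. A quick sanity check:

SELECT FIELD('North America', 'Asia', 'Africa', 'North America');
-- returns 3, so 'North America' sorts third in the custom order above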

Thanks,

Rupam Bhardwaj

 

 

Understanding and Usage of Mondrian

This blog talks about the different layers of the Mondrian engine, its components, and an introduction to ROLAP.

Mondrian Architecture

 

Layers of a Mondrian system

A Mondrian OLAP System consists of four layers; these are as follows:

  1. Presentation layer

The presentation layer determines what the end-user sees on his or her monitor, and how he or she can interact to ask new questions. There are many ways to present multidimensional datasets, including pivot tables, pie, line and bar charts, and advanced visualization tools such as clickable maps and dynamic graphics. These might be written in Swing or JSP, charts rendered in JPEG or GIF format, or transmitted to a remote application via XML. What all of these forms of presentation have in common is the multidimensional 'grammar' of dimensions, measures, and cells in which the presentation layer asks the question and the OLAP server returns the answer.

  2. Dimensional layer

The second layer is the dimensional layer. The dimensional layer parses, validates and executes MDX queries. A query is evaluated in multiple phases. The axes are computed first, then the values of the cells within the axes. For efficiency, the dimensional layer sends cell-requests to the aggregation layer in batches. A query transformer allows the application to manipulate existing queries, rather than building an MDX statement from scratch for each request. And metadata describes the dimensional model, and how it maps onto the relational model.
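For illustration, here is a minimal MDX query of the kind this layer evaluates; it assumes the FoodMart sample schema that ships with Mondrian:

SELECT {[Measures].[Unit Sales]} ON COLUMNS,
       {[Time].[1997].Children} ON ROWS
FROM [Sales]
WHERE ([Store].[USA].[CA])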

  3. Star layer

The third layer is the star layer, and is responsible for maintaining an aggregate cache. An aggregation is a set of measure values (‘cells’) in memory, qualified by a set of dimension column values. The dimensional layer sends requests for sets of cells. If the requested cells are not in the cache, or derivable by rolling up an aggregation in the cache, the aggregation manager sends a request to the storage layer.

  4. Storage layer

The storage layer is an RDBMS. It is responsible for providing aggregated cell data, and members from dimension tables.

These components can all exist on the same machine, or can be distributed between machines. Layers 2 and 3, which comprise the Mondrian server, must be on the same machine. The storage layer could be on another machine, accessed via remote JDBC connection. In a multi-user system, the presentation layer would exist on each end-user’s machine (except in the case of JSP pages generated on the server).

ROLAP Defined

Pentaho Analysis is built on the Mondrian relational online analytical processing (ROLAP) engine. ROLAP relies on a multidimensional data model that, when queried, returns a dataset that resembles a grid. The rows and columns that describe and bring meaning to the data in that grid are dimensions, and the hard numerical values in each cell are the measures or facts. In Pentaho Analyzer, dimensions are shown in yellow and measures are in blue.

ROLAP requires a properly prepared data source in the form of a star or snowflake schema that defines a logical multidimensional database and maps it to a physical database model. Once you have your initial data structure in place, you must design a descriptive layer for it in the form of a Mondrian schema, which consists of one or more cubes, hierarchies, and members. Only when you have a tested and optimized Mondrian schema is your data prepared on a basic level for end-user tools like Pentaho Analyzer and JPivot.
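As a rough sketch of what such a descriptive layer looks like (the table and column names here are hypothetical, not taken from this post), a minimal Mondrian schema maps one cube onto a fact table:

<Schema name="SalesSchema">
  <Cube name="Sales">
    <!-- the fact table at the center of the star -->
    <Table name="fact_sales"/>
    <Dimension name="Time" foreignKey="time_id">
      <Hierarchy hasAll="true" primaryKey="time_id">
        <!-- the dimension table and one of its levels -->
        <Table name="dim_time"/>
        <Level name="Year" column="year" type="Integer"/>
      </Hierarchy>
    </Dimension>
    <!-- a measure aggregated from a fact column -->
    <Measure name="Unit Sales" column="units" aggregator="sum"/>
  </Cube>
</Schema>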

 

Workflow Overview

To prepare data for use with the Pentaho Analysis (and Reporting, to a certain extent) client tools, you should follow this basic workflow:

Design a Star or Snowflake Schema

The entire process starts with a data warehouse. The end result should be a data model in the star or snowflake schema pattern. You don't have to worry too much about getting the model exactly right on your first try. Just cover all of your anticipated business needs; part of the process is coming back to the data warehouse design step and making changes to your initial data model after you've discovered what your operational needs are.

Populate the Star/Snowflake Schema

Once your data model is designed, the next step is to populate it with actual data, thereby creating your data warehouse. The best tool for this job is Pentaho Data Integration, an enterprise-grade extract, transform, and load (ETL) application.

Build a Mondrian Schema

Now that your initial data warehouse project is complete, you must build a Mondrian schema to organize and describe it in terms that Pentaho Analysis can understand. This is also accomplished through Pentaho Data Integration by using the Agile BI plugin. Just connect to your data warehouse and auto-populate your schema with the Modeler graphical interface. Alternatively, you can use Pentaho Schema Workbench to create an analysis schema through a manual process.

Initial Testing

At this point you should have a multi-dimensional data structure with an appropriate metadata layer. You can now start using Pentaho Analyzer and JPivot to drill down into your data and see if your first attempt at data modeling was successful. In all likelihood, it will need some adjustment, so take note of all of the schema limitations that you’re unhappy with during this initial testing phase.

Do not be concerned with performance issues at this time — just concentrate on the completeness and comprehensiveness of the data model.

Adjust and Repeat Until Satisfied

Use the notes you took during the testing phase to redesign your data warehouse and Mondrian schema appropriately. Adjust hierarchies and relational measure aggregation methods. Create virtual cubes for analyzing multiple fact tables by conforming dimensions. Re-test the new implementation and continue to refine the data model until it matches your business needs perfectly.

Test for Performance

Once you’re satisfied with the design and implementation of your data model, you should try to find performance problems and address them by tuning your data warehouse database, and by creating aggregation tables. The testing can only be reasonably done by hand, using Pentaho Analyzer and/or JPivot. Take note of all of the measures that take an unreasonably long time to calculate. Also, enable SQL logging and locate slow-performing queries, and build indexes for optimizing query performance.
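For example, if the SQL log showed slow joins against the hypothetical fact_sales/dim_time tables sketched earlier, the indexes might look like this (illustrative only; derive the real candidates from your own log):

CREATE INDEX idx_fact_sales_time ON fact_sales (time_id);
CREATE INDEX idx_dim_time_year ON dim_time (year);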

Create Aggregation Tables

Using your notes as a guide, create aggregation tables in Pentaho Aggregation Designer to store frequently computed analysis views. Re-test and create new aggregation tables as necessary.

If you are working with a relatively small data warehouse or a limited number of dimensions, you may not have a real need for aggregation tables. However, be aware of the possibility that performance issues may come up in the future.

Check in with your users occasionally to see if they have any particular concerns about the speed of their BI content.

Deploy to Production

Your data warehouse and Mondrian schema have been created, tested, and refined. You’re now ready to put it all into production. You may need to personally train or purchase Pentaho training for anyone in your organization who needs to create traditional reports, dashboards, or Analyzer reports with Pentaho’s client tools.

Need of BI framework / BI platform: Helical Dashboard Insights (HDI)

Business is changing very rapidly, and so are its requirements and needs. Organizations generally spend a lot on transactional systems to automate key business processes, and spend equally on business intelligence tools to extract useful information from the enormous amount of transactional data generated. Business intelligence tools are now a vital part of any organization. Some organizations purchase a BI tool from a vendor if it fulfils their requirements; others develop the tool in-house when their requirements are complex. Let's evaluate both approaches.

Time: A market BI tool is ready to use; with a home-grown tool the organization has to wait until it is built.
BI resources: With a market tool you need a specialist in that tool, or an outsourced vendor, to design and develop your BI needs; with an in-house tool you do not have to find a vendor or external resources.
Training: With a market tool you have to train your team to use the BI solution effectively; an in-house tool needs little or no training, since the requirements came from the same business users.
Support: With a market tool you rely completely on the BI vendor; this is not a problem for an in-house product.
BI flavors: Market tools come in flavors (basic, professional, enterprise editions) and it is hard to decide which suits your needs; building your own tool lets you add whatever flavor you want.
Software updates: With a market tool you depend on the vendor's releases, which are generic and must be evaluated before you upgrade; in-house updates are designed, developed, and tested in your own environment, and hence less complicated.
Licensing: Market tools are licensed in many confusing ways (per user, per server, per core, etc.); an in-house tool has no license.
Compatible environment: You must make your environment compatible with a purchased BI tool to get the best out of it; an in-house tool can be designed to support your environment.
Risk: Selecting a BI vendor carries significant risk and calls for vendor risk management; the risk with an in-house tool is very low.
Money: A market tool requires a hefty upfront payment; with an in-house tool you can increase the budget gradually.
BI features: Most BI features are present in market tools; in-house you develop all the features yourself.
Customized BI features: With a market tool you are restricted to all or nothing; in-house you can develop exactly the features you require.
Asset to your organization: A purchased tool is not an asset, since you have no rights over the software; code developed in-house is your own and acts as an asset for your organization.

 

Considering the list above, it seems best to develop your own BI software. But going that path obviously has its own disadvantages: time is the biggest reason companies avoid in-house development; many companies lack the IT/BI resources to design and architect BI software, even though they know what they want from it; and the third factor is money, especially in countries like India where companies want all the BI features but do not want to pay for all of them. This brings me to another question: "is there a middle way?"

We at Helical are continuously working on providing that middle way: Helical Dashboard Insights (HDI), a BI framework / BI platform that ships with some pre-built features by default, such as:

  1. BI repository
  2. security component
  3. reporting component
  4. scheduling component
  5. email component
  6. dashboard component
  7. input control components (should be able to add any type of input controls like dropdown, slider, calendar etc)
  8. front end component (for designing UI)
  9. data visualization component (should have the capability of customizing the visualization/charts and also developers should be able to design their own charts/visualization)
  10. data source component (should be able to communicate with any data sources e.g. relational, noSQL, flat files etc)
  11. exporting component (should be able to extract the report/dashboard in different formats like CSV, Excel, PDFs etc)
  12. Analytics component (Should be able to convert your data into cube for better analysis)
  13. Embedding functionality (should be able to embed the solution to any solutions be it a website, web application, desktop application)
  14. Mobile ready (should be able to see the solution on the mobile)
  15. Plug and play architecture (should be able to add or remove any of the components from the framework)
  16. Caching component
  17. Dashboard / report language to control your reports and dashboards
  18. Software Development Kit (SDK) for developer to create new components

With the above list of components, all you have to do is start altering the code: add more plugins to the BI framework/platform, remove the components you don't want, customize the framework completely, and arrive at your own BI tool.

Conclusion: Helical Dashboard Insights (HDI), a BI framework / BI platform, is a good starting point because it is future-ready, customizable, and very lightweight; you can build on top of HDI to create your own BI software in very little time. From day one you can use the platform for your basic BI needs, and then add features for your more complex needs.

Please contact us for questions, demo, feedback, and for other details.

Nitin Sahu | [email protected]

Exception in thread “main” java.lang.OutOfMemoryError: GC overhead limit exceeded

Different causes for OOM

Everyone in Java development faces java.lang.OutOfMemoryError now and then. An OutOfMemoryError in Java may be due to:

  • Improper memory management by the Java code
  • Insufficient hardware memory available to process the code
  • Programming mistakes
  • Memory leak

Types of OOM

Any of the above can result in an OutOfMemoryError. The following three types of OOM errors are common:

  1. java.lang.OutOfMemoryError: Java heap space
  2. java.lang.OutOfMemoryError: PermGen space
  3. java.lang.OutOfMemoryError: GC overhead limit exceeded

OOM in Talend

I observed the last of these errors during a flat-file data migration to a database using Talend Open Studio. File sizes ranged from 10 MB to 500 MB. Initially the job worked well, but it failed with this error once files grew to around 100 MB. The available hardware memory (RAM) was 16 GB, while the default memory allocated by Talend was -Xmx1024m (1 GB). Increasing Xmx to 10240m might have masked the issue, but "GC overhead limit exceeded" is specifically about garbage collection. While researching it I came across a very descriptive article on garbage collection at http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html#available_collectors

Resolution

The following workaround solved the problem in Talend without raising the memory limit:

Add a new JVM argument under Window > Preferences > Talend > Run/Debug: -XX:-UseGCOverheadLimit
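The flag can also be combined with a larger heap for an individual job via the Run view's Advanced settings (JVM arguments). The values below are illustrative, not prescriptive:

-Xms1024m
-Xmx4096m
-XX:-UseGCOverheadLimit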

Adding a new chart in Helical Dashboard Insight (HDI)

1) Adding a Pie Chart in HDI:-
HDI uses the D3 (Data-Driven Documents) library. D3 lets you bind arbitrary data to the Document Object Model (DOM) and then apply data-driven transformations to the document. For example, you can use D3 to generate an HTML table from an array of numbers.
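As a minimal illustration of that data binding (assuming the D3 v3 library used elsewhere in this post is loaded):

// Bind an array of numbers to <p> elements, creating one per datum.
d3.select("body").selectAll("p")
    .data([4, 8, 15, 16])
    .enter().append("p")
    .text(function(d) { return "Value: " + d; });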
To add a pie chart in HDI, follow these steps:
1) EFW file:- The EFW file contains the title, author, description, template name, and visibility of the dashboard.

2) EFWD file:- The EFWD file contains the data source connection properties, such as the connection id and connection type. It also contains the URL of the database connection and the username and password for the database.

The data source details used in our demo are shown below:


<DataSources>
        <Connection id="1" type="sql.jdbc">
           <Driver>com.mysql.jdbc.Driver</Driver>
           <Url>jdbc:mysql://192.168.2.9:3306/sampledata</Url>
            <User>devuser</User>
            <Pass>devuser</Pass>
        </Connection>
    </DataSources>

The DataMap contains the map id, connection, and connection type. The map id is the one referenced from the EFWVF. The query for the DataMap is specified in the Query tag and its parameters in the Parameters tag.


<DataMap id="2" connection="1" type="sql" >
       <Name>Query for pie chart component - Order Status</Name>
		<Query>
			<![CDATA[
					select STATUS, count(ORDERNUMBER) as totalorders 
					from ORDERS
					where STATUS in (${order_status})
					group by STATUS;  
			]]>
                </Query>

       <Parameters>
          <Parameter name="order_status" type="Collection" default="'Shipped'"/>
       </Parameters>
</DataMap>

3) EFWVF file:-
In the EFWVF file we first set the chart id and then the chart properties, such as the chart name, chart type, and chart data source. In the chart section we specify the chart script.

In the chart script we set the variables below to customize the pie chart.
Setting Up the Chart


var placeHolder = "#chart_1";
var chartHeader = "Orders By status";
var width = 300,
height = 300;

The query returns two columns, 'STATUS' and 'totalorders'.

Just as in JasperReports we would set a category expression, value expression, and tooltip for a pie chart, the variables below are set accordingly.


var category = "d.data.STATUS";
var values="d.totalorders";
var tooltip = "\"Total orders with status \"+ d.data.STATUS+\" are \"+d.data.totalorders";
var legendValues = "d.STATUS";

You may change the below script directly for further customization


function angle(d)
{
var a = (d.startAngle + d.endAngle) * 90 / Math.PI - 90;
return a > 90 ? a - 180 : a;
}
$(placeHolder).addClass('panel').addClass('panel-primary');
var radius = Math.min(width, height) / 2;
var color = d3.scale.ordinal().range(["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd", "#8c564b", "#e377c2", "#7f7f7f", "#bcbd22", "#17becf"]);
var arc = d3.svg.arc().outerRadius(radius - 10).innerRadius(0);
var pie = d3.layout.pie().sort(null).value(function(d) { return eval(values); });
var heading = d3.select(placeHolder).append("h3").attr("class", "panel-heading").style("margin", 0 ).style("clear", "none").text(chartHeader);
// Creating the SVG element
var svg = d3.select(placeHolder).append("div").attr("class", "panel-body").append("svg")
.attr("width", width)
.attr("height", height)
.append("g")
.attr("transform", "translate(" + width / 2 + "," + height / 2 + ")");

Drawing The Pie


var g = svg.selectAll(".arc")
.data(pie(data))
.enter().append("g")
.attr("class", "arc");
g.append("path")
.attr("d", arc)
.style("fill", function(d) { return color(eval(category)); });
g.append("text")
.attr("transform", function(d) {
d.outerRadius = radius -10;
d.innerRadius = (radius -10)/2;
return "translate(" + arc.centroid(d) + ")rotate(" + angle(d) + ")"; })
.attr("dy", ".35em")
.style("text-anchor", "middle")
.style("fill", "White")
.text(function(d) { return eval(category); });

Drawing the Tooltip and Legend


g.append("title")
.text(function(d){ return eval(tooltip)});
var legend = d3.select(placeHolder).select('.panel-body').append("svg")
.attr("class", "legend")
.attr("width", 75 * 2)
.attr("height", 75 * 2)
.selectAll("g")
.data(data)
.enter().append("g")
.attr("transform", function(d, i) { return "translate(50," + (i * 20) + ")"; }); legend.append("rect")
.attr("width", 18)
.attr("height", 18)
.style("fill", function(d){return color(eval(legendValues))});
legend.append("text")
.attr("x", 24)
.attr("y", 9)
.attr("dy", ".35em")
.text (function(d) { return eval(legendValues); });

4) HTML file:-
The HTML file name must match the one specified in the Template section of the EFW file. At the top of the HTML file we link the external resources, for example:


<script src="getExternalResource.html?path=UI_Testing/dimple.js"></script>

Below that, we create the divisions that control alignment and the position where the pie chart should be placed.


<div class="row">
    <div class="col-sm-5">
       <div id="supportChartObj4"></div>
    </div>
    <div class="col-sm-5">
       <div id="supportChartObj5"></div>
    </div>
</div>

Below the division section is the script section, where we specify the parameters and the chart.

Parameter Creation:-


var select =
{
name: "select",
type: "select",
options:{
multiple:true,
value : 'STATUS',
display : 'STATUS'
},
parameters: ["order_status"],
htmlElementId: "#supportChartObj1",
executeAtStart: true,
map:1,
iframe:true
};

Chart Creation:-


var chart = {
name: "chart",
type:"chart",
listeners:["order_status"],
requestParameters :{
order_status :"order_status"
},
vf : {
id: "1",
file: "Test.efwvf"
},
htmlElementId : "#supportChartObj4",
executeAtStart: true
};

Finally, all the parameters and charts specified in the HTML file are passed to dashboard.init.
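A hypothetical sketch of that final call (the exact signature depends on your HDI version; select and chart are the objects defined above):

// Hypothetical: register every component with the dashboard at startup.
dashboard.init([select, chart]);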

This way the pie chart is created.

2) Adding a Bar Chart in HDI:-

1) EFW file and 2) EFWD file:- These are identical to the pie chart example above; the same data source and DataMap (the ORDERS query grouped by STATUS, with the order_status parameter) are reused.

3) EFWVF file:-
As in the pie chart example, the EFWVF file sets the chart id and the chart properties (chart name, chart type, chart data source); the chart script this time draws a bar chart.

In the chart script we set the variables below to customize the bar chart.

Setting Up the Chart


var placeHolder = "#chart_1";
var chartHeader = "Orders By status";
var margin = {top: 20, right: 30, bottom: 30, left: 70},
width = 500 - margin.left - margin.right,
height = 400 - margin.top - margin.bottom;

The query returns two columns, 'STATUS' and 'totalorders'.

Just as in JasperReports we would set a category expression, value expression, and tooltip, the variables below are set accordingly.


var category = "d.STATUS";
var values="d.totalorders";
var tooltip = "\"Total orders with Status \"+ d.STATUS+\" are\"+d.totalorders";

You may change the below script directly for further customization


$(placeHolder).addClass('panel').addClass('panel-primary');
var x = d3.scale.ordinal().rangeRoundBands([0, width], .1);
var y = d3.scale.linear().range([height, 0]);
var xAxis = d3.svg.axis().scale(x).orient("bottom");
var yAxis = d3.svg.axis().scale(y).orient("left");
var heading = d3.select(placeHolder).append("h3").attr("class", "panel-heading").style("margin", 0).style("clear", "none").text(chartHeader);
var chart = d3.select(placeHolder).append("div").attr("class", "panel-body")
.append("svg")
.attr("width", width + margin.left + margin.right)
.attr("height", height + margin.top + margin.bottom)
.append("g")
.attr("transform", "translate(" + margin.left + "," + margin.top + ")");

Drawing The Bar Chart


x.domain(data.map(function(d) { return eval(category); }));
y.domain([0, d3.max(data, function(d) { return eval(values); })]);
        chart.append("g")
        .attr("class", "x axis")
	.attr("transform", "translate(0," + height + ")")
	.call(xAxis);

	chart.append("g")
	.attr("class", "y axis")
	.call(yAxis);

	chart.selectAll(".bar")
	.data(data)
	.enter().append("rect")
	.attr("class", "bar")
	.attr("x", function(d) { return x(eval(category)); })
	.attr("y", function(d) { return y(eval(values)); })
	.attr("height", function(d) { return height - y(eval(values)); })
	.attr("width", x.rangeBand());
	
        chart.selectAll(".bar")
	.append("title")
	.text(function(d){ return eval(tooltip)});

4) HTML file:-
The HTML file is set up exactly as in the pie chart example: the file name matches the Template section of the EFW file, external resources are linked at the top, and the same divisions position the chart.

The script section again defines the parameters and the chart. The parameter (the select component) is identical to the pie chart example; the chart definition below now points to vf id "2", the bar chart's EFWVF.

Chart Creation:-


var chart = {
name: "chart",
type:"chart",
listeners:["order_status"],
requestParameters :{
order_status :"order_status"
},
vf : {
id: "2",
file: "Test.efwvf"
},
htmlElementId : "#supportChartObj4",
executeAtStart: true
};

All the parameters and charts specified in the HTML file are again passed to dashboard.init.

This way the bar chart is created.


Steps to migrate Pentaho repositories to Oracle

Step 1:-

Run the scripts as a DB admin. The scripts are available at biserver-ce\data\oracle10g.

Modify the configuration files:-

  1. applicationContext-spring-security-hibernate.properties.

Location:-

pentaho-solutions\system\applicationContext-spring-security-hibernate.properties.

original code:-

jdbc.driver=org.hsqldb.jdbcDriver

jdbc.url=jdbc:hsqldb:hsql://localhost:9001/hibernate

jdbc.username=hibuser

jdbc.password=password

hibernate.dialect=org.hibernate.dialect.HSQLDialect

Modified code:-

jdbc.driver=oracle.jdbc.OracleDriver

jdbc.url=jdbc:oracle:thin:@localhost:1521/sysdba

jdbc.username=hibuser

jdbc.password=password

hibernate.dialect=org.hibernate.dialect.Oracle10gDialect

  2. hibernate-settings.xml

Location:- pentaho-solutions\system\hibernate\hibernate-settings.xml.

Original code

<config-file>system/hibernate/hsql.hibernate.cfg.xml</config-file>

Modified code:-

<config-file>system/hibernate/oracle10g.hibernate.cfg.xml</config-file>

oracle10g.hibernate.cfg.xml:-

Location:-

pentaho-solutions\system\hibernate\oracle10g.hibernate.cfg.xml

No code changes are needed in this file; just check that everything is correct:

 

<!-- Oracle 10g Configuration -->

<property name="connection.driver_class">oracle.jdbc.driver.OracleDriver</property>
<property name="connection.url">jdbc:oracle:thin:@localhost:1521/sysdba</property>
<property name="dialect">org.hibernate.dialect.Oracle10gDialect</property>
<property name="connection.username">hibuser</property>
<property name="connection.password">password</property>
<property name="connection.pool_size">10</property>
<property name="show_sql">false</property>
<property name="hibernate.jdbc.use_streams_for_binary">true</property>

<!-- replaces DefinitionVersionManager -->
<property name="hibernate.hbm2ddl.auto">update</property>

<!-- load resource from classpath -->
<mapping resource="hibernate/oracle10g.hbm.xml" />

 

  3. quartz.properties:-

Location:-

pentaho-solutions\system\quartz\quartz.properties

Original Code

org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.PostgreSQLDelegate

Modified Code:-

org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.oracle.OracleDelegate

4. context.xml:-

Location:-

tomcat\webapps\pentaho\META-INF\context.xml

Original Code

<Resource name="jdbc/Hibernate" auth="Container" type="javax.sql.DataSource"
factory="org.apache.commons.dbcp.BasicDataSourceFactory" maxActive="20" maxIdle="5"
maxWait="10000" username="hibuser" password="password"
driverClassName="org.hsqldb.jdbcDriver" url="jdbc:hsqldb:hsql://localhost/hibernate"
validationQuery="select count(*) from INFORMATION_SCHEMA.SYSTEM_SEQUENCES" />

<Resource name="jdbc/Quartz" auth="Container" type="javax.sql.DataSource"
factory="org.apache.commons.dbcp.BasicDataSourceFactory" maxActive="20" maxIdle="5"
maxWait="10000" username="pentaho_user" password="password"
driverClassName="org.hsqldb.jdbcDriver" url="jdbc:hsqldb:hsql://localhost/quartz"
validationQuery="select count(*) from INFORMATION_SCHEMA.SYSTEM_SEQUENCES"/>

 

Modified Code:-

<Resource validationQuery=”select 1 from dual”

url=” jdbc:oracle:thin:@localhost:1521/sysdba

driverClassName=”oracle.jdbc.OracleDriver” password=”password”

username=”hibuser” maxWait=”10000″ maxIdle=”5″ maxActive=”20″

factory=”org.apache.commons.dbcp.BasicDataSourceFactory”

type=”javax.sql.DataSource” auth=”Container” name=”jdbc/Hibernate”/>

                       

<Resource validationQuery=”select 1 from dual”

url=” jdbc:oracle:thin:@localhost:1521/sysdba

driverClassName=”oracle.jdbc.OracleDriver” password=”password”

username=”quartz” maxWait=”10000″ maxIdle=”5″ maxActive=”20″

factory=”org.apache.commons.dbcp.BasicDataSourceFactory”

type=”javax.sql.DataSource” auth=”Container” name=”jdbc/Quartz”/>

5. repository.xml

Location of the file: pentaho-solutions\system\jackrabbit\repository.xml.

Commenting code means wrapping it in <!-- ... -->; activating code means removing those comment markers.

i) FileSystem part

Comment this code:

<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
   <param name="path" value="${rep.home}/repository"/>
</FileSystem>

Activate this code:

<FileSystem class="org.apache.jackrabbit.core.fs.db.OracleFileSystem">
   <param name="url" value="jdbc:oracle:thin:@localhost:1521/sysdba"/>
   <param name="user" value="jcr_user"/>
   <param name="password" value="password"/>
   <param name="schemaObjectPrefix" value="fs_repos_"/>
   <param name="tablespace" value="pentaho_tablespace"/>
</FileSystem>

ii) DataStore part

Comment this code:

<DataStore class="org.apache.jackrabbit.core.data.FileDataStore"/>

Activate this code:

<DataStore class="org.apache.jackrabbit.core.data.db.DbDataStore">
   <param name="url" value="jdbc:oracle:thin:@localhost:1521/sysdba"/>
   <param name="driver" value="oracle.jdbc.OracleDriver"/>
   <param name="user" value="jcr_user"/>
   <param name="password" value="password"/>
   <param name="databaseType" value="oracle"/>
   <param name="minRecordLength" value="1024"/>
   <param name="maxConnections" value="3"/>
   <param name="copyWhenReading" value="true"/>
   <param name="tablePrefix" value=""/>
   <param name="schemaObjectPrefix" value="ds_repos_"/>
</DataStore>

iii) Security part, in the FileSystem Workspace section

Comment this code:

<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
   <param name="path" value="${wsp.home}"/>
</FileSystem>

Activate this code:

<FileSystem class="org.apache.jackrabbit.core.fs.db.OracleFileSystem">
   <param name="url" value="jdbc:oracle:thin:@localhost:1521/sysdba"/>
   <param name="user" value="jcr_user"/>
   <param name="password" value="password"/>
   <param name="schemaObjectPrefix" value="fs_ws_"/>
   <param name="tablespace" value="pentaho_tablespace"/>
</FileSystem>

iv) PersistenceManager part

Comment this code:

<PersistenceManager class="org.apache.jackrabbit.core.persistence.pool.H2PersistenceManager">
   <param name="url" value="jdbc:h2:${wsp.home}/db"/>
   <param name="schemaObjectPrefix" value="${wsp.name}_"/>
</PersistenceManager>

Activate this code:

<PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.OraclePersistenceManager">
   <param name="url" value="jdbc:oracle:thin:@localhost:1521/sysdba"/>
   <param name="driver" value="oracle.jdbc.OracleDriver"/>
   <param name="user" value="jcr_user"/>
   <param name="password" value="password"/>
   <param name="schema" value="oracle"/>
   <param name="schemaObjectPrefix" value="${wsp.name}_pm_ws_"/>
   <param name="tablespace" value="pentaho_tablespace"/>
</PersistenceManager>

v) FileSystem Versioning part

Comment this code:

<FileSystem class="org.apache.jackrabbit.core.fs.local.LocalFileSystem">
   <param name="path" value="${rep.home}/version" />
</FileSystem>

Activate this code:

<PersistenceManager class="org.apache.jackrabbit.core.persistence.bundle.OraclePersistenceManager">
   <param name="url" value="jdbc:oracle:thin:@localhost:1521/sysdba"/>
   <param name="driver" value="oracle.jdbc.OracleDriver"/>
   <param name="user" value="jcr_user"/>
   <param name="password" value="password"/>
   <param name="schema" value="oracle"/>
   <param name="schemaObjectPrefix" value="pm_ver_"/>
   <param name="tablespace" value="pentaho_tablespace"/>
</PersistenceManager>

 

 

Stopping HSQLDB at startup

In the web.xml file, comment out or delete this code (commenting is preferable):

<!-- [BEGIN HSQLDB DATABASES] -->

<context-param>
   <param-name>hsqldb-databases</param-name>
   <param-value>sampledata@../../data/hsqldb/sampledata,hibernate@../../data/hsqldb/hibernate,quartz@../../data/hsqldb/quartz</param-value>
</context-param>

<!-- [END HSQLDB DATABASES] -->

 

Also comment out this code:

<!-- [BEGIN HSQLDB STARTER] -->

<listener>
   <listener-class>org.pentaho.platform.web.http.context.HsqldbStartupListener</listener-class>
</listener>

<!-- [END HSQLDB STARTER] -->

You are now done integrating Pentaho 5.0.1 CE with Oracle.

Now log in to the Pentaho server.

URL:  http://localhost:8080/pentaho

Username/Password : Admin/password