Business Intelligence Project Lifecycle


Project Planning

The project lifecycle begins with project planning. Obviously, we must have a basic understanding of the business's requirements to make appropriate scope decisions. Project planning then turns to resource staffing, coupled with project task identification, assignment, duration, and sequencing. The resulting integrated project plan identifies all tasks associated with the Lifecycle and the responsible parties.

Project Management

Project management ensures that the Project Lifecycle activities remain on track and in sync. Project management activities focus on monitoring project status, issue tracking, and change control to preserve scope boundaries. Ongoing management also includes the development of a comprehensive communication plan that addresses both the business and information technology (IT) constituencies. Continuing communication is critical to managing expectations; managing expectations is critical to achieving your DW/BI goals.

Business Requirement Definition

Business users and their requirements impact nearly every decision made throughout the design and implementation of a DW/BI system. From our perspective, business requirements sit at the centre of the universe, because they are so critical to successful data warehousing. Our understanding of the requirements influences most Lifecycle choices, from establishing the right scope and modeling the right data to picking the right tools, applying the right transformation rules, building the right analyses, and providing the right deployment support.


Product selection and installation

DW/BI environments require the integration of numerous technologies. Considering the business requirements and the technical environment, specific architectural components such as the hardware platform, database management system, extract-transformation-load (ETL) tool, and data access query and reporting tool must be evaluated and selected. Once the products have been selected, they are installed and tested to ensure appropriate end-to-end integration within your DW/BI environment.

Data Modeling

The first parallel set of activities following the product selection is the data track, from the design of the target dimensional model, to the physical instantiation of the model, and finally the “heavy lifting” where source data is extracted, transformed, and loaded into the target models.

Dimensional Modeling

During the gathering of business requirements, the organization's data needs are determined and documented in a preliminary enterprise data warehouse bus matrix representing the organization's key business processes and their associated dimensionality. This matrix serves as a data architecture blueprint to ensure that the DW/BI data can be integrated and extended across the organization over time.

Designing dimensional models to support the business’s reporting and analytic needs requires a different approach than that used for transaction processing design. Following a more detailed data analysis of a single business process matrix row, modelers identify the fact table granularity, associated dimensions and attributes, and numeric facts.

Refer to this link for more information on the dimensional modeling process:
http://helicaltech.com/dimensional-modeling-process/

Physical Design

Physical database design focuses on defining the physical structures, including setting up the database environment and instituting appropriate security. Although the physical data model in the relational database will be virtually identical to the dimensional model, there are additional issues to address, such as preliminary performance tuning strategies, from indexing to partitioning and aggregations.

ETL Design & Development

Design and development of the extract, transformation, and load (ETL) system remains one of the most vexing challenges confronted by a DW/BI project team; even when all the other tasks have been well planned and executed, 70% of the risk and effort in the DW/BI project comes from this step.

BI Application Track

The next concurrent activity track focuses on the business intelligence (BI) applications.

BI application design

Immediately following the product selection, while some DW/BI team members are working on the dimensional models, others should be working with the business to identify the candidate BI applications, along with appropriate navigation interfaces to address the users’ needs and capabilities. For most business users, parameter driven BI applications are as ad hoc as they want or need. BI applications are the vehicle for delivering business value from the DW/BI solution, rather than just delivering the data.

BI application development

Following BI application specification, application development tasks include configuring the business metadata and tool infrastructure, and then constructing and validating the specified analytic and operational BI applications, along with the navigational portal.

Deployment

The two parallel tracks, focused on data and BI applications, converge at deployment. Extensive planning is required to ensure that these puzzle pieces are tested and fit together properly, in conjunction with the appropriate education and support infrastructure. It is critical that deployment be well orchestrated; deployment should be deferred if all the pieces, such as training, documentation, and validated data, are not ready for prime time release.

Maintenance

Once the DW/BI system is in production, technical operational tasks are necessary to keep the system performing optimally, including usage monitoring, performance tuning, index maintenance, and system backup. We must also continue to focus on the business users with ongoing support, education, and communication.

Growth

If we have done our job well, the DW/BI system is bound to expand and evolve to deliver more value to the business. Prioritization processes must be established to deal with ongoing business demand. We then go back to the beginning of the Lifecycle, leveraging and building upon the foundation that has already been established, while turning our attention to the new requirements.

– Archana Verma

Finding MySQL Database Size


Here are the SQL scripts to list the size of every database, and of each table within a given database.

-- Size of each database, in MB
SELECT table_schema "Data Base Name",
       SUM(data_length + index_length) / 1024 / 1024 "Data Base Size in MB"
FROM information_schema.TABLES
GROUP BY table_schema;

-- Size of each table in a given schema, in MB
SELECT TABLE_NAME, table_rows, data_length, index_length,
       ROUND(((data_length + index_length) / 1024 / 1024), 2) "Size in MB"
FROM information_schema.TABLES
WHERE table_schema = "schema_name";

The calculation works as follows: data_length is the size of the actual row data, index_length is the size of the table's indexes, and their sum (data_length + index_length) is the total table size.
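As a quick illustration of that arithmetic, here is a small Java sketch with hypothetical byte counts (the values are illustrative, not real MySQL output):

```java
public class TableSize {
    public static void main(String[] args) {
        // hypothetical values, as information_schema.TABLES might report them
        long dataLength = 5_242_880L;   // 5 MB of row data
        long indexLength = 1_048_576L;  // 1 MB of index data

        long totalBytes = dataLength + indexLength;      // total table size in bytes
        double sizeInMb = totalBytes / 1024.0 / 1024.0;  // convert bytes to MB

        System.out.println(sizeInMb);  // prints 6.0
    }
}
```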

Unit Test and JUnit Introduction

Testing is the process of checking whether an application works as per its requirements; to ensure this at the developer level, unit testing comes into the picture. Unit testing is the testing of a single entity (a class or a method). Unit testing is essential for every software company that wants to deliver a quality product to its customers.

A unit test is a piece of code written by a developer that executes a specific functionality in the code to be tested. The percentage of code which is tested by unit tests is typically called test coverage.

 

A unit test examines the behavior of a distinct unit of work. Within a Java application, the “distinct unit of work” is often (but not always) a single method. By contrast, integration tests and acceptance tests examine how various components interact.

 

A unit of work is a task that isn’t directly dependent on the completion of any other task.

 

JUnit is a framework that was started by Kent Beck and Erich Gamma.

Erich Gamma is one of the Gang of Four who gave us the now-classic Design Patterns book.

A framework is a semi-complete application. A framework provides a reusable, common structure to share among applications. Developers incorporate the framework into their own application and extend it to meet their specific needs.

 

JUnit (http://www.junit.org) is open source software, released under IBM’s Common Public License Version 1.0 and hosted on SourceForge. The Common Public License is business friendly: people can distribute JUnit with commercial products without a lot of red tape or restrictions.

A unit testing framework should follow several best practices:

– Each unit test should run independently of all other unit tests.

– The framework should detect and report errors test by test.

– It should be easy to define which unit tests will run.

Example
public class Calculator {
    public double add(double number1, double number2) {
        return number1 + number2;
    }
}

JUnit has many features that make it easy to write and run tests:
– Separate test class instances and class loaders for each unit test to avoid side effects
– JUnit annotations to provide resource initialization and reclamation methods: @Before, @BeforeClass, @After, @AfterClass
– A variety of assert methods to make it easy to check the results of your tests
– Integration with popular build tools like Ant and Maven, and popular IDEs like Eclipse, NetBeans, IntelliJ IDEA, and JBuilder

import static org.junit.Assert.*;
import org.junit.Test;

public class CalculatorTest {                     (1)
    @Test
    public void testAdd() {                       (2)
        Calculator calculator = new Calculator(); (3)
        double result = calculator.add(10, 50);   (4)
        assertEquals(60, result, 0);              (5)
    }
}

At 1, we start by defining a test class. The only restriction is that the class must be public; we can name it whatever we like.

At 2, we mark the method as a unit test method by adding the @Test annotation.
A best practice is to name test methods following the testXXX pattern. JUnit doesn’t
have method name restrictions. You can name your methods as you like; as long as
they have the @Test annotation, JUnit will execute them.

At 3, we start the test by creating an instance of the Calculator class (the “object
under test”), and at 4, as before, we execute the test by calling the method to test,
passing it two known values.
At 5, the JUnit framework begins to shine! To check the result of the test, we call
an assertEquals method, which we imported with a static import on the first line of
the class.

 

Assert Methods

– assertTrue(String message, Boolean test)

– assertFalse(String message, Boolean test)

– assertNull(String message, Object object)

– assertNotNull(String message, Object object)

– assertEquals(String message, Object expected, Object actual) (uses equals method)

– assertSame(String message, Object expected, Object actual) (uses == operator)

– assertNotSame(String message, Object expected, Object actual)
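The note that assertEquals uses the equals method while assertSame uses the == operator can be illustrated in plain Java, independent of JUnit:

```java
public class AssertSemantics {
    public static void main(String[] args) {
        String a = new String("test");
        String b = new String("test");

        // assertEquals would pass: equals() compares contents
        System.out.println(a.equals(b));  // prints true

        // assertSame would fail: == compares object identity,
        // and a and b are two distinct objects with equal contents
        System.out.println(a == b);       // prints false
    }
}
```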

 

 

File encryption/decryption using AES Algorithm

In this article the reader will learn about plain text file encryption using the AES (Advanced Encryption Standard) algorithm.

In modern-day cryptography the AES algorithm has gained much popularity because of its strength.

The message to be encrypted is called the plain text or clear text. The encrypted message, which is not in a human-readable format, is called the cipher text.

In cryptography systems there are two main types of encryption standards.

  1. Public Key Cryptography
  2. Private Key Cryptography

In public key cryptography, there are two keys for each party: a private key and a public key. The public key of the recipient is used by the sender for encryption, and the private key is used by the recipient for decryption. This kind of cryptography is also called asymmetric key cryptography.

In private key cryptography, there is only one shared key between the two parties for both encryption and decryption. In this mode of cryptography, maintaining the secrecy of the shared key is very important. This kind of cryptography is also called symmetric key cryptography.

In general, asymmetric key cryptography is used for short messages such as encrypting passwords, while symmetric key cryptography is used for encrypting relatively long messages. Cryptography systems like PGP (Pretty Good Privacy) use both kinds of techniques.

AES(Advanced Encryption Standard)

The key size of AES is in general 128 bits; 192-bit and 256-bit keys can also be used. The javax.crypto package of the Java platform has an implementation of the AES algorithm. For 256-bit key encryption/decryption, special policy files should be copied into the \jre\lib\security directory; these can be downloaded from Oracle's web site.

AES uses a block size of 16 bytes; that is, it encrypts data in 16-byte blocks, so the plain text must be a multiple of 16 bytes in size. A file, however, may contain data of any length. So, to encrypt data whose length is not a multiple of 16 bytes, we can use a padding cipher together with the CipherInputStream class, which encrypts a stream of plain text.

public class Encryptor {

	//The password, salt and initializationVector should be preserved for decryption
	private char[] password;
	private byte[] salt;
	private byte[] initializationVector;

	public Encryptor() {
		this.password = "Some string as password".toCharArray();
		try {
			this.salt = "Some string as salt".getBytes("UTF-8");
			//Note: initializationVector should be of length 16 bytes
			this.initializationVector = "Some string of length 16 bytes".getBytes("UTF-8");
		} catch (UnsupportedEncodingException e) {
			e.printStackTrace();
		}
	}

	public static void main(String[] args) {
		Encryptor aes = new Encryptor();
                //Pass the plaintext file and location of the encrypted file as command line arguments
		aes.encrypt(new File(args[0]), new File(args[1]));
	}

	public void encrypt(File plainTextFile, File encryptedLicenceFile) {

		if (encryptedLicenceFile.exists() == false) {
			try {
				encryptedLicenceFile.createNewFile();
			} catch (IOException e) {
				e.printStackTrace();
			}
		}

		
		// To read the file to be encrypted
		FileInputStream fileInputStream = null;

		// To write the encrypted file
		FileOutputStream fileOutputStream = null;

		// To read the file information and to encrypt
		CipherInputStream cipherInputStream = null;
		try {
			fileInputStream = new FileInputStream(plainTextFile);
			fileOutputStream = new FileOutputStream(encryptedLicenceFile);

			SecretKeyFactory keyFactory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
			KeySpec keySpec = new PBEKeySpec(password, salt, 65536, 128);
			
			SecretKey secretKey = keyFactory.generateSecret(keySpec);
			SecretKey secret = new SecretKeySpec(secretKey.getEncoded(), "AES");

			Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
			IvParameterSpec ivParameterSpec = new IvParameterSpec(initializationVector);
			cipher.init(Cipher.ENCRYPT_MODE, secret, ivParameterSpec);

			cipherInputStream = new CipherInputStream(fileInputStream, cipher);

			int read;
			while ((read = cipherInputStream.read()) != -1) {
				fileOutputStream.write((char) read);
			}

			fileOutputStream.flush();

		} catch (NoSuchAlgorithmException e) {
			e.printStackTrace();
		} catch (NoSuchPaddingException e) {
			e.printStackTrace();
		} catch (InvalidKeySpecException e) {
			e.printStackTrace();
		} catch (InvalidKeyException e) {
			e.printStackTrace();
		} catch (FileNotFoundException e) {
			e.printStackTrace();
		} catch (IOException e) {
			e.printStackTrace();
		} catch (InvalidAlgorithmParameterException e) {
			e.printStackTrace();
		} finally {
			try {
				if (fileInputStream != null) {
					fileInputStream.close();
				}
				if (cipherInputStream != null) {
					cipherInputStream.close();
				}
				if (fileOutputStream != null) {
					fileOutputStream.close();
				}
			} catch (IOException e) {
				e.printStackTrace();
			}
		}
	}
}

This code takes one plainTextFile and produces the encrypted file. For decrypting we need to follow similar steps; the following code demonstrates decryption. In the decrypt method we can use the following lines of code.

		// To read the encrypted file
		FileInputStream fileInputStream = null;

		// To write decrypted file
		FileOutputStream fileOutputStream = null;

		// To read encrypted file and decrypt it
		CipherOutputStream cipherOutputStream = null;
		ByteArrayOutputStream byteArrayOutputStream = null;
		try {
			fileInputStream = new FileInputStream(encryptedFile);
			byteArrayOutputStream = new ByteArrayOutputStream();

			SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
			KeySpec keySpec = new PBEKeySpec(password, salt, 65536, 128);

			SecretKey secretKey = factory.generateSecret(keySpec);
			SecretKey secret = new SecretKeySpec(secretKey.getEncoded(), "AES");

			Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");

			IvParameterSpec ivParameterSpec = new IvParameterSpec(initializationVector);

			cipher.init(Cipher.DECRYPT_MODE, secret, ivParameterSpec);

			cipherOutputStream = new CipherOutputStream(byteArrayOutputStream, cipher);
			byte[] buffer = new byte[4096];

			int read;
			while ((read = fileInputStream.read(buffer)) != -1) {
				cipherOutputStream.write(buffer, 0, read);
			}

			if (cipherOutputStream != null) {
                                //Unless you close here you won't get complete plain text
				cipherOutputStream.close();
			}

			//The byte[] now can be used to obtain the cleartext
			byte[] plainText = byteArrayOutputStream.toByteArray();

The password, salt, and initializationVector must be the same when decrypting the cipher text. The full decrypt method is left as an exercise for the reader.
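To see that the same password, salt, and IV round-trip correctly, here is a minimal in-memory sketch using the same PBKDF2 key derivation and AES/CBC/PKCS5Padding transformation as the code above (the hard-coded strings are illustrative placeholders, not production secrets):

```java
import javax.crypto.*;
import javax.crypto.spec.*;
import java.nio.charset.StandardCharsets;
import java.security.spec.KeySpec;
import java.util.Arrays;

public class AesRoundTrip {
    // derive a 128-bit AES key from a password and salt, as in Encryptor
    static SecretKey deriveKey(char[] password, byte[] salt) throws Exception {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        KeySpec spec = new PBEKeySpec(password, salt, 65536, 128);
        return new SecretKeySpec(factory.generateSecret(spec).getEncoded(), "AES");
    }

    public static void main(String[] args) throws Exception {
        char[] password = "Some string as password".toCharArray();
        byte[] salt = "Some string as salt".getBytes(StandardCharsets.UTF_8);
        byte[] iv = "0123456789ABCDEF".getBytes(StandardCharsets.UTF_8); // exactly 16 bytes

        SecretKey key = deriveKey(password, salt);
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");

        // PKCS5 padding lets us encrypt data of any length, not only multiples of 16
        byte[] plainText = "any length of data, not only multiples of 16".getBytes(StandardCharsets.UTF_8);

        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] cipherText = cipher.doFinal(plainText);

        // decrypting with the same password, salt, and IV recovers the plain text
        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] decrypted = cipher.doFinal(cipherText);

        System.out.println(Arrays.equals(plainText, decrypted));
    }
}
```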

Thank you for reading.

Thanks and Regards,

Rajasekhar
Helical IT Solutions

Create CrossTab in HDI (Helical Dashboard Insights)


This blog will teach you how to create Cross Tab in HDI (Helical Dashboard Insights).

CrossTab uses PivotTable.js, which is a JavaScript pivot table library with drag-and-drop functionality. Its basic function is to turn a data set into a summary table and then optionally add a true 2-d drag'n'drop UI to allow a user to manipulate this summary table.

The main function defined in 'PivotTable.js' is pivotUI(), implemented as a jQuery plugin.

pivotUI() takes 3 parameters:

1) Input: an array of objects, an array of arrays, a function, or a jQuery object referencing a table. Here we are using JSON data as the input.

2) Options: an object with the following keys:

  • aggregators: dictionary of generators for aggregation functions in dropdown, defaulting to common aggregators.
  • rows: array of strings, attribute names to prepopulate in row area, default is [ ]
  • cols: array of strings, attribute names to prepopulate in cols area, default is [ ]
  • vals: array of strings, attribute names to prepopulate in vals area, default is [ ] (gets passed to aggregator generating function)
  • aggregatorName: string, aggregator to prepopulate in dropdown (key to aggregators object), default is first key in aggregators .

3) Locale: is a string defaulting to en which controls the default locale for things like number formatting and error messages. Regardless of this setting, you can still override the default aggregators (which control number formatting) and error message strings. If this is set to something other than en you will have to load a locale-specific ‘language pack’ which creates this locale object before calling pivotUI().

For adding the Cross Tab in the HDI following steps should be followed:

1) EFW file: The EFW file contains the title, author, description, template name, and visibility of the dashboard.
2) HTML file: The HTML file name should be the same as that specified in the EFW file under the Template section.
At the top of the HTML file we specify links to external resources.
Here we are using 'PivotTable.js' as an external library, and it is specified in HDI as below:

<script src="getExternalResource.html?path=CrossTab/PivotTable.js"></script>

We have to declare one CrossTab component in the "CrossTab.html" file, and in this component we need to provide the link to the file where the CrossTab chart properties are defined.

3) EFWD file: The EFWD file contains the data source connection properties, such as the connection id and connection type. It also contains the URL of the database connection, and the user name and password for the database.

The data source details used in our demo are shown below:

<DataSources>
        <Connection id="1" type="sql.jdbc">
           <Driver>com.mysql.jdbc.Driver</Driver>
           <Url>jdbc:mysql://192.168.2.9:3306/sampledata</Url>
            <User>devuser</User>
            <Pass>devuser</Pass>
        </Connection>
    </DataSources>

The Data Map contains the map id, the connection, and the connection type. The map id is the same as that specified in the EFWVF. The query for the Data Map is specified in the Query tags and its parameters in the Parameters tags.

<DataMap id="2" connection="1" type="sql" >
       <Name>Query for CrossTab Component </Name>
		<Query>
			<![CDATA[
					SELECT distinct r.precinct,cd.candidate,sum(fact.votes) votes 
                                        FROM Voting_Summary as fact,region as r,contest as ct,candidate as cd 
                                        where fact.region_id=r.region_id and
                                        fact.contest_id=ct.contest_id and
                                        fact.candidate_id = cd.candidate_id 
                                        group by ct.contest,r.precinct,cd.candidate;

			]]>
              </Query>
</DataMap>

4) EFWVF file:
In the EFWVF file we first set the chart id, then the chart properties, such as the chart name, chart type, and chart data source.
In the script we set the CrossTab as below:

<script>
	<![CDATA[
		$(function(){
			$("#chart_1").pivotUI(data, {
				rows: ["precinct"],
				cols: ["candidate"],
				aggregatorName: "Integer Sum",
				vals: ["votes"]
			});
		});
	]]>
</script>

By following these steps we are able to see the CrossTab output.

– By
Nitin Uttarwar
Helical IT Solutions

Create Table Component (Dynatable) in HDI (Helical Dashboard Insights) :

This blog will teach you how to Create Table Component (Dynatable) in HDI (Helical Dashboard Insights).

Dynatable: a jQuery-based library with which we can easily search, sort, and filter HTML table data, where the data is in JSON format.

E.g.:

<table>
  <thead>
    <th>name</th><th>class</th><th>address</th>
  </thead>
  <tbody></tbody>
</table>

And you have a record that looks like this:

{
  "name": "abc",
  "class": "08-cs-08",
  "marks": 72
}

Dynatable will render the "name" attribute to the first column and the "class" attribute to the second; it will ignore the "marks" attribute since there's no corresponding column header, and it will leave the "address" column blank since there's no corresponding "address" attribute in the record.

By default, dynatable converts headings to JSON attribute names in camelCase. But we can change this by using the defaultColumnIdStyle property:

defaultColumnIdStyle: 'underscore'

or

defaultColumnIdStyle: 'lowercase', etc.
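As an illustration of that mapping (a sketch, not dynatable's actual implementation), a heading like "Account Num" becomes the attribute name "accountNum":

```java
public class CamelCase {
    // mimics, for illustration, how a column heading maps to a camelCase attribute name
    static String toCamelCase(String heading) {
        String[] words = heading.trim().toLowerCase().split("\\s+");
        StringBuilder sb = new StringBuilder(words[0]);
        for (int i = 1; i < words.length; i++) {
            // capitalize the first letter of every word after the first
            sb.append(Character.toUpperCase(words[i].charAt(0)))
              .append(words[i].substring(1));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toCamelCase("Account Num")); // prints accountNum
    }
}
```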

To provide the dynatable feature in HDI, we need to go through the following steps:

1. Database server: MySQL; database: foodmart; table: customer.
   Query: SELECT c.`customer_id` as customer_id, c.`account_num` as account_num, c.`fname` as fname, c.`lname` as lname, c.`address1` as address, c.`city` as city FROM foodmart.customer c;

   The above information, along with the user name and password, we need to provide in the file with the "demo.efwd" extension.

2. We have to declare one dynatable component in the "dynatable.html" file, and in this component we need to provide the link to the file where the dynatable properties are defined.

3. In "dynaTable.efwvf" we need to define the dynatable properties and its link to the database connection file, i.e. "demo.efwd".

4. One "dynaTable.EFW" file should also be present, which points to the template file, i.e. "dynatable.html".

5. Keep all the files in one folder, put jquery.dynatable.css and jquery.dynatable.js in the same folder, and give their path links in the html file.

6. By following these steps we are able to see the output.

Rupam Bhardwaj

Helical IT Solution

Learn Scheduling using Quartz

Why do we need scheduling?
Scheduling is needed if you want to automate the repetition of a task at specific intervals or on a particular date. You could of course manually watch the clock and execute your task, but that would be inefficient. If you want a particular task to run every 3 seconds, who wants to watch the computer every 3 seconds just to hit the Enter key? No one.
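For simple fixed-interval repetition the JDK itself offers ScheduledExecutorService; Quartz adds far richer scheduling on top of this idea (cron expressions, persistence, clustering). A minimal sketch of the every-3-seconds example:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class EveryThreeSeconds {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

        // run the task immediately, then again every 3 seconds
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("task ran"),
                0, 3, TimeUnit.SECONDS);

        Thread.sleep(10_000);  // let it fire a few times
        scheduler.shutdown();  // no one has to hit the Enter key
    }
}
```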

History of the Quartz Framework

Quartz was created by James House, who envisioned the first conceptual pieces of the framework in 1998. These included the concept of a queue of jobs and a pool of threads to process the jobs, although probably in an unrecognizable form by most of today’s Quartz users.

 

QUARTZ:-

Before you can use the scheduler, it needs to be instantiated. To do this, you use a SchedulerFactory. Some users of Quartz may keep an instance of a factory in a JNDI store, others may find it just as easy (or easier) to instantiate and use a factory instance directly (such as in the example below).

Once a scheduler is instantiated, it can be started, placed in stand-by mode, and shutdown. Note that once a scheduler is shutdown, it cannot be restarted without being re-instantiated. Triggers do not fire (jobs do not execute) until the scheduler has been started, nor while it is in the paused state.

Here's a quick snippet of code that instantiates and starts a scheduler, and schedules a job for execution:

// assumes static imports: org.quartz.JobBuilder.newJob,
// org.quartz.TriggerBuilder.newTrigger, org.quartz.SimpleScheduleBuilder.simpleSchedule
SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();
Scheduler sched = schedFact.getScheduler();
sched.start();

// define the job and tie it to our HelloJob class (a class implementing org.quartz.Job)
JobDetail job = newJob(HelloJob.class)
        .withIdentity("myJob", "group1")
        .build();

// Trigger the job to run now, and then every 40 seconds
Trigger trigger = newTrigger()
        .withIdentity("myTrigger", "group1")
        .startNow()
        .withSchedule(simpleSchedule()
                .withIntervalInSeconds(40)
                .repeatForever())
        .build();

// Tell Quartz to schedule the job using our trigger
sched.scheduleJob(job, trigger);

 

 

By Prashansa Kumari

Helical IT Solutions

Adding new chart in Helical Dashboard Insight (HDI)


1) Adding the Pie Chart in HDI:
HDI uses the D3 (Data-Driven Documents) library. D3 allows you to bind arbitrary data to a Document Object Model (DOM) and then apply data-driven transformations to the document. For example, you can use D3 to generate an HTML table from an array of numbers.
For adding the pie chart in HDI the following steps should be followed:
1) EFW file: The EFW file contains the title, author, description, template name, and visibility of the dashboard.

2) EFWD file: The EFWD file contains the data source connection properties, such as the connection id and connection type. It also contains the URL of the database connection, and the user name and password for the database.

The data source details used in our demo are shown below:


<DataSources>
        <Connection id="1" type="sql.jdbc">
           <Driver>com.mysql.jdbc.Driver</Driver>
           <Url>jdbc:mysql://192.168.2.9:3306/sampledata</Url>
            <User>devuser</User>
            <Pass>devuser</Pass>
        </Connection>
    </DataSources>

The Data Map contains the map id, the connection, and the connection type. The map id is the same as that specified in the EFWVF. The query for the Data Map is specified in the Query tags and its parameters in the Parameters tags.


<DataMap id="2" connection="1" type="sql" >
       <Name>Query for pie chart component - Order Status</Name>
		<Query>
			<![CDATA[
					select STATUS, count(ORDERNUMBER) as totalorders 
					from ORDERS
					where STATUS in (${order_status})
					group by STATUS;  
			]]>
                </Query>

       <Parameters>
          <Parameter name="order_status" type="Collection" default="'Shipped'"/>
       </Parameters>
</DataMap>

3) EFWVF file:
In the EFWVF file we first set the chart id, then the chart properties. For the pie chart we set properties such as the chart name, chart type, and chart data source. In the chart section we specify the chart script.

In Chart Script we set the below variables to customize the Pie chart
Setting Up the Chart


var placeHolder = "#chart_1";
var chartHeader = "Orders By status";
var width = 300,
height = 300;

The query returns 2 columns – ‘STATUS’ and ‘totalorders’.

Just as in JasperReports we set the category expression, value expression, and tooltip for the pie chart, the variables below are set accordingly.


var category = "d.data.STATUS";
var values="d.totalorders";
var tooltip = "\"Total orders with status \"+ d.data.STATUS+\" are \"+d.data.totalorders";
var legendValues = "d.STATUS";

You may change the below script directly for further customization


function angle(d)
{
var a = (d.startAngle + d.endAngle) * 90 / Math.PI - 90;
return a > 90 ? a - 180 : a;
}
$(placeHolder).addClass('panel').addClass('panel-primary');
var radius = Math.min(width, height) / 2;
var color = d3.scale.ordinal().range(["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd", "#8c564b", "#e377c2", "#7f7f7f", "#bcbd22", "#17becf"]);
var arc = d3.svg.arc().outerRadius(radius - 10).innerRadius(0);
var pie = d3.layout.pie().sort(null).value(function(d) { return eval(values); });
var heading = d3.select(placeHolder).append("h3").attr("class", "panel-heading").style("margin", 0 ).style("clear", "none").text(chartHeader);

Creating The SVG Element

var svg = d3.select(placeHolder).append("div").attr("class", "panel-body").append("svg")
.attr("width", width)
.attr("height", height)
.append("g")
.attr("transform", "translate(" + width / 2 + "," + height / 2 + ")");

Drawing The Pie


var g = svg.selectAll(".arc")
.data(pie(data))
.enter().append("g")
.attr("class", "arc");
g.append("path")
.attr("d", arc)
.style("fill", function(d) { return color(eval(category)); });
g.append("text")
.attr("transform", function(d) {
d.outerRadius = radius -10;
d.innerRadius = (radius -10)/2;
return "translate(" + arc.centroid(d) + ")rotate(" + angle(d) + ")"; })
.attr("dy", ".35em")
.style("text-anchor", "middle")
.style("fill", "White")
.text(function(d) { return eval(category); });

Drawing The Label and Title


g.append("title")
.text(function(d){ return eval(tooltip)});
var legend = d3.select(placeHolder).select('.panel-body').append("svg")
.attr("class", "legend")
.attr("width", 75 * 2)
.attr("height", 75 * 2)
.selectAll("g")
.data(data)
.enter().append("g")
.attr("transform", function(d, i) { return "translate(50," + (i * 20) + ")"; });
legend.append("rect")
.attr("width", 18)
.attr("height", 18)
.style("fill", function(d){return color(eval(legendValues))});
legend.append("text")
.attr("x", 24)
.attr("y", 9)
.attr("dy", ".35em")
.text (function(d) { return eval(legendValues); });

4) HTML file:
The HTML file name should be the same as that specified in the EFW file under the Template section.
At the top of the HTML file we specify links to external resources, such as:


<script src="getExternalResource.html?path=UI_Testing/dimple.js"></script>

Below it we create the divisions for proper alignment and the position where the pie chart should be placed.


<div class="row">
    <div class="col-sm-5">
       <div id="supportChartObj4"></div>
    </div>
    <div class="col-sm-5">
       <div id="supportChartObj5"></div>
    </div>
</div>

Below the division section we have the script section, where we specify the parameters and the chart.

Parameter Creation:-


var select = {
    name: "select",
    type: "select",
    options: {
        multiple: true,
        value: 'STATUS',
        display: 'STATUS'
    },
    parameters: ["order_status"],
    htmlElementId: "#supportChartObj1",
    executeAtStart: true,
    map: 1,
    iframe: true
};

Chart Creation:-


var chart = {
    name: "chart",
    type: "chart",
    listeners: ["order_status"],
    requestParameters: {
        order_status: "order_status"
    },
    vf: {
        id: "1",
        file: "Test.efwvf"
    },
    htmlElementId: "#supportChartObj4",
    executeAtStart: true
};

All the parameters and the chart specified in the HTML file are then passed to dashboard.init.

This way the pie chart is created.

pie

2) Adding the Bar Chart in HDI:-

1) EFW file:- The EFW file contains the title, author, description, template name, and visibility of the dashboard.

2) EFWD file:- The EFWD file contains the data source connection properties, such as the connection id and connection type. It also contains the URL of the connection to the database, and the user name and password for the database.

The data source details used in our demo are shown below:-


<DataSources>
    <Connection id="1" type="sql.jdbc">
        <Driver>com.mysql.jdbc.Driver</Driver>
        <Url>jdbc:mysql://192.168.2.9:3306/sampledata</Url>
        <User>devuser</User>
        <Pass>devuser</Pass>
    </Connection>
</DataSources>

The DataMap contains the map id, connection, and connection type. The map id is the same as that specified in the EFWVF file. The query for the DataMap and the parameters to be used are specified in their respective tags.
Ex:-


<DataMap id="2" connection="1" type="sql">
    <Name>Query for pie chart component - Order Status</Name>
    <Query>
        <![CDATA[
            select STATUS, count(ORDERNUMBER) as totalorders
            from ORDERS
            where STATUS in (${order_status})
            group by STATUS;
        ]]>
    </Query>
    <Parameters>
        <Parameter name="order_status" type="Collection" default="'Shipped'"/>
    </Parameters>
</DataMap>
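The ${order_status} placeholder is bound from the parameter at run time. As a rough illustration of the idea only (this is not the actual HDI substitution engine, and bindCollection is a hypothetical helper), a Collection value expands into a comma-separated, quoted list inside the IN clause:

```javascript
// Hypothetical sketch of how a Collection parameter might expand into
// the query's IN clause; the real substitution is done by the HDI engine.
function bindCollection(query, name, values) {
    // Quote each value and escape any embedded single quotes
    var list = values.map(function (v) {
        return "'" + String(v).replace(/'/g, "''") + "'";
    }).join(", ");
    return query.replace("${" + name + "}", list);
}

var sql = "select STATUS, count(ORDERNUMBER) as totalorders " +
          "from ORDERS where STATUS in (${order_status}) group by STATUS";

console.log(bindCollection(sql, "order_status", ["Shipped", "Cancelled"]));
// -> ... where STATUS in ('Shipped', 'Cancelled') ...
```

With the default value 'Shipped', the same mechanism yields a one-element list, which is why the report still renders before any selection is made.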

3) EFWVF file:-
In the EFWVF file we first set the chart id, and then the chart properties, such as the chart name, chart type, and chart data source. In the chart section we specify the chart script.

In the chart script we set the below variables to customize the bar chart.

Setting Up the Chart


var placeHolder = "#chart_1";
var chartHeader = "Orders By status";
var margin = {top: 20, right: 30, bottom: 30, left: 70},
width = 500 - margin.left - margin.right,
height = 400 - margin.top - margin.bottom;

The query returns 2 columns – ‘STATUS’ and ‘totalorders’.

Just as in JasperReports we set the category expression, value expression, and tooltip for a chart, the below variables are set accordingly.


var category = "d.STATUS";
var values="d.totalorders";
var tooltip = "\"Total orders with Status \"+ d.STATUS+\" are\"+d.totalorders";
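These strings are evaluated against each data row, where d is the current row object, so eval(category) resolves to the value of d.STATUS for that row. A standalone illustration of this pattern, using a made-up sample row shaped like the query result:

```javascript
// The accessor strings, as in the chart script (spacing added for readability)
var category = "d.STATUS";
var values = "d.totalorders";
var tooltip = "\"Total orders with Status \" + d.STATUS + \" are \" + d.totalorders";

// A sample row shaped like the query result (illustrative values only)
var d = { STATUS: "Shipped", totalorders: 303 };

console.log(eval(category)); // resolves d.STATUS  -> "Shipped"
console.log(eval(values));   // resolves d.totalorders -> 303
console.log(eval(tooltip));  // the assembled tooltip string
```

Inside the D3 callbacks, d is supplied per datum, which is why the same strings work for every slice or bar.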

You may change the below script directly for further customization.


$(placeHolder).addClass('panel').addClass('panel-primary');
var x = d3.scale.ordinal().rangeRoundBands([0, width], .1);
var y = d3.scale.linear().range([height, 0]);
var xAxis = d3.svg.axis().scale(x).orient("bottom");
var yAxis = d3.svg.axis().scale(y).orient("left");
var heading = d3.select(placeHolder).append("h3").attr("class", "panel-heading").style("margin", 0).style("clear", "none").text(chartHeader);
var chart = d3.select(placeHolder).append("div").attr("class", "panel-body")
.append("svg")
.attr("width", width + margin.left + margin.right)
.attr("height", height + margin.top + margin.bottom)
.append("g")
.attr("transform", "translate(" + margin.left + "," + margin.top + ")");

Drawing The Bar Chart


x.domain(data.map(function(d) { return eval(category); }));
y.domain([0, d3.max(data, function(d) { return eval(values); })]);

chart.append("g")
    .attr("class", "x axis")
    .attr("transform", "translate(0," + height + ")")
    .call(xAxis);

chart.append("g")
    .attr("class", "y axis")
    .call(yAxis);

chart.selectAll(".bar")
    .data(data)
    .enter().append("rect")
    .attr("class", "bar")
    .attr("x", function(d) { return x(eval(category)); })
    .attr("y", function(d) { return y(eval(values)); })
    .attr("height", function(d) { return height - y(eval(values)); })
    .attr("width", x.rangeBand());

chart.selectAll(".bar")
    .append("title")
    .text(function(d) { return eval(tooltip); });

4) HTML File:-
The HTML file name should be the same as that specified in the EFW file under the Template section.
At the top of the HTML file we specify links to the external resources, such as:-


<script src="getExternalResource.html?path=UI_Testing/dimple.js"></script>

Below it we create the divisions for proper alignment and the position where the bar chart should be placed.


<div class="row">
    <div class="col-sm-5">
       <div id="supportChartObj4"></div>
    </div>
    <div class="col-sm-5">
       <div id="supportChartObj5"></div>
    </div>
</div>

Below the division section we have the script section, where we specify the parameters and the chart.

Parameter Creation:-


var select = {
    name: "select",
    type: "select",
    options: {
        multiple: true,
        value: 'STATUS',
        display: 'STATUS'
    },
    parameters: ["order_status"],
    htmlElementId: "#supportChartObj1",
    executeAtStart: true,
    map: 1,
    iframe: true
};

Chart Creation:-


var chart = {
    name: "chart",
    type: "chart",
    listeners: ["order_status"],
    requestParameters: {
        order_status: "order_status"
    },
    vf: {
        id: "2",
        file: "Test.efwvf"
    },
    htmlElementId: "#supportChartObj4",
    executeAtStart: true
};

All the parameters and the chart specified in the HTML file are then passed to dashboard.init.

This way the bar chart is created.

Bar Chart

BI Solution For Property and Casualty Insurance Domain

A BI solution helps an insurance company get a holistic, 360-degree view of its customers and the issues they face. Helical has immense experience in developing BI solutions in the property and casualty insurance domain. Having built more than 50 reports, 20 dashboards, geographical dashboards, ad hoc reports, and OLAP cubes, these BI solutions can be used across many different departments like underwriting, claims, billing, reinsurance, etc.

Presented below are snapshots of some of the reports and dashboards developed by us for clients.

Loss Stratification

Description

This report includes stratification of total incurred with claim count within each stratum. This report includes the ability to define the bands ($0-$100K, $100K-$250K, etc.) to meet your needs.
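The banding logic behind such a report is simple bucketing. A rough, self-contained sketch of it (band boundaries and claim amounts are made-up examples, not client data):

```javascript
// Illustrative only: count claims and sum total incurred within each band.
function stratify(claims, bands) {
    return bands.map(function (b) {
        var inBand = claims.filter(function (c) {
            return c >= b.min && c < b.max;
        });
        return {
            band: b.label,
            claimCount: inBand.length,
            totalIncurred: inBand.reduce(function (s, c) { return s + c; }, 0)
        };
    });
}

var bands = [
    { label: "$0 - $1K",      min: 0,      max: 1000 },
    { label: "$1K - $100K",   min: 1000,   max: 100000 },
    { label: "$100K - $250K", min: 100000, max: 250000 }
];
var claims = [250, 900, 4000, 120000, 600];

console.log(stratify(claims, bands));
```

Because the bands are just data, redefining them to match a client's retentions and limits requires no change to the stratification logic itself.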

Purpose

This information is useful in reviewing severity patterns. For example, we see in this report that approximately 60-65% of the claim count is generated by losses valued at $1,000 or less. Loss stratification provides information that can be used in several ways. It is an important consideration in insurance program design (retentions, limits, etc.), and the report is also useful in setting severity reduction goals and monitoring performance.

loss stratification report

Loss Triangle

Description

A table of loss experience showing total losses for a certain period at various, regular valuation dates, reflecting the change in amounts as claims mature. Older periods in the table will have one more entry than the next youngest period, leading to the triangle shape of the data in the table. Loss triangles can be used to determine loss development for a given risk.
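Loss development can then be read off the triangle as age-to-age factors: each valuation divided by the previous valuation for the same accident period. A minimal sketch with made-up figures:

```javascript
// Illustrative only: compute age-to-age development factors from a loss
// triangle. Rows are accident periods (oldest first); each row has one
// fewer valuation than the row above it, giving the triangle its shape.
function ageToAgeFactors(triangle) {
    return triangle.map(function (row) {
        var factors = [];
        for (var i = 1; i < row.length; i++) {
            factors.push(row[i] / row[i - 1]);
        }
        return factors;
    });
}

var triangle = [
    [1000, 1500, 1650], // oldest accident year: three valuation dates
    [1200, 1680],       // next year: two valuation dates
    [900]               // youngest year: one valuation so far
];

console.log(ageToAgeFactors(triangle));
```

The youngest period yields no factors yet, which is exactly why triangles are revisited at each new valuation date.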

Loss Triangle Report

Large Loss Report

Description

This report provides detailed information on individual claims. It allows selection of claims over a certain threshold.

The amount of detail shown on this type of report can be completely customized to meet each client’s needs.

Purpose

By isolating claims over a chosen dollar amount, clients are able to focus upon claims making the greatest contribution to total incurred. Uses include claim reviews with administrators, insurance submissions, actuarial analysis and focusing operations staff on opportunities such as lost time reduction.

large loss report

Open Claims

Description

These reports can provide a simple listing with relatively little detail or much more complete information for specific claims.

Purpose

It is often valuable to focus specifically on open claims. We have worked with clients to develop a number of approaches and reports to address this need. Often the focus is to mitigate lost time and close claims. Most effective claim management programs include reinforcement of this objective to both operations staff and claim administrators.

Examples provided here show all open claims, as well as open claims with total incurred.

Open Claims Report

Average Claims by Severity

Description

The report displays business KPIs like Incurred Loss, Paid Amount, Claim Frequency, and Average Incurred Loss, summarized by Adjusting Office.

average claims severity by adjuster

Claims Cause

Description: This report helps identify specific causes of loss.

Claims Cause Report

Claims Registered

Description: The report provides detailed information on all the claims that are registered. An end user has the option to filter the data by date (month and year), company, line of business, dealer group, etc. Based on the selected input parameters, the report is populated.

Claims Registered Report

Claims Schedule – By Company

Description: The report provides detailed information regarding the Payments, Loss Adjusting Expense, Outstanding Loss Reserves, Incurred Loss, and Salvage Amount by claim.

Claims Schedule

Claims Recovery Summary

Description: The report displays a summary of the recovery amount by Country, Company, Agent, Line of Business, Claim, Coverage, and Date of Loss.

claims recovery summary

General Premium Summary Report

Description: The report displays a summary of premium amounts by Country, Company, LOB, Dealer Group, and Coverage.

general premium summary report

Inforce & Unearned Premium Summary Report

Description: The report displays a summary of new/renewal Inforce amounts and new/renewal Unearned premium amounts by Dealer Group and Coverage.

Premium summary report

Loss Paid and Reserve by LOB

Description: The report displays a summary of Loss Paid (MTD), Loss Paid (YTD), Loss Reserve, Expense Paid (MTD), Expense Paid (YTD), and Incurred Amount by LOB.

loss paid report

Policy Transaction Report

Description: The report gives details of policy transactions in a given period.

policy transaction report

Premium Bordereaux

premium bordereaux

Written & Earned Premium Comparison

Description: The report displays a comparison of Policy Count, Written Premium, and Earned Premium for the current period and the previous period.

premium comparison report

YTD Loss Listing

Description: The report displays a YTD summary of Loss Payments, Recoveries, and Incurred Loss by Company, Coverage, Dealer Group, Dealer, Claim Number, and Date of Loss.

YTD loss listing

Premium production by agent

Description: The report allows the user to view Written Premium, Average Written Premium, and Policy Count for new and renewal premiums by Agent.

premium production by agent

Profitability – Top 10 Agents

For the selected duration, this report shows the amount earned from different companies.

Profitability top 10 agents

Policy Register

Description: The report provides detailed information of all policies registered for the selected period.

policy register

Executive Dashboard

There could be various dashboards. For example, the below dashboard shows information like loss ratio, incurred loss amount and change percentage, written and earned premium, money earned from new business versus renewals, etc.

executive dashboard

Underwriting Dashboard

The below dashboard, for the selected product, shows things like written versus earned premium and revenue growth over a period of time from new business / renewal / retention. It also shows the loss ratio.

underwriting dashboard

What – If Analysis

A what-if analysis in which there is an input parameter to select the product. Once the product is selected, the end user can select different parameters like policy premium, policies per month, renewal retention, claim severity, etc., and accordingly see how they affect the loss ratio, earned premium, written premium, etc.
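The arithmetic behind such a what-if can be sketched roughly as follows; the formulas are simplified and every figure is made up, purely to illustrate how changing one input moves the loss ratio:

```javascript
// Illustrative what-if sketch: loss ratio = incurred losses / earned premium.
// Formulas are simplified and all inputs are made-up example figures.
function whatIf(params) {
    var earnedPremium = params.policiesPerMonth * 12 * params.policyPremium;
    var incurredLosses = params.expectedClaims * params.claimSeverity;
    return {
        earnedPremium: earnedPremium,
        incurredLosses: incurredLosses,
        lossRatio: incurredLosses / earnedPremium
    };
}

var base = whatIf({ policiesPerMonth: 100, policyPremium: 500,
                    expectedClaims: 150, claimSeverity: 2000 });
console.log(base.lossRatio); // 300000 / 600000 = 0.5

// Raising claim severity raises the loss ratio proportionally
var stressed = whatIf({ policiesPerMonth: 100, policyPremium: 500,
                        expectedClaims: 150, claimSeverity: 2400 });
console.log(stressed.lossRatio); // 360000 / 600000 = 0.6
```

In the actual dashboard these inputs come from the selection widgets, and the recomputed KPIs are rendered in place of the console output.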

 

For a demo of the same, please get in touch at [email protected]

Nikhilesh Tiwari

Helical IT Solutions