Business Intelligence Project Lifecycle

[Figure: BI & DW Project Lifecycle]

Project Planning

The project lifecycle begins with project planning. Obviously, we must have a basic understanding of the business's requirements to make appropriate scope decisions. Project planning then turns to resource staffing, coupled with project task identification, assignment, duration estimation, and sequencing. The resulting integrated project plan identifies all tasks associated with the Lifecycle and the parties responsible for them.

Project Management

Project management ensures that the Project Lifecycle activities remain on track and in sync. Project management activities focus on monitoring project status, issue tracking, and change control to preserve scope boundaries. Ongoing management also includes the development of a comprehensive communication plan that addresses both the business and information technology (IT) constituencies. Continuing communication is critical to managing expectations; managing expectations is critical to achieving your DW/BI goals.

Business Requirement Definition

Business users and their requirements impact nearly every decision made throughout the design and implementation of a DW/BI system. From our perspective, business requirements sit at the center of the universe, because they are so critical to successful data warehousing. Our understanding of the requirements influences most Lifecycle choices, from establishing the right scope, modeling the right data, picking the right tools, applying the right transformation rules, and building the right analyses, to providing the right deployment support.

[Figure: Business Requirement Definition]

Product Selection and Installation

DW/BI environments require the integration of numerous technologies. Considering the business requirements and technical environment, specific architectural components such as the hardware platform, database management system, extract-transformation-load (ETL) tool, and data access query and reporting tools must be evaluated and selected. Once the products have been selected, they are installed and tested to ensure appropriate end-to-end integration within your DW/BI environment.

Data Modeling

The first parallel set of activities following the product selection is the data track, from the design of the target dimensional model, to the physical instantiation of the model, and finally the “heavy lifting” where source data is extracted, transformed, and loaded into the target models.

Dimensional Modeling

During the gathering of business requirements, the organization's data needs are determined and documented in a preliminary enterprise data warehouse bus matrix representing the organization's key business processes and their associated dimensionality. This matrix serves as a data architecture blueprint to ensure that the DW/BI data can be integrated and extended across the organization over time.

Designing dimensional models to support the business’s reporting and analytic needs requires a different approach than that used for transaction processing design. Following a more detailed data analysis of a single business process matrix row, modelers identify the fact table granularity, associated dimensions and attributes, and numeric facts.

Refer to this link for more information on the dimensional modeling process:
http://helicaltech.com/dimensional-modeling-process/

Physical Design

Physical database design focuses on defining the physical structures, including setting up the database environment and instituting appropriate security. Although the physical data model in the relational database will be virtually identical to the dimensional model, there are additional issues to address, such as preliminary performance tuning strategies, from indexing to partitioning and aggregations.
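As a brief illustration, here is a minimal SQL sketch of such tuning against a hypothetical sales star schema (all table and column names are illustrative assumptions, not from the post):

-- Index the foreign key columns used to join the fact table to its dimensions
CREATE INDEX idx_fact_sales_date ON fact_sales (date_key);
CREATE INDEX idx_fact_sales_product ON fact_sales (product_key);

-- A pre-computed aggregate table for frequently asked monthly questions
CREATE TABLE agg_sales_monthly AS
SELECT d.month, f.product_key, SUM(f.sales_amount) AS sales_amount
FROM fact_sales f
JOIN dim_date d ON f.date_key = d.date_key
GROUP BY d.month, f.product_key;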

ETL Design & Development

Design and development of the extract, transformation, and load (ETL) system remains one of the most vexing challenges confronting a DW/BI project team; even when all the other tasks have been well planned and executed, 70% of the risk and effort in the DW/BI project comes from this step.

BI Application Track

The next concurrent activity track focuses on the business intelligence (BI) applications.

BI Application Design

Immediately following the product selection, while some DW/BI team members are working on the dimensional models, others should be working with the business to identify the candidate BI applications, along with appropriate navigation interfaces to address the users' needs and capabilities. For most business users, parameter-driven BI applications are as ad hoc as they want or need. BI applications are the vehicle for delivering business value from the DW/BI solution, rather than just delivering the data.

BI Application Development

Following BI application specification, application development tasks include configuring the business metadata and tool infrastructure, and then constructing and validating the specified analytic and operational BI applications, along with the navigational portal.

Deployment

The two parallel tracks, focused on data and BI applications, converge at deployment. Extensive planning is required to ensure that these puzzle pieces are tested and fit together properly, in conjunction with the appropriate education and support infrastructure. It is critical that deployment be well orchestrated; deployment should be deferred if all the pieces, such as training, documentation, and validated data, are not ready for prime time release.

Maintenance

Once the DW/BI system is in production, technical operational tasks are necessary to keep the system performing optimally, including usage monitoring, performance tuning, index maintenance, and system backup. We must also continue to focus on the business users with ongoing support, education, and communication.

Growth

If we have done our job well, the DW/BI system is bound to expand and evolve to deliver more value to the business. Prioritization processes must be established to deal with the ongoing business demand. We then go back to the beginning of the Lifecycle, leveraging and building upon the foundation that has already been established, while turning our attention to the new requirements.

– Archana Verma

Dimensional Modeling Process

Dimensional Modeling

Dimensional modeling is a technique used in data warehouse design for conceptualizing and visualizing data models as a set of measures described by common aspects of the business. It is especially useful for summarizing and rearranging data and presenting views of it to support data analysis.

Dimensional Modeling Vocabulary

Fact

A fact table is the primary table in a dimensional model where the numerical performance measurements of the business are stored. The term fact represents a business measure that can be used in analyzing the business or business processes. The most useful facts are numeric and additive.

For example: Sales, Quantity ordered.

Dimension

Dimension tables are integral companions to a fact table. The dimension tables contain the textual descriptors of the business. Each dimension is defined by its single primary key, which serves as the basis for referential integrity with any given fact table to which it is joined.

Dimensions are the parameters over which we want to perform Online Analytical Processing (OLAP). For example, in a database for analyzing all sales of products, common dimensions could be:

  • Time
  • Location/region
  • Customers
  • Salesperson

 

Dimensional Modeling Process

Identify business process

In dimensional modeling, the best unit of analysis is the business process in which the organization has the most interest. Select the business process for which the dimensional model will be designed. Based on the selection, the requirements for the business process are gathered.

At this phase we focus on business processes, rather than on business departments, so that we can deliver consistent information more economically throughout the organization. If we establish departmentally bound dimensional models, we’ll inevitably duplicate data with different labels and terminology.

For example, we’d build a single dimensional model to handle orders data rather than building separate models for the sales and marketing departments, which both want to access orders data.

Define grain

The granularity of a fact is the level of detail at which it is recorded. If data is to be analyzed effectively, it must all be at the same level of granularity. As a general rule, data should be kept at the most detailed (most granular) level.

For example, grain definitions can include the following items:

  • A line item on a grocery receipt
  • A monthly snapshot of a bank account statement

Identify dimensions & facts

Our next step in creating a model is to identify the measures and dimensions within our requirements.

A user typically needs to evaluate, or analyze, some aspect of the organization's business. The requirements that have been collected must represent the two key elements of this analysis: what is being analyzed, and the evaluation criteria for what is being analyzed. We refer to the evaluation criteria as measures and to what is being analyzed as dimensions.

 

If we are clear about the grain, then the dimensions typically can be identified quite easily. With the choice of each dimension, we will list all the discrete, text-like attributes that will flesh out each dimension table. Examples of common dimensions include date, product, customer, transaction type, and status.

 

Facts are determined by answering the question, “What are we measuring?” Business users are keenly interested in analyzing these business process performance measures.

 

Creating a dimension table

Now that we have identified the dimensions, we next need to identify the members, hierarchies, and properties (attributes) of each dimension that we need to store in our table.

Dimension Members:

A dimension contains many dimension members. A dimension member is a distinct name or identifier used to determine a data item's position. For example, all months, quarters, and years make up a time dimension, and all cities, regions, and countries make up a geography dimension.

Dimension Hierarchies:

We can arrange the members of a dimension into one or more hierarchies, each of which can have multiple levels. Not every member of a dimension necessarily belongs to a single hierarchy structure.
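Putting members and hierarchies together, here is a minimal SQL sketch of a geography dimension (the names are illustrative assumptions); each row is a dimension member, and the city, region, and country columns encode one hierarchy from its lowest level to its highest:

CREATE TABLE dim_geography (
    geography_key INT PRIMARY KEY,  -- surrogate key
    city          VARCHAR(100),     -- lowest hierarchy level
    region        VARCHAR(100),
    country       VARCHAR(100)      -- highest hierarchy level
);

INSERT INTO dim_geography VALUES (1, 'Hyderabad', 'Telangana', 'India');
INSERT INTO dim_geography VALUES (2, 'Pune', 'Maharashtra', 'India');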

Creating a fact table

Together, one set of dimensions and its associated measures make up what we call a fact. Organizing the dimensions and measures into facts is the next step. This is the process of grouping dimensions and measures together in a manner that can address the specified requirements. All candidate facts in a design must be true to the grain defined in step 2. Facts that clearly belong to a different grain must be in a separate fact table.
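Continuing the illustrative sketch, a sales fact table at the grain of one line item per grocery receipt might look like the following; each key references a dimension table, and the measures are numeric and additive:

CREATE TABLE fact_sales (
    date_key         INT NOT NULL REFERENCES dim_date (date_key),
    product_key      INT NOT NULL REFERENCES dim_product (product_key),
    geography_key    INT NOT NULL REFERENCES dim_geography (geography_key),
    quantity_ordered INT,           -- additive measure
    sales_amount     DECIMAL(12,2)  -- additive measure
);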

 

Archana Verma

Helical IT Solutions

Getting Started with MongoDB

Installation & Startup:

Download the MongoDB installer for the Windows platform from http://www.mongodb.org/downloads and run it. This simply extracts the binaries to your Program Files folder.

#Create DBPATH and log directories:

Allocate a folder in your system that can be used for holding the mongo databases and also allocate a log file.

Ex – Allocated “C:\mongo\data\db” for databases and “C:\mongo\logs\mongo.log” as a log file.

#Starting the mongo database

Below are different ways of starting MongoDB:

1.    From the command prompt

Execute the mongod.exe present in the bin folder to start the database.

On the command prompt: mongod --dbpath c:\mongo\data\db

There are other options that can also be specified along with dbpath. If dbpath is not provided, mongod looks for the c:\data\db folder and gives an error if it is not found.

To shut down, press CTRL+C.

 

2.    Starting with a config file

You can create a configuration file to define settings for the MongoDB server, like the dbpath, logpath etc. Below is a sample file:

(This is the older format; version 2.6 introduced a new YAML-based format. The older format is still supported for backward compatibility.)

#This is an example config file for MongoDB

dbpath = C:\Mongo\data\db

port = 27017

logpath = C:\Mongo\logs\mongo.log

Now you can use the below command –

C:\Program Files\MongoDB 2.6 Standard\bin>mongod --config mongo.conf

2014-04-15T10:27:18.883+0530 log file "C:\Mongo\logs\mongo.log" exists; moved to

"C:\Mongo\logs\mongo.log.2014-04-15T04-57-18".

As we haven't specified the "logappend" option in the config file, it allocates a new log file every time you start the db. You can check the log file if you are getting errors while connecting to the db.
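For reference, here is a minimal sketch of the same settings in the newer YAML format introduced in version 2.6 (note that backslashes must be escaped inside double-quoted YAML strings):

systemLog:
    destination: file
    path: "C:\\Mongo\\logs\\mongo.log"
    logAppend: true
storage:
    dbPath: "C:\\Mongo\\data\\db"
net:
    port: 27017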

To shut down, use the command "mongod --shutdown" (available on Linux only; on Windows, press CTRL+C or stop the service instead).

 

3.    Installing as a Windows service:

Start the command prompt as administrator

You can use the below command to create the service, edit the same as per your settings:

sc create MongoDB binPath= "\"C:\Program Files\MongoDB 2.6 Standard\bin\mongod.exe\" --service --config=\"C:\Program Files\MongoDB 2.6 Standard\bin\mongo.conf\"" DisplayName= "MongoDB 2.6 Standard"

Please note that this is a single command entered on one line.

You can now simply start/stop the service to start/shutdown the mongo database.
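For example, using the service name "MongoDB" created above:

net start MongoDB
net stop MongoDB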

 

Using the Mongo command shell:

Run mongo.exe from the \bin folder and you will see the below:

MongoDB shell version: 2.6.0

connecting to: test      // This is the default database

Welcome to the MongoDB shell.

For interactive help, type "help".

For more comprehensive documentation, see

http://docs.mongodb.org/

Questions? Try the support group

http://groups.google.com/group/mongodb-user

 

Some basic commands to get you started

> show dbs                  // show databases

admin  (empty)
local  0.078GB

> use names               // switch to a database (creates one if it does not exist)

switched to db names

> db.mynames.insert({name: 'shraddha', email: '[email protected]'})           // Inserting document
WriteResult({ "nInserted" : 1 })

// Note that 'db' points to the current database in use. Here, the collection "mynames" is created automatically when you insert a document.

> show dbs

admin  (empty)
local  0.078GB
names  0.078GB

> db.mynames.find()               //query the db, select operation

{ "_id" : ObjectId("534cbfd03dfb3fbd86d8029d"), "name" : "shraddha", "email" : "[email protected]" }

// Another way of inserting:

> a={"name":"test3","email":"test3.other"}

{ "name" : "test3", "email" : "test3.other" }

> b={"name":"test4",email:"test4.other"}

{ "name" : "test4", "email" : "test4.other" }

> db.othernames.insert(a)

WriteResult({ "nInserted" : 1 })

> db.othernames.insert(b)

WriteResult({ "nInserted" : 1 })

> db.othernames.insert(c)

2014-04-15T19:40:24.798+0530 ReferenceError: c is not defined

// In all the above inserts, the "_id" field, which holds the unique key, is auto-generated.

 

> coll=db.mynames

names.mynames

> coll.find()

{ "_id" : ObjectId("534cbfd03dfb3fbd86d8029d"), "name" : "shraddha", "email" : "[email protected]" }
{ "_id" : ObjectId("534d3b89f4d4b90697c205d6"), "name" : "test1", "email" : "test1.helical" }

> coll=db.othernames

names.othernames

> coll.find()

{ "_id" : ObjectId("534d3dc3f4d4b90697c205d7"), "name" : "test3", "email" : "test3.other" }
{ "_id" : ObjectId("534d3dcdf4d4b90697c205d8"), "name" : "test4", "email" : "test4.other" }

 

> coll.find({name:{$gt:"test3"}})                  // find documents where "name" is greater than "test3"

{ "_id" : ObjectId("534d3dcdf4d4b90697c205d8"), "name" : "test4", "email" : "test4.other" }

> coll.find({name:"test3"})

{ "_id" : ObjectId("534d3dc3f4d4b90697c205d7"), "name" : "test3", "email" : "test3.other" }

>

> coll.find({$or:[{name:{$gt:"test3"}},{name:"test3"}]})

{ "_id" : ObjectId("534d3dc3f4d4b90697c205d7"), "name" : "test3", "email" : "test3.other" }
{ "_id" : ObjectId("534d3dcdf4d4b90697c205d8"), "name" : "test4", "email" : "test4.other" }

> coll.find({$or:[{name:{$gt:"test3"}},{name:"test0"}]})

{ "_id" : ObjectId("534d3dcdf4d4b90697c205d8"), "name" : "test4", "email" : "test4.other" }

>
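A couple of other basic cursor operations you may find handy (a sketch against the same collection):

> coll.find().sort({name:-1})               // sort by name, descending
{ "_id" : ObjectId("534d3dcdf4d4b90697c205d8"), "name" : "test4", "email" : "test4.other" }
{ "_id" : ObjectId("534d3dc3f4d4b90697c205d7"), "name" : "test3", "email" : "test3.other" }

> coll.find({name:{$gt:"test3"}}).count()   // count the matching documents
1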
 
// Example - manually specifying the _id field (key value)
 
> coll=db.testobjs

names.testobjs

> coll.insert({_id:1,fld1:"abc",fld2:123})

WriteResult({ "nInserted" : 1 })

> coll.insert({_id:2,fld1:"cde",fld2:345})

WriteResult({ "nInserted" : 1 })

> coll.insert({_id:2,fld1:"cde",fld2:345})       //trying to insert duplicate value in _id

WriteResult({
        "nInserted" : 0,
        "writeError" : {
                "code" : 11000,
                "errmsg" : "insertDocument :: caused by :: 11000 E11000 duplicate key error index: names.testobjs.$_id_  dup key: { : 2.0 }"
        }
})

> coll.find()

{ "_id" : 1, "fld1" : "abc", "fld2" : 123 }
{ "_id" : 2, "fld1" : "cde", "fld2" : 345 }

>
 

Importing a CSV file into MongoDB:

Alter the below command as per your requirement and execute:

C:\Program Files\MongoDB 2.6 Standard\bin>mongoimport --db northwind --collection orders --type csv --file C:\Shraddha\Official\MongoDB\northwind-mongo-master\orders.csv --headerline

connected to: 127.0.0.1
2014-04-17T18:24:22.603+0530 check 9 831
2014-04-17T18:24:22.604+0530 imported 830 objects

 

Options used:

--db : name of the database
--collection : name of the collection
--type : type of the input file (we can also import TSV, JSON)
--file : path of the input file
--headerline : signifies that the first line in the CSV file contains the column names
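To quickly verify the import from the mongo shell (using the same database and collection names as above):

> use northwind
switched to db northwind
> db.orders.count()
830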

 

Shraddha Tambe

Helical IT Solutions