Python – Get data from CSV and create chart
Prerequisites:
- 1. Install Python 3.6 or later.
- 2. Download the PyCharm IDE (Community Edition).
- 3. Basic knowledge of the Python programming language.
The following are the steps for creating a mesh grid in PyCharm:
1. Installing the NumPy, Pandas and Matplotlib libraries in the PyCharm IDE:
- 1. Create a scratch file in PyCharm.
- 2. Click on “File” and go to “Settings”.
- 3. Go to “Project Interpreter” and click on the ‘+’ symbol to add the new libraries.
2. Writing Python code for importing the CSV data and creating the mesh grid:
- 1. Create a CSV file containing the line “-5,5,0.01” (the start, stop and step values for the grid points).
- 2. Use the following Python code in your scratch file.
- If the code runs successfully, the mesh grid plot will be generated in a new PyCharm window.
import csv
import numpy as np
import matplotlib.pyplot as plt

# Read the start, stop and step values from the CSV file
point = ""
with open('C:/Users/Helical/Desktop/Blog/Python/MeshGrid.csv', 'r') as csvfile:
    meshreader = csv.reader(csvfile, delimiter=',')
    for row in meshreader:
        point = point + ','.join(row)

meshpoints = point.split(',')  # e.g. ['-5', '5', '0.01']
points = np.arange(float(meshpoints[0]), float(meshpoints[1]), float(meshpoints[2]))
dx, dy = np.meshgrid(points, points)
z = np.sin(dx) + np.sin(dy)

plt.contourf(dx, dy, z)
plt.title('plot for sin(x)+sin(y)')
plt.show()
Data Analysis Techniques
How data needs to be analysed basically depends on the end goal, not on the source. That end goal is typically analysis and decision making based on that source of data. Data analysis these days typically includes reporting, multidimensional analysis and data mining, which relate to “Display”, “Analyse” and “Discover” respectively. Depending on the type of data analysis, the requirements on the source data may vary.
If reporting is required for analysis, then the simplest of data sources gives the best results. Query and reporting capability primarily consists of selecting the associated data elements, perhaps summarizing them and grouping them by some category, and presenting the results: retrieving relevant data from the data warehouse, transforming it into the appropriate context, and displaying it in a readable format. Finally, the report is delivered to the end user in the desired output format, be it graph, pie chart or table, in the required output medium.
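This select, summarize, group and present flow can be sketched in a few lines of pandas. The sales data and column names below are invented for illustration; they stand in for rows retrieved from a warehouse:

```python
import pandas as pd

# Hypothetical sales data standing in for rows retrieved from a warehouse
sales = pd.DataFrame({
    'region':  ['East', 'East', 'West', 'West'],
    'product': ['A', 'B', 'A', 'B'],
    'amount':  [100, 150, 200, 250],
})

# Select and group the data elements, summarizing them by category...
report = sales.groupby('region', as_index=False)['amount'].sum()

# ...and present the result in a readable format
print(report.to_string(index=False))
```

A real reporting tool would add formatting and delivery, but the underlying operations are the same.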
If the objective is to perform multidimensional data analysis, a dimensional data model would be more appropriate. This type of analysis requires that the data model support a structure that enables fast and easy access to the data on the basis of any of numerous combinations of analysis dimensions. For example, you may want to know how many of a specific product were sold on a specific day, in a specific store, in a specific price range. Multidimensional analysis enables users to look at a large number of interdependent factors involved in a business problem and to view the data in complex relationships. End users are interested in exploring the data at different levels of detail, which is determined dynamically. The complex relationships can be analyzed through an iterative process that includes drilling down to lower levels of detail or rolling up to higher levels of summarization and aggregation. Pivoting is a related operation, whereby the user takes a different viewpoint on the results of the analysis by changing the way the dimensions are arranged in the result. Like query and reporting, multidimensional analysis continues until no more drilling down or rolling up is performed.
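The roll-up and drill-down idea can be illustrated with a pandas pivot table. The fact rows below are hypothetical; the same data is viewed at two different levels of detail:

```python
import pandas as pd

# Hypothetical fact rows: units sold by store and product
facts = pd.DataFrame({
    'store':   ['S1', 'S1', 'S2', 'S2'],
    'product': ['P1', 'P2', 'P1', 'P2'],
    'units':   [10, 20, 30, 40],
})

# Rolled-up view: total units per store
rollup = facts.pivot_table(values='units', index='store', aggfunc='sum')

# Drilled-down view: units per store and product
drilldown = facts.pivot_table(values='units', index=['store', 'product'],
                              aggfunc='sum')
```

Adding a date or price-range column to the index would drill down one more level, exactly as described above.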
As said before, data mining is about “Discovery”. This discovery could take the form of finding significance in relationships between certain data elements, a clustering together of specific data elements, or other patterns in the usage of specific sets of data elements. After finding these patterns, the algorithms can infer rules. These rules can then be used to generate a model that can predict a desired behavior, identify relationships among the data, discover patterns, and group clusters of records with similar attributes. Data mining is most typically used for statistical data analysis and knowledge discovery. Statistical data analysis detects unusual patterns in data and applies statistical and mathematical modeling techniques to explain the patterns. Data mining is data driven. There is a high level of complexity in stored data and data interrelations in the data warehouse that are difficult to discover without data mining. Data mining offers new insights into the business that may not be discovered with query and reporting or multidimensional analysis, by giving us answers to questions we had not thought to ask.
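Real data mining algorithms are far richer than this, but a minimal k-means sketch in NumPy, run on toy data, illustrates the clustering idea mentioned above:

```python
import numpy as np

# Toy one-dimensional data: two obvious groups of customer spend values
data = np.array([1.0, 1.2, 0.8, 9.0, 9.5, 10.1])

# Minimal k-means sketch (k=2): repeatedly assign each point to its
# nearest centre, then move each centre to the mean of its points
centres = np.array([0.0, 5.0])
for _ in range(10):
    labels = np.argmin(np.abs(data[:, None] - centres[None, :]), axis=1)
    centres = np.array([data[labels == k].mean() for k in range(2)])
```

After a few iterations the centres settle near 1.0 and 9.5, recovering the two groups without being told they exist; this is the “discovery” aspect in miniature.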
Together, these data analysis techniques offer new insights into the business: a keen look into the data turns it into quality information that can drive business intelligence and profitable business decisions, supporting business growth across different sectors.
Types of Dimensions in Data warehouse
What is a Dimension?
A dimension table contains descriptive data about the business. The primary keys of the dimension tables are referenced by fact tables through foreign key relationships, and the remaining columns in the dimension hold the descriptive information about the business objects.
Eg: Product, Customer, Orders, Company, Date, etc.
Below are the different types of Dimensions:
1) Slowly Changing Dimensions (SCD): Dimensions that change very slowly over time rather than on a regular schedule.
Below are some popular approaches of SCDs:
Type 0: This is the passive method. Records, once inserted, are never changed.
Type 1: This method overwrites the old record with new values without tracking any historical data.
Type 2: This method tracks historical data using a version number or start-date and end-date columns. The drawback is that the table grows vertically and requires more space.
Type 3: This method also tracks historical data, by adding a new column that holds the new value. It preserves only limited history.
Type 4: This method uses a separate history table to store all the historical data, while the main table holds only the current data. Both are linked by the same surrogate key.
Type 6: This is the hybrid method, which combines Types 1, 2 and 3. It uses flag values and effective dates to distinguish between old and new records.
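The most widely used approach, Type 2, can be sketched with plain Python data structures. The customer rows, column names and helper function below are hypothetical, chosen only to show the close-old-row / insert-new-row mechanic:

```python
from datetime import date

# Hypothetical Type 2 dimension: history is kept with a version number
# plus start_date and end_date columns (end_date is None for the current row)
dim_customer = [
    {'customer_id': 1, 'city': 'Pune', 'version': 1,
     'start_date': date(2020, 1, 1), 'end_date': None},
]

def scd_type2_update(rows, customer_id, new_city, change_date):
    """Close the current row and append a new versioned row (Type 2)."""
    current = next(r for r in rows
                   if r['customer_id'] == customer_id and r['end_date'] is None)
    current['end_date'] = change_date
    rows.append({'customer_id': customer_id, 'city': new_city,
                 'version': current['version'] + 1,
                 'start_date': change_date, 'end_date': None})

scd_type2_update(dim_customer, 1, 'Mumbai', date(2021, 6, 1))
```

After the update the table holds both the Pune and Mumbai rows, which is exactly why a Type 2 dimension grows vertically.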
2) Rapidly Changing Dimensions: Dimensions that contain rapidly changing attributes. If we maintain historical data for such tables, we will quickly run into memory and performance issues. The solution is to keep the historical data in mini-dimension tables, as in a Type 4 SCD: the main table holds the current values and the mini-dimensions hold the historical data.
3) Junk Dimensions: In data warehouse design we often come across a situation where several flag values are needed. We can combine them into one single junk dimension table, so that the fact table does not need multiple columns to store the primary keys of separate flag tables.
For example, if we combine Gender and Marital_Status into a single Gender_Marital_Status junk dimension, only one column is needed in the fact table.
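A junk dimension is just the cross-product of the small flag domains, each combination getting one surrogate key. The flag values below are hypothetical:

```python
from itertools import product

# Hypothetical flag domains to be combined into one junk dimension
genders = ['M', 'F']
marital_statuses = ['Single', 'Married']

# One row per combination, each with its own surrogate key, so the
# fact table stores a single junk_key instead of one column per flag
junk_dim = [
    {'junk_key': i, 'gender': g, 'marital_status': m}
    for i, (g, m) in enumerate(product(genders, marital_statuses), start=1)
]
```

Two flags with two values each yield a four-row dimension; even several flags together usually stay tiny, which is what makes this design practical.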
4) Inferred Dimensions: A dimension that is needed to build a fact table but is not yet available. We can insert a dummy row with placeholder details for an ID and use that ID in the fact table; once the real details arrive, we update the dimension.
5) Conformed Dimensions: Dimensions that are shared by multiple fact tables are known as conformed dimensions.
Eg: The Customer and Product dimensions are required by the Shipment fact, Sales fact and Service Request fact.
6) Degenerate Dimensions: A degenerate dimension does not have its own dimension table; it is a column (dimension) that is part of the fact table but does not map to any dimension table.
Eg: Invoice_Number column in Fact table
7) Role Playing Dimensions: The same dimension used for multiple purposes.
Eg: Date Dimension can be used as Date of Sale, Date of Hire etc.
8) Shrunken Dimensions: A shrunken dimension is a subset of another dimension. For example, the Orders fact table may include a foreign key for Product, but the Target fact table may include a foreign key only for ProductCategory, which is in the Product table, but much less granular. Creating a smaller dimension table, with ProductCategory as its primary key, is one way of dealing with this situation of heterogeneous grain. If the Product dimension is snowflaked, there is probably already a separate table for ProductCategory, which can serve as the shrunken dimension.
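Deriving a shrunken dimension amounts to taking the distinct values of the coarser attribute and giving them their own surrogate keys. The Product rows below are hypothetical:

```python
# Hypothetical Product dimension at the finer (product) grain
product_dim = [
    {'product_key': 1, 'product_name': 'Pen',    'category': 'Stationery'},
    {'product_key': 2, 'product_name': 'Pencil', 'category': 'Stationery'},
    {'product_key': 3, 'product_name': 'Mug',    'category': 'Kitchen'},
]

# Shrunken dimension: the distinct categories, each with its own key,
# ready to be referenced by a fact table at category grain
categories = sorted({p['category'] for p in product_dim})
category_dim = [
    {'category_key': i, 'category': c}
    for i, c in enumerate(categories, start=1)
]
```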
9) Static Dimensions: Static dimensions are not extracted from the original data source, but are created within the context of the data warehouse. A static dimension can be loaded manually — for example with Status codes — or it can be generated by a procedure, such as a Date or Time dimension.
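A Date dimension generated by a procedure is the classic example of a static dimension. The column choices below are a hypothetical, minimal sketch; real Date dimensions usually carry many more attributes (quarter, fiscal period, holiday flags, and so on):

```python
from datetime import date, timedelta

def build_date_dimension(start, end):
    """Generate one row per calendar day, created inside the warehouse
    rather than extracted from any source system."""
    rows, day = [], start
    while day <= end:
        rows.append({
            'date_key': int(day.strftime('%Y%m%d')),  # e.g. 20240101
            'full_date': day,
            'year': day.year,
            'month': day.month,
            'day_of_week': day.strftime('%A'),
        })
        day += timedelta(days=1)
    return rows

date_dim = build_date_dimension(date(2024, 1, 1), date(2024, 1, 31))
```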
DATA VAULT MODELING
Data Vault Modeling is a database modeling method that is designed to provide long-term historical storage of data coming in from multiple operational systems. It is also a method of looking at historical data that, apart from the modeling aspect, deals with issues such as auditing, tracing of data, loading speed and resilience to change.
Data Vault Modeling focuses on several things:
First, it emphasizes the need to trace where all the data in the database came from. This means that every row in a Data Vault must be accompanied by record source and load date attributes, enabling an auditor to trace values back to the source.
Second, it makes no distinction between good and bad data (“bad” meaning not conforming to business rules). This is summarized in the statement that a Data Vault stores “a single version of the facts”, as opposed to the practice in other data warehouse methods of storing “a single version of the truth”, where data that does not conform to the definitions is removed or “cleansed”.
Third, the modeling method is designed to be resilient to change in the business environment where the data being stored is coming from, by explicitly separating structural information from descriptive attributes.
Finally, Data Vault is designed to enable parallel loading as much as possible, so that very large implementations can scale out without the need for major redesign.
Data Vault’s philosophy is that all data is relevant data, even if it is not in line with established definitions and business rules. If data is not conforming to these definitions and rules then that is a problem for the business, not the data warehouse. The determination of data being “wrong” is an interpretation of the data that stems from a particular point of view that may not be valid for everyone or at every point in time. Therefore the Data Vault must capture all data and only when reporting or extracting data from the Data Vault is the data being interpreted.
Data Vault attempts to solve the problem of dealing with change in the environment by separating the business keys (that do not mutate as often, because they uniquely identify a business entity) and the associations between those business keys, from the descriptive attributes of those keys.
The business keys and their associations are structural attributes, forming the skeleton of the data model. The Data Vault method has as one of its main axioms that real business keys only change when the business changes and are therefore the most stable elements from which to derive the structure of a historical database. If you use these keys as the backbone of a Data Warehouse, you can organize the rest of the data around them. This means that choosing the correct keys for the Hubs is of prime importance for the stability of your model. The keys are stored in tables with a few constraints on the structure. These key-tables are called Hubs.
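A Hub can be sketched as a table holding only the business key plus the audit attributes described earlier (load date and record source). The customer key, source names and helper below are hypothetical; a fixed load date is used so the example is reproducible:

```python
from datetime import datetime

# A Hub stores only business keys plus audit attributes; descriptive
# attributes would live in Satellites, associations in Link tables.
hub_customer = []

def load_hub(hub, business_key, record_source):
    """Insert a business key once, stamped with load date and source."""
    if not any(r['customer_bk'] == business_key for r in hub):
        hub.append({
            'hub_customer_key': len(hub) + 1,    # surrogate key
            'customer_bk': business_key,         # stable business key
            'load_date': datetime(2024, 1, 1),   # fixed for reproducibility
            'record_source': record_source,
        })

load_hub(hub_customer, 'CUST-001', 'CRM')
load_hub(hub_customer, 'CUST-001', 'ERP')  # same key from a second system: not re-inserted
```

Because the Hub holds only the key, the same customer arriving from several operational systems produces a single Hub row, keeping the backbone of the model stable.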
DATA VAULT AND DIMENSIONAL MODELLING
The Data Vault modelled layer is normally used to store data. It is not optimized for query performance, nor is it easy to query by the well-known query-tools such as Cognos, SAP Business Objects, Pentaho et al. Since these end-user computing tools expect or prefer their data to be contained in a dimensional model, a conversion is usually necessary.
For performance reasons the dimensional model will usually be implemented in relational tables, after approval.
Note that while it is relatively straightforward to move data from a Data Vault model to a (cleansed) dimensional model, the reverse is not as easy.
Refer for more detail: http://www.slideshare.net/dlinstedt