Python – A short guide

In this article of the AIKnow.info Blog we will give a short review of Python, a programming language that is very popular for data science applications. Our interest relates particularly to machine learning, where Python is widely used thanks to the availability of a large number of dedicated libraries.

This article thus lays the groundwork for the implementations of machine learning algorithms that we will see in the following posts.

Python is a high-level interpreted language. The Python interpreter can be used to execute single lines of code, simply by writing them after the >>> prompt (Fig.1), or to run a script, using the python.exe command available in the installation directory followed by the name of the script (Fig.2).
Statements in a script are executed one at a time, in order from top to bottom. If a statement is a function call, the flow of execution jumps to the first instruction of the function body; when the body completes, the program comes back to the calling point.


Fig.1 Python interpreter


Fig. 2 Python script execution

Assuming that you have basic programming knowledge, in the following we will review the basic syntax rules of the Python programming language, referring to version 3.

Operators
+, -, *, /, // floor division, % modulus, ** exponentiation

Logical Operators
and, or, not

Relational Operators

==, !=, >, <, >=, <=

Order of operations

Parentheses, exponentiation, multiplication and division, addition and subtraction

String Operators

+ concatenation, * repetition
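A few examples of these operators at the interpreter prompt (values chosen for illustration):

```python
# Arithmetic operators
print(7 / 2)    # division: 3.5
print(7 // 2)   # floor division: 3
print(7 % 2)    # modulus: 1
print(2 ** 3)   # exponentiation: 8

# String operators
print('ab' + 'cd')  # concatenation: abcd
print('ab' * 3)     # repetition: ababab
```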

Comments

Begin with the # symbol and extend to the end of the line

Functions definition

def function_1(argument_1, argument_2):
    concat = argument_1 + argument_2
    print(concat)
    return concat

The header of the function has to end with a colon, while the body has to be indented with the same number of spaces (usually 4).

A pure function does not modify any of the objects passed to it as arguments and it has no effect other than returning a value.

Sometimes it is useful for a function to modify the objects it gets as parameters. In that case, the changes are visible to the caller. Functions that work this way are called modifiers.
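A minimal sketch of the difference (the function and variable names are illustrative):

```python
def pure_double(t):
    # Pure: builds and returns a new list; the argument is untouched
    return [x * 2 for x in t]

def modifier_double(t):
    # Modifier: changes the list in place; the caller sees the change
    for i in range(len(t)):
        t[i] = t[i] * 2

values = [1, 2, 3]
result = pure_double(values)
print(values)   # [1, 2, 3] -- unchanged
modifier_double(values)
print(values)   # [2, 4, 6] -- modified in place
```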

Conditional execution

if int(x) > 0:
    print('x is positive')
elif int(x) == 0:
    print('x is zero')
else:
    print('x is negative')

Keyboard input

x = input('Write a number\n')
The value returned by the input() function is of string type.
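So the value must be converted before doing arithmetic with it; a small sketch (the literal stands in for user input, and int() raises ValueError if the string is not a number):

```python
raw = '42'          # stands in for input('Write a number\n')
number = int(raw)   # convert the string to an integer
print(number + 1)   # 43
```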

Iteration

while n > 0:
    print(n)
    n = n - 1

for char in 'String':
    print(char)

String manipulation

String slices
>>> s = 'A long String'
>>> print(s[2:6])
long

Strings are immutable
>>> s = 'A long string'
>>> s[8] = 'S'
TypeError: 'str' object does not support item assignment

String methods
>>> s1 = 'string'
>>> s2 = s1.upper()
>>> print(s2)
STRING

In operator
>>> 's' in 'string'
True
>>> 'b' in 'string'
False

String comparison
if s == 'Green':
    print('Ok you can go')
elif s < 'Green':
    print('Your word, ' + s + ', comes before Green.')

All the uppercase letters come before all the lowercase letters
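This is because Python compares strings by the code points of their characters; a quick check at the prompt:

```python
print(ord('Z'), ord('a'))   # 90 97: uppercase code points are smaller
print('Zebra' < 'apple')    # True, since 'Z' (90) < 'a' (97)
print('apple' < 'banana')   # True: compared character by character
```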

Lists

A list is a sequence of values
>>> lst = ['string1', 2.0, ['sublist1', 'sublist2']]
>>> print(lst[0])
string1

>>> lst[0] = 2
>>> print(lst)
[2, 2.0, ['sublist1', 'sublist2']]

Traverse a list
for element in lst:
    print(element)

for i in range(len(lst)):
    lst[i] = lst[i] * 2
    print(lst[i])

List operations
>>> [1, 2, 3]+[4, 5, 6]
[1, 2, 3, 4, 5, 6]

>>> [1, 2, 3] * 3
[1, 2, 3, 1, 2, 3, 1, 2, 3]

>>> t = ['a', 'b', 'c', 'd', 'e', 'f']
>>> t[1:3]
['b', 'c']

>>> t[:4]
['a', 'b', 'c', 'd']

>>> t[3:]
['d', 'e', 'f']

List methods
>>> t = ['a', 'b', 'c']
>>> t.append('d')
>>> print(t)
['a', 'b', 'c', 'd']

>>> t1 = ['a', 'b', 'c']
>>> t2 = ['d', 'e']
>>> t1.extend(t2)
>>> print(t1)
['a', 'b', 'c', 'd', 'e']

>>> t = ['d', 'c', 'e', 'b', 'a']
>>> t.sort()
>>> print(t)
['a', 'b', 'c', 'd', 'e']

>>> t = ['a', 'b', 'c']
>>> del t[1]
>>> print(t)
['a', 'c']

>>> t = ['a', 'b', 'c']
>>> t.remove('b')
>>> print(t)
['a', 'c']

Lists and strings
>>> s = 'string'
>>> t = list(s)
>>> print(t)
['s', 't', 'r', 'i', 'n', 'g']

>>> s = 'string-string-string'
>>> delimiter = '-'
>>> s.split(delimiter)
['string', 'string', 'string']

>>> t = ['string1', 'string2', 'string3']
>>> delimiter = ' '
>>> delimiter.join(t)
'string1 string2 string3'

Dictionaries

A dictionary is a mapping between a set of indices (which are called keys) and a set of values.

>>> ita2eng = dict()
>>> ita2eng = {'uno': 'one', 'due': 'two', 'tre': 'three'}
>>> print(ita2eng['due'])
two

Example of a function to count the occurrences of each letter in a string:
def histogram(s):
    d = dict()
    for c in s:
        if c not in d:
            d[c] = 1
        else:
            d[c] += 1
    return d

Example of a function to obtain a key from a value:
def reverse_lookup(d, v):
    for k in d:
        if d[k] == v:
            return k
    raise ValueError('value does not appear in the dictionary')

A dictionary is implemented using a hashtable.

A hash is a function that takes a value (of any kind) and returns an integer. Dictionaries use these integers, called hash values, to store and look up key-value pairs.

This system works only if the keys are hashable, and hence immutable; therefore lists and dictionaries cannot be used as keys (but they can be used as values).
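A short illustration (the dictionary contents are arbitrary):

```python
d = dict()
d[('a', 'b')] = 1          # a tuple is immutable, so it works as a key
print(d[('a', 'b')])       # 1

try:
    d[['a', 'b']] = 2      # a list is mutable, hence unhashable
except TypeError as e:
    print(e)               # unhashable type: 'list'
```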

Tuples

Syntactically, a tuple is a comma-separated list of values:

>>> t = 'a', 'b', 'c', 'd', 'e'

A tuple is a sequence of values. The values can be any type, and they are indexed by integers, so in that respect tuples are a lot like lists. The important difference is that tuples are immutable.

>>> t = tuple('string')
>>> print(t)
('s', 't', 'r', 'i', 'n', 'g')

>>> print(t[0])
s
>>> print(t[1:3])
('t', 'r')

>>> t[0] = 'S'
TypeError: 'tuple' object does not support item assignment

Tuple assignment
>>> a, b = b, a

Functions accepting and returning multiple values
A function can only return one value, but if the value is a tuple, the effect is the same as returning multiple values.
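For example (min_max is an illustrative name):

```python
def min_max(t):
    # Packs two results in a tuple; callers can unpack them
    return min(t), max(t)

low, high = min_max([3, 1, 4, 1, 5])   # tuple assignment unpacks the pair
print(low, high)   # 1 5
```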

Functions can take a variable number of arguments. A parameter name that begins with * gathers arguments into a tuple:

def printall(*args):
    print(args)

The complement of gather is scatter. If you have a sequence of values and you want to pass it to a function as multiple arguments, you can use the * operator

>>> t = (7, 3)
>>> divmod(*t)
(2, 1)

Lists and tuples
>>> s = 'abc'
>>> t = [0, 1, 2]
>>> list(zip(s, t))
[('a', 0), ('b', 1), ('c', 2)]

Note that in Python 3 zip returns an iterator, so list() is used to display its elements.

Dictionaries and tuples
Dictionaries have a method called items that returns a view of the key-value pairs; in Python 3 it can be converted into a list of tuples.

>>> d = {'a': 0, 'b': 1, 'c': 2}
>>> t = list(d.items())
>>> print(t)
[('a', 0), ('b', 1), ('c', 2)]

Tuples can be used to initialize a new dictionary:

>>> t = [('a', 0), ('c', 2), ('b', 1)]
>>> d = dict(t)
>>> print(d)
{'a': 0, 'c': 2, 'b': 1}

It is common to use tuples as keys in dictionaries (primarily because you can't use lists).

directory[name, surname] = number
for name, surname in directory:
    print(name, surname, directory[name, surname])

Comparing tuples
The relational operators work with tuples and other sequences; Python starts by comparing the first element from each sequence. If they are equal, it goes on to the next elements, and so on, until it finds elements that differ.
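A few examples at the prompt:

```python
print((0, 1, 2) < (0, 3, 4))        # True: 1 < 3 decides
print((0, 1, 2000000) < (0, 3, 4))  # True: later elements are not considered
print(('a', 'b') == ('a', 'b'))     # True: all elements are equal
```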

Files

Read from file
>>> fp = open('C:/file.txt')
>>> for line in fp:
...     print(line)

Write to file
>>> fout = open('C:/file.txt', 'w')
>>> fout.write('line1\n')
>>> fout.close()

Catching exceptions
A lot of things can go wrong when you try to read and write files. It is better to go ahead and try, dealing with problems if they happen, which is exactly what the try statement does.

try:
    fin = open('bad_file')
    for line in fin:
        print(line)
    fin.close()
except OSError:
    print('Something went wrong.')

Class and Objects

A class is a user-defined type. A class definition creates a new class object.

>>> class Point(object):
...     """Represents a point in 2-D space."""

>>> p1 = Point()
>>> p1.x = 3.0
>>> p1.y = 4.0

>>> print(p1.y)
4.0

Copy objects
import copy
p2 = copy.deepcopy(p1)

Class Methods
class Time(object):
    def print_time(self):
        print('%.2d:%.2d:%.2d' % (self.hour, self.minute, self.second))
    def increment(self, seconds):
        seconds += self.time_to_int()
        return int_to_time(seconds)  # time_to_int and int_to_time are assumed to be defined
    def is_after(self, other):
        return self.time_to_int() > other.time_to_int()

>>> start = Time()
>>> start.hour = 9
>>> start.minute = 45
>>> start.second = 0
>>> start.print_time()
09:45:00
>>> end = start.increment(1337)
>>> end.print_time()
10:07:17

Note that the subject, start, gets assigned to the first parameter of increment, self. The argument, 1337, gets assigned to the second parameter, seconds.

>>> end.is_after(start)
True

The __init__ method (short for "initialization") is a special method that gets invoked when an object is instantiated.

# inside class Time:
def __init__(self, hour=0, minute=0, second=0):
    self.hour = hour
    self.minute = minute
    self.second = second

__str__ is a special method, like __init__, that is supposed to return a string representation of an object.

# inside class Time:
def __str__(self):
    return '%.2d:%.2d:%.2d' % (self.hour, self.minute, self.second)

By defining other special methods, you can specify the behavior of operators on user-defined types.
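For example, defining __add__ makes the + operator work on Time objects. The sketch below is self-contained, so the time_to_int helper is defined inline rather than assumed:

```python
class Time:
    def __init__(self, hour=0, minute=0, second=0):
        self.hour = hour
        self.minute = minute
        self.second = second

    def time_to_int(self):
        # Total number of seconds since 00:00:00
        return self.second + 60 * (self.minute + 60 * self.hour)

    def __add__(self, other):
        # Invoked by the + operator
        seconds = self.time_to_int() + other.time_to_int()
        minutes, second = divmod(seconds, 60)
        hour, minute = divmod(minutes, 60)
        return Time(hour, minute, second)

    def __str__(self):
        return '%.2d:%.2d:%.2d' % (self.hour, self.minute, self.second)

print(Time(9, 45) + Time(0, 22, 17))   # 10:07:17
```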

Polymorphism
Functions that can work with several types are called polymorphic. Polymorphism can facilitate code reuse. For example, the built-in function sum, which adds the elements of a sequence, works as long as the elements of the sequence support addition.
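For instance, sum accepts integers, floats, and even lists, provided a compatible start value is given:

```python
print(sum([1, 2, 3]))        # 6
print(sum([1.5, 2.5]))       # 4.0
print(sum([[1], [2]], []))   # [1, 2]: lists support + as well
```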

Keywords

Python 3 keywords are:

False None True and as assert break class continue def del elif else except finally for from global if import in is lambda nonlocal not or pass raise return try while with yield

Knowage – Location Intelligence

Knowage Location Intelligence tools allow the definition of geo-referenced documents. In this new AIKnow.info Blog article, we will see how to create a cartographic document in Knowage representing the distribution of population by region. This will be done by loading a layer of Italian regions and joining each cartographic element with the population information coming from a specific dataset. The result will be a map showing the Italian regions in different colors based on population (Fig.1).

Clearly the procedure illustrated in this simple example can be applied to more complex cases, on either the cartographic or the data side.


Fig.1 Location intelligence with Knowage

First of all we will create the cartographic layer in GeoJSON, one of the formats supported by Knowage.

Since most common cartographic content comes in SHP format, including the Italian regions borders (which can be downloaded from https://www.istat.it/it/archivio/124086, see Fig.2), it's necessary to convert the SHP to GeoJSON, assigning the correct coordinate system.

Fig.2 Source of Italian Regions borders layer

We will do that using GeoServer (http://geoserver.org/), an open source server to manage cartographic content. If you want to install GeoServer on the same machine where Knowage is installed, take care to select for GeoServer a port different from the Knowage one; in fact, the default port for both applications is 8080.

You can find the procedure to publish a layer in SHP format with GeoServer at this link:  http://docs.geoserver.org/stable/en/user/gettingstarted/shapefile-quickstart/

In short you have to:

  • move the shapefile into the <GEOSERVER_DATA_DIR>/data folder, where <GEOSERVER_DATA_DIR> is the root of the GeoServer data directory (if no changes have been made to the GeoServer file structure, the path is geoserver/data_dir/data/);
  • Create a new workspace
    • Navigate to Data – Workspaces and click on Add New workspace button;
    • Enter the workspace name and namespace URI (the URI does not need to resolve to an actual valid web address);
    • Click on Submit button;
  • Create a new store
    • Navigate to Data – Stores;
    • Click the Add new Store button;
    • Between the available formats, choose Shapefile, this will display the New Vector Data Source page;
    • Begin configuring the Basic Store Info:
      • Select the workspace previously created;
      • Enter the Data Source Name and a brief description;
    • Under Connection parameters browse to the location URL of the shapefile;
    • Clicking Save button will redirect to the New Layer page;
  • Create a layer
    • On the New Layer page click Publish beside the layer name;
    • The Edit Layer page defines the data and publishing parameters for a layer. Enter a short Title and an Abstract;
    • At this point it’s fundamental to define the Coordinate Reference System used by Knowage. To do that you have to set Declared SRS to EPSG:4326 and SRS handling to Reproject native to declared

 Fig.3 Coordinate Reference Systems definition

    • Generate the layer’s bounding boxes by clicking the Compute from data and then Compute from native bounds links;
    • Click the Publishing tab at the top of the page;
    • Under WMS Settings, ensure that the Default Style is set to line;
    • Finalize the layer configuration by scrolling to the bottom of the page and click Save.
  • Preview the layer
    • Navigate to the Layer Preview screen and find the layer just created;
    • Click the OpenLayers link in the Common Formats column;
    • An OpenLayers map will load in a new tab and will display the shapefile data with the default line style. You can use this preview map to zoom and pan around the dataset, as well as display the attributes of features (Fig.4).

Fig.4 OpenLayer map preview

Note that every cartographic element has two attributes, COD_REG and REGIONE, which we will use later to join with the other information from the dataset.

To obtain the GeoJSON file, go back to the Data – Layer Preview menu and, under the All Formats combo box, select the GeoJSON format (Fig.5)


Fig.5 GeoJSON creation by Layer Preview tool

Now, logging in to Knowage as an administrator, it's possible to define the Layer Catalog. Clicking on Catalogs – Layer Catalogs shows the interface in Fig.6. Specify a Label and a Name, choose File as Layer Type and select the GeoJSON file already created.

Complete the other information required: Layer label, Layer name, Layer ID and Layer order (select 1 as a value).

Finally select the role that can access the resource (/demo/admin and /demo/user in this example).


Fig.6 Layer catalog interface

To go forward we need a data source with the information to join to cartographic content. For this purpose it’s possible to follow the method illustrated in the first blog post http://www.aiknow.info/2017/10/24/knowage/.

In this case, starting from a table with population data for every Italian region (Fig.7), we will generate a dataset of Flat type (Fig.8)


Fig.7 Italian regions population data


Fig. 8 Data Set definition

With Layer data and Data Set ready to use, it’s now possible to generate a Geo-referenced analysis document.

This can be done by logging in to Knowage as a user and choosing the My Workspace icon, and then the Analysis folder.

Clicking on Add icon will show GIS Document designer. First you have to select the dataset (that for this example is POP_REG, see Fig.9).


Fig.9 GIS Document designer Dataset definition

Then you have to select a layer from the list of the available ones, for this example we defined a layer called itareg (Fig.5 and Fig.10)


Fig. 10 GIS Document designer Layer selection

Then you can add one or more join columns, by choosing the matching column from the dataset and the layer (Fig.11).


Fig. 11 GIS Document designer Dataset join column

Then it’s possible to add an indicator and a filter defining a measure and a label for that. Finally check the Map menu configuration item you want to allow (Fig.12).


Fig. 12 GIS Document designer Indicators, Filters and Map Menu configuration

Clicking on the Save button, the template will be saved and you can preview the cartographic document by clicking on Edit Map (Fig.13). Using the menu on the right of the page it's possible to interact with the map and its layers, and to define configurations.

Fig. 13 Location intelligence document

Knowage – Monitor business objectives with KPI documents

A Key Performance Indicator (KPI) is an index, consisting of one or more metrics, that can be used to monitor how effectively a company is achieving key business objectives. In this AIKnow.info Blog article, we will see how Knowage can be used to create KPI documents (Fig.1)


Fig.1 A KPI document

The first step is to define a new measure/rule; this can be done from the KPI Model – Measure/Rule Definition menu. Clicking on the plus icon opens the interface in Fig.2 where it's possible to define the query.

This example supposes that the measure of interest is the number of products made on a production line, and that this number is inserted into a table every hour (Fig.3).


Fig.2 New Measure interface

Fig.3 table content

In the Metadata tab it's possible to define a typology and a category for every value selected by the query, while the Preview tab allows checking the result of the configurations.

KPIs can be defined from the KPI Model – KPI Definition menu. The Add button opens the interface in Fig.4. Pressing the CTRL key and the space bar, the system displays all the measures defined.


Fig.4 KPI Definition Menu

Once the measure is chosen, clicking on it allows selecting the function to apply to that value (Fig.5). In our example we will monitor the minimum value assumed by the number of products made per production line. Clearly the formula can be more complex, involving various measures and operators.


Fig.5 Function selection to apply to the measure

The Cardinality tab allows you to define the grouping level for the attributes of the defined measures.

Limit values can be set using the Threshold tab (Fig.6), where you can specify a label, minimum and maximum values, and associated colors.


Fig.6 Limit value definition

The last step is to define a new scheduler. In the KPI Model – KPI Scheduler menu, the Add icon opens the interface in Fig.7 where, in the KPI tab, a KPI association can be added (Fig.8)


Fig.7 New Scheduler interface


Fig.8 KPI Association

In the Frequency tab, the start and end dates and the periodicity of execution can be defined.


Fig.9 New Scheduler interface

Now a new Document can be created to view the KPI. In Fig. 10 you can see the basic configuration.


Fig.10 New KPI Document interface

In Fig.11 you can see the template build interface where it’s possible to define various configurations.


Fig.11 KPI Document designer

The result is visible in Fig.12

Fig.12 KPI Document

Knowage – OLAP Analysis

Online Analytical Processing (OLAP) means analyzing large amounts of data in real time, with fast response times, to allow high interactivity. This is usually provided by a pivot table: in Fig.1 you can see the Knowage OLAP document, whose main component is a pivot table.

In this AIKnow.info blog article we will see how to develop an OLAP document with Knowage, starting from the dataset described in the article Knowage – query databases in a visual way with QBE.


Fig.1 Knowage OLAP Document

OLAP employs Multidimensional Analysis. Whereas a relational database stores all data in the form of rows and columns, a multidimensional dataset, in short a cube, consists of dimensions, hierarchies and measures.

The first step to create an OLAP analytical document with Knowage is to write a new schema with Mondrian. Mondrian is a ROLAP tool: it maps OLAP structures, such as cubes, dimensions and attributes, directly onto tables and columns of a relational database via XML-based files. More information on this tool can be found at https://mondrian.pentaho.com/.

First it's necessary to create a star schema, which consists of a fact table that references two or more dimension tables, as depicted in Fig.2


Fig.2 Star schema

From the data in the table “0054_consumo_energia” (see article Knowage – query databases in a visual way with QBE Fig.3), it’s possible to generate a star schema where the fact is power consumption. The fact table (Fig.3) contains the power consumption data (column “dato”), the foreign keys “territorio” to Cities dimension table (Fig.4) and “anno” to Time dimension table (Fig.5).

Fig. 3 Fact Table

Fig.4 Cities dimension table


Fig.5 Time dimension table

With the star schema ready, it's possible to create the Mondrian schema. In a few words, a Mondrian schema is an XML file where the dimensions and cube features are specified. In Fig.6 you can see the XML file in detail. Its interpretation is quite simple; anyway, more details about writing the schema can be found at https://mondrian.pentaho.com/documentation/schema.php


Fig.6 Mondrian schema

Once written, the schema can be uploaded to Knowage using the Catalogs – Mondrian schemas catalog menu (Fig.7)


Fig.7 Mondrian schema upload

Now the document can be created using the Document Browser – New document – Generic Document menu (Fig.8). The Label and Name fields are mandatory; Type must be set to On-line analytical processing, Engine to OLAP Engine, and Data Source to the one in use (in my case the data source name is MySQL). Then it's necessary to specify the folder for the document using the tree on the right side of the page.


Fig.8 New Document Creation

Clicking on the Template Build icon opens the OLAP designer (Fig.9), where it's necessary to choose the Type of Template (Mondrian), the Mondrian Schema (see Fig.6) and the Cube.


Fig.9 OLAP Designer

Clicking Start in the upper right corner opens the document (Fig.10), whose central part is the pivot table that can be used to analyze data.

The Axes panel can be used to place hierarchies (represented by filter cards) on the column or row axis, position them in a particular order, and swap the axes.

Clicking on the icon positioned at the top right side of the page, you can open the sidebar that allows choosing between different data representations and different drill types.

Fig.10 OLAP Document

Knowage – Create Data Reports with BIRT

Reports can be helpful when data have to be exported periodically in a specific format, even following a predefined template, to monitor processes and support their management.

With the Knowage Report Designer tool, available at https://download.forge.ow2.org/knowage/, it's possible to create and publish reports to Knowage Server. In this AIKnow.info Blog article, which follows the one on Cockpit, we will see how to create a report using BIRT, an open source platform used to create data visualizations and reports (http://www.eclipse.org/birt/).

Once downloaded and installed, Knowage Report Designer Tool can be connected to Knowage Server, giving the possibility to download and deploy Reports.

The first step is to open the Knowage Report Designer tool, create a new project (from the File – New – Knowage Project menu or through the Knowage icon) and add a new server (right-click on the Resources – Server icon), see Fig.1.

Fig.1 Create a new project and add a new server

Note that in the New Server configuration, you have to choose at least one Active server (Fig.2)

Fig.2 New server configuration and test

Now we can create a new report, by right-clicking on the Business Analysis folder icon and choosing the Report – Report with BIRT command (Fig.3)


Fig.3 New Report with Birt

Once created, the report can be designed by double-clicking on it and opening the Report Design perspective (Fig.4 and 5)

Fig.4 Changing perspective to Report Design

Fig.5 Report Design perspective

The first step is to create a new Data Source. This can be done by right-clicking on the specific icon in Fig.5. This opens a wizard (Fig.6) where, choosing Knowage Server Data Source from the list, it's possible to retrieve the Data Sets already defined in Knowage.

Fig.6 Create a New Knowage Server Data Source

Fig.7 Data source configuration

Once the Data Source is defined, a new Data Set can be created by right-clicking on the specific icon of the Report Design perspective (Fig.5). First it's necessary to give a name to the Data Set (Fig.8), then to specify a label (Fig.9). In this step it's mandatory to use the same label used for the Data Set in Knowage.

Fig.8 New Data Set wizard

Fig.9 Data Set wizard

It's then possible to further configure the Data Set (Fig.10)

Fig.10 Data Set configuration

Once the Data Set is available, report generation can be as simple as dragging it onto the working area and choosing the visible columns (Fig.11 and 12). Report features can then be modified from the Property Editor interface (Fig.13).

Fig.11 Report generation

Fig.12 Report generation


Fig. 13 Property editor interface

At this point we can test the report, by right-clicking on it and choosing the Report – Run Report command (Fig.14).

The command opens the default web browser and, if all is working well, you will see the report created (Fig.15).

Fig.14 Test report with Run Report command

Fig.15 Report preview

Now, coming back to the Knowage perspective, it's possible to deploy the report to the server, by right-clicking on it and choosing the Deploy icon (Fig.16). The command opens a wizard where it's necessary to specify Label, Name, Description and Data Source (Fig.17).

Fig. 16 Deploy report


Fig.17 Report deployment wizard

Once Deployed, you can find the report in Knowage, opening Document Browser (Fig.18).


Fig.18 Knowage Document Browser

Clicking on the play icon, the report will be generated and can be exported in one of the available formats (Fig.19)


Fig.19 Report generation and export

Knowage – Monitor data with Cockpit

In this AIKnow.info Blog article, which follows the one on Knowage Query by Example, we will see how to use Knowage to create cockpits to monitor data trends. We will create an example where, starting from resident population data, it will be possible to see the evolution of this value by year for a specific city. Clearly this example can be translated to other areas, for example those connected to business metrics.


Fig.1 Knowage Cockpit to monitor resident population for city and year

First of all we need a Data Set, with the information of interest. We can create one from Data Providers – Data Set Menu (Fig.2).


Fig. 2 – Data Set creation

From the Data Set definition interface, using the pencil icon, it's then necessary to define whether a field is an Attribute or a Measure (Fig.3).


Fig. 3 Field metadata definition

For user profiling purposes, it's useful to define a specific Category for the Data Set (in this example Dati_Italia). This can be done from the Server Settings – Domain Management menu (Fig.4).


Fig. 4 New Category type definition

Once defined, the category can be added to a specific role using Profile Management – Roles Management Menu (Fig.5)


Fig. 5 Category association to a role

From Data Set interface, selecting TYPE Tab, it’s possible to define data set features (Fig.6) and have a preview of the Data Set created (Fig.7).


Fig.6 Data Set features definition


Fig.7 Data Set preview

Now, logging in to Knowage as a user, in the Analysis area under the Workspace menu item, it's possible to click on the Create Analysis icon and choose Cockpits.

This will open a blank page that can be edited using the tools available in the hamburger icon menu.

Fig.8 Cockpit menu

The Add Widget icon opens the interface in Fig. 9 where it’s possible to choose the elements to add to the cockpit.


Fig.9 Cockpit widget

In our example we will add a table and a chart. In Fig.10 you can see the interface to configure the Table widget. It is made up of 4 tabs where it's possible to define columns, style, cross navigation and filter behavior.


Fig.10 Table widget configuration interface

In Fig.11 you can see the interface to configure the chart widget. It is made up of 5 tabs where it's possible to configure the dataset, design the chart with a specific engine, and define style, cross navigation and filter behavior.


Fig.11 Chart widget configuration interface

The Chart Engine Designer allows choosing the chart style (Fig.11) and the chart structure (Fig.12), made up of Categories (x axis) and Series (y axis). Note that for Categories you have to choose an attribute, while for Series a measure (Fig.3).


Fig.12 Chart widget configuration interface

In Fig.13 you can see the result of the above steps: a table containing all the records selected from the dataset and a chart representing, by year, the sum of the measure called “Dato”.


Fig.13 Cockpit resulting from the above configuration

One interesting Knowage function is represented in Fig.14 where, choosing a city and double-clicking on the “Territorio” column, the table data are filtered by this attribute and the chart content is consequently updated.


Fig.14 Cockpit resulting once selected a city

City selection can be modified from selection list widget (Fig.15).


Fig.15 Selection list widget

Knowage – query databases in a visual way with QBE (Query By Example)

Knowage (www.knowage-suite.com) is an open source Business Intelligence suite that descends from the well-known SpagoBI.

In this AIKnow.info Blog article, we will see how to create a QBE (Query By Example) document, very useful when you want to allow end users to query databases in a visual way, i.e. without using query languages like SQL.

In the following image (Fig.1) you can see the QBE interface. Tables regarding facts are available on the left side, and can be analyzed by dragging and dropping the attributes of interest into the central panel, where you can order them, group them, filter the resulting information using various criteria, and so on.

Fig.1 Knowage QBE

Now we will see the procedure to obtain the above result. Let's assume that you have Knowage installed, and that you can use a dedicated database schema. Even if the data of your interest are already in a database, it's always a good idea to work in a separate schema to avoid interference between operational information systems and business intelligence ones. In this way you can also better manage the so-called ETL (Extract, Transform and Load) processes: an essential part of a business intelligence system.

For this example I've downloaded some data sets from the dati.italiaitalie.it open data portal. I've loaded them into a MySQL schema using the Import Wizard utility available in the Toad for MySQL tool (Fig.2).


Fig.2 Toad for MySQL Import Wizard

As a result I have 7 tables in MySQL regarding some facts of interest: absolute and relative resident population, energy consumption, bank deposits, available income, total bank usage, and home thefts. These tables can be joined by means of two fields: Territory and Year.

Fig.3 Data loaded in MySQL

Now we can open Knowage and start the configurations needed to build the QBE.
The first step is to define a new Data Source. You can do it by logging in to Knowage as administrator and using the Data Providers menu section. In Fig.4 you can find the configuration for the MySQL DB.


Fig.4 Data source configuration

The second step is to define the Business Model: this is possible by accessing the Catalogs – Business Models catalog menu and providing some information (Fig.5). In order to easily profile user access to data, it's important to define a custom category (ITALIA) for the Business Model. This can be done from the Server Settings – Domain Management menu (Fig.6)


Fig. 5 New Business model interface


Fig. 6 New Business Model category definition

Once the information requested in Fig.5 is provided, clicking on Save enables the metamodel creation button. The first step to define the metamodel is to choose the tables to include, from the list coming from the data source (Fig.7).


Fig.7 Selection of tables from datasource

Clicking on Continue, it's possible to edit the Business Model. It's important to define, for every table, the key to use (Fig.8) and the relationships between tables (Fig.9)


Fig.8 Selection of keys from tables


Fig.9 Relationship definition

Once all relationships are created, going back to Catalogs – Business Models catalog and selecting the model just created, the new GENERATE button becomes available (Fig.10 and 11); it allows the creation of the datamart needed to query the metamodel from the Workspace interface.


Fig.10 Business Model creation interface with GENERATE button available


Fig.11 Dialog box to confirm datamart creation

For a user to work with the metamodel just created, it's necessary to grant the related permissions. This can be done by accessing the Profile Management – Roles Management menu and assigning the business model category ITALIA to the desired role (Fig.12). If the role or user is not already present, you can add it from the Roles Management or Users Management menu in the Profile Management section.


Fig.12 Adding ITALIA category to the role user

Now, logging in to Knowage with a user with the right profile, you can find the metamodel in the Data – Models section (Fig.13).


Fig.13 user workspace with metamodel available

Clicking on the magnifier icon opens the QBE interface (Fig.14). In the left column you can find the available tables and, for each one, the available attributes. You can select those of interest by dragging and dropping them into the Query Editor window. Here you can define various properties to apply to each attribute (alias, ordering, etc.) and the filters you need.


Fig.14 QBE interface

When you're sure about the attributes and the related properties, it's possible to see a preview of the report (Fig.15) using the command at the top right of the Query Editor panel. The query can be saved for later use with the floppy disk icon.


Fig.15 Report Preview