Wednesday, September 19, 2018

Cube Designer - Tips and Tricks #1

For me, the Cube Designer is a powerful and natural way to design an Essbase cube.  I love the fact that I can use the power of Excel to aid me in all aspects of dimension building.  However, one of the drawbacks of the DBX paradigm is that it is fairly rigid in its support for incrementally updating an existing cube.  For example, by default you cannot move members from one parent to another in an existing cube.


But there is a way around this limitation.  It relies on a few undocumented features of the DBX process.  I will discuss how to enable DBX to do everything except rename a member in an existing cube.

Friday, September 7, 2018

OAC - Essbase Drill-Through Reports

Essbase drill-through reports have come in various flavors over the years: Studio drill-through reports, EIS drill-through reports, and GL drill-through reports. All of them served well for viewing supporting details, but each had its pros and cons.

OAC-Essbase has a new and easy way to maintain drill-through reports; the feature has been entirely rewritten to solve the long-standing problems with supporting details. Because it is based on the connections and data sources abstraction, there are many opportunities to innovate and make it a lot better.  In this article, we are going to discuss the mappings involved in drill-through reports. In the next article, I will show you how to create one from scratch.

Built on the connections and data sources abstraction, drill-through reports support three types of mappings:


1. Dimension or Current member mapping
2. Generation Mapping
3. Level-0 Mapping

Let's discuss them in detail:

1. Dimension or Current Member Mapping

This is the easiest one: it queries on the current member in the selection. For example, if you select Jan, the generated query includes a clause like "... where year in ('Jan')"; if you query on Qtr1, it becomes "... where year in ('Qtr1')".  Since supporting details are unlikely to exist at all levels, this mapping makes sense when you are building detailed data for a flat dimension or want to drill only on a particular generation.


2. Generation Mapping 

This queries the members of the dimension at a mapped generation. For example, if you have details at the month level and want to show results when a quarter is selected, this mapping works perfectly.
Because it is recursive in nature, you can allow this result on any upper generation. It will not work, however, if you map an upper-level generation and allow drilling on a lower-level generation.  For example, if your supporting detail data is at the year level but you allow drill-through on the month generation, it will not work.


As shown in the example above, you can go from either year or quarter to see results at the month generation. This works on the same generation too: if you select Jan, it will still work.

3. Level-0 Mapping

Level-0 mapping is for ragged hierarchies. When generation mapping does not fit because the supporting details belong to level-0 members at different generations, this is the best fit.



As shown above, Qtr3 has no children; if there are supporting details for Qtr3 as well, this mapping suits your use case.

Because this is based on the common data sources framework, we get the following advantages:
1. Join Excel and DB data for supporting details once "Join Data Source" is available.
2. Editable data source: make changes in the supporting details to fix data.
3. Multi-cell drill-through reports: in the future, you will be able to select multiple cells at once and execute drill-through reports.

That's enough theory; we will see this in practice in my next article.
Stay Tuned!

Thursday, August 16, 2018

Loading data into OAC-Essbase from legacy systems using REST APIs

As promised in my previous blog, we will try to load data from a legacy on-premise system into OAC-Essbase.
As we already discussed, not all legacy systems generate data that can be directly imported into Essbase, and not all support JDBC. We may need to write a connector that understands both systems.  After moving to OAC-Essbase this task becomes even more challenging, since Essbase lives in the cloud while the legacy system stays on premises.

Well, there is good news! OAC-Essbase has streaming REST APIs to do exactly this.

One could write an exporter from the legacy system to a file, upload it, and load it. That takes time, though. Can we load the data while exporting, in a producer-consumer pattern?

Yes, it's really easy.

Current Sequential Solution



New Parallel Solution

As we see in figure 1, there are a lot of idle resources, and it can be significantly slower than figure 2.
Essbase data load allows multiple streams at the same time, so you can divide and conquer the data as well. Dimension build is sequential, so currently only one stream (queue) per cube is possible.

Let's say we already have exporter code that can generate a stream of data from our on-premise system.
e.g
"100-10","New,York","Actual","Jan",678,271
"100-10","New,York","Actual","Feb",645,258
...

Let's not store this data while generating it; send it directly to the cube. I am assuming that the rule file to process this data already exists in the cube directory.

For data load, there are three steps:
1. Start a data load stream
POST https://<host>/essbase/rest/v1/applications/Sample/databases/Basic/dataload
{
         "ruleFileName":"Data"
}

This will return a URL containing a unique identifier, to which we keep sending the data:

https://<host>/essbase/rest/v1/applications/Sample/databases/Basic/dataload/123456789

2. Keep sending the data
POST https://<host>/essbase/rest/v1/applications/Sample/databases/Basic/dataload/123456789
"100-10","New,York","Actual","Jan",678,271
"100-10","New,York","Actual","Feb",645,258
...

POST https://<host>/essbase/rest/v1/applications/Sample/databases/Basic/dataload/123456789
"200-10","New,York","Actual","Jan",700,280
"200-10","New,York","Actual","Feb",800,288
...


3. End stream
DELETE https://<host>/essbase/rest/v1/applications/Sample/databases/Basic/dataload/123456789
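The three steps above can be sketched in Python using only the standard library. This is a sketch, not a definitive implementation: the way the unique stream URL is returned (read here from a `Location` header) and the content types are my assumptions, so verify them against your instance's actual responses.

```python
import json
import urllib.request

BASE = "https://<host>/essbase/rest/v1"  # replace <host> with your OAC instance


def dataload_url(app, db):
    """Endpoint that starts a streaming data load for a cube."""
    return f"{BASE}/applications/{app}/databases/{db}/dataload"


def request(url, method, body=None, content_type="application/json"):
    """Tiny urllib wrapper; real code should add auth and error handling."""
    data = body.encode() if isinstance(body, str) else body
    req = urllib.request.Request(url, data=data, method=method,
                                 headers={"Content-Type": content_type})
    return urllib.request.urlopen(req)


def stream_dataload(app, db, rows, rule="Data", chunk=1000):
    """Start a stream, POST rows in chunks as the exporter emits them, close it."""
    # 1. Start the data load stream.
    resp = request(dataload_url(app, db), "POST",
                   json.dumps({"ruleFileName": rule}))
    stream_url = resp.headers["Location"]  # assumed location of the stream URL

    # 2. Keep sending the data in chunks (producer-consumer: no file on disk).
    buf = []
    for row in rows:
        buf.append(row)
        if len(buf) == chunk:
            request(stream_url, "POST", "\n".join(buf), "text/plain")
            buf.clear()
    if buf:
        request(stream_url, "POST", "\n".join(buf), "text/plain")

    # 3. End the stream.
    request(stream_url, "DELETE")
```

The exporter's output (`rows`) can be any iterable of CSV lines, so nothing ever has to touch disk.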

You can start multiple processes to do this in parallel.

The same can be done with a dimension build. 
1. Start a dimension build stream
POST https://<host>/essbase/rest/v1/applications/Sample/databases/Basic/dimbuild
{
     "restructureOption":  "PRESERVE_ALL_DATA"
}

Other restructure options are:
PRESERVE_LEAFLEVEL_DATA
PRESERVE_INPUT_DATA
PRESERVE_NO_DATA

This will return a URL containing a unique identifier, to which we keep sending the data:
https://<host>/essbase/rest/v1/applications/Sample/databases/Basic/dimbuild/123456789

2. Keep sending
It is always a good idea to load all the dimensions and then restructure once, rather than building one dimension at a time, each followed by a restructure.

2.1 Start a dimension build for the first rule file

POST https://<host>/essbase/rest/v1/applications/Sample/databases/Basic/dimbuild/123456789/product


2.2 Start sending the data

POST https://<host>/essbase/rest/v1/applications/Sample/databases/Basic/dimbuild/123456789/product


"Product","Diet","","~","Diet Drinks","","","","","","",""

"Diet","200-50","","","Diet Root Beer","","","","","","","",""
"Diet","300-60","","","Diet Cream","","","","","","","",""
...

You can repeat this until the product dimension is done.


2.3 You can repeat 2.2 for all remaining dimensions


3. End stream

DELETE https://<host>/essbase/rest/v1/applications/Sample/databases/Basic/dimbuild/123456789

This is where the restructure happens, once, for all the loads done in the 2.x steps.
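The same streaming pattern works for the dimension build flow. Here is a sketch under the same assumptions as before: the unique stream URL is assumed to come back in a `Location` header, and the per-rule-file endpoints follow the URLs shown above.

```python
import json
import urllib.request

BASE = "https://<host>/essbase/rest/v1/applications/Sample/databases/Basic"


def dimbuild_rule_url(stream_url, rule):
    """Per-rule-file endpoint within an open dimension build stream."""
    return f"{stream_url}/{rule}"


def post(url, body=None):
    """Minimal POST helper; real code should add auth and error handling."""
    data = body.encode() if isinstance(body, str) else body
    req = urllib.request.Request(url, data=data, method="POST",
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)


def stream_dimbuild(rule_files_with_rows):
    """Open one stream, send every dimension, then close it (one restructure)."""
    # 1. Start the dimension build stream with a restructure option.
    resp = post(f"{BASE}/dimbuild",
                json.dumps({"restructureOption": "PRESERVE_ALL_DATA"}))
    stream_url = resp.headers["Location"]  # assumed location of the stream URL

    # 2. For each rule file (e.g. "product"), start its build and send records.
    for rule, rows in rule_files_with_rows.items():
        post(dimbuild_rule_url(stream_url, rule))
        post(dimbuild_rule_url(stream_url, rule), "\n".join(rows))

    # 3. DELETE ends the stream and triggers the single restructure.
    urllib.request.urlopen(urllib.request.Request(stream_url, method="DELETE"))
```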


Data is sent over HTTPS, so there is no extra security concern. You can use your favorite language (Java, JavaScript, Python, ...) or the command line with cURL.

Try it out and let us know if you have any questions... 

Thursday, July 26, 2018

Why open ports when you can stream data to Essbase (OAC) over HTTPS?

Migrating to the cloud is sometimes hard when some things are on premises and others have moved to the cloud. It becomes harder when the cloud has to connect back to on-premise systems. Say we have a large warehouse on premises and an application in the cloud, and now the cloud application wants to connect to this warehouse.
Some cloud providers came up with *** Direct Connect, others with **** Connect.
We may have to open systems and ports to the public network and maintain those ports, backed by a strong IT team. Well, good luck with that!

Another approach is to upload huge files and process them in the cloud. Not a bad approach security-wise, but time consuming.

What would be really cool is a solution where we do not have to open any ports but can stream on-premise data to the cloud using industry-standard HTTPS.
Wish granted! The OAC-Essbase streaming command line allows us to do exactly that: it comes with a tiny stream loader for a giant problem.
Let's try it out...

In this example, we will load data from an on-premise MySQL database into the famous Sample/Basic cube.
Your database can be any database that supports JDBC, e.g. Redshift, SQL Server, Teradata, DB2, Salesforce, ...

I created a sample data table in MySQL like below:


Now create a rule file like the one below, with a SQL query inside it. We can also keep the query empty in the rule file and provide it dynamically. The query should be JDBC compliant and should not contain any Essbase substitution variables.


Download the Essbase CLI and install it in your favorite location.

Log in to the CLI:
>esscs.bat login -url https://host/essbase -u

Create a local connection so that it can be used multiple times or in scripting.

>esscs.bat createlocalconnection -cs "jdbc:mysql://localhost:3306/world" -D "com.mysql.cj.jdbc.Driver" -u <user> -p <password> -N mysql1

Before trying an actual data load or dim build, make sure you add the JDBC jars of your database to EXTERNAL_CLASSPATH.
Only the Oracle DB JDBC jars are provided with the CLI.

e.g. the echo command on a Windows box should print your jar like below.
>echo %EXTERNAL_CLASSPATH%
D:\study\mysql\mysql-connector-java-8.0.11\mysql-connector-java-8.0.11.jar

Now, load the data into the cube using the command below:

>esscs.bat dataload -application Sample -db Basic -rule data1.rul -user root -stream -connection mysql1
Streaming to Essbase...
........
Streamed 4224 rows to cube

In the CLI/log directory, a log file will have an entry with the time taken to load the records:
Total time taken for streaming 0 secs 270 millis

Note: Rule file should exist in the DB directory.

You can use this for building a dimension as well. See the details here:
https://docs.oracle.com/en/cloud/paas/analytics-cloud/essug/loading-dimensions-and-data-streaming-remote-database.html

Try it out with your favorite database and let us know if you like it.

This command only supports loading data from JDBC sources; what if we have a legacy system that cannot expose JDBC but we still want to load its data?

Can OAC expose a REST API to achieve this?

Stay tuned!




Friday, July 13, 2018

Playing with Essbase ad hoc analysis using the command line

If you are not familiar with cURL, JSON, and REST, please read my earlier blog; its initial section has a few links to get you started quickly.

In this blog, we are going to play with a grid using command line. Wait! Is this even possible?
Let's try...

Create a Sample/Basic cube from Gallery.
Let's get a default grid first.
curl -X GET -u <user>:<password> "https://host/essbase/rest/v1/applications/Sample/databases/Basic/grid"

It returns the default grid, which is based on the Smart View protocol.
...
  "slice" : {
    "columns" : 4,
    "rows" : 3,
    "data" : {
      "ranges" : [ {
        "end" : 11,
        "values" : [ "", "Product", "Market", "Scenario", "", "Measures", "", "", "Year", "105522.0", "", "" ],
      ...

The JSON above describes a 3x4 grid; "values" contains all the cell values in a flat array.
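For readability, a few lines of Python can reshape that flat array back into rows; this sketch relies only on the slice shape shown above.

```python
def grid_to_rows(slice_):
    """Reshape the flat Smart View 'values' array into a rows x columns table."""
    cols, rows = slice_["columns"], slice_["rows"]
    values = slice_["data"]["ranges"][0]["values"]
    return [values[r * cols:(r + 1) * cols] for r in range(rows)]


# The default 3x4 grid returned above.
slice_ = {
    "columns": 4, "rows": 3,
    "data": {"ranges": [{"end": 11, "values": [
        "", "Product", "Market", "Scenario",
        "", "Measures", "", "",
        "Year", "105522.0", "", ""]}]},
}
# First row holds the POV members: ['', 'Product', 'Market', 'Scenario']
print(grid_to_rows(slice_)[0])
```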

Now let's zoom in on Year. Really??

Save the grid above as "grid.json" with an initial 'zoomin' action node:


curl -X POST -u <user>:<password> "https://<host>/essbase/rest/v1/applications/Sample/databases/Basic/grid" -H Content-Type:application/json  --data "@/mnt/d/grid.json"

{
"action" : "zoomin",
"ranges": [[2,0,1,1]],
"grid": { <above grid goes here> }
}

"slice" : {
    "columns" : 4,
    "rows" : 7,
    "data" : {
      "ranges" : [ {
        "end" : 27,
        "values" : [ "", "Product", "Market", "Scenario", "", "Measures", "", "", "     Qtr1", "24703.0", "", "", "     Qtr2", "27107.0", "", "", "     Qtr3", "27912.0", "", "", "     Qtr4", "25800.0", "", "", "Year", "105522.0", "", "" ],
...

ranges in the JSON above has four values: row number, column number, number of rows, number of columns.  This means we can include multiple symmetric ranges in a single operation.
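Building that request body programmatically is straightforward. A small Python sketch of the action node; the range [2,0,1,1] mirrors the zoom-in example above.

```python
def zoomin_payload(grid, row, col, nrows=1, ncols=1):
    """Wrap a Smart View grid in a 'zoomin' action node.
    Each range is [row, column, number of rows, number of columns]."""
    return {"action": "zoomin", "ranges": [[row, col, nrows, ncols]], "grid": grid}


grid = {"slice": {"columns": 4, "rows": 3}}  # the grid returned by the GET above
payload = zoomin_payload(grid, 2, 0)  # zoom in on row 2, column 0 ('Year')
```

Serialize `payload` with `json.dumps` and POST it to the grid endpoint exactly as the cURL command does.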

Great, what next? Can we keep only, remove only, zoom out, pivot?
Yes. See the details here: https://drive.google.com/file/d/1qC6xIM8TbnqTw7DDcJyI4LbZDyD6lf2V/view?usp=sharing
(Download and view it in MS Word or LibreOffice, as Google Docs cannot render it properly.)

Let's zoom in and keep only to get an editable grid like the one below. (Hack hint: you can manually build a grid and call refresh to avoid too many zoom-ins...)
...
"slice" : {
    "columns" : 5,
    "rows" : 3,
    "data" : {
      "ranges" : [ {
        "end" : 14,
        "values" : [ "", "", "", "Actual", "Actual", "", "", "", "Sales", "COGS", "New York", "Cola", "Jan", "678.0", "271.0" ],
...

Let's change COGS from 271 to 300. Save this grid as before in grid.json, with an initial "submit" action node.

{
"action" : "submit",
"grid": { <above grid goes here along with the dirtycells array inside slice tag> }
}

Here is how to add dirtyCells: they are the indexes of the changed cells.

"slice" : {
    "columns" : 5,
    "rows" : 3,
    "data" : { ....}
    "dirtyCells":[14]

Here 14 is an index into the values array, which starts at 0.
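Assembling the submit payload in Python follows the same structure; note that dirtyCells sits inside the slice, as a sibling of data.

```python
def submit_payload(grid, changed):
    """Flag changed cell indexes as dirty and wrap the grid in a 'submit' action."""
    grid["slice"]["dirtyCells"] = list(changed)
    return {"action": "submit", "grid": grid}


grid = {"slice": {"columns": 5, "rows": 3,
                  "data": {"ranges": [{"end": 14, "values": ["..."]}]}}}
payload = submit_payload(grid, [14])  # index 14 is the edited COGS cell
```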

The grid comes back with the changed values. We can zoom out on Profit to see the impact.

By the way, you can execute MDX queries as well. See the link above.

Cool, isn't it? Let me know if you have any questions.




Monday, July 9, 2018

Introduction to Cube Designer - Files


OAC-Essbase has introduced a new platform concept called a catalog. The catalog is an area of storage within the cloud for both Essbase artifacts and end user artifacts. The catalog uses the OAC-Essbase security to restrict access.

Can Python digest a cube?


Then, I drew the inside of the boa constrictor, so that the grown-ups could see it clearly. They always need to have things explained.
But let's put elephants and boa constrictors aside. The more important question is: can Python digest a cube? Nowadays even kids in elementary school do Python, so I thought it would be useful to show how Python code can call OAC-Essbase REST APIs. Let's try to list the files under the /gallery/Cubes directory in the Essbase catalog.
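As a teaser, here is a minimal sketch using only the Python standard library. The /files/... catalog endpoint path and the JSON response shape are my assumptions based on the URL patterns used elsewhere in this blog, so adjust them to match the documented API for your instance.

```python
import base64
import json
import urllib.request

BASE = "https://<host>/essbase/rest/v1"  # replace <host> with your OAC instance


def catalog_url(path):
    """Build the (assumed) catalog files endpoint for a given catalog path."""
    return f"{BASE}/files/{path.strip('/')}"


def list_files(url, user, password):
    """GET the catalog listing with HTTP basic auth and return the parsed JSON."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    for item in list_files(catalog_url("/gallery/Cubes"), "admin", "password"):
        print(item)
```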