OracleBIBlog Search

Thursday, December 31, 2009

BI Apps Performance Optimization and Consideration - Part1

This topic stays at a high level but covers most of the common performance tuning and optimization considerations for OBIEE deployments. It does not go into much detail on ETL performance tuning. A couple of points are drawn from Oracle's recommendations.

At a high level the following needs to be reviewed and analyzed

  • Data model and custom star schemas
  • Physical Layer (including Joins and Keys configured)
  • Business Model and Mappings Layer (including Joins and measures defined)
  • Presentation Layer Layout
  • Application Performance (including Joins, and aggregate tables configured)
  • Caching options configured
  • Security and Personalization
  • Initialization Blocks and Variables configured
  • Investigate the use of aggregate tables and mini dimension tables to increase performance
  • Define data archive strategy and table partition strategy to manage data sizes in the database
  • Database optimization
  • Hardware setup


  • Consider partitioning large Fact tables having more than 20 million rows
  • Identify eligible columns of type DATE for implementing range partitioning
  • Connect to the Oracle BI Server repository and check the usage or dependencies on each column in the logical and presentation layers
  • Analyze the summarized data distribution in the target table by each potential partitioning key candidate and data volumes per time range, month, quarter or year
  • Based on the compiled data, decide on the appropriate partitioning key and partitioning range for your future partitioned table
  • The recommended partitioning range for most implementations is a month, though you can consider a quarter or a year for your partitioning ranges

The following columns may be considered as partitioning keys:

W_GL_ACCOUNT_D, for example, may be partitioned on a date WID column.
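As a sketch of the steps above, a large fact table can be range partitioned by month on a DATE column, after first checking the row distribution per candidate key. The table and column names below are illustrative, not actual BI Apps schema objects.

```sql
-- Check row distribution per candidate partitioning key before
-- committing to a range (hypothetical table and column names):
SELECT TO_CHAR(invoiced_on, 'YYYY-MM') AS period, COUNT(*) AS row_cnt
FROM   w_sales_f
GROUP  BY TO_CHAR(invoiced_on, 'YYYY-MM')
ORDER  BY period;

-- Monthly range partitioning of the fact table on the chosen DATE column:
CREATE TABLE w_sales_f_part (
  row_wid      NUMBER(10)    NOT NULL,
  customer_wid NUMBER(10)    NOT NULL,
  sales_amt    NUMBER(28,10),
  invoiced_on  DATE          NOT NULL
)
PARTITION BY RANGE (invoiced_on) (
  PARTITION p_2009_11 VALUES LESS THAN (TO_DATE('2009-12-01','YYYY-MM-DD')),
  PARTITION p_2009_12 VALUES LESS THAN (TO_DATE('2010-01-01','YYYY-MM-DD')),
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);
```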

Storage Considerations for Oracle Business Analytics Warehouse

  • Avoid setting excessive parallel query processes
  • Avoid running multiple I/O-intensive applications, such as databases, on shared storage
  • Avoid choosing sub-optimal storage for running the BI Applications tiers
  • Make sure you carefully plan for storage deployment, configuration, and usage in your Oracle BI Applications environment
  • Avoid sharing the same RAID controller(s) across multiple databases
  • Set up periodic monitoring of your I/O system during both ETL and end user queries load for any potential bottlenecks
  • Update optimizer statistics
  • Consider implementing Oracle RAC with multiple nodes to accommodate large numbers of concurrent users accessing web reports and dashboards
  • Check for the database configuration parameters as per the recommendations specified in the install guide
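For the optimizer statistics point above, a typical call on Oracle looks like the following; the schema and table names are hypothetical.

```sql
-- Gather statistics on a warehouse table after a load completes
-- (schema and table names are illustrative):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'DWH',
    tabname          => 'W_SALES_F',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE   -- also gather statistics on the table's indexes
  );
END;
/
```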

Managing Slow Running Reports

  • Analyze the session SQL
  • Run Trace or Explain plan on query
  • Analyze the logical joins leading to the slow performance
  • Analyze the database
  • Increase the log level to 5 and analyze the NQQuery.log file, looking at the query string,
    logical request, physical SQL query, query outcome status, physical query response time, and rows returned to the client
  • The key approach is to address performance issues as close to the data as possible, moving processing down the stack toward the database
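As a hedged aside, one way to capture a verbose log for a single request, rather than raising the level for every user, is to prefix the report's logical SQL with a request variable (this assumes the LOGLEVEL session variable has not been locked down in the repository):

```sql
-- In the Advanced tab of the request, add this prefix so only this
-- query is written to the query log at level 5:
SET VARIABLE LOGLEVEL=5;
```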

The next part will cover handling performance at the metadata, reports and dashboards, and presentation catalog levels.

Optimization of Established Planning Applications – Part 1

A common question that I am often asked when first arriving at a client with established Planning applications is, “How can I improve my application’s performance?” Two short-term initiatives that can often provide more robust performance for established Planning environments are the following:
1. Remove unnecessary history
2. Reorder outlines

Remove Unnecessary History

Often, an inordinate amount of history is maintained within Planning applications to facilitate year-over-year reporting. Maintaining excessive history in a Planning application creates unneeded blocks, and the greater the number of blocks, the greater the processing time for calculations. While year-over-year analysis capabilities undoubtedly have to be maintained, I often leverage the practice of creating a reporting application to facilitate Planning optimization.

Native Essbase reporting databases are developed to archive historical data. These new reporting databases are based on the existing databases within the Planning applications. All data not associated with generating future budgets or forecasts from each of the Planning applications is moved to the reporting database through partitioning, xrefs or data extracts and loads. Planning data (i.e. Budgets, Forecasts, and Plans) would then be moved into the reporting databases at scheduled intervals to allow the year over year analysis, in addition to allowing for an optimal configuration of planning.

Basic steps to moving history out of Planning:

  1. Create native Essbase reporting cubes to archive historical data
  2. These new reporting cubes should be based on the existing cubes in the Planning applications
  3. Remove all data not associated with generating future budgets or forecasts from each of the Planning applications
  4. Load all historical data into the new reporting cubes
  5. Remove dimension members pertaining to historical data and alternate hierarchies from the Planning applications
  6. Integrate current year data from the Planning applications to the reporting cubes
Reordering of Outlines

Industry-wide standards recommend that outlines be structured in an hourglass shape. From top to bottom: the dense dimension with the largest number of stored members down to the dense dimension with the smallest number of stored members, then the sparse dimension with the fewest stored members down to the sparse dimension with the largest number of stored members.

I acknowledge that this rule doesn’t apply to every model; however, I do suggest that application administrators run multiple iterations of a baseline calculation, reordering the dimensions, to ascertain the optimal outline order for calculation processing. Reordering outlines to increase the chance of achieving a single anchoring dimension with multiple bitmaps often results in optimal calculation performance.

Administrators can simulate calculations using SET MSG ONLY in a calculation script. A simulated calculation produces results that help you analyze the performance of a real calculation that is based on the same data and outline.

By running a simulated calculation with a command like SET NOTICE HIGH, you can mark the relative amount of time each sparse dimension takes to complete. Then, by performing a real calculation on one or more dimensions, you can estimate how long the full calculation will take, because the time a simulated calculation takes to run is proportional to the time that the actual calculation takes to run.
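A minimal simulated-calculation script might look like the following sketch; no data is changed, and the application log reports the relative time per dimension.

```
/* Simulated calculation: SET MSG ONLY suppresses data updates,
   while SET NOTICE HIGH writes timing notices to the application log. */
SET MSG ONLY;
SET NOTICE HIGH;
CALC ALL;
```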

While these steps aren’t the be all, end all to optimization, these initial steps will help get you started.

In the next blog, we’ll address leveraging multiple plan types and calculations running in top-down and serial modes.

The Impact of IFRS for EPM Reporting – Part 3

In Part 3, I want to provide more detail on the similarities, differences, and convergence efforts regarding consolidations, joint venture accounting, and equity method investees. The details below are from a presentation I viewed during a company-sponsored educational seminar about IFRS.


  • The basis for determining whether or not subsidiaries are consolidated is based on control, but there are differences in the definition of control. Generally, subsidiaries subject to control by the parent are consolidated.
  • Equity investments (referred to as “an associate” in IFRS) in which the investor has “significant influence” (generally 20% or more) but not control are accounted for under the equity method


Consolidation Model

  • US GAAP – focus on controlling financial interest (FIN 46)
  • IFRS – focus on the concept of the power to control – presumed to exist at 50% voting

Special Purpose Entities

  • US GAAP – FIN 46 requires the primary beneficiary (determined based on the consideration of economic risks and rewards) to consolidate
  • IFRS – consolidate when the substance of the relationship indicates that an entity controls the SPE – No concept of QSPE

Significant events between reporting dates

  • US GAAP – Disclosed in financial statements when different dates are used
  • IFRS – adjusted for in the financial statements

Joint Ventures

  • US GAAP – generally accounted for using the equity method
  • IFRS – either the proportionate consolidation method or the equity method


FASB and IASB had a joint project that addressed non-controlling interests, which culminated in the issuance of FAS 160, Non-controlling Interests in the US and a revision of the accounting for non-controlling interests for IFRS.

IASB recently issued an exposure draft that proposes the elimination of proportionate consolidation for joint ventures.

In Part 4, I will provide more detail about our next two topics: business combinations and inventory.

BICG in Jan/Feb 2010 Issue of Oracle Magazine

There is an article titled, 'Honoring Excellence' within the most recent issue of Oracle Magazine that provides visibility to the companies Oracle honored with awards in 2009. On page 28 of the printed copy, BICG is listed as the sole winner of the Oracle Business Intelligence/Enterprise Performance Management Implementation Partner of the Year Award.

Wednesday, December 30, 2009

Should I Implement Oracle's Hyperion Workforce Planning Module

One of the questions that has been coming up more frequently with prospects and clients these days is whether they should implement the Workforce Planning module or develop a custom-built workforce planning solution. Before we discuss that question further, let’s briefly review some of the features and functionality of the Workforce Planning module.

Workforce Planning is a purpose-built employee planning module that is seamlessly integrated into Hyperion Planning. Workforce Planning contains pre-built functionality for the most commonly used planning practices. Upon initialization of the Workforce Planning module, a series of pre-defined Account, Entity, Employee, Scenario and Version members, web forms, Smart Lists, member formulas, business rules and menus are created. These artifacts assist in the development of the employee expense plan.

Workforce Planning also includes functionality that calculates a variety of employee related expenses such as Merit Increase, Bonus, Overtime, Health Insurance, Employee Taxes (i.e. Social Security, FUTA, SUI, etc.). Additional Workforce Planning features include New Hire functionality, Transfer In/Transfer Out functionality, the ability to identify each employee with a specific Employee Status’ (i.e. Active, Leave of Absence, Maternity, etc.) as well as FTE and Headcount planning.

Benefits of implementing Workforce Planning

The obvious benefit of implementing Workforce Planning is having a purpose-built application for employee planning that incorporates the vast majority of employee-related expenses. A by-product of implementing Workforce Planning is that it provokes thought around questions that may not be considered when developing a custom-built workforce planning solution. Workforce Planning requires specific, detailed information and data at a lower level of detail than most custom-built solutions, and so the application design needs to take that into consideration.

A residual benefit from a design perspective of implementing Workforce Planning is that it creates a fourth Essbase database for your Hyperion Planning application. A standard Planning application contains three Essbase databases. Oftentimes during an implementation, having an additional database available is very useful in that it provides greater application design flexibility.

For many organizations, employee-related costs are the primary cost driver. Workforce Planning allows companies to plan employee-related expenses at the employee level, whether for a new hire or an existing employee. Workforce Planning also provides flexibility to plan at a higher level, by department for example. Home-grown spreadsheet models typically do not provide that level of flexibility.

Workforce Planning’s seamless integration with Hyperion Planning enables you to combine your employee data and information with your non-employee data for a complete Plan or Forecast, allowing you to analyze the impact of your workforce plan in real time.

Video: Oracle Procurement and Spend Analytics at Los Alamos National Laboratory

Ware Hartwell, BI Project Manager at "the lab", spoke at a recent conference about their approach to implementing the Oracle BI Applications - specifically the Procure & Spend module. The video outlines some of the challenges that the implementation team faced and their recommendations for others facing the same challenges.

Los Alamos National Laboratory (LANL) is one of the largest science and technology institutions in the world that does multidisciplinary research in fields such as national security, outer space, renewable energy, medicine, nanotechnology, and supercomputing. Through a unique mentor-to-mentee training program, LANL is shifting from standard operational reporting to generating business intelligence insight with the use of Oracle Business Intelligence Enterprise Edition and Oracle Business Intelligence Applications.

Part 2 of 3

Part 3 of 3

DAC, ETL and Database Performance Tuning

DAC Setup:

DAC High Performance Execution:

· Automate Full and Refresh mode optimization rules (Analyze tables)

· Queue execution tasks for performance

· Set the level of Informatica session concurrency

· Load balancing across multiple Informatica Servers

· Restart the execution plan from point of failure

· Automate updating the database optimizer

· Analyze Data Warehouse tables when refreshing the Load.

Load Balancing across multiple Informatica Servers:

· One can set the maximum number of workflows that can be executed on a per-server basis.

· This option provides tremendous flexibility to quickly fine tune for the maximum throughput

· Tasks are automatically load balanced across multiple servers

Database Tuning:

· Database Server Throughput

1. Processes commands from the Informatica Server to read, transform and write data

2. Data Storage is a critical performance variable

3. Spread data across multiple disks

4. RAID configuration can offer significant benefits

· Database configuration and schema management dramatically affect the performance of ETL and queries

1. Create relevant indexes

2. Partition large fact tables

3. Ensure that database server parameters are optimal

· Ensure that cost-based optimization is enabled

· Ensure that optimizer statistics are up to date

· Isolate Data Warehouse data into its own database

· Create foreign keys in the database, but configure them not to enforce the foreign key relationship

· Analyze for occurrences of highly skewed data in indexed columns.
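On Oracle, the unenforced foreign key recommendation above can be implemented with a RELY DISABLE constraint, which documents the relationship for the optimizer without the cost of enforcement during ETL. The table and column names are illustrative.

```sql
-- Declare the FK for the optimizer's benefit (e.g. star transformation),
-- but skip validation and enforcement overhead during loads:
ALTER TABLE w_sales_f
  ADD CONSTRAINT fk_sales_customer
  FOREIGN KEY (customer_wid) REFERENCES w_customer_d (row_wid)
  RELY DISABLE NOVALIDATE;
```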

Review Indexing Strategy for Data Warehouse Tables

· Index all new primary keys and foreign keys

· Index columns frequently used in drill-down paths in a dimensional hierarchy

· Index columns used in filters of frequent reports

· Use different indexes for different types of data:

· Bitmap (use when):

A query constrains multiple columns which have few distinct values in their domains (large numbers of duplicate values).

A large number of rows satisfy the constraints on these columns

The referenced table contains a large number of rows.

· B-Tree (index columns that):

Occur frequently in WHERE clauses

Often used to join tables (include aggregate tables)

Occur in ORDER BY clauses (the index can facilitate ordering)

Occur in a foreign key reference constraint

Used to enforce PRIMARY KEY and UNIQUENESS constraints

Also consider composite key indexes.
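Following the criteria above, a low-cardinality dimension attribute is a bitmap candidate, while a join or filter key suits a B-tree index. The object names below are illustrative.

```sql
-- Bitmap index: few distinct values, many duplicates, large table.
CREATE BITMAP INDEX w_customer_d_gender_bx
  ON w_customer_d (gender_code);

-- B-tree index: column used in joins, WHERE clauses, and ORDER BY.
CREATE INDEX w_sales_f_customer_ix
  ON w_sales_f (customer_wid);

-- Composite key index: columns that are frequently filtered together.
CREATE INDEX w_sales_f_comp_ix
  ON w_sales_f (customer_wid, invoiced_on);
```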

ETL Setup/Tuning:

The first step in ETL performance tuning is to identify performance bottlenecks. Bottlenecks can occur in the source or target database, a mapping, a transformation, or the system. The strategy is to identify a performance bottleneck, eliminate it, and then identify the next one, until you are satisfied with the performance.

· Run Test Sessions

· Study performance details and thread statistics

· Monitor system Performance

· Eliminate Source and Target Database Issues

· Eliminate Mapping Issues

· Eliminate Transformation Issues

Study performance details and thread statistics:

You can use thread statistics to identify source, target, and transformation issues. The Integration Service uses one reader thread, one transformation thread, and one writer thread to process a session. The session log provides the following thread statistics:

· Run Time

· Idle Time

· Busy

If a transformation thread is 100% busy, consider adding a partition point in the segment. When you add partition points to the mapping, the Integration Service increases the number of transformation threads. If the system is already running near full capacity, do not add more threads.

Mapping Issues:

Mapping-level optimization may take time to implement, but it can significantly boost session performance. Generally, you reduce the number of transformations in the mapping and delete unnecessary links between transformations to optimize the mapping. Configure the mapping with the least number of transformations and expressions that can do the most amount of work possible. Here are some simple steps:

· Optimize the flat file sources

· Configure Single Pass reading

· Optimize Simple Pass through Mappings

· Optimize filters

· Optimize data type conversions

· Optimize expressions

Transformation Issues:

You can optimize the following types of transformations in a mapping:

· Aggregator Transformation

· Joiner Transformation

· Lookup Transformation

· Sequence Generator Transformation

· Source Qualifier Transformation

Aggregator Transformation:

Here are some guidelines to optimize the performance of an aggregator transformation:

· Group by simple columns

· Use sorted Inputs

· Use Incremental aggregation

· Filter data before you aggregate it

· Limit port connections.

Joiner Transformation:

Joiner transformations can slow performance because they need additional space at run time.

· Designate the master source as the source with fewer duplicate key values

· Designate the master source as the source with fewer rows

· Perform joins in a database when possible.

Lookup Transformation:

If the lookup table is on the same database as the source table in your mapping and caching is not feasible, join the tables in the source database rather than using a Lookup transformation.

· Use the optimal database driver

· Cache lookup tables

· Optimize the lookup condition

· Index the lookup table

· Optimize multiple lookups

Sequence Generator Transformation:

The Number of Cached Values property determines the number of values the Integration Service caches at one time. Make sure that the number of cached values is not too small; consider configuring it to a value greater than 1000.

Source Qualifier Transformation:

Use the Select Distinct option for the Source Qualifier transformation if you want the Integration Service to select unique values from a source, filtering unnecessary data earlier in the data flow. This can improve performance.

Data Warehouse (Target Database):

When ETL sessions write into Data Warehouse tables, you can perform the following tasks to increase performance:

· Increase Checkpoint Intervals

· Use Bulk Loading

· Minimize Deadlocks

· Increase database network packet size

· Optimize Target Database

Increase Checkpoint Intervals:

The Integration Service’s performance slows each time it waits for the database to perform a checkpoint. To increase performance, consider increasing the database checkpoint interval. When you increase the database checkpoint interval, you increase the likelihood that the database performs checkpoints only as necessary, when the size of the database log file reaches its limit.

Use Bulk Load:

When bulk loading is enabled, the Integration Service bypasses the database log, which speeds performance. Without writing to the database log, however, the target database cannot perform rollback. As a result, you may not be able to perform recovery. When you use bulk loading, weigh the importance of improved session performance against the ability to recover an incomplete session.

Minimize Deadlocks:

To improve session performance, you can increase the number of target connection groups the Integration Service uses to write to the targets in a session. To use a different target connection group for each target in a session, use a different database connection name for each target instance. You can specify the same connection information for each connection name.

Increase Database Network Packet Size

Increase the network packet size to allow larger packets of data to cross the network at one time. Increase the network packet size based on the database you use:

· Oracle – You can increase the database server network packet size in the listener.ora and tnsnames.ora files. Check with your DBA for additional information about increasing the packet size.
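On Oracle, the relevant setting is the session data unit (SDU), which controls the Oracle Net buffer size. The sketch below shows where it goes on each side; the service names, host, and sizes are illustrative, and your DBA should confirm appropriate values.

```
# tnsnames.ora (client / Informatica server side; names are illustrative)
DWH =
  (DESCRIPTION =
    (SDU = 32768)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dwhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = dwh))
  )

# listener.ora (database server side)
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SDU = 32768)
      (SID_NAME = dwh)
      (ORACLE_HOME = /u01/app/oracle)
    )
  )
```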

Optimize Target Database:

If the target database is Oracle, you can optimize the target database by checking the storage clause, space allocation and rollback or undo segments.

When you write to an Oracle Database, check the storage clause for the database objects. The database should also store table and index data in separate tablespaces, preferably on different disks.

When you write to Oracle Database, it uses rollback or undo segments during loads. Make sure you have appropriate tablespaces. The rollback or undo segments should also have appropriate storage clauses.

Tuesday, December 29, 2009

Use your remaining 2009 budget for 2010 training

BICG currently has a promotion going where you can take advantage of BICG University expertise at 30% off for a limited time. This is a great opportunity for those with a fiscal year that ends in December and few dollars left in their 2009 budget... you can use it for training that can be taken in 2010.

BICGU offers professional training for Oracle Business Intelligence and Enterprise Performance Management. Courses are offered via computer-based training (CBT), self-paced, online classroom, or on-site (domestic or international).

Oracle BI & EPM Training
30% off All Courses
Enroll before Jan 31

Monday, December 28, 2009

Experiences with Hyperion Budgeting Applications

Boy, do I miss the annual budgeting process. Who can argue with long hours, strict deadlines, and countless iterations?! Ok, I may be stretching it a bit here, but I certainly have had plenty of experience with the corporate planning process in many shapes, forms, and flavors. Following is a recount of my experiences with the growth and evolution of the Hyperion planning tools.

As a corporate financial analyst in the early ‘90s, the company I worked for performed the annual budget, quarterly forecasts, and monthly reporting using Excel spreadsheets – a practice that was commonplace back then, and still is today, to a large degree. As we all know, Excel is a fantastic tool that serves many purposes. But, as part of the budgeting process, it definitely presents many challenges and leaves a lot to be desired.

Some difficulties I’ve experienced when using Excel for planning include:
• Having to consolidate a multitude of linked spreadsheets on a regular basis.
• Many sources of data.
• Time spent having to repair or update faulty links.
• A confusing maze of cell formulas.
• Trying to determine which file is the most current version.

In the mid-90s, we purchased a tool called Hyperion Pillar to facilitate the budgeting cycle. Pillar was a very good planning tool that was at the top of its class in its heyday. As a side note, the one distinct thing I remember about Pillar when I was first introduced to it was that administrator users were called Motherships and planner users were called Satellites. I think this set the product apart all by itself.

For all the things that were faulty about the naming conventions, the tool itself was very strong. You had a centralized place for standardized calculations (globals). Data was neatly categorized into different modules (expense, revenue, asset, etc.). Financial reports were built by default. There was a single, master file that consolidated in each planner’s file (yea, no more spreadsheet links!). And, it had built-in security down to the dimension level (depts, accts, etc.).

Pillar was a great tool (and still is), which aptly served its purpose for many organizations. But, like other financial applications, it also had its limitations and weaknesses. For example, the reporting aspect did not allow any real formatting or customization. The distribution and consolidation process (sending files to and collecting files from end users) was cumbersome and often resulted in mismatched data. There were limitations in dimensions, line items, and other application objects that made it unusable or inefficient for larger organizations.

I spent many years supporting Pillar while I worked in Hyperion Technical Support and one of the most frequent requests we heard went as follows:

Client: “Hi, can you please have the developers implement Print Preview”.
Me: “Um, yeah, we’ll get right on that….”

If you ever used Pillar, this was one of the biggest complaints about the product. “Every application has a Print Preview!!” For various reasons, Hyperion was not able to implement this functionality into Pillar until the latest releases. And, at that point, Pillar was in its twilight. But, again, for the few faults it had, I would still be the first to say, “I Loved Hyperion Pillar!”

Around the year 2000, Hyperion developed a new budgeting tool that was supposed to be the next generation version of Pillar. I was lucky enough to support the beta and 1.0 versions of Hyperion Planning. Now you want to talk about fun – this was fun! Well, not really…. Challenging might be a more appropriate word. As you can imagine, supporting any 1.0 version of a product is difficult. After supporting a stable, well-liked product like Pillar for many years, trying to support this new brain-child called Planning was downright impossible. We went from a proprietary, self-contained program to an application that incorporated relational databases, web application servers, an OLAP engine (what the?!), separate reporting tools, calculation scripting, batching, scheduling, etc. All I could think was, “I want my mothership back!!”

But, as with most successful programs, things always get better. Having been a part of that initial release, I’ve been able to experience Planning’s tremendous growth and stability. You have a web-enabled tool that allows end users to enter their budgets over the web. No more spreadsheets, no more files - the only footprint an end user needs is a web browser. The tool incorporates built-in process management, through both the budgeting and approval process. Hyperion Planning is well integrated with its OLAP engine Essbase and we finally see the ability to perform most of the administrative functions in the web interface. And, for those who still love their spreadsheets (and I know there are many who do!) Planning and Essbase are fully incorporated with the Microsoft Office suite, using a tool called Smart View. This tool allows you to keep a dynamic link into the Hyperion database while working from the familiar Office environment.

Today, with the acquisition by Oracle, Hyperion Planning’s capabilities have expanded farther than ever before. Take reporting, for example. You have the use of the native tools, Financial Reporting Studio and Web Analysis. Or, you can take advantage of the Oracle Business Intelligence (OBIEE) suite of products. You also have more flexibility in data integration. Oracle has the tools that allow you to automate the loading of source data or metadata from your ERP, synchronize your headcount information from your HR system, or drill-down to detailed data in a data warehouse.

Going forward, I believe the Oracle acquisition will be good for Hyperion Planning as integration with other products and interfaces will become more seamless and user friendly. Who knows maybe the budgeting process can actually become pleasant? Ok, maybe the holiday punch I had was a little “strong”, but certainly the current release of Planning should be a tool that allows analysts to do more analyzing (what a concept!) and less number crunching.

Wednesday, December 23, 2009

Loyalty Analytics in 7.9.6 for an actionable Customer Insight

The latest version of Oracle BI Applications Marketing Analytics, 7.9.6, has released Loyalty Analytics, which is critical for any organization that effectively uses Marketing Analytics as a tool to track and execute their Marketing Programs and Campaigns. Loyalty Analytics may be treated as an add-on that helps drive aggressive Customer Relationship Management and retain more customers. This would surely help an organization maintain its competitive edge, especially in the current business climate.

It is a fully integrated solution for Siebel Loyalty Management allowing users to analyze the performance of Loyalty Programs.

It covers B2B analysis, which is account-loyalty-specific analytics such as account revenue, growth, status, geography and industries, and B2C analysis, which is consumer-loyalty-specific analytics.

There are 61 OOTB reports built into 4 dashboards and 13 dashboard pages. Let’s understand the specifics of the four dashboards:

Finance, Loyalty Program, Membership Analytics, Partnership

The Finance dashboard tracks reports specific to revenue for a program or member, the liability of programs, and redemption costs.

The Loyalty Program dashboard is very important and can be used to track reports specific to the loyalty programs launched in the Siebel CRM application.

It facilitates reports to track loyalty overview, transactions, promotions and points. It also provides helpful reports for executives to analyze top-performing and underperforming programs and redemption trends.

Membership Analytics is another dashboard that helps to analyze membership trends, revenue and membership churn.

Partnership dashboard helps analyze the performance of partners in terms of profitability and revenue trend.

There are quite a few important reports that may be developed beyond the existing OOTB canned reports to explore more analysis

1. For Partners -->

a. Partners performance by geography (Revenue)

b. Customers retention by partner

2. For Loyalty Program --> Points transferred

3. Enrollments --> Unused/Dead enrollments

4. Transactions --> Least redeemed products

5. Points --> Bonus Points

6. Cost and Revenue --> Unused Credits for cancellations

The above list is not exhaustive; many more reports can be developed based on business requirements and customizations in the base Loyalty application.

What's the right level of detail to include in Hyperion Planning?

When facilitating design sessions with clients, the question that invariably comes up is what level of detail should be included within their Hyperion Planning application.

Often, a client’s first instinct is to include the level of detail that exists within their ERP system for actual data. When confronted with this desire/requirement, I like to encourage spirited conversation between project sponsors and subject matter experts by having them address the following two items.
1) Does a desire exist across the majority of the user base, to plan/budget at the same level of detail for which actual data occurs?
2) Does this level of detail for formulating a plan/budget coincide with management’s performance management objectives?

I have found that the best practice for the level of detail contained within a planning application or applications is that it should reflect management’s performance management objectives. Detail for detail’s sake wastes resources, but arbitrarily limiting the level of detail is not a best practice if it limits analysis against important factors that must be controlled.

If the driving desire to include this level of detail in the Planning application is only to facilitate analysis against actual data and business owners have no desire to perform data entry to each and every travel expense account for example, then other alternatives exist that will not encumber the Planning application(s).

Successful Planning implementations provide the level of detail within the planning model that business owners are accustomed to and desire for formulating their Plans/Budgets. Meaningful reporting and analysis often require more detail than users wish to plan at. This disconnect can be addressed through properly architected reporting applications, partitioning, or drill-through.

When addressing the level of detail to be included within a Planning application, answering the two fundamental questions above has led me to successfully architect and implement in excess of 30 Planning applications. More often than not, clients who are really pushed to consider what is necessary from a Plan/Budget perspective versus what is necessary from a Reporting/Analysis perspective will arrive at a Planning application that is more summary level.

While meeting management’s performance management objectives for Reporting and Analysis shouldn’t be ignored, encumbering users with unnecessary levels of detail within a Plan/Budget only introduces potential for issues with performance, maintenance and end user satisfaction.

The mantra that I, as well as many other seasoned EPM professionals, subscribe to when asked to architect a Planning application is: “Provide the level of detail necessary to formulate the Plan. Planning should be used as a means to facilitate Plan/Budget vs. Actual analysis, but if the level of detail at which this analysis occurs is beneath the level of Plan formulation, then this analysis should be done outside of Planning (i.e., Essbase, partitioning to a reporting app, drill-through).”

Wednesday, December 16, 2009

Oracle partners with BICG on BI Applications Hands-on Workshop

Pleasanton, California - December 09, 2009

BICG partnered with Oracle to conduct a hands-on Oracle Business Intelligence Applications workshop for Oracle clients.

Oracle provided the meeting space, the latest and greatest view of the technology, and a presentation on the business value gained when implementing it. This allowed all attendees to see first hand what is happening and what is coming. You had to be there!

As the only recipient of the coveted Oracle BI/EPM Partner of the Year designation in association with the Oracle Innovation Awards at this year's Oracle OpenWorld, Minneapolis-based BICG was selected to provide the "hands-on" training. BICG has long been recognized as a top performer when it comes to delivering "experience based" success in both product training and project design, development, and implementation. BICG also presented to the group on "Developing a Corporate Business Intelligence Strategy". The presentation directly addressed potential pitfalls and recommendations surrounding Business Intelligence projects. BICG chose 30-year BI veteran Steve Tinari to lead the training and make the BI Strategy presentation.

Registration for the workshop was required and limited to 40 seats, all the space the meeting room allowed. Registration sold out in days and a wait-list quickly developed; the last count showed 25 clients waiting to claim a seat.

BICG has been asked to continue conducting this hands-on Oracle BI Apps workshop at strategic locations across the US and abroad. Registration for upcoming workshops can be found at .

BICG Workshop Advice: register early for the upcoming events and don't forget to bring your laptop. BICG uses a virtual training method that allows you to connect to their environment using only a web browser!

Wednesday, December 9, 2009

Oracle OpenWorld Video: Spotlight on Oracle BI at LCRA

During Oracle OpenWorld, Keith Niesner from the Lower Colorado River Authority (LCRA) provided an overview of business intelligence at his organization.

Niesner highlights the use of Oracle Business Intelligence at LCRA while detailing the challenges faced during deployment and lessons learned.

More Video from Oracle OpenWorld and BICG >>

Friday, December 4, 2009

GL Segments in the BI 7.9.6 GL Balance Sheet subject area

The OBIEE 7.9.6 GL Balance out-of-the-box subject area provides for segment reporting. A segment reporting hierarchy provides users with a drag-and-drop way to produce segment reports. Segments can be any subset of a dimension, but in our example let’s use the set of accounts in the Gross Margin report.

The value of managing the hierarchies for a segment is that multiple levels of the segment can be created, from most aggregated to most granular, then saved and exposed for drag-and-drop from the presentation layer in Answers and the Office add-ins. For example, a user could take Level3 of the Gross Margin dimension and the Ending Amount from the fact table and get the Gross Margin report. Level4 can be just slightly different from Level3 for some reason specific to Gross Margin reporting, and that is perfectly fine, since this hierarchy is dedicated specifically to Gross Margin reporting. The user should also be able to take Level1 and then drill down through the levels to see further breakouts of the gross margin lines. Expense might break down at lower levels of the hierarchy into Indirect and Direct, and then further down into more granular groupings of accounts.
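The mechanics behind this kind of level-based reporting are easy to sketch. The snippet below is a simplified illustration only; the table and column names (`gross_margin_dim`, `gl_balance_fact`, and the level columns) are hypothetical and do not reflect the actual BI Apps schema. It simulates a level-flattened hierarchy dimension joined to a fact table, rolling Ending Amount up to Level3:

```python
import sqlite3

# In-memory toy model of a level-flattened hierarchy dimension and a fact table.
# Table and column names are illustrative, not the actual BI Apps schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE gross_margin_dim (account_id INTEGER, level3 TEXT, level4 TEXT)")
cur.execute("CREATE TABLE gl_balance_fact (account_id INTEGER, ending_amount REAL)")

cur.executemany("INSERT INTO gross_margin_dim VALUES (?, ?, ?)", [
    (1, "Revenue", "Product Revenue"),
    (2, "Revenue", "Service Revenue"),
    (3, "Expense", "Direct"),
    (4, "Expense", "Indirect"),
])
cur.executemany("INSERT INTO gl_balance_fact VALUES (?, ?)", [
    (1, 500.0), (2, 300.0), (3, -250.0), (4, -150.0),
])

# Dragging "Level3" and "Ending Amount" onto a report is effectively this query:
cur.execute("""
    SELECT d.level3, SUM(f.ending_amount)
    FROM gl_balance_fact f
    JOIN gross_margin_dim d ON d.account_id = f.account_id
    GROUP BY d.level3
    ORDER BY d.level3
""")
print(cur.fetchall())  # [('Expense', -400.0), ('Revenue', 800.0)]
```

Dropping Level4 into the same query's GROUP BY would break each line out further (Direct vs. Indirect), which is exactly the drill-down behavior described above.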

It is interesting to see that the GL Balance Sheet subject area delivers 10 different account hierarchies, on the assumption that reporting will need many hierarchies for various segments. The hierarchies are labeled GL Segment1 through GL Segment10.

What's the value of hierarchy management?

Hierarchy management introduces the notion that hierarchies need be neither fixed, as is often seen in OLTP systems, nor unmanaged, as the needs of ad-hoc end-user reporting might suggest. In fact, hierarchies are so important that they deserve any number of iterative “what-if” scenarios to quickly get at the truth hidden behind their aggregates. To accomplish this, the hierarchy is abstracted away from both the operational and analytical (i.e., OLAP) systems. Since most data distills into groups with lists of members, the art and science of manipulating these taxonomies becomes its own managed system, improves analysis of business processes, and also enforces data quality. Hierarchy management has become more widely recognized as expectations have grown for consolidated reporting, service-oriented architecture, and data integration. In these environments, not only do different account numbers in underlying systems need to be resolved, but there must also be apples-to-apples reporting where accounts are segmented using hierarchies. Different roles in the organization need different hierarchies. According to Eric Kavanagh of TDWI:

“Hierarchy management serves as a translation layer between values in a data warehouse or other business intelligence environment, and the reporting or analysis view that business users see. By abstracting this translation layer, individual business users can manipulate variables within that layer without overwriting data in source systems or requiring hard-coded changes to those systems. This abstraction opens up an entire realm of analysis that can help organizations maintain competitive advantage in an increasingly complex and demanding marketplace” (Eric Kavanagh, “The Magic of Abstraction: Hierarchy Management and Decision-Making,” TDWI Research article).

Implementation Implications

Exposing multiple hierarchies helps unlock the advantages of managed hierarchies in OBIEE. The 11g release will improve the presentation of hierarchy levels, but that does not solve the whole problem. The challenge I have found in implementation is twofold. If hierarchy management is not effective, it is difficult to arrive at effective level definitions for segments such as a Gross Margin segment. But in the past the bigger problem has been the decision of whether to expose a hierarchy at all in OBIEE. It was difficult to convince an organization to expose a hierarchy called Gross Margin when all the accounts were already in the Account hierarchy. The problem was serious, though: the levels were not tailored specifically for Gross Margin segment reporting, which made easy drag-and-drop reporting impossible, and OBIEE was not seen as a viable reporting option compared to other tools. Now that BI Apps 7.9.6 comes with 10 segment hierarchies out of the box, it is clearer that multiple segment hierarchies are valuable to the organization and that many hierarchies need to be exposed.

Another important tool to help with hierarchies is the BICG product IMPACT(c), because hierarchies need to be analyzed at many levels. BICG IMPACT(c) provides a great way to analyze which hierarchies are being used where, how often, and by whom. When integrated with the ETL source, the legacy source can be exposed for informational purposes using the BICG IMPACT(c) subject area to create OBIEE iBots, Answers requests, and dashboards.

Thursday, December 3, 2009

CAF = Migration Utility? Use Caution!

There has been a lot of chatter recently in the OBIEE blogosphere regarding the Content Accelerator Framework (CAF) and its use in the process of migrating OBIEE content from dev/test to production. Despite the CAF being available for quite a while now, it is suddenly being heralded as the new way to migrate OBIEE content to production. In this blog post, I'd like to talk about reasons why this may not be a good idea and why caution should be used in adopting CAF as the crux of this critical process.


Before we get into that explanation, though, let's talk about what CAF is really meant to be/do. In the documentation for CAF, the utility is described in the following way:

Content Accelerator Framework V1 (CAF V1) is designed to help OBI EE power users deploy template reports and RPD constructs from any source environment into their own OBI EE environment. The key functionality of CAF V1 allows easy duplication of any existing reports or logical RPD construct from one OBI EE environment to another one. Both, source and target environment could have nothing in common and be completely different. The only prerequisite is that the target environment already has at least a basic logical model designed within its RPD.

CAF V1 clones any report existing in any Webcat, there is no specific property that makes a report be a template eligible to cloning by CAF V1. From a list of existing reports or dashboards (any kind of reports, including a highly formatted layout with different views, including various Webcat calculations), a functional user is able to select analysis of his interest and can clone this analysis to his environment.

In those two paragraphs, the concept of "template reports" is mentioned. This concept is the thrust of CAF and explains what CAF is really meant to do.

Back when OBIEE was released, Oracle switched the out-of-the-box demo content from "Paint" to "Sample Sales". Sample Sales also included brand new sample reports and dashboards that really showcased the capabilities of OBIEE in a way not shown before. The reaction to the Sample Sales content was very positive, with many customers asking Oracle, "How can I get reports like those in my own environment?" CAF is the answer to that question. It allows a customer to take the Sample Sales content and "map" it into their own environment, replacing the Sample Sales columns with their own, while all the rich formatting, report structure, and logical metadata comes along for the ride without the user having to do any development at all.

I was actually shown a very early version of CAF when it was an Excel-based tool that leveraged macros to do its thing. The thrust was all about accelerating content development (hence the name) for users, with Sample Sales as the template for the new content. It was thought, at the time, that this might be the beginning of template capabilities in OBIEE, but ultimately it was all about taking a desired Sample Sales report and pushing it into a customer's environment without doing all of the manual development work. Since then, the tool has evolved into the Catalog Manager add-in that it is today.

Why Does OBIEE Need A Migration Utility?

Now that we understand the background behind why CAF was developed and what its intended use is, it's important to understand why this buzz about CAF as a migration utility is even occurring. The only reason for people to get excited about a new option for migrating OBIEE content is due to the fact that current options are less-than-desirable on an Enterprise level. For the customer supporting a lot of content in OBIEE across hundreds or thousands of users, the existing options for migration to production are difficult to swallow. I'm not going to get into those options here and discuss why they are inadequate, but it's safe to say that experienced implementers know the challenges, experienced customers know the challenges, and even Oracle understands the challenges (and hopefully 11g offers some improvements as a result).

Why CAF Shouldn't Be The New Best Way To Migrate

So now that I have all of that background material out of the way, let's get to the heart of the matter. CAF was not designed as a full-blown migration utility. Nowhere in the documentation will you see "migrating to production" as one of its purposes. While I love applying creativity and using software in outside-the-box ways, there are some very good reasons why CAF should not be considered the new best way to migrate OBIEE content to production. They are:
  1. It's still not automated
  2. It's more labor intensive than the other ways
  3. It can only migrate certain content
  4. It's not a supported product
So let's look at these in more detail.

1. It's Still Not Automated

This is the weakest of my reasons, but I'm going to list it anyway. The biggest challenge I see customers having with OBIEE production migrations is the fact that it is very, very difficult to automate the migration of catalog content to production. Again, I'm not going to go into the reasons why, but the bottom line is that the thing that I hear customers wanting most is not present in this solution. That's not a reason by itself to forego CAF as your migration solution, but it's worth mentioning.

2. It's More Labor Intensive Than The Other Ways

One of the selling points of CAF as a report template mechanism is that you can take a report you like, clone it, and quickly map the columns on the report template to the columns you want to use in another subject area instead. This is a great feature, until you want to do this for dozens if not hundreds of reports, as you might in a production migration. While CAF offers the ability to select multiple reports or entire dashboards, you still have to map all of the distinct columns across all of the reports to the same columns in the target RPD (because presumably in a production migration, the source RPD and target RPD will be the same). But wait - it gets even harder than that, because to do this mapping, you first have to select the target subject area, which means if you have dozens or hundreds of reports that span multiple subject areas, you have to do this needless mapping step subject area by subject area, one batch at a time.

If all that extra mapping mess weren't enough, there are other extra steps to consider. If there are columns used in filters that aren't displayed in the criteria, you have an extra mapping step for that. If there are columns that require level definition (AGO, TODATE), you have to map the levels for them (even if they are the same). You have to specify the target catalog folder path, which implies you can realistically only clone one parent folder at a time. You have to specify whether to create a dashboard or not, implying you can only clone one dashboard at a time. All of these steps add up to extra effort that far exceeds the current migration techniques. While it's true you can save mappings for future reuse, that only helps if you're going to migrate new content against the same unchanged metadata, which covers only a portion of all possible production migrations.

3. It Can Only Migrate Certain Content

The CAF documentation lays out the known limitations of CAF:
  • Cannot clone reports using set operations (UNION, MINUS, INTERSECT)
  • Cannot clone reports with aliases
  • Does not carry forward saved filters that are used to filter reports
  • Only carries forward the default column in a column selector
  • If a dashboard is cloned that contains reports from multiple catalog folders, they will be cloned into a single folder
  • Any link to a report will reproduce the link, but not clone the target report
  • Cannot clone any report if the source or target RPD contains Essbase metadata
  • Command line utilities may cause problems with parsing a specific RPD syntax
While each of these is a show-stopper in its own right, there are other undocumented limitations to consider when evaluating CAF as a migration utility:
  • Does not migrate any object security, groups, or privileges(!!!)
  • Does not move reports or dashboards to another folder location
  • Does not delete reports or dashboards from the catalog
All of these limitations narrow the scope of migration scenarios very quickly, making it impossible to recommend using CAF for all production migrations.

4. It's Not A Supported Product

If I still haven't swayed you, the coup de grâce is the fact that this utility is not an officially licensed product, which means no maintenance or support. The very first page of the CAF documentation spells it out plainly:

CAF V1 is a free utility code, not maintained by Oracle as a licensed product.

A migration to production is a critical event in the administration and use of OBIEE. If migrations don't happen successfully, accurately, and on time, that can be a very big deal. While free utilities have their place, I would not want to rely on one to carry the weight of something as heavy as a regular migration to production. With no options for support if something goes wrong, and no guarantee that Oracle will continue to enhance the utility, it's hard to recommend it, regardless of whether the utility does the job or not.


So as you read all of the chatter on the blogosphere that advocates using CAF to perform your mission critical migrations to production, make sure you proceed with caution before doing so. Perform plenty of testing, consider the risks, and make an educated decision. If you find that CAF works as a migration solution for you, drop me a line. I'd love to hear that I'm wrong on this, as we're all looking for a better way to migrate.

Wednesday, December 2, 2009

Integrating Testing in the System Development Life Cycle

After you have spent three to six months gathering requirements, designing, and building the OBIEE application, you start User Acceptance Testing with the business users. During acceptance testing, we often hear the following from the business users:

1.) The data on the Reports and Dashboards do not agree with the data in my Excel spreadsheet
2.) The data on the Reports and Dashboards do not agree with the data in our source system
3.) It is difficult to navigate between the Dashboard and Reports to see the necessary information
4.) We do not like the gauges and look and feel of the Reports and Dashboards
5.) The Dashboard Prompts do not give us the selection criteria we need to see the reports
6.) We cannot see the information that we need on the Reports and Dashboards
7.) We have a need to look at data to follow up on the out of bounds metrics on the Reports and Dashboards

How many times have you encountered this on a BI Applications or OBIEE project? On many of the projects I have worked on, it has occurred too frequently. Instead of moving on to the next project in the BI program, you have to spend many hours trying to meet the users' needs uncovered during User Acceptance Testing.

So what has contributed to this situation? There are many items that have contributed to the above situations. Some of them are:

1.) Poor Requirements gathering
2.) Poor Design
3.) Not involving the user in all phases of the system development life cycle
4.) Poor or no Testing Processes

Of the above, one of the biggest contributors is poor or no testing processes. So where do we start the testing process for a project? Bill Hetzel, in his book “The Complete Guide to Software Testing,” states that testing needs to start in the project initiation phase. He recommends the following steps for integrating testing within the System Development Life Cycle:

Project Initiation:

Develop the broad test strategy
Establish the overall test approach and effort

Requirements:

Establish the testing requirements
Assign testing responsibilities
Design preliminary test procedures and requirements-based tests
Test and validate the requirements

Design:

Prepare the preliminary system test plan and design specifications
Complete acceptance test plans and design specifications
Complete design-based tests
Test and validate the design

Development:

Complete the system test plan
Finalize test procedures and any code-based tests
Complete module or unit test designs
Conduct the tests
Integrate and test subsystems
Conduct the system test

Implementation and Operation:

Conduct the acceptance test
Test changes and fixes
Evaluate testing effectiveness

As Bill Hetzel states above, testing needs to be involved in all phases of the System Development Life Cycle. One argument against testing is that it takes a long time to plan and conduct. However, without following at least the major steps, projects are prone to the same symptoms we discussed initially. A project that exceeds its cost and schedule, or that does not provide the client the information needed, results in a failed project - at least in the perception of the business users, if it does not meet their perceived needs and requirements.

It has been said that projects are 80% communications and 20% technical. If we fail to communicate the testing needs and implement projects that users perceive as not meeting their needs and requirements, then we are contributing to a failed project. Integrating testing into the System Development Life Cycle greatly improves communication on a project and results in users perceiving that the project meets their needs and requirements. There is no big secret to integrating testing within the System Development Life Cycle; it simply requires that testing be included and that users be involved in all phases. If the proper communication and testing processes are included, User Acceptance Testing should be only a minor effort for the project, because the users are merely validating what has already been defined and tested.

Tuesday, December 1, 2009

Are your Performance Management metrics useful?

Performance Metrics are the heart of any BI/EPM implementation, but often, the ability to effect change in the metrics is not seamless. In this blog, I'll explore three key factors related to less-than-optimal usage of Performance Management Metrics and suggest practices to improve application of metrics to an organization's benefit.

Metric Background
Metrics are intended as a simple measurement of activity. By developing and publishing organizational metrics to decision makers, the labor-intensive effort of collecting source data, manipulating or adjusting the data, and ultimately calculating metrics is eliminated. Unfortunately, this also eliminates a large part of the knowledge capital necessary to understand how to influence the metrics.

  • Key decision makers expected to act upon metrics must understand the full formula for deriving the metric. Only with this information can the best decisions regarding how to improve a business's performance be made.

Relevance of a change in a Metric's value
Similar to understanding the makeup of a metric, an organization's decision makers must understand the relevance of a change in a metric's value. Often, the formulas producing metric results mask exponential changes in source values, making them appear to be linear changes. Over time, this can result in diminishing returns from additional attempts to improve a measurement.
  • Organizations must have an understanding of the relevance of a change in a key metric at different points. A thorough understanding of the historical change, as well as an analysis of the point at which further improving a metric results in materially diminishing returns, will guide the organization's decision makers to base their decisions on the most accurate basis at a point in time.
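To make the diminishing-returns point concrete, here is a small, purely hypothetical Python illustration: a quality metric defined as 1 - defects/units. Each step halves defects (an exponential improvement in the underlying process), yet the published metric creeps up by ever-smaller increments:

```python
# Toy illustration with made-up numbers: a ratio metric hides exponential
# effort in the source values behind small, near-linear metric movement.
units = 10_000
defects = 800

for step in range(5):
    metric = 1 - defects / units          # the published "quality" metric
    print(f"defects={defects:4d}  metric={metric:.4f}")
    defects //= 2                         # halving defects = exponential effort

# Output:
# defects= 800  metric=0.9200
# defects= 400  metric=0.9600
# defects= 200  metric=0.9800
# defects= 100  metric=0.9900
# defects=  50  metric=0.9950
```

Each halving of defects buys half the previous metric gain; at some point the cost of the next halving outweighs the improvement, and that inflection point is exactly what decision makers need to identify.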
Industry Trends
Over time, all organizations undergo changes to their business model, whether due to internal factors, external factors, or a combination of both. Flexible metrics can enhance an organization's ability to respond to changing market dynamics by quickly refocusing key decision makers on business model changes.
  • Frequent review of Performance Metrics for relevance to the current business climate allows for improved competitive advantage by reducing the time necessary to refocus an organization.

In conclusion, Performance Metrics are an invaluable tool for improving the decision-making intelligence of an organization. However, leveraging their maximum long- and short-term benefit requires a commitment to supporting the organization with the best possible guidance. Organizations that embrace their Performance Metrics as an avenue for executing strategy can realize significant gains in competitive advantage.