
Monday, January 25, 2010

BI Apps Performance Optimization and Consideration - Part 2

This is a continuation of the earlier posting. We will now look at some recommendations for the OBIEE repository (metadata), reports and dashboards, and presentation catalogs.


· Analyze the driving tables and nested loop joins
· Analyze the SQL if using any view object definitions in the physical layer
· Consider the possibility of complex joins in the physical layer
· Analyze the appropriate usage of inner, left outer, right outer, and full outer joins
· Consider the usage of aggregate tables and model the rollups in the logical layer, thus avoiding hits to detail tables when they are not required
· Analyze the aggregate tables so that they do not combine multiple levels of aggregation; instead, put each level of summarization in its own table
· Consider subsetting large physical dimension tables into mini dimension tables, but only after identifying the family of queries that the subset table will serve
· Prefer FILTER expressions over CASE statements in the RPD
· Manage the cache and analyze whether any stale queries are generated
· Evaluate possible cache purging methods
· Evaluate design
· Create additional aliases to avoid circular joins for dimension tables that are joined to more than one other table
· Use canonical time dimension
· Use multiple time dimensions for each level of granularity
· A single logical time dimension may be modeled with many LTSs mapping to physical time dimension aliases
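To illustrate the FILTER-versus-CASE recommendation above, here is a sketch using OBIEE logical column expressions. The presentation-layer names ("Fact"."Revenue", "Dim Time"."Year") are hypothetical placeholders:

```sql
-- CASE-based metric: the BI Server evaluates the CASE row by row,
-- which is harder to push down and to satisfy from an aggregate
CASE WHEN "Dim Time"."Year" = 2009 THEN "Fact"."Revenue" ELSE 0 END

-- FILTER-based metric: expresses the same intent as a plain predicate
-- on the measure, which the BI Server can push to the database
FILTER("Fact"."Revenue" USING ("Dim Time"."Year" = 2009))
```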
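One way to evaluate scripted cache purging, as suggested above, is through the BI Server's ODBC purge procedures invoked via nqcmd. The DSN, credentials, and physical table names below are placeholders, not values from this post:

```sql
-- purge_cache.sql: run against the BI Server with, e.g.:
--   nqcmd -d AnalyticsWeb -u Administrator -p <password> -s purge_cache.sql
-- (DSN and credentials above are placeholders)

-- Purge only the cache entries that depend on one physical table
Call SAPurgeCacheByTable('ORCL', NULL, 'SH', 'W_DAY_D');

-- Or purge the entire query cache
Call SAPurgeAllCache();
```

Purging by table after each ETL load keeps the cache warm for untouched subject areas while removing only the entries that have gone stale.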

· Increase the maximum number of connections per connection pool only when there is a demonstrated need
· Avoid opaque views where possible
· Validate proper logical level settings on the content tab for the conformed model
· Validate the metrics' aggregation rules: when a metric is created from logical columns, the aggregation rule needs to be applied before the expression is built; when a metric is created from physical columns, the aggregation rule must be applied after creating the metric
· Use estimate levels to set the number of elements for each dimensional hierarchy level

Reports and Dashboards

· Split reports based on data-level visibility. Where a dashboard currently serves all data to all users, splitting it along the data-level security model improves performance by restricting each user to a limited data set at the query level
· Implement data-level filters on dashboards that otherwise require significant scrolling and navigation
· Organize dashboards by user role to help users locate the data they are interested in
· Keep only summary-level reports on the home dashboard page, and use guided navigation links to let users drill to details
· You may also create data-driven drill-through reports so that summary reports navigate to the detailed reports of the hierarchy
· Use conditional formatting to focus attention on data that is outside given parameters
· If columns included in a report exist only to feed web variables or to carry filter criteria, hide those columns
· Direct users to other analyses based on how their dashboard appears at that moment
· Search out and remove all excluded columns in pivot views across reports
· Pivot table views are observed in many reports; avoid pivot views as far as possible
· If needed, increase the values of the relevant configuration parameters to override the maximum number of records that a pivot table can process
· Prune the repository and web catalog down to actual requirements
· For a table view, if you want to increase the rows of data beyond 65,000, change the relevant parameter in the instanceconfig file; separately, VIRTUAL_TABLE_PAGE_SIZE in NQSConfig.ini can be increased to 256
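As a sketch of the view-limit and paging tuning mentioned above: the Presentation Services limits live in instanceconfig.xml and the BI Server paging size in NQSConfig.ini. The element names below follow the 10g convention and the numeric values are illustrative only; verify both against your release's documentation before applying:

```xml
<!-- instanceconfig.xml (Presentation Services) -- illustrative values;
     element names should be verified for your OBIEE release -->
<ServerInstance>
  <!-- raise the ceiling on the cells and rows a pivot view may process -->
  <PivotView>
    <MaxCells>200000</MaxCells>
    <MaxVisibleRows>10000</MaxVisibleRows>
  </PivotView>
</ServerInstance>
```

The corresponding BI Server setting is a single line in NQSConfig.ini, e.g. `VIRTUAL_TABLE_PAGE_SIZE = 256 KB ;` (the default is 128 KB). Raising either limit trades memory for capacity, so increase them only as far as the largest legitimate report requires.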

Presentation Catalogs

· Try to stick to the rule of 7, i.e. 7 top-level folders and 7 columns per folder
· Create subfolders under relevant folder section
· When there are multiple fact tables within a presentation catalog, assign an implicit fact table

In the next part, we will look at techniques for aggregation, caching, and database optimization.