OracleBIBlog Search

Wednesday, January 27, 2010

Three steps to organize your Presentation Layer

Often, we are asked to put a "quick" demo/ad hoc/test repository together for one reason or another. Typically, these are short-lead-time activities that do not always leave you the ability to build things the way they should be. Aside from making the repository work the way it should, you should take the time to organize the presentation layer. Doing so drastically reduces the time you spend looking through a list of facts like the one below while putting together the necessary reports and dashboards.

This is a series of screen shots that were taken while paging down the list of columns in one unorganized fact table.

Imagine scrolling through this list to find columns for the reports you are going to provide. What would it take to make it easier?

Step 1 - Reduce
There are many facts in this table that you may simply not need to accomplish the task asked of you. The easiest thing you can do is delete the unnecessary columns from the presentation layer.

The former series of 14 pages is reduced to 2 by simply removing extra facts.

Step 2 - Arrange
Once you have deleted the superfluous columns, arranging them in alphabetical order will help any coworker(s) or those of us with an aging brain find them quickly.

This step may not be necessary if there are very few columns left after Step 1 or if you go on to Step 3.


Step 3 - Organize
Organize the columns into meaningful sub-folders. This creates a user-friendly environment and shortens the list of things to look through.
To do this:

1. Add a presentation table


2. Give the Presentation Table the name of the Sub-folder and add "->" to the description field


3. Move the Presentation Folder under the main folder. In this example, Revenue will be a sub-folder of "Facts" since "Expenses" and "Fees" are also sub-folders
(remember to keep them in alphabetical order)
4. Move or Copy the columns from the main folder to the sub-folder(s)
(hint: if moved or copied in alphabetical order, you will not have to reorder them later)


Even in a quick project these steps can be a tremendous help to everyone involved, and they take less time than one thinks.

Monday, January 25, 2010

Annotation Service in Hyperion Workspace – Providing collaborative commentary

A significant enhancement delivered with Hyperion Reporting and Analysis Release 11.1.1 was annotations. Annotations provide a comprehensive set of functionality that enables users to freely collaborate, save facts and findings in their reports, and readily manage annotations across their systems. This commentary is available interactively through the Hyperion Workspace, is equally accessible through printed output, and provides a unified read-only mechanism for Hyperion Financial Management and Hyperion Planning commentary (for example, Cell Text, Cell File Attachments, and so on).

The Annotation Service, used with Financial Reporting reports, provides a collaborative tool that captures collective information that can be shared on particular documents and data. It provides a range of flexibility from simple notations to full-fledged threaded discussions, laying the foundation for collaboration, compliance reporting, and business process analysis. Authorized users have the ability to create, remove, and respond to annotations. Annotations can be attached to a report object (grid object, text object, chart object, and image object), and can reference a full or partial POV. Annotations included in reports can be viewed when selected for books and snapshot books.

The screen shots below demonstrate the application of an annotation to a grid object for a specific full POV.





After rendering the report through Workspace, right-click on the cell to which the annotation applies.

Select New Annotation; the following dialog box is displayed:



The Annotation tab captures the title, description, and a predefined category. The predefined categories describe the nature of the annotation: Comment, Assessment, FYI, Misc (Miscellaneous), Other, Performance, or Target.



The Context tab captures the Object and Dimension information. Object information can be a data source or any object on the report. For example, for a data source, the annotation is attached to any Financial Reporting report using the specified data source. For a Grid: data source, the annotation is attached to the specified grid and data source. For Text, Charts, or Grids, the annotation is attached to the specified object with no dimensions on the object.



Attachments tab — Link attachments to an annotation. Attachments can be a file on your local drive, any object in the repository, or a URL.



When OK has been selected and the annotation has been applied, a visual indicator (a paper-clipped document) appears in the upper left-hand corner of the cell.



Hovering over this icon displays the description of the annotation. Included within this dialog box are links to show details, reply, open attachments, and close the dialog box.

When Show Details is selected, the following is displayed:



You can close the details dialog box by selecting the down arrow between the grid and the details window.



You can also use the View menu to show or hide annotations. To view annotations, select View > Show Annotations. To hide annotations, select View > Hide Annotations.



Users with the Modify or Full Control permissions can respond to annotations through the annotations bubble or the Annotation List Panel in a Financial Reporting report. While replying, one can change the category and also add attachments like EPM Workspace documents, local documents on your computer, or URLs.

To respond to annotations:


In a report that contains annotations, do one of the following:


  1. Click an annotation icon, then click the Reply icon and respond to the annotation.



  2. Locate the Annotation List Panel at the bottom of the screen, hover over the Show panel and click it. Then click the Reply icon and respond to the annotation.

You can render this report in PDF format by selecting Open in PDF, as shown below:



The report is rendered in Adobe PDF format. The annotation is indicated by footnote number 1.



When printed, the annotations and reply threads are displayed on a separate footnotes page.



BI Apps Performance Optimization and Consideration - Part 2

This is a continuation of the earlier posting. We will now look at some recommendations for the OBIEE Repository (metadata), Reports and Dashboards, and Presentation Catalogs.

Repository

· Analyze the driving tables and nested loop joins
· Analyze the SQL if using any view object definitions in the physical layer
· Consider the possibility of complex joins in the physical layer
· Analyze the possible usage of inner, outer, and left joins
· Consider the usage of aggregate tables and model the rollups in the logical layer, thus avoiding detail tables when they are not required
· Analyze the aggregate tables so that they do not combine multiple levels of aggregation; instead, put each level of summarization within its own table
· Consider subsetting large physical dimension tables into mini dimension tables, but this must be done by identifying the family of queries that fall within the subset table
· Consider the use of filters rather than CASE statements in the RPD
· Manage the cache and analyze whether any stale queries are generated
· Evaluate possible cache purging methods
· Evaluate the design
· Create additional aliases to avoid circular joins for dimension tables that are joined to more than one other table
· Use a canonical time dimension
· Use multiple time dimensions for each level of granularity
· A single logical time dimension may be modeled with many LTSs of physical time dimension aliases

· Increase the maximum number of connections per connection pool based on need only
· Avoid opaque views where possible
· Validate proper logical level settings on the Content tab for the conformed model
· Validate the metric aggregation rules: when metrics are created using logical columns, the aggregation rule needs to be applied before creating the metric; when metrics are created using physical columns, the aggregation rule must be applied after creating the metric
· Use Estimate Levels to set the number of elements for each dimensional hierarchy level

Reports and Dashboards

· It is recommended to split reports based on data-level visibility. Rather than one dashboard presenting all data to all users, splitting along the data-level security model enhances performance because it restricts each user to a limited data set at the query level
· Implement data-level filters across dashboards to restrict data on dashboards that require significant scrolling and navigation
· To help users locate the data they are interested in, organize dashboards by user role
· The home dashboard page might carry only summary-level reports; create guided navigation links so users can drill to the details
· You may also create data-driven drill-through from summary reports to navigate to the detailed reports of the hierarchy
· Use conditional formatting to focus attention on data that is outside given parameters
· If the columns that are included in a report use web variables or carry only filter criteria, then hide these columns
· Direct users to other analyses based on how their dashboard appears at that moment
· Search out and remove all excluded columns in pivot views across reports
· Pivot table views are used across many reports; it is recommended to avoid pivot views as far as possible
· If needed, increase the relevant pivot table limits in the instanceconfig file to override the maximum number of records that a pivot table can process
· Prune the repository and web catalog to requirements
· For a table view, if you want to increase the rows of data beyond 65,000, change the corresponding parameter in the instanceconfig file; VIRTUAL_TABLE_PAGE_SIZE in the NQSConfig.ini file can also be increased to 256

Presentation Catalogs

· Try to stay with the rule of 7, i.e., 7 top-level folders and 7 columns per folder
· Create subfolders under the relevant folder section
· When there are many fact tables within a presentation catalog, assign an implicit fact table

In the next part we will look at techniques for aggregation, caching, and database optimization.

Thursday, January 21, 2010

Automating Your Reporting Package in Hyperion

Putting together your monthly, quarterly, and annual reporting packages is menial enough, but if that process is done manually, it makes it all the more tedious. Especially if, in a worst case scenario, you’re having to consolidate, reconcile, and update individual spreadsheets and print those off one-by-one….

Hyperion Workspace provides a great solution to remedy this problem through its inherent batching and scheduling functionality. This feature set allows you to collect reports into a book, batch reports and books, and schedule them for a variety of output. Following is a brief rundown of the scheduling process.

First off, you will need to decide whether your reports should be gathered into a book. Books generally consist of logical sets of reports and/or report snapshots. A report snapshot is a picture of a report on a specific date and time and does not dynamically link back to the source database. You might see a book gathered by category (monthly, quarterly, annually), by scenario (actual, budget, forecast), or by functional area (sales, engineering, administrative). It all depends on business need and logical groupings.

Books have their own point of view (POV) that allow you to override user or grid POVs. For those who aren’t familiar with Hyperion POVs, in a report, a POV is a single dimension member that applies to the entire report. From a report perspective, you have dimension members in rows, columns and the page. Any dimensions not included in the row, column, or page fall to the POV and any dimension in the POV can only have a single member selected. A grid POV applies to a grid in a specific report, whereas a user POV applies for a specific user across multiple reports. So, as an example, if you wanted to run all reports for the budget scenario for 2010, you could override report selections by using the book POV. Books can also be set up with prompts, allowing member selections to occur when the book is run.

Finally, books have a table of contents feature, enabling you to show and/or print out a listing of the reports included in the book.

The second step would be to create a batch. A batch can include items such as individual reports, report snapshots, and books. The batch is required in order to schedule reports for output. Also, if reports or books included in the batch contain prompts, members can be selected for those prompts as part of the batch set up. Like a book, a batch has its own POV that can override the POVs of its included objects.

The last step in automating your reporting package is to schedule the batch. The batch scheduler allows you to generate output immediately or on a specified date, time, and recurring frequency. Oftentimes, what I’ll see clients do is run their scheduled reports afterhours, so the processing time occurs while end users are out of the system. From a performance standpoint, this is optimal since the batch generation won’t compete with end users for system resources.

A newer feature of batch scheduling is bursting. Bursting allows you to enable reports to be run for multiple POV members. As I mentioned above, the POV is a single dimension member applying to a particular object. Bursting lets you choose multiple members and then runs the batch output for those selections. This would be useful if, for example, you wanted to run both budget and actual reports or reports for the western, eastern, northern, and southern regions.

The next batch setting is the output. You have the following options:
• Generating a report snapshot saved in the reports repository. A nice feature here is that you can set permissions to the output as part of the batch scheduler.
• Printing output.
• Exporting the output in PDF format to a shared directory and/or as an email attachment. This can be especially useful for users who do not have an ID set up in the application.
• Exporting the output in HTML format to a shared directory.

Finally, you can set up the batch scheduler to send an email if the batch generation is either successful or unsuccessful.

In terms of the mechanics for setting up books and batches and scheduling them, it’s fairly straightforward. All of these items are created in Hyperion Workspace. For books and batches, that’s done through the File -> New -> Document menu option. To schedule batches, click the Navigate icon and go to Schedule -> Batch Scheduler.

Once all your reports, books, and batches have been set up and scheduled, you really should have a self-sustaining system with dramatically decreased cycle times.

Monday, January 18, 2010

Optimization of Planning Applications – Part 2

In this edition of optimization of Planning applications, we'll address leveraging multiple plan types and calculations running in top-down mode.

Let's first address calculations running in top-down mode. When looking to optimize calculations, I like to run simulated calculations to determine bottlenecks in the processing of data. As discussed previously, administrators can simulate calculations using SET MSG ONLY within a calculation script. A simulated calculation produces results that help you analyze the performance of a real calculation that is based on the same data and outline. By running a simulated calculation with a command like SET NOTICE HIGH, you can gauge the relative amount of time each sparse dimension takes to complete. Then, by performing a real calculation on one or more dimensions, you can estimate how long the full calculation will take, because the time a simulated calculation takes to run is proportional to the time that the actual calculation takes to run.
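As a minimal sketch, a simulated calculation script might look like the following (substitute whatever calculation you actually want to profile for CALC ALL):

SET MSG ONLY;     /* run the calculation as a simulation, without changing data */
SET NOTICE HIGH;  /* report the relative completion time of each sparse dimension */
CALC ALL;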

Essbase uses one of two methods to calculate a database outline: a bottom-up calculation or a top-down calculation. By default, Essbase performs a bottom-up calculation.


 

For a bottom-up calculation, Essbase determines which data blocks must be calculated before it calculates the database. Essbase then calculates only the blocks that must be calculated. The calculation begins with the existing block with the lowest block number and works up through each block in number order until the existing block with the highest block number is reached.


 

Forcing a bottom-up calculation on a formula usually improves performance. However, if the formula contains complex functions (for example, range functions) or if the formula's dependencies are not straightforward, a bottom-up calculation may produce results different from those of a top-down calculation.


 

Multiple approaches can be leveraged to force a bottom-up calculation; these include @CALCMODE, CALCMODE, SET FRMLBOTTOMUP, and CALCOPTFRMLBOTTOMUP. The information below describes how these approaches differ and the syntax each one uses.


 

@CALCMODE

Enables the choice, within a calculation script, of the execution mode of a formula. @CALCMODE can control two types of modes:

  • Whether a formula is calculated in block calculation or cell calculation mode when calculating formulas that contain certain functions (in particular the @ISMBR function)
  • Whether a formula assigned to a sparse member is calculated in bottom-up or top-down mode

Syntax

@CALCMODE (CELL|BLOCK|TOPDOWN|BOTTOMUP)

CELL: Turns on the cell calculation mode

BLOCK: Turns on the block calculation mode

TOPDOWN: Turns on the top-down calculation mode

BOTTOMUP: Turns on the bottom-up calculation mode
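For illustration only, here is a sketch of @CALCMODE placed at the start of a member formula in a calculation script (the member names are hypothetical, and "Total Allocations" is assumed to belong to a sparse dimension):

FIX ("Budget")
   "Total Allocations"
   (
      @CALCMODE(BOTTOMUP);   /* force bottom-up evaluation of this sparse formula */
      "Total Allocations" = "Corporate Overhead" * "Allocation Pct";
   );
ENDFIX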


 

CALCMODE

CALCMODE allows you to set the calculation mode at the server, application, or database level within the configuration file instead of indicating it in a calculation script using @CALCMODE. NOTE: This setting does not apply to aggregate storage databases.

Syntax

CALCMODE [application_name [database_name]] [BLOCK | BOTTOMUP]

SET FRMLBOTTOMUP

Leveraged within calculation scripts, SET FRMLBOTTOMUP allows Essbase to perform a bottom-up calculation on formulas that would otherwise require a top-down calculation. This command optimizes the calculation of complex formulas on sparse dimensions in large database outlines.

Syntax

SET FRMLBOTTOMUP ON|OFF;

ON: Turns on the bottom-up sparse formula calculation method.

OFF: Turns off the bottom-up sparse formula calculation method. The default setting is OFF. You can change the default by using CALCOPTFRMLBOTTOMUP TRUE in the essbase.cfg file.
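A minimal calculation script sketch (the scenario and dimension names are hypothetical) that turns the setting on before aggregating sparse dimensions:

SET FRMLBOTTOMUP ON;
FIX ("Budget")
   /* formulas on sparse members in these dimensions are now calculated bottom-up */
   CALC DIM ("Entity", "Product");
ENDFIX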


CALCOPTFRMLBOTTOMUP

Specifies within the configuration file that Essbase optimizes the calculation of complex formulas on sparse dimensions in large database outlines. If enabled, this setting tells Essbase to perform a bottom-up calculation on formulas that would otherwise require a top-down calculation.

This setting does not apply to aggregate storage databases.

Syntax

CALCOPTFRMLBOTTOMUP TRUE | FALSE

TRUE—Optimizes the calculation of formulas on sparse dimensions in large database outlines by forcing a bottom-up calculation.

FALSE—Does not force a bottom-up calculation for formulas on sparse dimensions in large database outlines. The default is FALSE.

Leveraging Multiple Plan Types

As you create accounts, entities, and other elements of the application, you associate them with plan types, so the database for each plan type contains only the application dimensions, members, and data values relevant to that plan type. This optimizes application design, size, and performance. When architecting Planning applications, I subscribe to the method of leveraging all databases within an application that are available for use. This means deploying all three databases inherent to an application, plus Workforce and Capex if those modules are available. Segmenting the Accounts dimension across specific databases based upon purpose reduces the number of stored members in the typically dense Accounts dimension, thus decreasing the block size and creating more opportunities for optimization and dynamic roll-ups within the database.

Thursday, January 14, 2010

How to search Planning Supporting Detail: Part 1 - Getting the data

Background Information

Supporting Detail is a powerful tool that helps planners build and communicate bottom-up values when planning such expenses as travel, salary, and projects, where you need to calculate aggregate values. Supporting detail can include text, values and operators that define how data aggregates.


This feature is particularly useful in budgeting when you have multiple expenses falling into the same line. However, one of the drawbacks appears when you want to report at the supporting detail level for a particular line item. Suppose you want to look at the supporting detail for Facilities Expenses across multiple entities, and you want to see all the entities that have the supporting detail item of Landscaping. This is rather difficult to do given the current mechanisms in Planning for reporting on supporting detail.


One of the common complaints about supporting detail is this inability to report and search at the supporting detail level. However, through the use of database objects such as views, together with Excel, this supporting detail can be searched and reported on. By no means do you have to use Excel to retrieve this data, but it is the most common BI tool that all companies have. You can also use any business intelligence reporting tool, such as OBIEE, to retrieve the supporting detail.


This blog entry will focus on setting up of the database to retrieve the data.


Overview


First, Supporting Detail is not stored in Essbase. This is the root of why you can't get to it via the Add-In or Smart View. Supporting Detail is stored in the Planning relational database in a couple of tables called HSP_COLUMN_DETAIL and HSP_COLUMN_DETAIL_ITEM. HSP_COLUMN_DETAIL provides the actual intersection where the supporting detail is stored, whereas the actual supporting detail labels and values are stored in the HSP_COLUMN_DETAIL_ITEM table. The data is stored as keys, so you also need to query the HSP_OBJECT and HSP_PLAN_TYPE tables to decode the key values into the meaningful data you would see in Planning.


The data that is stored in HSP_COLUMN_DETAIL is stored as a member name, so if you would like to see your supporting detail using the alias, you will also need to include a mechanism to get the alias name from the HSP_ALIAS table.


There are many different ways to extract data from these tables ranging from stored procedures to database views. I prefer to use database views to give a real time look into the supporting detail but there are drawbacks and possible performance issues using this method.


For this blog entry I will focus on a simple method of extracting supporting detail. I will follow up this entry with additional blogs with methods for incorporating the alias and also converting the supporting detail value from the local currency to USD.


Example


For this example, assume you have a planning model that has 9 dimensions. The data is stored in HSP_COLUMN_DETAIL as DIM1, DIM2, etc ... so you will need to determine which dimension equates to the appropriate DIM column in the table. For this model the following table shows the dimensional mapping for the HSP_COLUMN_DETAIL table.



I have created a view using the following SQL. It is important to note that you will need to use outer join syntax to join HSP_COLUMN_DETAIL and HSP_OBJECT. HSP_COLUMN_DETAIL can have null values in the dimensional columns depending on your planning model.


select p.type_name PLAN_TYPE,
       o1.object_name SCENARIO,
       o2.object_name ACCOUNT,
       o3.object_name DEPT,
       o4.object_name PERIOD,
       o5.object_name VERSION,
       o6.object_name CURRENCY,
       o7.object_name YEAR,
       o8.object_name EMPLOYEE,
       o9.object_name PROJECT,
       cdi.label SUPP_DETAIL,
       cdi.value LOCAL_VALUE
from hsp_column_detail cd
     inner join hsp_column_detail_item cdi on cd.detail_id = cdi.detail_id
     left outer join hsp_object o1 on cd.dim1 = o1.object_id
     left outer join hsp_object o2 on cd.dim2 = o2.object_id
     left outer join hsp_object o3 on cd.dim3 = o3.object_id
     left outer join hsp_object o4 on cd.dim4 = o4.object_id
     left outer join hsp_object o5 on cd.dim5 = o5.object_id
     left outer join hsp_object o6 on cd.dim6 = o6.object_id
     left outer join hsp_object o7 on cd.dim7 = o7.object_id
     left outer join hsp_object o8 on cd.dim8 = o8.object_id
     left outer join hsp_object o9 on cd.dim9 = o9.object_id
     left outer join hsp_plan_type p on p.plan_type = cd.plan_type;


By using this SQL to create a view, I am able to query the supporting detail exactly as it is saved in Planning. The view can then be queried via your business intelligence reporting tool. It gives the user the ability to look at the supporting detail stored in a Planning model, and the user can utilize the filter and analysis functions of the reporting tool to take a close look at the supporting detail items.



I will follow up this blog entry with subsequent entries showing how to do currency conversion from local to USD with supporting detail, return the alias instead of a member name in the supporting detail record set, and utilize the planning security model to return specific supporting detail based on identity.

Wednesday, January 13, 2010

Getting Your EPM Project Off to a Successful Start

I think most would agree that getting a software implementation project off to a good start is key to arriving at a successful finish. Even so, I think that many people involved with these types of projects would be able to cite several instances where that did not occur.

One problem I’ve experienced in the past is when there is a misunderstanding between requirements gathering and design. Essentially, the project would jump right into design without a clear understanding of business requirements. When that happens, you can end up with a constantly changing design, missed deadlines, budget overruns, and lack of user acceptance, among a host of other things. In fact, without a clear understanding of the requirements, you would rarely ever end up finishing a project successfully.

Requirements should tell an organization what its business needs are in order for it to perform and succeed. These requirements should be gathered via interviews with key stakeholders and subject matter experts, from the highest level of the organization to the lowest, as deemed necessary. They should be an all inclusive list that is prioritized based on budget, timing, business need, etc. The design, on the other hand, should tell you how to build the tool once you’ve determined what the immediate business requirements are.

What I’ve experienced most often during EPM implementation projects is a small amount of requirements (we want a distributed system that’s easily deployable to end users, we need to have single source of the truth, we need to have multiple what-if versions of data) and a lot of design (here is our chart of accounts, this is how our data is sourced, these are the reports we need to generate). Requirements should take a fair amount of time to gather, fully vet, and complete. Once completed, a document should be presented and signed-off by all parties before proceeding to the design phase. If an adequate amount of time is taken to complete requirements and design, the build portion of the implementation should, in theory, become much easier.

Let me add a caveat here. Projects have budgets and timelines and do not continue in perpetuity. Therefore, requirements gathering and design should not go on forever and, obviously, have to be done within the confines of the project. The point is to do a complete and thorough job both gathering requirements and developing the design, to give your organization the best shot at successfully building out a solution.

After requirements and design are complete, key stakeholders should stay involved throughout the project to make sure their requests are accurately being fulfilled. It’s one thing to gather a list of all the requirements and develop a design, it’s another to build it into an application and make sure the end product is truly what was desired and meets the needs of its users.

Finally, success criteria should be included as part of requirements and design and re-visited at the end of the project to make sure all criteria were met. This provides a means for “grading” the project and ensuring that key items were considered and implemented successfully.

Division of Responsibilities Matrix for Hyperion Planning Implementations

I am often asked, "How should we delineate task responsibilities across the organization to support our Oracle Hyperion Planning implementation?" I have compiled an Excel workbook, developed for clients in the past and shared through Microsoft Office Live Workspace, that provides my perspective on where tasks should reside throughout the organization.

By clicking on the title above, Division of Responsibilities Matrix for Hyperion Planning Implementations, you will be brought to the Microsoft Office Live Workspace that houses this spreadsheet for your review and use.

Please let me know your thoughts relating to this matrix; any comments are greatly appreciated.

Essbase on Unix: Tips and Tricks

Oracle's Hyperion Essbase is a multidimensional database primarily utilized for providing robust analytical capabilities. Real-time slicing and dicing of data, exploration of KPIs and their material basis, and decision-making assistance make Essbase an indispensable financial engine for organizations of all sizes. This blog addresses some of the lesser-known tips and tricks for running Essbase on Unix. Linux, AIX, Solaris, and HP-UX are all supported; see the Oracle web site for a full list of supported operating systems, including specific release notes.

1. Essbase startup
Oracle Essbase has a preference for running under a C or Bourne shell. Often, this conflicts with the default shell on many installations (Korn). Ensure that your startup script for Essbase begins with "#!/bin/sh". Also, avoid using "nohup" (it is not necessary for a C or Bourne shell; it is only needed for a Korn shell); nohup has been shown to cause a race condition in larger installations when making changes to the security file. For some reason, the default startup script included with the version 11 release does not follow the documented convention to start Essbase; if you're having strange problems with your installation I would recommend making changes to this script to make it similar to the documented startup script. For an example startup script see the Essbase Technical Reference chapter "Maintaining Essbase", section "Running Essbase Servers, Applications, and Databases." (The example script also shows how to hide the console password from nosey users on HP-UX/Solaris systems.)

2. Environment variables
Oracle Essbase requires certain environment variables to be preconfigured for correct operation. These variables are typically documented in a file named "hyperionenv.doc" on the Unix server during installation; however, the convention utilized does not export the variables. Ensure that the processes that start Essbase, as well as any scripts calling Esscmd or MaxL scripts, have their environment configured by using this file as a template (remember to export the variable settings).

3. Filesystems
Essbase doesn't like NFS. It gets all whiny and fussy, like a 6 year old that can't have cake. In all seriousness, Essbase's IO requirements and file locking characteristics don't mesh well with filesystems mounted via NFS. This problem is particularly troublesome for ASO databases, but it affects BSO databases as well. To sum it up, using NFS for Essbase is not supported, so save yourself the headache and don't do it.

4. Stopping Essbase
No one runs Essbase in the foreground, so you can't just type "exit" to stop Essbase. Essbase can be stopped via Administration Services by manually selecting the server and selecting Stop. Alternatively, scripts can be written on the Unix server utilizing either MaxL or Esscmd to stop the server. The relevant MaxL statement is "alter system shutdown" (after logging in to the server) or "shutdownserver" in Esscmd (no login necessary, as the login information is used as part of the shutdownserver command syntax). Whatever you do, do not "kill" the Essbase server: at worst you're risking serious database corruption, and at best your applications will need to go through free space recovery on startup, a time-consuming operation that is unavoidable when the server is killed and which prevents users from being able to connect to their applications and be productive. Note: the version 11 installation does not include a shutdown script; you'll have to write your own and add it to the appropriate place in the EPM shutdown script to properly stop your server.

5. Best Practices (not necessarily for Unix, but for Essbase administration in general)
Consider carefully the tradeoffs with any of these recommendations. They're all made with the assumption that their advantages outweigh their disadvantages.

  • Database restructures (particularly for read/write applications) - performing periodic full database restructures (using Maxl statement "alter database application.database force restructure;") will improve performance, and reduce free space recovery time if a crash does occur.
  • Place the application in read-only mode before shutting the server down - this forces the Essbase server to write all open files held in memory to the disk, in particular the Free Space file.
  • Fully shutdown the server before performing a backup. Applications can be backed up in read-only mode while the server is running, however the Essbase security file can not. Shut the server completely down to make a backup of this critical file.

Tuesday, January 12, 2010

Video: Maximizing Oracle Business Intelligence Adoption: Amway and LCRA



A customer panel at a recent conference analyzed different approaches to maximizing user adoption for Oracle Business Intelligence. Business Intelligence Consulting Group moderates this customer panel, which features representatives from Amway, Lower Colorado River Authority (LCRA), and BICG.

Monday, January 11, 2010

Financial Reporting Studio Report Design & Development Considerations

As a standard part of any EPM implementation, the output from the culmination of all project work (requirements, design, development, testing, etc.) is reporting. Prior to an EPM implementation for Planning or Essbase, client reporting is typically performed in Excel spreadsheets: reports are developed in spreadsheets and consolidated into a reporting package. Utilizing Excel spreadsheets for financial reporting is fraught with pain points. Making changes to multiple worksheets within a single workbook is often a tedious and time-consuming task, with plenty of potential for human error in the form of incorrect cell references and worksheet links. I’ve worked with clients who have had to change dozens and dozens of worksheets in preparation for the next budget cycle. This process took weeks and weeks to complete, and the owner of the process was never sure that all the spreadsheets were absolutely correct. The lack of flexibility with Excel-based reporting is significant.

Oracle’s Hyperion Financial Reporting Studio (FRS) relieves companies of this problem. We’ll be discussing high-level reporting strategies to consider when creating reports using Oracle’s FRS.

Report Design Considerations

1) Determine the set of reports to be developed in FRS – this is a critical aspect of report development and should be determined during the requirements gathering process of a project, but it should be revisited as you move closer to the report development cycle to ensure the report set is still in alignment with the project requirements. In working with clients, the most successful report development phases have been those where clients were able to provide a full set of static reports not only for the current cycle but for a full year. This provides you with the ability to determine whether there are any anomalies or differences between reporting cycles.

2) Dynamic Reports – the objective should always be to develop dynamic reports because of their inherent ability to reduce maintenance. Dynamic reports should include:

a. Use of Essbase substitution variables. The benefits of utilizing substitution variables include reduced go-forward report maintenance and the ability for clients to reuse a set of reports for different reporting cycles.

b. Use of relationship members rather than hard-coding dimension members. This will reduce report maintenance when new accounts, dept members, etc. are added to the Essbase outline. Instead of going into every report to update the particular row or column, the new member will automatically appear in reports. For clients that have a significant number of FRS reports, the potential ongoing time savings is considerable.

c. Use of FRS functions. FRS provides a series of pre-built functions that enable report developers to build reports that are dynamic in nature.

3) Report Consistency – reports should be developed with a consistent “look and feel”. Rows and columns should have consistent heights and widths, and the widths of spacing rows and columns should be consistent as well. Fonts should be consistent across reports, and headers and footers should be consistent across all reports.

By following these guidelines during the report development process, clients will be able to utilize FRS to its full effectiveness and have in place dynamic, flexible reporting that creates more time for analysis of the data rather than for the ticking-and-tying process. In addition, the reusability of dynamic reports and the reduced maintenance and preparation for future reporting cycles will give organizations insight into their business that they may not have had previously, as well as consistent reporting themes across the organization.

Thursday, January 7, 2010

Building a Rolling Forecast in Hyperion Planning

In a time of economic uncertainty, businesses are looking for more ways to effectively plan and adapt to ever changing business climates. One more commonly occurring practice to combat this uncertainty is the implementation of a rolling forecast. A periodic “outlook” or “re-forecast” makes a lot of sense when, in a lot of ways, your annual budget essentially becomes outdated once the first month or quarter of actuals become available, maybe even before that…. The re-forecast allows you to update numbers based on more current information and assumptions, and thus, should give you the ability to make more informed decisions.

Generally, a rolling forecast is a new, forward-looking plan, based on the most current actual data. These forecasts are not done to the same level of detail as the annual budget, so the time to prepare them should be a bit shorter, as a result.

I’ve experienced both monthly and quarterly rolling forecasts. Some may say that re-forecasting on a monthly basis requires too much effort, while others may say that forecasting on a quarterly basis doesn’t give them timely enough information. My opinion would be that it depends on the business. If your industry is highly volatile and the environment changes frequently, then a monthly re-forecast might be a fit for you. However, if the business that you’re in is fairly stable and follows a cyclical path, quarterly forecasts may be all you need.

Building a rolling forecast in Hyperion Planning is a fairly common practice. As a user of Planning, that would affect both your data inputs (re-forecast of future periods) and your reporting (mix of actual months and forecast). As an application administrator, it’s a bit more complex as you would need to provide a means for dynamic dimension member updates, as well as create calculations that address the rolling functionality.

The first item to take care of is making sure the appropriate substitution variables are set up in Essbase. A substitution variable is a placeholder for members that change regularly. For example, you can set up variables for the current month and current year. Then, as those values change, you make the update in one location instead of having to go to every input form and every report to make the updates. In the case of a rolling forecast, this makes application maintenance much less time consuming.
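As a simple illustration (the variable names &CurrYr and &CurrMnth and the scenario members are hypothetical), a calculation script can reference the substitution variables instead of hard-coded members, so only the variable values need to change each cycle:

FIX (&CurrYr, &CurrMnth)
   /* copy the just-closed month of actuals into the forecast scenario */
   "Forecast" = "Actual";
ENDFIX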

The next item to address is the web data input forms. What I generally do is put the Year dimension in the page drop-down and all twelve months in the columns. This allows a user to toggle from one year to the next. A key to this design is to set the forecast scenario to the current period and year range so that only the forward-looking periods are open for input. One drawback to this method is that you don’t get a true rolling effect with actual periods rolling up to forecast. As a work-around, you could:

  • Include the scenario dimension in the page with forecast and actual as choices.
  • Create a composite web form with the second grid containing actual data.
  • Create a custom menu link to a rolling forecast report.

Creating rolling forecast reports can get a little tricky, but that's mostly due to the fact that they can accommodate more dynamic member selections compared to web forms. The column set-up is the most complex portion, as you have a mix of actual and forecast data, which oftentimes crosses multiple years. The report design will also be dependent on the start month for the forecast and how far out the forecast goes.

For a year-over-year forecast, you will essentially have multiple sets of columns which display or suppress depending on certain criteria, like whether or not the forecast start and current period are in the same year, for example. I generally use a substitution variable to make the criteria determination. Because of the many options in terms of forecast length and type, I won't go into detail on how to design the columns for these types of reports.

If your forecast is all contained within the same year, then one column would range from January to the current month variable and the second column would range from the current month variable offset by 1 to December.


Finally, you have calculations that need to be updated to accommodate the rolling forecast. Depending on how any trending you have is formulated, these calculations may need to incorporate cross dimensional operators to the appropriate scenario (actual, forecast). Also, when the calculations roll year-over-year, the formulas will need to appropriately account for the change from one year to the next. Usage of substitution variables is also extremely helpful here and is definitely a best practice.
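Here is a hedged sketch of what such a trend calculation might look like; the member names, the 2% growth rate, and the substitution variables &CurrYr, &PriorYr, and &FcstMonths are all hypothetical:

FIX ("Forecast", &CurrYr, &FcstMonths)
   /* trend each open forecast month off the same month of last year's actuals */
   "Travel Expense" = "Travel Expense"->"Actual"->&PriorYr * 1.02;
ENDFIX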

When all is said and done, you should have multiple buckets for each of your planning processes. For example, you most likely would have a set of Annual Budget web forms and reports with their corresponding set of folders. You would also have a set of budget calculations. If you implemented a rolling forecast, that process would then have its own set of items as well. From an organization standpoint, it’s cleaner and it also makes it easier for end users to sift through.

In summary, this is generally the way I handle rolling forecasts in Hyperion Planning. Is it the only way to do it – no. I’m sure there are many other options. But, I’ve found this to be an efficient and effective way to implement the solution. If there are other ideas, alternatives, or opinions out there, I’d love to hear them. As a wise man once told me, “The best thing about Essbase calculation scripts is that there are a hundred ways to write the same formula. But, the worst thing about Essbase calculation scripts is that there are a hundred ways to write the same formula!” I guess something similar could be said about implementing rolling forecasts....

Monday, January 4, 2010

Article Features Kaleida Health’s Use of OBIEE to Combat the Flu

Kaleida Health’s Flu Monitoring Dashboard was developed quickly using Oracle Business Intelligence Enterprise Edition to provide insightful reporting regarding patients with the flu, including H1N1.


The innovative use of Oracle Business Intelligence was recently detailed in a SearchOracle.com article. Kaleida used their Oracle Business Intelligence Enterprise Edition (OBIEE) dashboard technology for analyzing emergency room visits where people arrived at a clinic with flu-like symptoms, tracking various metrics that included what time of day or night the visit took place. With this information, Kaleida was able to effectively calculate when a spike may occur so they could start diverting patients with flu-like symptoms to another clinic, minimizing the possibility of flu patients infecting others in a crowded waiting room.