OracleBIBlog

Wednesday, March 31, 2010

BI Publisher - Layout Builder preview in 10g

If you are a BI Publisher user on release 10.1.3.4.1 (Patch 9229122), you may be interested to know that this release includes a preview of the Layout Builder (WYSIWYG editor) that is part of the official 11g release.
The only thing you have to do is set one particular parameter in xmlp-server-config.xml, located in the repository directory xmlp\XMLP\Admin\Configuration, as follows:
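The original post's configuration snippet is not reproduced here, so the entry below is only a rough sketch of the general form a property takes in xmlp-server-config.xml; the property name shown is a placeholder, not the actual undocumented parameter:

<!-- added inside the existing root element of xmlp-server-config.xml -->
<!-- placeholder property name for illustration only; substitute the actual Layout Builder property -->
<property name="SOME_LAYOUT_BUILDER_FLAG" value="true"/>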
After restarting OC4J, the change takes effect. When you create or modify a report, you will notice a new "Layout Builder" button under the "Layout" section.

The template produced by the Layout Builder has the extension .xpt and is an XML file whose contents render properly only in the Layout Builder tool. In addition, the tool will not read any file type other than .xpt.

I suspect that if you want to edit the .xpt file outside the Layout Builder, you could use any XML text editor and assign a schema or DTD to it, but that is a topic for another post on this blog :).

At this point I would like to make a disclaimer: the parameter is an undocumented feature in the 10.1.3.4.1 release and thus is not supported by Oracle in this release.

Friday, March 26, 2010

Which Tool Should Be Used for Reporting?

My clients often wonder which Oracle tool they should use for reporting, building graphs, and generating dashboards. It’s a great question because, as anyone in this realm knows, there are many different options. This blog will touch on the three reporting tools I’m familiar with: Financial Reporting Studio (FRS), Web Analysis (WA), and Oracle Business Intelligence Answers (OBI).

One caveat here is that the bulk of my experience comes from the Oracle Hyperion side, so I have much more familiarity with FRS and WA, and definitely less so with OBI. So, this blog will be written from that perspective.

Below, I’ll give a synopsis of what each tool does, based on my experience, followed by my opinion of when each should be used.

Financial Reporting Studio
FRS is a great tool for producing regular reports, like a monthly reporting package. It allows you to create production level reports with a multitude of formatting options. FRS reports can be viewed in PDF or HTML format, from a client component or over the web. Reports can be gathered together in Books and batch scheduled to run at the frequency of your choosing, and then saved to a particular location or emailed out to a list of users.

FRS is not a tool for producing dashboards, is limited in its chart and graph functionality, and would not generally be used for ad-hoc reporting.

Web Analysis
WA comes from the Hyperion suite of products as the tool for building dashboards. In my experience, it’s most often used to create quick snapshots of data for management or executive level users. WA allows you to create multi-view looks at your critical business metrics, either in graphical or grid format. For example, a dashboard might include a line graph of sales by region in one quadrant, a bar chart of sales by VP in another quadrant, a pie chart of expenses by category, and a grid showing spending by department. WA includes traffic lighting as a feature, allowing you to highlight or color significant variances of data.

WA would not be used for production reporting.

Oracle BI Answers
OBI Answers is a tool for building both reports and dashboards from a variety of sources including relational and multi-dimensional databases. You can build similar dashboards to what WA offers and publish reports similar to what FRS offers. The entire OBI suite has pre-built modules by industry that allow for easier implementation depending on the particular business case.

While OBI Answers can use Oracle Essbase as a data source, there are some issues in doing so that can prohibit its use with certain hierarchy structures. This should be addressed in an upcoming release, but at this point the OBI link to the Oracle Hyperion suite of products is very limited.

Which Tool?
So, which reporting tool should be used when? With the current releases and functionality available, here is how I would use them:

FRS – Use for regular production reporting such as income statements, operating expenses, headcount, and any other meaningful financial metrics from an Oracle Hyperion data source such as Essbase, Planning, or Financial Management. I generally don’t create charts and graphs using FRS because the options and functionality are fairly limited. But, if you have fairly straightforward and simple chart requirements, then FRS should work fine for you.

WA – I would use Web Analysis for producing grid and chart dashboards from an Oracle Hyperion data source. WA does a good job of incorporating “bells and whistles” that make a dashboard “pop”, providing important metrics quickly.

OBI Answers – OBI is a great tool to use to quickly build reports and dashboards from a data warehouse. In my opinion, this is the easiest tool to learn and use of the three. As I mentioned above, I don’t think it’s currently the right tool to use with Oracle Hyperion data sources, but that very well could change in the near future.

Going Forward
In future releases, I think that OBI will become the tool of choice for reporting and dashboarding, even for Oracle Hyperion data sources. Oracle is very good at creating synergies between its product lines, and while each individual application generally has its own set of tools, they eventually converge on similar toolset technologies. As OBI becomes more integrated with the Oracle Hyperion suite of products, I think it will become the tool of choice for reporting. In my opinion, it is easier to learn and use than both Financial Reporting Studio and Web Analysis.

Vintages - Time x Time

One of the more unusual conceptual challenges when designing an Essbase or Planning model revolves around handling multiple time-basis dimensions. Many model requirements need granularity beyond the standard observation date, which is typically what the Time dimension in a model represents. Consider the following:
A customer has a portfolio of investments, each having several characteristics related to some measurement of time.

  • The origination date of the investment
  • The maturity date of the investment
  • An expected length (term) of the investment, from which the origination or maturity date can be derived
  • An activity date
A single Time dimension (or two time dimensions that split portions of time apart, i.e. separating Years from Months and Quarters) cannot accurately capture metrics like those in the previous example: the performance over time of all investments originated in a given period, or the return that can be expected at a certain age of an investment.

Multiple Time Dimensions
Assuming the standard Essbase or Planning time dimensions represent the observation date, additional time-style dimensions can be created to provide more granularity. In the previous example, a dimension indicating the age of an investment can be utilized to analyse performance of investments based on how long ago they were originated. Likewise, identification of the starting or ending dates of investments allows analysis based on similar origination or maturation dates.
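As a minimal sketch of such a layout (the dimension and member names below are hypothetical), the extra time-style dimensions might sit alongside the standard Time dimension like this:

Time (observation period):    Jan ... Dec, Q1 ... Q4, FY2010
Vintage (origination period): Orig FY2008, Orig FY2009, Orig FY2010
Investment Age:               0-12 Months, 13-24 Months, 25-36 Months

With a layout along these lines, a question such as "how did the FY2008 vintage perform at age 13-24 months?" becomes a simple crossing of one member from each dimension.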

How much is too much granularity?
That all depends on the analysis requirements for a model. Adding more detail to a model will increase its size and complexity, but will provide a more detailed picture of time's effect on activity. All decisions about expanding a model require careful consideration of whether the benefits justify the additional complexity.



Other times Time appears multiple times (Time beyond core modelling requirements)
  • Scenario revisions
First pass, second pass, etc.: these are essentially observations or snapshots of a process at a point in time. Depending on how many of these snapshots are required, it may be advantageous to have a separate dimension identifying when each snapshot was taken. Consider carefully the benefits before disposing of forecast revisions; often, one key long-term goal for improving forecast reliability is the ability to track variances between revisions over time.

Thursday, March 25, 2010

How to Start Testing during an OBIEE Project


In an earlier posting I discussed the importance of testing and where to implement it in the Project Life Cycle for an OBIEE application. In this posting I want to discuss further how to implement testing for the application during some of the major steps of the Project Life Cycle.
During the Project Initiation Stage of the Project Life Cycle the first step in testing is to develop a Test Strategy. The Test Strategy outlines the recommended approach to the testing processes during the Project Life Cycle. Some of the key components of the Test Strategy are:

· Purpose and Test Objectives
· Quality Standard
· Test Phase Ownership
· Test Phases and Test Types
· Test Plans
· Test Case Repository
· Test Case Design
· Defect Tracking System
· Severity Levels of Defects
· Test Tools
· Test Reports

Since testing will be incorporated throughout the Project Life Cycle, it is critical that all project team members be familiar with the Test Strategy and understand their roles in the project testing processes. All project team members will be involved in multiple phases of testing, and it is important that they understand both their own role and the roles of the other team members in the testing processes.

Once the Test Strategy has been completed by the Testing Team and approved by the Business Users, the next step is to start the Project Test Plan. The Test Plan is started during the Project Requirements Phase. As the requirements are gathered, it is important to identify how each requirement can be validated and tested to ensure that it is met by the delivered application. The development of the Test Plan involves all members of the Project Team as well as the Business Sponsor and Users.

Once a requirement has been defined, it is important to have some way to substantiate that the requirement has been met by the application. Looking for ways to test a requirement also helps the business users better define and understand the requirement itself. In most cases the business users have a good understanding of their source application, but have very little insight into the metrics and attributes needed to measure and manage their business processes. Also, when a business application is used by many different groups, those groups may have many different business processes for the same functionality. While gathering the business requirements and testing requirements, there may be many different test cases to measure the same functionality across different departments in an organization. It is important to gather this testing information to ensure that the business requirements can be validated by the different departments.

One important step that is often overlooked in testing is to test the actual requirement itself, to ensure that it is valid and verifiable as a means of measuring and managing the business processes. Unfortunately this is often not done, and not until the business users see the Dashboards and Reports do they understand the value of the metrics and attributes for measuring their business processes. The Test Plan can help to identify the types of testing and Test Cases that need to be developed to validate the requirements. Some of the items that can be included for testing are:

Validate the access to the System – Security
Validate the navigation to a particular screen, dashboard, page, and report
Validate access to the right data, screens, dashboards, pages, and reports
Validate the response time to access the application
Validate the response time to access a particular dashboard, page and report
Validate the capability and ease of using prompts to select data on a report with specific attributes
Validate the use of drilldown on Reports
Validate the use of navigation on Reports
Validate that the attributes that are used to display the metrics are correct
Validate that the metrics on the Reports provide helpful and valuable information to help them measure and manage their business processes

The next major testing task is to develop test cases to validate the user requirements. The Test Cases are usually designed and built during the design and build stages of the Project Life Cycle. In most situations the Test Cases are iterative, because more details are fleshed out during the design and build stages. For each requirement, at a minimum the following information needs to be developed for each test case (a hypothetical sample follows the list):

Test Case Number
Test Case Summary
Test Case Description
Test Case Steps
Test Case Expected Results
Test Case Error Condition
Person responsible for Test Case

Once the Test Cases have been built and agreed upon by the Project Team and Business Users, and the design and build stages for the application are complete, the execution of the Test Cases can begin. One major advantage of building the Test Cases early in the Project Life Cycle is that the Development Team will understand the test conditions and can execute many of the test cases during Unit Testing. Quality is an important part of any project, and each team member is responsible for building quality into the finished product. By using the aforementioned testing processes, quality can be built into the project from the earliest part of the Project Life Cycle, instead of waiting until the finished product is turned over to the business users for user acceptance testing. If testing is only implemented during user acceptance testing, many of the quality issues that would have been identified and corrected early in the Project Life Cycle will only come to light at the end, and will cost more in terms of resources and money to correct than if they had been addressed in the earlier stages.

BICG Events, News, Knowledge Base, and Video Now Available Through the iPhone and iPod


BICG is providing mobile iPhone access to a variety of features, including BICG events, case studies, white papers, tips and tricks, and YouTube videos.

Not only have these features been designed specifically for mobile browsing, they have also been streamlined to speed up on-the-go web surfing.

1. Go to www.biconsultinggroup.com on your iPhone
2. Touch the plus sign (+)
3. Touch "Add to Home Screen" button.

Read the full press release here >>

Monday, March 15, 2010

Attributes, UDAs, and Alternate Hierarchies

I'm often asked what the best practice is regarding implementation of Attribute Dimensions, Alternate Hierarchies, and User Defined Attributes. While there are no hard and fast rules, these guidelines outline the strengths and weaknesses of each.

Attribute Dimensions
Attribute dimensions are created by tagging members in a sparse dimension with an attribute tag. Members tagged with a member from an attribute dimension must be at the same level as all other members tagged from that attribute dimension. In theory, as long as members are at the same level they may be assigned attributes; in practice I recommend avoiding attributes on members that are not level zero. This is due to the difficulty of ensuring members really are at the same level. While this is often apparent in ragged hierarchies (also known as unbalanced hierarchies), symmetrical hierarchies can also have members that appear to be at the same level (i.e. they are at the same generation) but are in fact at different levels, due to implied sharing where a parent has only one child. This can also create situations where a member is effectively at multiple levels, and the data value picked up through an attribute is not what a user expects.
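A minimal sketch (using a hypothetical Market dimension) of why "same generation" does not guarantee "same level":

Market                         (generation 1)
   East                        (generation 2, level 1)
      New York                 (generation 3, level 0)
      Boston                   (generation 3, level 0)
   Online                      (generation 2, level 0 - no children)

East and Online share a generation, but tagging every generation-2 member with an attribute would mix level 1 and level 0 members, which is exactly the situation to avoid.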

User Defined Attributes (UDAs)
UDAs are flags that are placed on an Essbase member. A given UDA can be placed on any dense or sparse member in the outline. In addition, unlike attribute dimensions, UDAs are not tied to the level of the members they are tagged to. In practice, this flexibility is most often leveraged to identify member sets for use in a calculation, especially since UDAs can be tagged to dense members. While it is possible to refer to UDAs in most reporting and analysis packages, I do not recommend deploying UDAs for use by end users.
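As a rough sketch of that calculation use case (the dimension, member, and UDA names here are hypothetical), a calc script can build its FIX list from a UDA instead of a hard-coded member set:

FIX (@UDA("Product", "Discontinued"))
   /* zero out the budget for any product flagged with the Discontinued UDA */
   "Budget" = 0;
ENDFIX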

Alternate Hierarchies
Alternate hierarchies are secondary rollups of members within the same dimension. The level zero members in the alternate hierarchy are called Shared Members, and point back to an identically named real member elsewhere in the dimension. It is important to note that the real members do not have to be level zero members, or at the same level. One of the key advantages of Alternate Hierarchies is the ability for upper-level summary members to have a completely unique structure (as opposed to an Attribute dimension, which merely re-totals the same hierarchy with only a subset of members). While there are many uses for this functionality, it is often used to build Management and Legal organizational hierarchies from the same source data, as well as to facilitate financial reporting under different standards (i.e. IFRS vs. US GAAP). Conversely, Alternate Hierarchies typically require more maintenance and increase the size of the database. Therefore, they should be deployed carefully, only in circumstances that justify the additional expense.
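A minimal sketch (using a hypothetical Entity dimension) of an alternate rollup built from shared members:

Entity
   Management View
      North America Operations
         Entity 1010
         Entity 1020
      Europe Operations
         Entity 2010
   Legal View                  (alternate hierarchy)
      Holding Company A
         Entity 1010           (shared member)
         Entity 2010           (shared member)
      Holding Company B
         Entity 1020           (shared member)

The legal rollup reuses the same base entities but totals them along a completely different structure, which is the kind of requirement that justifies the extra maintenance.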


In summary, Attribute Dimensions, User Defined Attributes, and Alternate Hierarchies all have advantages and disadvantages. A typical deployment will often leverage all three areas of functionality to solve specific business problems, but should pay particular attention to the drawbacks of each option to avoid creating a database that is overly expensive and difficult to support.

Sunday, March 14, 2010

Tips and hints relating to the development of Financial Reporting Studio reports – Tips 2 & 3



When we left off in the last installment of Tips and Hints for Financial Reporting Studio, we had covered Selecting Multiple Members Based on Criteria. In this installment I intend to walk you through how to create a user-defined member list. Member lists are user-defined saved queries; they can include members, other member lists, and functions.




Member lists
To define a member list:




  • Double-click a dimension cell for which you want to assign a member.

  • In the Select Members dialog box, take one of the following actions:

  • In the Members or Lists tab, highlight the members to include in the member list, right-click inside the Available area, and then click Save Highlighted.

  • Or, once members have been selected and are contained within the Selected window, right-click in the Selected area and select Save.

  • Click OK.

  • Enter a name and description for the member list, and then click OK.




  • Click OK to return to your report.

Two limitations that you should be aware of:



  1. Do not use quotation marks (") when entering a member list. If you do, Financial Reporting saves the name with the quotation marks, but you cannot select the name later.

  2. User-defined lists can be created only with Oracle Essbase as a database connection. The size of a member list cannot exceed 4K.

Assigning Multiple Database Connections to Grids




You can populate a grid with data from multiple database connections. After you define a grid's initial database connection and dimension layout, you can change the database connection for one or more rows or columns of the grid. You can define another database connection for each data row or data column.

You can preserve the members selected in the grid when changing database connections on a grid row or column. You must map all dimensions that are assigned to rows or columns in a grid from the current database connection to the new database connection. Dimensions on the point of view (POV) need not be mapped.

To assign multiple database connections to a grid:


  1. In a grid, select one or more data rows or columns for which you want to change the database connection.

  2. In the Row or Column Properties sheet, click the Select Database Connection button.

  3. In the Select a Database Connection dialog box, select a database connection from the Database Connection drop-down list, then click OK.

  4. In the "Map Dimensions Between..." dialog box, map dimensions between the default database connection and the selected database connection. You do this by selecting a dimension from the pull-down list in the right column for each dimension in the left column.


  5. Click OK.

Friday, March 12, 2010

What skill set should a Hyperion EPM Administrator have? Part I

Working with clients in today’s environment of streamlined resources, one of the areas often overlooked in an EPM implementation is choosing the candidate who will fill the role of System Administrator for the organization. There is a direct correlation between the level of success an EPM project has and the candidate fulfilling this all-important role.

I’ve been associated with projects where the Administrator has had a less than enthusiastic attitude toward this responsibility, thinking that it is just one more area of responsibility being piled on them. I always try to encourage them and communicate that this is an incredible opportunity to learn and master a new technology on the job. In addition, the skill set they can acquire through “knowledge transfer” with the project consultants is invaluable. The portability of this knowledge to future employers is also a benefit that administrators don’t always seem to grasp, as there is a constant need for seasoned Hyperion administrators in the marketplace. Clients consistently inquire as to whether we can assist them in recruiting experienced Hyperion Administrators through our network of contacts.

One of the skills, from an implementation perspective, that we as a project team like to see in an Administrator is a willingness to embrace the technology that’s being implemented. Successful Administrators are the ones who jump into the “deep end” of the pool with the consulting team and understand that there may be some hiccups along the way, but that ultimately the organization will be in a significantly better place once the project has gone “live”. A consistent theme among those who are resistant to change is how their current legacy systems can adequately handle the organization’s requirements; however, when questioned a little deeper about the limitations and issues of their current system, and about how that platform can position the organization for future growth, the shortcomings of the current platform become truly apparent.

The ability to communicate effectively is another critical skill that a successful Administrator must have, because they will need to speak to different audiences. Successful Administrators need to be able to communicate with their peers as users, and oftentimes with users from different parts of the organization outside the familiar confines of the Finance area. In addition, successful Administrators need the ability to communicate effectively with Manager-, Director-, and Vice President-level employees. The Administrator must also be able to communicate effectively with the IT group, as they will be the liaison between the business side and the technical side of the organization. Each of these groups has its own “language” and vested interest in the organization, which the Administrator must recognize in order to be successful.

Much of the skill set that successful Administrators need is technical in nature, but too often the “people” side is underemphasized. Overall, a good attitude, the ability to communicate effectively at various levels within the organization, and the technical aptitude to learn a new technology are the ingredients of a successful administrator.

In Part II, we’ll discuss some of the more technical skills that a Hyperion Administrator should either have or acquire in order to be an effective and successful administrator.

Tips and hints relating to the development of Financial Reporting Studio reports – Tip 1

In this installment of my blog contributions I would like to focus on some techniques that I have leveraged to get through some of the more challenging aspects of reports development.


While the function builder for member selection often lets you get the detailed selection that is needed, I have found instances where I had to get creative to achieve the exact detail necessary within the report being developed.


Four approaches that I leverage quite frequently are:

  1. Selecting Multiple Members Based on Criteria
  2. Defining Member Lists
  3. Referencing separate data sources within the same grid
  4. Using multiple grids and hiding all but one in the final product

Selecting Multiple Members Based on Criteria

You can select members dynamically based on user-defined criteria. You define criteria by creating expressions of members, Boolean operations, and commands. Boolean operators enable you to specify precise member combinations for the report, which is useful for dealing with large volumes of data. Use the AND, OR, UNION, and NOT Boolean operators, combined with expression commands, to refine your member selections.


For demonstration's sake, let's assume you need to include all level 0 children of "Software/Other" but none of these members' parents within the report. It is also necessary that, when a new level 0 member is added, the report dynamically picks it up.



To select multiple members based on criteria for this need, I perform the following:



  1. Double-click a dimension cell for which you want to assign members.

  2. In the Select Members dialog box, right-click inside the selected area, then click Advanced.

  3. Build the expression to satisfy the above need by first selecting the descendants of "Software/Other"

  4. Next, toggle the Select Members pane to "Lists" and select "Level 0, Product"

  5. Now update the operator to "AND"

  6. Now render the report in Print Preview; the objective has been achieved (a sketch of the resulting expression follows this list)

  7. Explore other functions:
  • Select NOT to add the Boolean operator NOT to the expression. NOT is the inverse of the selected condition.
  • Type a left parenthesis, (, to add an opening character to the expression.
  • Type a right parenthesis, ), to add a closing character to the expression.
  • Click in the OPERATOR column, then select AND, OR, or UNION.
  • When all conditions must be met, use the AND operator.
  • When one condition of several must be met, use the OR operator.
  • To combine the data, use the UNION operator.
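Conceptually, the expression built in steps 3 through 5 evaluates along these lines (a paraphrase of the logic, not the exact text the dialog displays):

Descendants of "Software/Other"  AND  Level 0 members of Product

Only members satisfying both conditions are returned, i.e. the level 0 children of "Software/Other", and any level 0 member added later is picked up automatically.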

In the next installment we will cover "Defining Member Lists."

Tuesday, March 9, 2010

Essbase member name manipulation - Net Transshipments

Transshipments
In some Essbase models there is a need to capture two essentially identical elements in separate dimensions. One common example of this is a transshipment model, where it is necessary to identify both the origination and destination of a shipment.

Two Dimensions
The most straightforward, and often user-friendly, manner to accomplish this type of tracking is to have the same hierarchy in two dimensions, with each member prefixed differently (i.e. the Source location members might be prefixed with "TO:" while the Destination members would be prefixed with "FROM:").

Net Shipments
A simple metric to consider in this type of model is Net Shipments. In a multidimensional database, to calculate Net Shipments, shipments out of a location need to be subtracted from shipments in (or vice-versa, depending on your preference). Assuming this is being calculated for location A, and that Total Source and Total Destination are the top members of the two dimensions, the formula might look like:
Shipments -> TO:A -> Total Destination - Shipments -> FROM:A -> Total Source
(i.e. take all of the shipments that are sent to "A", regardless of where they come from, and subtract all of the shipments from "A", regardless of where they go).

The Problem
The formula is simple and straightforward for one location, and remains similar for all other locations; but every time a location is added, renamed, or deleted, the calculation must be updated manually.

The Solution
Assuming the hierarchies are carefully constructed to always require "TO:" before the source location and "FROM:" before the destination location, Essbase can programmatically determine the corresponding member by removing and replacing the prefix.

Components
Several string manipulation functions are required:

  • @NAME or @ALIAS: Used to pass either an Essbase Member Name or the respective member's Alias Name to another function as a string.
  • @SUBSTRING: This function will return a portion of a string passed to it.
  • @CONCATENATE: Used to join two strings together.
  • @MEMBER: Turns a string into a reference to a member name.
For this example, I'll also utilize @CURRMBR. This function returns the current member being calculated from a given dimension.

So, to determine the corresponding destination from a member in the source dimension:
  1. Turn the current member in the source dimension into a string: @NAME(@CURRMBR("Source"))
  2. Remove the prefix "TO:" from the string: @SUBSTRING(@NAME(@CURRMBR("Source")),3)
  3. Prefix the new string with "FROM:": @CONCATENATE("FROM:",@SUBSTRING(@NAME(@CURRMBR("Source")),3))
  4. Convert the string into a reference to a member: @MEMBER(@CONCATENATE("FROM:",@SUBSTRING(@NAME(@CURRMBR("Source")),3)))
Using this formula, you can fix on the portion of a hierarchy in the Source dimension, and have access to each member's corresponding Destination member. The original example of calculating Net Shipments might look like:
FIX(@RELATIVE("Source",0))
   "Net Shipments"(
      "Shipments"->"Total Destination"
         - "Shipments"->"Total Source"->@MEMBER(@CONCATENATE("FROM:", @SUBSTRING(@NAME(@CURRMBR("Source")), 3)));
   );
ENDFIX


Friday, March 5, 2010

The Impact of IFRS for EPM Reporting – Part 7

In Part 7, I want to provide more detail on the similarities and differences regarding Foreign Currency Matters. This week I'll keep it short due to other projects I am working on... but again, the details below are from a presentation given during a company-sponsored educational seminar about IFRS.


Foreign Currency Matters


Similarities

  • Similar approaches to foreign currency translation; the guidance differs but generally results in the same determination
  • Both consider the same economies to be hyperinflationary
  • Both require foreign currency transactions to be remeasured into the functional currency, with amounts resulting from changes in exchange rates reported in income
  • Both require assets and liabilities to be translated at the period-end rate, and income statement amounts generally at average rates, with the resulting differences recorded in equity


Differences


Translation in hyperinflationary economy

  • US GAAP – Local functional currency is remeasured as if it were the reporting currency (the parent's)
  • IFRS – local functional currency financial statements are indexed using a general price index and then translated to the reporting currency at the current rate

Consolidation of foreign operations

  • US GAAP – step by step method is used
  • IFRS – method is not specified. Either the direct or the step by step method is used.