Work item linking in GC context in ELM 7.0.2

In my earlier post welcoming IBM Engineering Lifecycle Management (ELM) 7.0.2, I summarized some key changes in 7.0.2 to how the system resolves links between work items in IBM Engineering Workflow Management (EWM) and versioned artifacts in a global configuration context. If you are using global configurations and linking work items, or plan to do so in future, it is critical to understand the changes and new system behaviour, and update your processes — and users — accordingly.

Nick Crossley (esteemed ELM architect) and I recently published the first in a planned series of Jazz.net articles exploring these changes in more detail: Work item linking in a global configuration context: Overview. I’ve included some key points in this post, but encourage you to read the complete article.

Highlights of what is new and different for work item linking in 7.0.2:

EWM work item editors provide a configuration context menu, which determines the target configuration for outgoing links to versioned artifacts.

Work items are not versioned, and don’t have any configuration context. However, in 7.0.2 you can use the configuration context menu in EWM to set the global configuration (GC) used for resolving links to versioned artifacts.

EWM work item context menu

When you set the GC context, all the work item’s outgoing links resolve in that context – regardless of any Link Type/Attribute mappings or release associations to global configurations, which EWM used for link resolution in previous releases. When you work in DOORS Next, Engineering Test Management (ETM), or Rhapsody Model Manager (RMM), the system still uses those settings to determine and resolve incoming work item links; EWM now uses only the configuration context menu setting. (Note: each EWM project area has an option to enable the configuration context menu; if not set, links from the work item resolve as if no configuration is specified.)
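
To make the new resolution model concrete, here’s a minimal Python sketch of the idea described above. All the names are hypothetical; this is a conceptual model, not ELM internals. The work item stores only the link and the context, and the context determines which version of the target you see:

```python
# A minimal conceptual sketch, not ELM internals: the work item's GC context
# determines which version of each linked artifact its outgoing links resolve to.
# All class and field names here are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class GlobalConfiguration:
    name: str
    # artifact identity -> the version this configuration selects
    versions: Dict[str, str] = field(default_factory=dict)

@dataclass
class WorkItem:
    id: str
    gc_context: Optional[GlobalConfiguration] = None
    outgoing_links: List[str] = field(default_factory=list)  # artifact identities

    def resolve_links(self) -> List[Tuple[str, Optional[str]]]:
        """Resolve every outgoing link in the work item's GC context."""
        if self.gc_context is None:
            # No context set: links resolve as if no configuration is specified.
            return [(a, None) for a in self.outgoing_links]
        return [(a, self.gc_context.versions.get(a)) for a in self.outgoing_links]

gc = GlobalConfiguration("Product 2.0 GC", {"REQ-42": "v7"})
wi = WorkItem("WI-1", gc_context=gc, outgoing_links=["REQ-42"])
print(wi.resolve_links())  # [('REQ-42', 'v7')] - resolved in the Product 2.0 context
```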

Link EWM releases to global configurations in the GCM application, instead of in EWM.

Before 7.0.2, you associated releases and global configurations in EWM where you defined the releases. In 7.0.2, you define those relationships in the GCM application, where global configurations now have a defined link type for releases. (If you have existing associations in EWM, there is a utility in 7.0.2 to move them to the GCM application.)

Linking global configuration to releases

You can link multiple releases to a global configuration, and multiple global configurations can link to the same release.

Prior to 7.0.2, the GC-release mapping was one-to-one, which could be problematic if you wanted work item links to resolve in more than one global configuration context – for example, global baselines, different versions, different levels of hierarchy, or personal streams. Now you can link a release to any relevant global configuration, and a global configuration can link to multiple releases, for example, if you have multiple teams working in the context of a broad and deep configuration hierarchy. When you derive a new global configuration from an existing one, release links are initialized to match the parent. You can also define predecessor relationships between releases to easily include work item links from earlier related releases without having to specify each release every time.

You can filter work item links in Report Builder reports.

By default, LQE includes work items in all global configuration scopes: if a work item links to a versioned artifact in one configuration context, that link appears in reports for all contexts that include that artifact. In 7.0.2, you can set an option in your report that filters linked work items based on the global configuration-release mappings. Set it in every report where you want filtering to occur.

Report Builder option for filtering work item links
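
Conceptually, the filter is a simple membership test between the report’s configuration and the work item’s release association. Here’s a small Python sketch under assumed names (not the actual Report Builder implementation):

```python
# A sketch of the filtering idea under assumed names (not the Report Builder
# implementation): a linked work item stays in the report only if its release
# maps to the global configuration the report runs against.
from typing import Dict, List, Set, Tuple

def filter_work_item_links(
    linked_work_items: List[Tuple[str, str]],   # (work item id, its release)
    report_gc: str,
    gc_to_releases: Dict[str, Set[str]],        # the GC-release mappings from GCM
) -> List[Tuple[str, str]]:
    allowed = gc_to_releases.get(report_gc, set())
    return [(wi, rel) for wi, rel in linked_work_items if rel in allowed]

mappings = {"Car GC 1.0": {"Release 1.0", "Release 1.0.1"}}  # many releases per GC
links = [("WI-1", "Release 1.0"), ("WI-2", "Release 2.0")]
print(filter_work_item_links(links, "Car GC 1.0", mappings))
# [('WI-1', 'Release 1.0')] - WI-2's release is not mapped to this GC
```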

If your teams link work items and versioned artifacts in global configurations, ensure you understand the new capabilities and behaviours in 7.0.2 and the impact to administrators and end users. Educate your users on the correct configuration contexts to use and what to expect when creating, viewing, and navigating links in a configuration context.

For more details, please read the Jazz.net article in full, and watch for more in the series! (I’ll announce them here too.)

A 2020 holiday gift: ELM 7.0.2

IBM Engineering Lifecycle Management (ELM) 7.0.2 became available today, 11 Dec 2020, just in time for the holidays! You can read all the details in the New & Noteworthy on jazz.net, but I thought I’d highlight some of the new capabilities that I’m particularly pleased about.

One area of focus for the ELM team has been consistency across the suite, starting with some of the UI changes you’ve seen in recent releases. Progress continues, and there are a few great features that aid efficiency and ease of use, implemented across the suite:

  • Drag-and-drop linking within and across applications. Yes, you read that right. You can now drag a requirement, or a work item, or a test artifact, and drop it onto another requirement or work item or test artifact. You can also drag multiple artifacts. In Engineering Workflow Management (EWM), you can drag between work items and plans as well. Having spent far too much time using the link creation dialog to link requirements to multiple test cases, one at a time, I’m delighted that this capability is now available, and at a suite level. Oh, and you can copy and paste-as-link too.
Dragging requirement links to a test artifact
  • “Smart people picker” (user selection dialog). In previous releases, when you selected users for an attribute like owner, or to subscribe them to a work item, the selection dialog exposed all Jazz users. Not only could that take a while to search (especially with a large user base), but the search also didn’t take access control into account, so results included users that weren’t valid to select. The new people picker is available across ELM applications, and it does take access control into account. Not only that, it automatically populates the 10 most recent users you’ve chosen, hopefully reducing searches. It also shows users already selected, so you can add and remove users all at once. Another time-saver.
The new smart people picker
  • Logging out actually logs you out of all the ELM applications. As someone who frequently has multiple tabs open with different applications, I’d often log out of one and discover I was still logged in elsewhere. (If you’re trying to switch users for a demo, that can be quite frustrating!) Now logging out of one application takes effect across all of them.

Another significant change for those using global configuration management is how links between work items and versioned artifacts resolve in a global configuration (GC) context. The system still uses the relationship between GCs and EWM Releases; however, you now define those relationships on the GC side, in the GCM application. With this change, you can now associate a single GC with multiple EWM Releases, which eliminates a significant limitation in earlier releases. In EWM, you can set the GC context to determine resolution of links to versioned artifacts. (Note that work items are still not versioned and have no local configuration.)

GCM query showing GCs mapped to Releases
EWM with configuration context setting

You can reflect these associations in your Report Builder reports, excluding work items with versioned-artifact links that aren’t applicable in the selected GC context; in previous releases, reports included linked work items regardless of the configuration context. For more details on these changes and how to take advantage of them, see the New & Noteworthy for the GCM application and for Jazz Reporting Service (JRS). And stay tuned for more details here too 🙂

Speaking of reporting, JRS now lets you choose your default report grouping (by tag, by folder, by owner, and so on). And if you’ve ever searched across a lot of groupings, you’ll be happy to know that search now includes collapsed groupings – meaning you don’t have to open the containing group in order for search to find hits within it. Publishing (PUB) now enables selective sharing of schedules and generated documents. DOORS Next now includes a template for generating a configuration comparison document, and the audit history report now works for modules.

There are many other improvements both large and small; you can read all the New & Noteworthy entries for the full scoop. Automotive Compliance 1.0.2 is also being released, featuring improved support for functional safety and ISO 26262 (read more here).

And for those of you who want to be on the cutting edge, there are several technical previews you might want to check out (in a non-production environment, of course).

While you might not have time to deploy 7.0.2 before the holidays, I encourage you to read up on all the new capabilities in anticipation of putting this gift to good use in the new year. Enjoy!

ELM streams and components: SCM vs GCM

[Note: This post was originally published in Nov 2019, when I accidentally published it as a separate page. That has always irked me, so I’m moving it to the main page. Please feel free to enjoy it again!]

The Engineering Lifecycle Management (ELM) solution provides configuration management capabilities across its suite of applications, supporting versioning of component artifacts across multiple streams or configurations.

There’s a common misconception that components and streams work differently in ELM source code management (SCM – part of the Engineering Workflow Management application) and the other ELM applications, including global configuration management (GCM). In fact, the implementation is mostly consistent, but can be confusing at first glance. Let’s take a closer look to dispel the confusion.

Note: for a complete list of product names and acronyms, see my blog post or Jazz.net.

The following concepts apply across the entire ELM suite:

  • A component is a container of artifacts. The component is also the unit of configuration, meaning it is the basis for streams and baselines.
  • A stream is a modifiable configuration of the artifacts.
  • A baseline is a non-modifiable configuration of the artifacts.

Both SCM and the GCM application support a hierarchy of components.

Although an SCM stream might appear to include multiple SCM components, it actually includes configurations from those components; those same components can also contribute configurations to other streams. An SCM stream can include other SCM streams in a complex hierarchy.

SCM stream showing its component contributions in EWM
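
A toy model makes the distinction clearer. In this Python sketch (my names, not the SCM or GCM API), a stream holds configurations contributed by components, never the components themselves:

```python
# A toy model under assumed names, not the SCM or GCM API: a stream holds
# *configurations contributed by* components, never the components themselves,
# so one component can contribute different configurations to different streams.
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Component:
    name: str

@dataclass
class Configuration:      # a stream or baseline of a single component
    component: Component
    label: str

@dataclass
class Stream:
    name: str
    # contributions are configurations, or nested streams in a hierarchy
    contributions: List[Union[Configuration, "Stream"]] = field(default_factory=list)

ui, svc = Component("UI"), Component("Services")
dev = Stream("Dev", [Configuration(ui, "UI dev stream"),
                     Configuration(svc, "Svc dev stream")])
maint = Stream("Maint", [Configuration(ui, "UI 1.0 baseline"),
                         Configuration(svc, "Svc maint stream")])
# Both streams reference the same components, each through its own configuration.
```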

Likewise, a GCM global configuration includes configurations from one or more components from across the ELM suite (including SCM); a global configuration can include other global configurations.

A global configuration and its contributions in GCM

Like a global configuration, the SCM stream can have metadata or attributes, but does not contain any artifacts directly; it includes only configurations contributed by components. So an SCM stream really is analogous to a global configuration, specifying component configurations and hierarchy but within a single domain.

There are a few differences which I believe drive much of the confusion:

  • The GCM application explicitly defines a global component as a basis for global configurations. In SCM, the concept of a component at the top of the configuration hierarchy was never exposed, so the top-level stream does not have an associated component (at least not that a user can see).
  • The term for a configuration hierarchy that includes only baselines is a “global baseline” in GCM, but a “snapshot” in SCM. (Individual SCM components do have baselines.) The terms are different, but the constructs are analogous.
  • The DOORS Next and Engineering Test Management (ETM) applications do not support a component hierarchy as SCM does. Instead, the GCM application defines the hierarchy for DOORS Next and ETM components. (Rhapsody Model Manager (RMM) uses SCM to store its artifacts, and thus also has an inherent configuration hierarchy.)

To summarize, SCM defines higher-level configurations that group source code component configurations into hierarchies, and GCM defines higher-level configurations that group component configurations from across the ELM suite, including SCM hierarchical configurations. While there are some minor differences, the capabilities are actually quite consistent.

Trend reporting – you can do that with LQE??

Trend reporting, a view of metrics or measures over time, is a valuable way of getting insight into project progress indicators such as defect arrival and closure, test progress and success, and so on.

Sample trend report: daily test results by verdict

In the Engineering Lifecycle Management (ELM) solution, the Data Warehouse (DW) provides a rich set of metrics, especially for work items. You might not realize that the Lifecycle Query Engine (LQE) also supports trend reporting, albeit on a smaller set of metrics. I recently published a Jazz.net article that describes trend reporting with LQE in more detail, including metrics for configurations. This post shares highlights from that article – hopefully enough to make you want to read the article in its entirety!

A key point to note: You must enable metrics collection for LQE by defining and scheduling one or more collection tasks. Collection can be resource-intensive, so it is disabled by default to ensure there is no impact to LQE performance. The LQE Administrator defines the tasks on the Data Metrics page of the LQE administration UI.

Defining a metrics collection task

If you have enabled configuration management, your collection tasks must specify which configurations to calculate metrics for. To calculate metrics for non-enabled project areas, you need a separate task that does not specify any configuration. 
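
Conceptually, each collection task bundles a set of metrics, an optional set of configurations, and a schedule. Here’s a Python sketch of that shape; the field names are mine, not those of the LQE Data Metrics page:

```python
# A sketch of what a metrics collection task conceptually captures; the field
# names here are mine, not the LQE Data Metrics page's.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MetricsCollectionTask:
    name: str
    metrics: List[str]                     # collect only what you will report on
    configurations: List[str] = field(default_factory=list)  # empty = non-enabled project areas
    schedule: str = "daily at 02:00"       # run when usage is low

tasks = [
    MetricsCollectionTask("Test trends", ["test result counts"], ["Car GC 1.0 stream"]),
    MetricsCollectionTask("Work item trends", ["work item counts"]),  # no configuration
]
```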

The Jazz.net article lists the metrics available from LQE and the dimensions for each, which you can use to subdivide or group results or to set filtering conditions. It also provides more detail on defining collection tasks, authoring and running trend reports, and administration over time.

To summarize the recommended practices for using LQE trend reports:

  • Be selective in which metrics you collect and for which configurations. Collect only what you will report on. You can define multiple collection tasks to be more granular in your selections.
  • To minimize impact to the LQE server, schedule metric tasks to run when usage is low, and monitor LQE performance. 
  • Document and communicate what metrics you are collecting to report authors and users so they know what they can choose for their reports. If collection is not enabled for a configuration, running a trend report against it will yield no results.
  • LQE calculates metrics only if the server is running; if the server is down for maintenance or another reason at the scheduled task time, the collection does not happen. If continuous collection is important, consider deploying a parallel LQE server to improve availability, as described in the Jazz.net article “Scaling the configuration-aware reporting environment”.
  • Follow general best practices for Report Builder in terms of limiting scope, naming and tagging conventions, and so on. Also consider what time range to apply: a longer time range means more data, which typically takes longer to load. Test your reports for performance.
  • Review metrics tasks periodically to ensure they still reflect your needs, and disable or delete those you no longer require. 
  • For more extensive metrics for work items, consider using the DW. Even if you have enabled configuration management, you can still use the DW to report on unversioned artifacts and work items.

I encourage you to read the complete article on Jazz.net to understand the details and rationale behind these recommendations, and how best to leverage the trend reporting that LQE offers. Enjoy exploring!

Filtering the LQE type system model

If you’ve enabled configuration management in the Engineering Lifecycle Management (ELM) solution and are reporting on versioned artifacts with the Lifecycle Query Engine (LQE), you have likely encountered challenges with conflicting or duplicate types. (In fact, you might have seen type conflicts for project areas that are not configuration-enabled; however, the information in this post applies only to reporting with configuration management enabled.)

Following the guidance outlined in the Jazz.net articles Defining URIs for artifact types and Maintaining the DOORS Next type system can help minimize those conflicts. However, conflicts might be unavoidable in some circumstances, especially with versioned types: for example, where a type system evolves over time due to process, organization, or product line changes, or where different partners want to see their own labels reflected in each variant of a component. In version 7.0, ELM introduced new capabilities to address these scenarios. This post highlights key points from the more comprehensive Jazz.net article I recently published on Filtering the LQE type system model by configuration.

The type conflicts exist because LQE merges information about artifact types, attributes, relationships, and values from all of your project areas to build a combined model of your type system. The same model is used whether you choose LQE or LQE scoped by a configuration as your reporting source. Note: LQE and LQE scoped by a configuration are actually the same data source; the difference lies in what project areas are included in the scope (configuration-enabled or not) and whether a configuration context is required at run time. If type definitions, including labels and URIs, are inconsistent across components or configurations, duplicates and conflicts appear.

As of ELM 7.0, you can define filtered subsets of the LQE type system model, based on a specific configuration – typically a global configuration for broader application, and typically a baseline so the type system is well-defined and unchanging. The filtered model is based on the type definitions within that global configuration hierarchy.

Creating a configuration-scoped data source

Note: Report Builder currently displays the filtered models as data sources, both for the administrator and for the report author creating the report, and some documentation might refer to them as “configuration-scoped data sources”. You aren’t really creating a new data source, but instead adding a new type system model to the existing LQE data source. Future releases might change how the filtered models are shown, labeled, and described for clarity.
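
The mechanics are easier to see in miniature. In this illustrative Python sketch (not the LQE implementation), merging type definitions from all configurations surfaces a label conflict, while filtering to one configuration’s hierarchy removes it:

```python
# An illustrative sketch only: the combined model merges type definitions from
# every contribution, so two labels for one URI surface as a conflict; a
# filtered model keeps only the definitions found in the chosen GC hierarchy.
from typing import Dict, List, Set

def build_model(defs_by_config: Dict[str, Dict[str, str]],
                configs: List[str]) -> Dict[str, Set[str]]:
    """Merge type definitions (URI -> label) from the given configurations."""
    model: Dict[str, Set[str]] = {}
    for config in configs:
        for uri, label in defs_by_config.get(config, {}).items():
            model.setdefault(uri, set()).add(label)
    return model

defs = {
    "Partner 1 stream": {"https://example.com/ns#priority": "Priority"},
    "Partner 2 stream": {"https://example.com/ns#priority": "Priorität"},
}
print(build_model(defs, ["Partner 1 stream", "Partner 2 stream"]))  # two labels: conflict
print(build_model(defs, ["Partner 1 stream"]))                      # filtered: one label
```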

Some important things to understand:

  • You are not filtering the data by that configuration, you are filtering the type system model based on the types in that configuration. You can then use that filtered model to report against any other configurations that share the same type system. For example, for a given Solution Global Configuration, you might have a filtered model for Partner 1 that you use to report against all versions and variants of that Solution for Partner 1. Or you might have a filtered model for all Solution configurations prior to a specific version when type changes happened, and another that reflects the new type system model.
  • These capabilities do not remove the need to manage your type system as described in the articles mentioned earlier. Each type system model uses LQE disk and memory resources, which can impact performance, and adds overhead for administration and report usage. Be selective in defining filtered models. Additionally, filtered models can still include conflicts if types are defined inconsistently between contributions of the selected configuration.
  • Report authors must clearly understand when to choose which type system model when creating reports in Report Builder. Similarly, report users must understand which reports are valid to run against which configuration. Use meaningful names and descriptions when defining the filtered models and the reports based on them, and document their intended use.

Reports by data source

  • LQE automatically refreshes its default type system model at regular intervals. It does not refresh the filtered models. If there are changes to the filtered type system model – either because the administrator has edited it to choose a different configuration, or because it was based on a stream which has evolved (not recommended) – the administrator must manually refresh the filtered model. The administrator or delegate must then validate the reports created with that model and determine any required updates for them to run correctly.

If you are interested in defining filtered type system models in your environment, please read the Jazz.net article in its entirety. And remember – it’s still important to manage your type system and URIs!

URIs – you should use them!

I can’t believe I haven’t already penned a post on the importance of defining URIs for the type systems in your IBM Engineering Lifecycle Management (ELM) applications. We have certainly mentioned it in many articles and presentations. I’ve finally published an article on Jazz.net all about URIs, how to define and apply them, and best practices.

If you haven’t read it yet – please do! Here’s a quick synopsis to whet your appetite…

A “uniform resource identifier” (URI) unambiguously identifies a resource. In this case, the resources we’re talking about aren’t your actual data (requirements, work items, or what have you), but rather the type system that describes that data: artifact types, attributes, enumerated values, and link types.

The type system can vary by project area, and in DOORS Next Generation, by component and configuration. For link resolution and reporting, it’s critical that equivalent resources have the same URI. So, for example, any component or configuration that defines a Business Requirement artifact type should associate the same URI with that type. Conflicting URIs, or conflicting labels for the same URIs, can cause duplicate or unexpected options in Report Builder, and make cross-project reporting a challenge.

In defining URIs, you can reuse terms from existing vocabularies (groupings of URIs, also called namespaces), including those defined by Open Services for Lifecycle Collaboration (OSLC) and on Jazz.net, where the semantics are consistent with your type definitions. Define your own URIs for your custom types and properties, as described in my Jazz.net article. Document your own terms and share them with your teams, integrating them into templates where possible to reduce manual effort (and errors).


An enumerated type with URIs from multiple vocabularies
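
To make this concrete, here’s an illustrative mapping of labels to URI terms. The dcterms URI is a real Dublin Core term; the example.com namespace and its terms are hypothetical stand-ins for a vocabulary you would define and control:

```python
# Illustrative only: a label-to-URI mapping for part of a type system. The
# dcterms URI is a real Dublin Core term; the example.com namespace and terms
# are hypothetical stand-ins for a vocabulary of your own.
VOCAB = {
    # Reuse an existing vocabulary term where its semantics match your definition.
    "Title": "http://purl.org/dc/terms/title",
    # Define your own terms for custom types, attributes, and enumerated values.
    "Business Requirement": "https://example.com/ns/req#BusinessRequirement",
    "Priority: High": "https://example.com/ns/req#priority-high",
}
# Every component or configuration that defines "Business Requirement" should
# apply the same URI, so links and reports treat the versions as the same type.
```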

The best practices bear repeating:

  • Define URIs for all type system elements as early as possible in your adoption.
  • Reuse terms from existing vocabularies where possible, creating your own when needed.
  • Keep labels for the resources consistent as much as possible.
  • Document your vocabulary terms, including extension points, and ensure your teams use and adopt them.
  • Save URIs in your project area and component templates. (There are also ways to share properties between DOORS Next and Global Configuration Management project areas – see the article for details).
  • Define processes for extending and evolving your vocabularies.

For all the details, please read the full article on Jazz.net – and make sure you define your URIs!

ELM Baseline staging streams and how to use them

In the IBM Engineering Lifecycle Management (ELM) solution, the baseline staging stream is a special type of global configuration for assembling global baselines. In this post, we explore how to use baseline staging streams, and perhaps more importantly, how not to use them.

First, it helps to understand global baselines and the process of creating them. A global baseline is a global configuration (GC) that is immutable (unchangeable); all of its contributions are baselines as well, and you can’t remove or replace them. Typically, you create a global baseline at lifecycle milestones, to record a snapshot of your solution at that specific point in time – so later you can refer to it, or even use it as the starting point for a new global configuration.

When you baseline a GC, the system can automatically generate baselines for each of its stream contributions. While convenient, that automation is not always appropriate in more complex environments, where many component teams might manage their own local baselines asynchronously. The alternative is to stage the baseline, creating a baseline staging stream – kind of a “baseline in progress”.

Staging a baseline

The baseline staging stream serves as an assembly area for the Configuration Lead to manually substitute baselines for each of the stream contributions, based on input from the different teams. Once all substitutions are made, you can then commit the global baseline.

Replacing a stream contribution in the baseline staging stream
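
To contrast the two paths, here’s a toy Python sketch (my names, not the GCM API): automatic baselining freezes every stream contribution on its own, while staging gives the Configuration Lead a working copy to substitute into before committing:

```python
# A toy sketch of the two paths, under assumed names: automatic baselining
# recurses over contributions on its own, while staging copies the GC so the
# Configuration Lead can substitute team-approved baselines before committing.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Contribution:
    name: str
    is_baseline: bool

@dataclass
class GC:
    name: str
    contributions: List[Contribution] = field(default_factory=list)

def auto_baseline(gc: GC) -> GC:
    """Generate a baseline automatically for every stream contribution."""
    frozen = [c if c.is_baseline else Contribution(c.name + " auto-baseline", True)
              for c in gc.contributions]
    return GC(gc.name + " baseline", frozen)

def stage_baseline(gc: GC) -> GC:
    """Create a baseline staging stream: a 'baseline in progress' to assemble."""
    return GC(gc.name + " (staging)", list(gc.contributions))

staging = stage_baseline(GC("Car 2.0", [Contribution("Reqs stream", False)]))
staging.contributions[0] = Contribution("Reqs 2.0 M3 baseline", True)  # substitute
# ...once every contribution is a baseline, commit the global baseline.
```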

The baseline staging stream can also be useful should you realize a global baseline includes incorrect contributions, which you can’t remove or replace.  You could return to the original GC and start the baseline process again, but for limited changes, it might be more expedient to generate a baseline staging stream from the global baseline itself. You can then substitute the appropriate baseline contributions in the staging stream, and commit the new global baseline once complete.

While it’s common to progressively stabilize your working stream as you approach a lifecycle milestone, the baseline staging stream is not a stabilization stream. It is not intended as a context where engineering teams perform work or make changes. Do not add a baseline staging stream to a working global configuration, or try to create change sets or make content changes in the context of the staging stream. The system prevents some of those operations (with more restrictions coming in the near future), and they will not yield correct results. The only changes made in a baseline staging stream should be to replace the contributions with baselines.

If you require stabilization streams, ensure that you plan your stream strategy accordingly. Because a GC can include both stream and baseline contributions, you can replace stream contributions with baselines as different teams and components freeze, using parallel streams for stabilization and ongoing work; you might also define the configuration hierarchy to support lower-level GCs that can baseline on different schedules. Where appropriate, you can assign stabilization, “work ahead”, or lower-level GCs to dedicated team areas for the specific users who will work in them.

To summarize, the baseline staging stream is a temporary assembly area for the Configuration Lead to create a global baseline when automated recursive baselines aren’t appropriate. As a “baseline in progress”, it is not a stream for engineering teams to perform work.

ELM directional linking with CM enabled

Enabling configuration management (CM) in the IBM Engineering Lifecycle Management (ELM) applications causes subtle but important changes to the behaviour and implementation of linking between artifacts across applications, and across components within an application. In many cases, these changes are not obvious to users. However, it’s important to take them into consideration as you plan your CM strategy, processes, and procedures. It’s also helpful for users to have at least a rudimentary understanding to avoid confusion and assist with troubleshooting.

[Note: this post assumes you are already familiar with ELM CM capabilities. If you need more background, start with this Knowledge Centre overview topic. For ELM application names and acronyms, see my blog post or Jazz.net.]

Here are some key concepts to grasp around linking when CM is enabled:

  • Linking across lifecycle domains or components requires a global configuration context. When you create or traverse a link across lifecycle domains, for example from a requirement to a test, you must be in a global configuration context that includes configurations for both artifacts. The link resolves based on the content of the global configuration. This is also true for links between versioned artifacts across component configurations in the same domain, and for links to work items (which are tied to global configurations by release associations, as described in this Jazz.net article).
  • All links are directional, with no “back links”. For each link relationship, there is a source artifact (with the “outgoing link”) and a target artifact (with the “incoming link”). Applications typically use icons to indicate outgoing and incoming link direction. Only the application for the source artifact actually stores the (outgoing) link. Without CM, the system would also modify the target artifact and store a “back link” for the incoming link. With CM enabled, the system does not change the target artifact to add the incoming link. Instead, when the user accesses the artifact, the ELM applications use the Link Index Provider (LDX) and internal queries to discover and display the incoming links. So the user still sees all the artifact’s outgoing and incoming links as before, even though the underlying implementation has changed.
  • Link direction and storage are independent of where you initiate link creation. The link is always stored with the source artifact, based on the link relationship. Typically, the downstream application (that matures later in the lifecycle) stores links to upstream application artifacts that mature earlier, as described in the table that follows. Links internal to an application (for example, between requirements) are also directional, based on the link definition; the application still stores the outgoing link with the source artifact, and queries internally for incoming links across component configurations.
Application       Stores links to        Queries for incoming links from
RM (DOORS Next)   RM                     AM, CCM, QM
AM (RMM)          AM, RM                 CCM, QM
QM (ETM)          QM, AM, RM             CCM
CCM (EWM)         CCM, QM, AM, RM        (none)

For example, the ETM application stores all links between test artifacts and requirements (because typically the requirement freezes before the test). If you add a “Validated-by” link to a DOORS Next requirement, the system does not modify the requirement; ETM stores the link as a “Validates requirement” link for the source test artifact. DOORS Next then uses the Link Index Provider to display the incoming “Validated-by” link with the requirement.

Note: While this means you can create an incoming link on a baselined or non-modifiable artifact, in some cases the application UI restricts the ability to do so from the context of that artifact. In those cases, you’ll need to go to the source (modifiable) artifact to create the link.
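
If it helps to see the storage model in miniature, here’s a Python sketch of the idea under assumed names (not ELM’s implementation): creating a link writes only to the source side plus an index, and incoming links are answered by querying that index:

```python
# A sketch of the direction-and-storage idea under assumed names, not ELM's
# implementation: only the source side stores the outgoing link; incoming links
# are discovered by querying an index, so the target artifact is never modified.
from collections import defaultdict

outgoing = defaultdict(list)    # links stored with each source artifact
link_index = defaultdict(list)  # stands in for the Link Index Provider (LDX)

def create_link(source: str, link_type: str, target: str) -> None:
    outgoing[source].append((link_type, target))    # the only artifact change
    link_index[target].append((link_type, source))  # index entry for discovery

def incoming_links(target: str):
    """What an application displays for a target artifact: an index query."""
    return link_index[target]

create_link("TC-7 (ETM)", "Validates requirement", "REQ-42 (DOORS Next)")
print(outgoing["REQ-42 (DOORS Next)"])        # [] - the requirement was not modified
print(incoming_links("REQ-42 (DOORS Next)"))  # the incoming link, via the index
```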

Why does this matter, if the system handles all the storage and resolution? In many scenarios, the user doesn’t need to know or care about the linking implementation. However, in more complex environments or error situations, confusion can arise. Examples include:

  • Links disappear. Typically this happens with incoming links, which aren’t stored with the artifact, and can occur because:
    • The source artifact with the outgoing link is no longer in your global configuration context (because you changed context, or deleted something in the stream, etc).
    • Errors occurred in accessing LDX to query or index links.
  • Links resolve incorrectly or not at all. This can happen if the target artifact is no longer in your global configuration context (because you changed context or deleted the target, etc), or a different version of the target artifact exists in your current context. There are a couple of special scenarios:
    • In a DOORS Next change set, you can create links to artifacts in other streams or applications. If the target artifact for the link is in the change set, the outgoing link is saved in the other stream or application. If you then discard the change set, the outgoing link persists for the source artifact, and likely does not resolve as intended.
    • When you work in a personal stream, incoming links from EWM work items to requirements don’t display correctly due to the current mechanism that maps work item Releases to a single GC context. Those links should display correctly when you return to the correct GC context.
  • Incoming link information doesn’t appear in the artifact change history. Incoming links don’t modify the artifact, so they don’t appear in its change history. Outgoing links do appear in the artifact’s change history.
  • Incoming links aren’t included in change set delivery or configuration merge. Again, incoming links do not modify the artifact, so they have no corresponding change to deliver. Outgoing links do represent a change, and are included in change set delivery and merge operations.

Understanding link direction, storage, and the need for correct GC context can help you define the right processes and workflows to avoid unintended results and user confusion, and to troubleshoot issues should they occur.

P.S. I didn’t touch on reporting (LQE constructs relationships so you can report on links from either source or target artifact), or link validity (which replaces suspect linking as a mechanism to track requirement change). Those might be topics for a later post.

Tips for creating Report Builder reports

The Report Builder (RB) interface in the IBM Engineering Lifecycle Management (ELM) offering makes it relatively easy to build reports with little training. However, it’s also easy to make choices that cause reports to perform poorly, especially when using the LQE data sources, which do not have a pre-defined schema like the data warehouse. This article outlines some practices that can help you avoid report performance issues.


First, it’s important to use good design principles (true for any reporting technology).  There are various resources available online related to report design; a couple of principles that I find important are:

  • Create separate reports for different consumers and needs.

Trying to make one report serve many purposes can produce large result sets that take a long time to run. They are also typically difficult to consume, as users have to identify and tune out the extraneous data.

  • Keep in mind what decision or action the report should trigger.

“Manage by exception” is a good mantra. Make it easy to find the issues or outliers that need attention. Consider what users can easily and quickly understand, and how best to present the data to that end; for example, it can be hard to find the critical data in a multi-page table, or even in a graph with dozens of segments and colors.

In terms of using the mechanisms in Report Builder, consider the following:

  • Limit scope as much as possible, as early as possible. Consider reuse, and how that relates to project similarities and groupings. (You might reuse a single report across many projects, or you might need a small set of reports that better reflects project specifics). At run time, always scope the report to the specific project(s). You want to avoid the query collecting a lot of data that you will later discard.
  • Avoid exclusion conditions (where the value is “none of” those available, or where something does not exist); they run more slowly than conditions that check for existence or for specific values.
  • Minimize many-to-many traceability relationships, and where possible, one-to-many relationships. The query engine takes longer to resolve the complex relationships.
  • When you do use multiple traceability paths, if your report is cross-product, choose to append the results rather than merge them. That avoids an extra step of joining the results. Of course, there are some use cases where you do need to merge results – for example to find test cases with defects and their affected requirements, where you want the result on a single line with the test case.
  • Minimize optional relationships where possible; they run more slowly than required relationships.
  • In the formatting step, eliminate columns that you don’t need to see in the results. For example, setting a condition on an attribute automatically includes a column for that attribute in the results. If you don’t need to see those values, remove the column (it won’t affect the condition).
  • If you want a graph, build the table first to ensure you’re getting the correct data. Then massage it into graphical format.
  • If you need to customize the SQL or SPARQL queries (in the Advanced section of RB), make a copy of your report and modify the copy. That way you can start over from the original if necessary. Once you edit the Advanced section, that becomes the only way you can make any further changes to the report.

If you find reports are running slowly, consider whether your LQE server is adequately sized. Ensure that your dashboards are designed to minimize content on well-used tabs, with larger reports on separate tabs that users must explicitly choose to open. You can also leverage report scheduling to run reports at low-usage times and then share the generated output with multiple users. (Note also that when you output to a spreadsheet, the number of results is not limited as it is in the RB UI.)

If you’ve followed all the best practices and you are still having issues with reports running slowly, contact IBM Support. Good luck, and have fun experimenting with Report Builder!

Jazz Reporting Service: Data warehouse or LQE?

My previous post described the Jazz reporting solutions, including the two data sources used by the Jazz Reporting Service (JRS): the Data Warehouse (DW) and the Lifecycle Query Engine (LQE). So which one should you use? It depends on your reporting needs. This post provides guidance on why you’d use one or the other – or maybe both.

A reminder of the overall reporting architecture:


Jazz reporting architecture

The DW is the more mature data store, and has been part of the solution for many years (the Data Collection Component (DCC) is slightly more recent, debuting in 2014). It has a well-defined and documented schema. The JRS “ready-to-use” and “ready-to-copy” reports rely on the DW, as do many of the BIRT reports available in Rational Team Concert (RTC) and Rational Quality Manager (RQM). If you plan to use any of those reports, you need the DW. The DW includes some data not available in LQE, such as build data, and a rich set of metrics and history for trend reports, particularly for work items.

However, if you use configuration management for project areas in RQM or DOORS Next Generation (DNG), you must use the LQE (scoped by a configuration) data source for those project areas; the DW does not support versioned artifacts.


Changing data source in Report Builder

To use Rational Engineering Lifecycle Manager (RELM), you also need LQE as the data source.

You can use LQE to report on project areas that aren’t enabled for configurations too, and there are benefits to doing so. The data in LQE is refreshed in nearly real time, while scheduled DCC jobs typically update the DW less frequently. LQE constructs its metamodel dynamically based on your data, which can make some reports easier to build against its “schema-less” model than against the predetermined DW schema. With the best practice of defining external URIs for artifact types and attributes, you can equate attributes across project areas to facilitate cross-project reporting. LQE also includes some data not available in the DW, especially for RQM.

LQE does have some disadvantages. The DW offers much richer history and metrics data. LQE has limited sample reports, and customizing queries requires SPARQL knowledge, which might be less familiar than SQL.

If you need LQE for configuration-enabled project areas, you might choose to continue using the DW for some reports. In particular, data for RTC work items continues to be available in the DW (since work items aren’t versioned). Even if all of your DNG and RQM project areas are configuration-enabled, you can use the DW to run out-of-the-box and trend reports for work items.


WI trend reports from DW

For non-enabled DNG/RQM projects, you might choose to build some reports using the DW, and others using LQE to take advantage of the dynamic schema and frequent updates.

As you decide on data sources, you do need to consider system resources as well. The DW uses a database for storage and the DCC application to extract and load the data; the LQE application acts as both the data indexer and data store. Both require adequate resources for your data and usage scale. (See the Jazz.net Deployment wiki for sizing strategy and performance reports for DCC and LQE.) With respect to sizing and system resources:

  • If you’re not using the DW for any reporting, you can disable DCC jobs from running. If most of your RM and QM project areas are enabled for configuration management, you’ll need less space for the DW database to grow, since those project areas do not contribute to the DW.
  • If you enable LQE, it collects data for all project areas in the registered application data sources; currently you can’t filter the data in the TRS feeds to reduce the size of the data store. However, from a query performance perspective, if you continue to use the DW for some reports, you reduce the reporting load on the LQE server, which can improve performance. That said, if you don’t use configuration management in RM or QM project areas, you might not want to invest in the extra resources for LQE.

There is another reporting option that uses neither of these data sources: Rational Publishing Engine (RPE) extracts data directly from the applications using the reportable REST API to generate document-style reports and spreadsheets. In some cases, RPE can access data not readily available in either the DW or LQE, and can also handle more complex data manipulation and formatting. RPE is available as a separate offering; it is not included with JRS.


RPE preview view (v6.0.6)

In closing, carefully consider your reporting needs as you decide whether to use the DW or LQE – or maybe even both.