1. Who should we invite to be registered members (i.e., users) of the platform? How do most of your customers approach this?
The big difference between registered and non-registered members is that registered members can access and navigate the result dashboards. Note that all members can complete assessments via a unique assessment link.
Given this capability, most organizations invite only the organization members (e.g., coaches, scrum masters) who will facilitate assessments into the platform as registered members. This eliminates the need to invite all assessment participants to the platform, which would require team assignment and maintenance.
Facilitators often share the assessment results with participants (i.e., teams) by sharing their screen and utilizing the dashboard PDF export function.
2. What permission role should I provide to those that will facilitate assessments?
Most organizations assign the “Facilitator” permission role to facilitators. Imagine that ☺
Facilitators can launch and conduct assessments, and view results for assigned teams. The difference between “facilitator” and “super user” is that “super users” can modify and create assessment templates.
1. What is an Entity?
An Entity is a construct defined in the platform that enables Individuals, Teams, and Agile at Scale Programs to conduct assessments and analyze results. They align directly with the “assessment types” in Lean Agile Intelligence.
2. How do we go about setting up Groups for Team assessments? How do most of your customers approach this?
Grouping is the primary vehicle used to display aggregate results of all child entities (i.e., teams), so this should be your first consideration.
Most organizations set up Groups to mirror their organizational structure. There is no limit to the number of subgroups you can define. Examples of Groups include organizational units/departments, product families, and agile release trains.
3. What if we purchased Agile at Scale and Individual Entities along with Team Entities? How do most of your customers incorporate them into the grouping hierarchy?
Most organizations create subgroups for the Agile at Scale and Individual Entities under each Primary Group (i.e., units/departments, product families, ARTs) so they can see aggregate results across all assessment levels for that Primary Group.
Dashboard template filtering can be leveraged to see aggregate results for one specific template across the organization.
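To make the grouping idea concrete, here is a minimal sketch of how aggregate results roll up a hierarchy like the one described above. The group names, structure, and scores are invented for illustration; this is not Lean Agile Intelligence's data model, just the rollup concept.

```python
# Hypothetical Primary Group (e.g., an ART) with Team, Agile at Scale,
# and Individual subgroups. All names and scores are made up.

def all_scores(group):
    """Collect assessment scores from a group and all of its subgroups."""
    scores = list(group.get("scores", []))
    for sub in group.get("subgroups", []):
        scores += all_scores(sub)
    return scores

def aggregate(group):
    """Aggregate result for a group: average of every child entity's score."""
    scores = all_scores(group)
    return round(sum(scores) / len(scores), 2)

art = {
    "name": "ART Phoenix",  # Primary Group
    "subgroups": [
        {"name": "Teams",          "scores": [3.2, 2.8, 3.6]},
        {"name": "Agile at Scale", "scores": [2.9]},
        {"name": "Individuals",    "scores": [3.4, 3.1]},
    ],
}

print(aggregate(art))  # average across all assessment levels -> 3.17
```

Placing the Agile at Scale and Individual subgroups under the Primary Group is what lets a single rollup cover all assessment levels at once.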
1. Should we create custom Dimensions? How do most of your customers approach this?
Most organizations stick with the out-of-the-box Dimensions. Organizations that do choose to create new Dimensions are typically driven by the need to accommodate an existing reporting requirement. For example, some organizations adopting LAI may have already established Dimensions (i.e., focus areas) for transformation progress reporting. Instead of mapping the out-of-the-box Dimensions to their existing ones, the platform provides the flexibility to re-create them and associate Questions with them.
2. What about the Metrics Dimension? How is that different from the others?
All out-of-the-box quantitative Questions (e.g., Throughput, Cycle Time) are assigned to the Metrics Dimension. This makes it possible to see the aggregate average of all quantitative Questions and gain a holistic view of performance.
There may be a need to segment the Metrics Dimension further to see different performance areas of metric/quantitative Questions. For example, an organization may want to separate its DevOps metrics from the others. In this case, it would create a new Dimension with the quantitative measurement type and assign the related DevOps Questions to it.
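As a rough sketch of that segmentation (the Question names, ratings, and Dimension assignments below are hypothetical, chosen only to illustrate the idea):

```python
# Hypothetical quantitative Question ratings, each assigned to a Dimension.
# Moving DevOps Questions into their own Dimension lets that performance
# area be averaged separately from the general Metrics Dimension.

ratings = [
    {"question": "Throughput",           "dimension": "Metrics", "rating": 3},
    {"question": "Cycle Time",           "dimension": "Metrics", "rating": 2},
    {"question": "Deployment Frequency", "dimension": "DevOps",  "rating": 4},
    {"question": "Change Failure Rate",  "dimension": "DevOps",  "rating": 3},
]

def dimension_average(ratings, dimension):
    """Aggregate average of all quantitative Questions in one Dimension."""
    values = [r["rating"] for r in ratings if r["dimension"] == dimension]
    return sum(values) / len(values)

print(dimension_average(ratings, "Metrics"))  # 2.5
print(dimension_average(ratings, "DevOps"))   # 3.5
```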
3. Should we create custom Outcomes? How do most of your customers approach this?
Most organizations stick with the out-of-the-box Outcomes because their transformation objectives/outcomes fit within one of the out-of-the-box Outcomes. However, like Dimensions, creating custom Outcomes is typically driven by the need to accommodate an existing reporting requirement for progress on transformation objectives/outcomes.
4. Do your customers typically define custom Metrics in the platform? If so, how do they use them?
The Custom Metrics module is relatively new. To date, about half of our customers have adopted it. The adoption is typically driven by the ability to complement the quantitative Question Ratings with the actual metric data used to derive the Rating. This enables the identification of outliers and data trends. An enhancement for 2023 utilizes the Custom Metrics import to automatically generate a Rating for quantitative Questions.
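To illustrate the idea behind deriving a Rating from imported metric data: a threshold mapping turns a raw metric value into a Rating. The metric, scale, and thresholds below are invented for illustration and are not the platform's actual mapping.

```python
# Hypothetical mapping of an imported Cycle Time metric (in days) to a
# 1-4 Rating. Thresholds are invented for illustration only.

def cycle_time_rating(days):
    """Lower cycle time earns a higher (hypothetical) Rating."""
    if days <= 3:
        return 4
    if days <= 7:
        return 3
    if days <= 14:
        return 2
    return 1

imported = [2.5, 6.0, 12.0, 20.0]  # e.g., values from a Custom Metrics import
print([cycle_time_rating(d) for d in imported])  # [4, 3, 2, 1]
```

Keeping the raw values alongside the derived Ratings is what enables the outlier and trend analysis mentioned above.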
1. What are you referring to when you say, “Assessment Strategy”?
An Assessment Strategy includes the Assessment Templates selected (the what), who will participate in the assessment (the who), and the cadence of the Assessments (the when).
2. What is the difference between Quantitative and Qualitative Assessment Templates? Which one should I use?
Qualitative Assessment Templates evaluate the behaviors and practices of an Entity (i.e., Team). They are typically conducted in a Self-Assessment format in which the participants of the Entity (i.e., Team) assess themselves quarterly and identify improvements.
Quantitative Assessment Templates evaluate the performance of an Entity (i.e., Team) across multiple metrics. They are usually conducted with a single participant (e.g., a scrum master or tech lead) because the answers are data derived from an external source (e.g., Jira).
We recommend that customers use both Qualitative and Quantitative Assessment Templates for Entities. Our dashboards provide cumulative results of completed Assessments, which assist in identifying cause and effect between the qualitative and quantitative Questions and support data-driven coaching.
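As a toy illustration of spotting cause and effect across the two template types (the team scores below are made up; the platform's dashboards present this visually rather than as code):

```python
# Hypothetical per-team results: a qualitative practice rating alongside a
# quantitative metric rating. A simple Pearson correlation hints at
# cause-and-effect candidates worth coaching on.

qualitative  = [2.0, 2.5, 3.0, 3.5, 4.0]  # e.g., a practice's ratings per team
quantitative = [1.8, 2.4, 2.9, 3.6, 3.9]  # e.g., a metric's ratings per team

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(qualitative, quantitative), 2))  # 0.99: strong link
```

Correlation alone is not causation, of course; it simply flags pairs of Questions worth a closer coaching conversation.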
3. Is there a limit on how many Assessments can be run for an Entity?
No, Assessments are unlimited for purchased Entities.
1. For Team & Multi-Team Agility Assessment Templates, how do you determine whether a Question is in the basic or advanced version?
Assignment of Questions is based on our experience and feedback from our coaching partners, and it aligns with a tried-and-true coaching philosophy: new teams typically need to be stabilized before they can become high-performing. The team must first focus on foundational practices and behaviors that lead to engagement, collaboration, and a balance of supply and demand (i.e., being predictable and responsive). Questions in our basic Qualitative Assessment Templates are geared toward driving employee satisfaction, responsiveness, and predictability. Questions in the basic Quantitative Assessment Template measure these outcomes.
Stabilized teams are more likely to become high-performing. High-performing teams deliver high-quality solutions that meet customer needs and reach the market quickly. Questions in our advanced Qualitative Assessment Templates are geared toward driving time to market, customer satisfaction, and reliability. Questions in the advanced Quantitative Assessment Template measure these outcomes.
2. What is the difference between the Team & Multi-Team Agility Assessment Templates and the other Templates listed in their categories?
Our Team and Multi-Team Agility Assessment Templates are framework agnostic. They include a broad set of Questions covering best practices beneficial to any agile team. The other templates in those categories (i.e., Agile at Scale, Team, Individual) are narrower and focus on a specialty area of agility (e.g., DevOps, Product).
3. What about Scrum and ProKanban Assessment Templates? How are they different from the Team and Multi-Team Agility Assessment Templates?
The Scrum and ProKanban templates are based on their respective framework/methodology guiding document (i.e., Scrum Guide, The Kanban Guide).
1. How are the contents of the out-of-the-box Questions derived? When are the contents changed?
The Question Outcomes and Growth Stage Criteria were derived from the influence of many sources authored by experts in the field (these sources are listed on our References page) and feedback from our coaching partners.
The Questions are reviewed twice a year. Minor revisions (i.e., language) informed by customer/partner feedback are occasionally made and included in release notes. If a Question requires a significant revision, a replacement Question is created, and the original is preserved. Customers can then opt to adopt the new Question if they choose.
2. Can you explain how you determine which Growth Criteria are in what Stage?
Again, the Growth Criteria are mostly derived from influential sources and customer/partner feedback; however, there is another variable.
Our maturity model for each Question is progressive: each Stage builds on the last. That is not to say every team's improvement cycle will track linearly with our Stage Growth Criteria; we all know that is not the case in a complex adaptive system. However, the Growth Criteria in later Stages will be easier and more effective to achieve if the Growth Criteria in earlier Stages have been met.
1. Should I customize an Assessment Template or use one of your out-of-the-box Templates? How do most of your customers approach this?
It typically depends on the size of the organization. Smaller to mid-size organizations with minimal agile coaching capacity typically adopt our out-of-the-box Assessment Templates. Larger organizations with more coaching capacity, and a deeper understanding of the context and complexity of their transformation, will often customize. That said, the typical approach is to start with an out-of-the-box Assessment Template and tweak it to the organization's needs and context.
2. Should I customize an out-of-the-box Question? How do most of your customers approach this?
Customizing Questions is typically driven by a Question's language not reflecting the organization's terminology, or by a Growth Criterion that is not relevant to the customer's context.