King's Product Prioritization Framework (RICE Model) for 400+ Team Members!
Join Jaco Els as he shares the transformative journey of strategic prioritization within King's expansive product teams. Discover the shift from traditional prioritization methods to a value-driven approach that spans 22 product teams and involves over 400 team members. Jaco will share how King has redefined their operating model, integrating tailored prioritization frameworks like the RICE model, to foster transparency and streamline project delivery. Attendees will gain actionable insights on optimizing prioritization to drive efficiency and alignment in large-scale product environments.
Jaco Els, Head of Product Shared Tech, King
I'm going to talk about how we improved and evolved our product delivery process at King. My name is Jaco Els, and I look after the product team for ShareTech at King. I started working in the gaming industry with a lucky break, getting a job at Electronic Arts around 2011. I've spent most of the time since then working in industries adjacent to gaming, and I joined King about four years ago. Initially, I looked after the product team for our data and machine learning teams, and then about two years ago, I took over product for ShareTech.
King is a company that was founded in 2003. It started as a business that made web games, released through its own web portal. I know some of you might remember back then; it was quite common to have web portals with hundreds of Flash games that people played in their browsers. The trend that followed was Facebook opening up publishing of web games inside of Facebook. The first business that was really successful with that was Zynga, with a game called FarmVille. King followed that trend and started to publish their web games on Facebook, using the web portal as a way to quickly iterate new game concepts and mechanics, then releasing the ones that gained traction on Facebook. The first game that was really successful was Bubble Witch Saga, a bubble shooter on Facebook that grew rapidly. Less than a year later, King released what would become their biggest hit to date—a little game called Candy Crush Saga. I think everybody's mom played it at some point during the last ten years.
Facebook was big because it provided access to a large audience, but it was when mobile became the next trend after Facebook that the scale really took off. When King launched Candy Crush on mobile, the company started to grow exponentially. Following Candy Crush's success, the strategy—and the common wisdom at the time—was to find the next big hit. So, the company invested in building a portfolio of IP. We released several games during that period: Farm Heroes Saga, Pet Rescue Saga, other games in the Candy franchise. Each was successful in its own right and as a standalone game, but none matched the scale and success of Candy Crush. Candy Crush remains one of the biggest mobile games in the world.
By 2019-2020, our strategy shifted from growing a portfolio to focusing investment in Candy Crush. As the platforms and the game itself evolved, it became clear that certain games have a long lifespan. To invest in a game as large as Candy Crush, we needed a lot of people. King employs about 2,000 people, with studios across multiple cities: Berlin, Malmö, Stockholm, London, and Barcelona. The two largest teams in the company are the Candy Crush team, which has about 500 people across seven product areas, and ShareTech, where I work, with product teams covering four product domains that build tools and in-house platforms used by the game teams to create content.
Our four teams within ShareTech are:
- Content Creation Tools - focusing on game engine technology and artist tools.
- Live Operations Tooling - managing the scheduling, targeting, and in-game economy for any ephemeral or seasonal content created by the game teams.
- Data and Machine Learning - responsible for analyzing data, structuring A/B tests, and building our machine learning and AI capabilities.
- Core Platforms and Infrastructure - overseeing our Google Cloud platform and handling incident management, among other tasks.
Across those four domains, there are about 22 individual product teams servicing the game teams. Within the game teams, the focus is on crafting fun player experiences that get pushed out to players through the live operations teams. These teams typically include level designers, who create seasonal content (for example, we recently had Halloween-themed content in the game), and operations teams that deploy, test, and measure this content.
As King went through exponential growth, we found the best approach was to hire smart people and create relatively autonomous teams that could move fast, identify opportunities, and iterate quickly. In the early days, we were very engineering-focused, with a culture of building things ourselves rather than buying off-the-shelf solutions. This approach helped support the rapid growth of Candy Crush over the years. However, over time, we accumulated technical debt as people moved on and various in-house developed tools were abandoned.
Candy Crush recently celebrated its 10th birthday, and over a decade, you can imagine how much debt and complexity accumulates. What we started to see was a slowdown in our ability to iterate and deliver value to our players. This slowdown became apparent in the levels of frustration expressed by stakeholders, who were grappling with the complexity of the organization and the coordination required across numerous teams. It became hard to keep track of who was working on what, which teams were delivering on projects, why certain tasks were behind schedule, and how to get new opportunities prioritized within an already large book of work.
These frustrations led to frequent questions like, "If you have 400 people in ShareTech, why do we struggle to get the things we need?" This scenario reminds me of a scene from the movie Office Space, where a development manager is interviewed by consultants who ask him to explain what he actually does day-to-day.
We began analyzing our internal processes and saw an accumulating slowdown. This issue was apparent in our inability to deliver value to our internal stakeholders, which in turn prevented us from delivering value to our players. We reached an inflection point with specific projects, such as our CRM system. We went to the market to evaluate whether to build or buy a CRM system and identified that a market-leading CRM system could improve the quality of CRM and in-game content delivery at a lower cost, allowing us to deprecate a lot of legacy tools we'd built over the years. This also meant our engineers could move on to higher-value projects.
It was a no-brainer, so we put the business case forward, got approval, and planned to deliver the new system in about six months. The project ended up taking three times the estimated duration—18 months to deploy the CRM system with only limited initial functionality. We noticed similar issues with other projects, so we reviewed our internal processes and found, as Rory mentioned in the previous talk, some key misalignments.
First, we found a misalignment in priorities between ShareTech and the game teams. Game teams, like most business teams, are typically measured by their ability to drive revenue and grow revenue, so they generally think in terms of quarters. They get targets and are measured against them, which guides their day-to-day focus. Meanwhile, ShareTech, as a strategic partner, often runs projects that span multiple quarters. We optimize for operational efficiency, creating new capabilities that drive revenue, but we found this difference in mindset made it difficult to align on which work should be prioritized.
The next issue was a lack of transparency. Moving fast and being engineering-led had created a culture of autonomy, which is great for speed but hinders collaboration and connectivity between efforts, resulting in a lack of transparency. Inside ShareTech, for example, it was challenging to answer questions like, "Which teams are doing work for the in-app purchase team in Candy Crush?" All these teams were delivering value, but understanding dependencies was unnecessarily difficult. This issue was exacerbated by a complex organizational structure with relatively flat teams, intended to enable better decision-making but often making it hard to know who to speak to about specific needs.
Furthermore, we lacked standardized processes. Teams used different tools and methods—some used Trello, some used Jira—making it challenging to get a consolidated view of work and transparency, leading to projects that should take months dragging on for years.
We reached an inflection point with the CRM project, which was a significant example, but we also faced similar challenges with other initiatives, like a player identity project that took over a year to complete, even though it shouldn’t have. Seeing these trends and challenges, we took a step back in ShareTech to deeply examine how we were operating and where improvements could be made. We identified four key areas where we could make changes and called this our new operating model:
- Prioritization - This is not just about figuring out the order of tasks but establishing a consistent way of discussing value. We wanted to use the same language around value within ShareTech and in conversations with our stakeholders, aligning our metrics with those of the game teams, who primarily think in terms of revenue and efficiency.
- Tracking - This was a controversial area for King due to the autonomy that teams had, and engineers were particularly resistant as it felt like monitoring. But our goal with tracking wasn’t surveillance; it was about gaining insight into what teams were working on so we could better support them.
- Alignment and Communication - This involved improving transparency between teams and stakeholders, clarifying roles and responsibilities.
- Roles and Responsibilities - Defining responsibilities to ensure everyone understands their contribution to project delivery.
For prioritization, we adopted the RICE framework. The value of using a framework like RICE is in having a consistent way to talk about value. It’s not about the specific framework; rather, it’s about choosing one that fits the business. RICE worked well for us because it aligned with our approach to business and value. Here’s how we apply each element:
- Reach - We assess how many players a piece of work will impact. Everything we do should ultimately enhance the game experience.
- Impact - We break down impact into four elements: revenue, cost savings, efficiency, and risk mitigation. For example, regulatory compliance is essential to avoid significant business risks.
- Confidence - This reflects how sure we are of the impact. The highest confidence rating comes from A/B tests with proven outcomes, while lower ratings might come from more speculative ideas.
- Effort - This is estimated using a T-shirt sizing approach (small, medium, large, extra-large), with larger efforts typically lowering the RICE score.
The RICE score helps us stack-rank priorities. Importantly, the absolute score doesn’t carry inherent meaning; it’s about the relative ranking of priorities. Managers aren’t bound to follow the scores blindly; they can override them based on context. The framework is meant to support conversations about value, not dictate the order of work. Including our finance partners in these discussions has also added a lot of value, sharpening our forecasts and integrating these assessments into business planning, so teams are accountable for the forecasted impact of their work.
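To make the mechanics concrete, here is a minimal sketch of how a RICE calculation and stack ranking along these lines could be wired up. The effort mapping, impact weights, field names, and sample initiatives are illustrative assumptions, not King's actual values or tooling.

```python
from dataclasses import dataclass

# Assumed mapping from T-shirt sizes to effort (person-months); purely illustrative.
EFFORT = {"S": 1, "M": 3, "L": 6, "XL": 12}

@dataclass
class Initiative:
    name: str
    reach: int          # players affected in a period (assumed unit)
    impact: float       # blended score across revenue, cost savings, efficiency, risk mitigation
    confidence: float   # 0.0-1.0; ~1.0 for outcomes proven via A/B test, lower for speculative ideas
    effort: str         # T-shirt size: S, M, L, XL

    @property
    def rice(self) -> float:
        # Classic RICE formula: (Reach * Impact * Confidence) / Effort.
        return (self.reach * self.impact * self.confidence) / EFFORT[self.effort]

def stack_rank(initiatives: list[Initiative]) -> list[Initiative]:
    # Higher score first; the absolute number carries no meaning, only the relative order does.
    return sorted(initiatives, key=lambda i: i.rice, reverse=True)

if __name__ == "__main__":
    backlog = [
        Initiative("CRM replacement", reach=50_000_000, impact=2.0, confidence=0.8, effort="XL"),
        Initiative("Level editor speed-up", reach=5_000_000, impact=1.0, confidence=0.5, effort="M"),
        Initiative("Regulatory compliance tooling", reach=100_000_000, impact=0.5, confidence=1.0, effort="L"),
    ]
    for item in stack_rank(backlog):
        print(f"{item.name}: RICE = {item.rice:,.0f}")
```

The output is only a ranking to anchor the conversation; as noted above, managers can still override the order based on context.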
Tracking was another area we focused on, classifying work into four broad categories:
- Mission Work - Annual objectives and OKRs, which are our strategic investments.
- New Requests - Stakeholder requests that come in over the course of the year.
- New Opportunities/Innovation - This includes research, spikes, and proof of concepts.
- Operational Work - Day-to-day support for platforms.
Tracking allowed us to get insights across autonomous teams, enabling us to identify trends and better support the teams. For example, if a team consistently fails to meet its strategic goals, it might be because of an excess of operational work due to accumulated tech debt, or perhaps too many stakeholder requests. For instance, our game economy team, which provides the in-app purchase platform inside Candy Crush, is regularly challenged by business-driven requests, making it hard for them to focus solely on strategic investments.
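As a rough illustration of the kind of insight this classification enables, the sketch below tallies tracked work items by category for each team and flags teams whose strategic share looks squeezed. The sample data, data shape, and 30% threshold are assumptions for the example, not figures from King.

```python
from collections import Counter, defaultdict

# The four work categories described above.
CATEGORIES = ("Mission Work", "New Requests", "New Opportunities/Innovation", "Operational Work")

# Illustrative sample: (team, category) for each tracked work item.
work_items = [
    ("Game Economy", "New Requests"),
    ("Game Economy", "New Requests"),
    ("Game Economy", "New Requests"),
    ("Game Economy", "Mission Work"),
    ("Content Creation Tools", "Operational Work"),
    ("Content Creation Tools", "Mission Work"),
]

def category_mix(items):
    """Return, per team, the share of work falling into each category."""
    per_team = defaultdict(Counter)
    for team, category in items:
        per_team[team][category] += 1
    mix = {}
    for team, counts in per_team.items():
        total = sum(counts.values())
        mix[team] = {cat: counts[cat] / total for cat in CATEGORIES}
    return mix

for team, shares in category_mix(work_items).items():
    # Flag teams whose mission (strategic) share drops below an assumed 30% threshold.
    flag = " <- mission work squeezed" if shares["Mission Work"] < 0.3 else ""
    print(team, {cat: round(share, 2) for cat, share in shares.items()}, flag)
```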
Next, we looked at standardizing processes and assigning roles and responsibilities. Here’s an internal slide we created to define our process. None of this is revolutionary, but it provides a consistent way of working without being overly prescriptive. Green on the slide represents product roles, blue represents engineering. These colors signify ownership and accountability, though they don’t exclude participation from other roles. It’s vital, for example, that the engineering manager and product manager work closely on effort estimation.
Standardizing our processes allowed us to create a single system of record; we opted to use Jira. While Jira has its challenges, the underlying data model is strong, enabling us to capture metadata about the work. This led us to build our SDD Dashboard in Jira, a set of reports that answers operational questions such as “Which teams are working on features for the game economy team?” or “What’s the queue length for specific requests?” Stakeholders have full access to these reports, which fosters transparency and closer relationships between the product teams and their stakeholders.
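As a minimal sketch of how a report like this could be pulled from Jira's standard REST search endpoint: the project key, label, JQL, and credentials below are hypothetical, and real reporting would handle authentication and pagination properly.

```python
import requests

# Hypothetical Jira Cloud instance and credentials; illustrative only.
JIRA_BASE = "https://example.atlassian.net"
AUTH = ("reporter@example.com", "api-token")

# Hypothetical JQL: open work labelled for the game economy stakeholder.
JQL = 'project = SDD AND labels = "game-economy" AND statusCategory != Done ORDER BY created ASC'

def open_requests(jql: str):
    """Fetch matching issues via Jira's search endpoint."""
    resp = requests.get(
        f"{JIRA_BASE}/rest/api/2/search",
        params={"jql": jql, "fields": "summary,assignee,created", "maxResults": 100},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["issues"]

if __name__ == "__main__":
    issues = open_requests(JQL)
    print(f"Queue length for game-economy requests: {len(issues)}")
    for issue in issues:
        assignee = (issue["fields"]["assignee"] or {}).get("displayName", "Unassigned")
        print(issue["key"], issue["fields"]["summary"], "->", assignee)
```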
The benefits extended to our annual budgeting and forecasting processes. Previously, long-range planning was a labor-intensive, manual process, but with the data captured in Jira, a task that might have taken senior people two to three weeks was condensed to a single afternoon for a finance person. We could extract all data, including cost estimates, team sizes, and forecasted effort, to plan more efficiently.
In summary, here’s what worked well:
- Establishing a consistent language for discussing value helped us prioritize and focus on delivering the right work.
- Reducing the cost of reporting has been a significant unlock.
- Improving communication has led to better-informed and happier stakeholders, enabling us to build stronger relationships.
There’s still room for improvement, though. Teams sometimes game the RICE scoring system, but as long as there’s accountability, this isn’t a major issue. Motivating ShareTech teams around revenue can be challenging, as revenue is a lagging metric that’s somewhat disconnected from their daily work. We’ve started exploring other KPIs and measurements within the teams to address this. Additionally, we found discrepancies in scoring between domain teams. While each team managed internal priorities well, cross-team comparisons required senior managers to step in and align priorities.
Moving forward, we’re tweaking our impact measurement with KPIs focused on time-to-value, productivity, and enhancements that speed up game teams. We’ve created a single stack-ranked priority list for all 400 people, reviewed biweekly to guide prioritization. We’re also implementing virtual teams with joint resource allocation, forming project-based teams that bring together resources from game teams and ShareTech.
Finally, one note on change management: structured change management is crucial, with dedicated people managing feedback loops and driving change. Large organizations are more resistant to change than we’d like to think. Having a structured effort around messaging and collecting feedback has been essential for driving adoption of new processes and ways of working.
Thank you very much.